The most impactful bugs in a codebase are often some of the smallest. We’ve all been there: a missed semicolon, a mismatched parenthesis, or a variable swapped for another with a similar name, and the next thing you know you’ve lost an hour combing through your code character by character to find the culprit. It’s the inevitable consequence of the imperfect perceptions of a squishy human brain.

Context-switching only amplifies the problem, making it even easier for these small irregularities to escape your attention. Fortunately, large language models (LLMs) are well suited to spotting irregularities in structured language, which is exactly what this sort of problem calls for.

Today, we’re continuing to iterate on our AI roadmap by announcing our new CI/CD Review AI feature! Our goal is to deliver features that reduce friction across your entire development lifecycle, and CI/CD Review AI is designed to help point you in the right direction whenever things go wrong.

What does it provide?

We provide Commands, Pipelines, and Services workflows to build out the foundation of the ideal Developer Experience your team always wished they had. Our Insights and Reports features help you identify opportunities for reducing friction as your workflow evolves. Finally, our other new Review AI features enable rapid iteration, providing you with a first-pass review of your code, and allowing your colleagues to focus on the substance of a Pull Request in their own reviews.

With CI/CD Review AI, our goal is to increase the efficiency of the back half of your Developer Experience by rapidly identifying where to begin your debugging investigation and suggesting some possible paths forward.

How is it used?

Connect your GitHub account to the CTO.ai platform, and add the CTO.ai Review label to a Pull Request—the same label used to trigger our Code Review AI bot!
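
If you’d rather apply the CTO.ai Review label from a script instead of the GitHub UI, GitHub’s REST API treats Pull Requests as issues for labeling purposes. The snippet below is a minimal sketch, assuming a hypothetical repository, Pull Request number, and a personal access token exposed as GITHUB_TOKEN; only the label name comes from this post.

```python
# Minimal sketch: apply the "CTO.ai Review" label to a Pull Request via the
# GitHub REST API. OWNER, REPO, and PR_NUMBER are hypothetical placeholders.
import os

import requests

OWNER = "your-org"                   # hypothetical repository owner
REPO = "your-repo"                   # hypothetical repository name
PR_NUMBER = 42                       # hypothetical Pull Request number
TOKEN = os.environ["GITHUB_TOKEN"]   # token with permission to edit the repo

# Pull Requests share the Issues API for labels, so we POST to /issues/{n}/labels.
resp = requests.post(
    f"https://api.github.com/repos/{OWNER}/{REPO}/issues/{PR_NUMBER}/labels",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    json={"labels": ["CTO.ai Review"]},
)
resp.raise_for_status()
print("Labels now on the Pull Request:", [label["name"] for label in resp.json()])
```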

When the Pipelines triggered by your Pull Request finish, any runs that fail are automatically reviewed by our CI/CD Review AI.

As you can see from the logs in the Pull Request’s Check Runs, the Pipeline run failed as a result of unit test failures in our test Pipeline:

After our CI/CD Review AI processes these logs, the analysis is delivered as a comment on your Pull Request. You can see how the most relevant lines of the Pipeline logs were extracted from the raw output and explained in neutral terms:

Recontextualizing the problem can help break the mental logjams that lead to prolonged debugging sessions. For instance, in this example, the root cause of one test failure is explicitly explained as a “typographical error in the code (or test expectation)”—if you’ve ever spent time sifting through your code for a bug that was actually in your test suite, you’ll appreciate the value of this type of small shift in perspective.
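
To make that scenario concrete, here is a small, hypothetical example (not taken from the Pull Request above) of the kind of failure being described: the function under test is correct, but the test’s expected value contains a typo, so the failure initially looks like a bug in the code itself.

```python
# Hypothetical example: the implementation is fine, but the test expectation
# contains a typographical error, so the suite fails anyway.

def greeting(name: str) -> str:
    """Return a greeting for the given name."""
    return f"Hello, {name}!"

def test_greeting():
    # The expected string misspells "Hello" as "Helo", so this assertion fails
    # even though greeting() behaves correctly.
    assert greeting("Ada") == "Helo, Ada!"
```

A review that points at the expectation rather than the implementation saves you from hunting for a bug that was never in the code.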

What’s next?

We're going to continue iterating on intelligent features that help your team remove friction from your Developer Experience. Stay tuned for more updates!

Want to see this feature in action? Book a feature demo with one of our experts today!