
Ellipsis.dev (YC W24) Reviews Code, Fixes Bugs, and Reduces Time-to-Merge by 13% 🎛

Plus: Cofounder & CEO Hunter Brooks on how AI developer tools help teams be more productive...

Published 15 Apr 2025

CV Deep Dive

Today, we’re talking with Hunter Brooks, Co-Founder and CEO of Ellipsis.dev.

Ellipsis is an AI-powered developer tool that lives directly inside your GitHub organization. Working as an AI agent, Ellipsis reviews pull requests, proposes bug fixes, answers questions in Slack, automates standup reports, and more. Its code reviews catch logical bugs, style guide violations, anti-patterns, and security issues, allowing teams to automatically enforce their coding style across the codebase. Using Ellipsis in your GitHub workflow reduces the average team’s time-to-merge by a whopping ~13%.

Today, hundreds of companies use Ellipsis to accelerate code reviews and automate tedious tasks in the software development lifecycle. Ellipsis supports powerful automations, from assigning reviewers and surfacing similar past PRs to creating daily standup updates in Slack. The product is especially sticky with teams of 25–100 engineers, where the pressure to ship faster with leaner teams is highest. Following their stint in YC W24, the team raised a $2m seed round in April 2024 to accelerate the future of software development.

In this conversation, Hunter shares how Ellipsis evolved from his early access in 2021 to Codex, OpenAI’s first LLM trained on code, why the team is doubling down on hierarchical agents, and why the future of devtools looks more like invisible AI coworkers embedded in your workflows.

Let’s dive in ⚡️

Read time: 8 mins


Our Chat with Hunter 💬

Hunter, welcome to Cerebral Valley! First off, introduce yourself and give us a bit of background on you and Ellipsis. What led you to co-found Ellipsis?

Hey there! My name is Hunter Brooks, and I’m CEO and co-founder at Ellipsis.dev.

Before founding Ellipsis, I was an ML engineer at AWS, and before that, I did research and published in the field of astrophysics. I studied math and physics in college—I thought I was going to get a PhD and go down that route. But at the last minute, I decided to move into tech, which made things pretty tough when I found myself in an AWS interview having to solve algorithmic puzzles and do typical data structures and algorithms questions. I wasn’t trained as a computer scientist in college, so I had very little formal computer science training and really struggled with those types of problems.

At the time, I’d already been working as an engineer for a few years and had shipped some pretty big systems to prod at Capital One. So I was super frustrated—how could I be a functioning software engineer and still bomb those interviews just because I hadn’t memorized a bunch of algorithm tricks? I ended up getting the job, but that experience stuck with me. A couple of years later, I left AWS to build my first company, The Practical Interview. The idea was to make interviews more like day-to-day software engineering work. Instead of a 45-minute data structures and algorithms quiz, what would happen if we put a candidate in a real, working codebase and asked them to add a feature? My hunch was that candidates wouldn’t have to prepare for their interviews as much, because the interviews would be similar to their day job, and hiring managers would get better insight into how a candidate would perform on the job.

To do this, I got early access to OpenAI’s Codex model in 2021—this was a GPT-3 model fine-tuned on source code files. That was my first encounter with LLMs, and I had a huge aha moment when I realized how incredible these models were, even back then with just a 4K context window.

But back then, the models couldn’t write code reliably. They understood functions and classes, and could answer questions about them, but they got confused when writing code. The idea of making a file edit—like opening a file, inserting a change, and then re-rendering the file—was too complex. This made it difficult to generate entire GitHub repositories, because most of that work is generating new code, not reading existing code. Obviously, that’s changed dramatically. Now we can make file edits easily—tools like GitHub Copilot, Cursor, and Claude Code have all proven that. But that early experience showed me that LLMs were very soon going to play an important part in the software development workflow.

I realized that Codex was best at tasks that required mostly reading code, with very little writing of code. So I shifted focus. Unable to generate full codebases for candidates, I looked at something more practical: fixing bugs on pull requests. That’s a read-heavy problem, after all. The process of understanding an intended code change, identifying a bug, and making a small edit aligned really well with what early models could handle. That’s where the idea for Ellipsis came from—and it’s still core to our product today.

We built Ellipsis to fix bugs during code review. That’s it. That’s the origin story.

How would you describe Ellipsis to the uninitiated developer or AI team?

Quite simply, Ellipsis is a developer tool that reviews code and fixes bugs. We have hundreds of customers using it, and we help teams merge pull requests about 13% faster. We do that by putting an LLM-powered agent directly into the code review process. It identifies logical bugs, style guide violations, anti-patterns, and security issues, then proposes comments back to the PR author through the GitHub UI.

This makes it easy for authors to apply changes with a single click—no need to check out the feature branch in your IDE. It also helps tech leads, VPs of engineering, and other stakeholders ensure their style guides are enforced consistently across the organization.

Who are your users today? Who is finding the most value in what you're building with Ellipsis?

Unlike IDE-based tools like Windsurf or Cursor, which target the developer as the user and sometimes the buyer, Ellipsis is a tool for teams. So it’s a team-level product, and as a result, we sell to companies. GitHub admins install our bot—our product—into their GitHub repository, and Ellipsis then becomes a resource on the entire team’s PRs. Any developer working in that repo can summon Ellipsis into their pull request to have it do things like update the pull request description, answer questions about related PRs, and even push a commit or open a side-PR with a change.

Of course, since it’s a developer tool, we have thousands of developers using our product—but we sell to the companies. It’s a B2B business. The companies we’ve found the strongest fit with today tend to be engineering teams of 25 to 100 developers. They really like our product because teams of that size are small enough to be on the cutting edge of AI developer tools, which is where Ellipsis sits. And they can use tools like ours to reduce the need to hire and grow their organization. Our product decreases time to merge by about 13% on average, so it increases productivity by about 13%. That’s a non-trivial gain.

It’s so easy to get started with Ellipsis: all developers need to do is get their GitHub admin to install it, which takes just two clicks. Right away, on every pull request, they get automatic summaries from Ellipsis so they don’t have to write their PR descriptions. They get those LLM-powered code reviews, and if any bugs pop up, Ellipsis will be able to fix them for them.

Talk to us about Workflows, which has been a super-exciting area of growth for you in the past few months.

Workflows is definitely one area that we’re spending a lot of time on and seeing some serious customer pull. I’ve talked a little bit about how Ellipsis helps on the pull request side of things, but there are also a lot of other activities in the software development lifecycle that can be automated—and aren’t today. Ellipsis is able to do a lot of that automation for our customers.

For example, imagine I’m a developer working at a 100-person company and I need to make changes to a microservice that I’m unfamiliar with. Maybe I need to update how that service calls my own. I can make the code change and put up a pull request, but I don't know who to involve in the review process. Traditionally, teams use tribal knowledge or maybe a CODEOWNERS file to bring the right people in. But with Ellipsis, I can create a workflow that says “whenever someone updates Microservice X, add Person Y as a reviewer because they're the tech lead.” As a developer, I can create workflows in natural language that take the format of “If X happens, Ellipsis should do Y”. It’s like If-This-Then-That for your GitHub repositories.

Here’s another example from our Reactive Workflows product: many teams use it to remind PR authors to associate the correct ticket with the pull request and to make sure that the PR actually satisfies the ticket. This could be a JIRA ticket or a Linear ticket—some sort of product management tool. Traditionally, a human would have to ping the PR author to figure it out, but Ellipsis can go find the ticket, and append it to the PR automatically.

Within Workflows, any use cases that have surprised or delighted you the most?

A cool feature we support, which I’m really excited about, is that beyond just updating PR descriptions or finding and fixing bugs, Ellipsis can explore the entire PR history in your GitHub repo. That means developers can tag Ellipsis on a pull request and ask things like, “Hey, do I need to update the environment variables as part of this change?” or “Who was the last developer to touch this part of the code? I need to add them as a reviewer.”

These are the types of helping-hand tasks that our Reactive Workflows product supports. It’s interesting because while Ellipsis is primarily known as a bug finder (code review) and bug fixer (code generation), this workflow layer represents a third category—those annoying, manual tasks developers normally have to handle on their own. Things like searching PR history, identifying reviewers, managing tickets. What we’re really trending toward is building an AI teammate. But we’re doing it piece by piece, allowing users to summon Ellipsis at very specific points in the software development lifecycle. As a developer, all I need to do is tag @ellipsis in Slack or Linear or GitHub or wherever my team works.

We also have a Slack bot and Linear integration, so everything we’re talking about can also happen outside of GitHub. It’s very common. One of our most popular Workflow features is Cron Workflows. Cron Workflows are just workflows that repeatedly run at a set day and time. Teams create workflows to automate standup reports in Slack. For example, they might say “At 9am every business day, summarize the past 24 hours of code changes and post them to the #engineering channel”. Since Ellipsis sits in the GitHub repo and sees what code gets merged, it can handle tasks like this. It’s really cool. It also becomes even more helpful when non-technical folks—people in Slack who aren’t in GitHub—can jump in and ask things like “What’s the status of feature X?” or “When did we update Y?” or “Who is working on Z?” and Ellipsis can respond to those people directly.
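To make the scheduling idea concrete, here is a toy sketch of what a cron-style standup workflow boils down to. This is purely illustrative and not Ellipsis’s implementation; the summarize_merged_prs and post_to_slack helpers are hypothetical stand-ins for the LLM summary and the Slack API call.

```python
# Toy sketch of a cron-style workflow: "at 9am every business day,
# summarize the last 24 hours of merged code and post to #engineering."
# Illustrative only; summarize_merged_prs and post_to_slack are stand-ins.
from datetime import datetime, timedelta


def summarize_merged_prs(since: datetime) -> str:
    # Stand-in for an LLM-generated summary of PRs merged since `since`.
    return f"Summary of PRs merged since {since:%Y-%m-%d %H:%M}."


def post_to_slack(channel: str, message: str) -> None:
    # Stand-in for a Slack API call.
    print(f"[{channel}] {message}")


def maybe_run_standup(now: datetime) -> None:
    # Equivalent to the cron expression "0 9 * * 1-5" (9am, Mon-Fri).
    if now.weekday() < 5 and now.hour == 9:
        post_to_slack("#engineering",
                      summarize_merged_prs(now - timedelta(hours=24)))


maybe_run_standup(datetime.now())
```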

How are you measuring the impact that you’re having on developers using Ellipsis in their workflows? What exactly are you tracking?

We’re in an interesting world right now where people have only recently started leaning in on evals for this. And we do too, of course. But as an applied AI company, we also put an equal emphasis on our customers. Qualitative feedback is just as important as quantitative data right now. Real-world production usage data is very different from eval data.

So first and foremost, we’re looking at time to merge. Before a customer installs our product, what was their average time to merge over a period of time? How is that trending? Then after they’ve been using Ellipsis for a while, did that go down? And on average, it goes down by 13%, which is great.

Now some development teams actually cut their time to merge in half, and that’s super exciting to me. Those tend to be teams that are more nimble—they may have historically been waiting on code reviews, and now they’ve entirely replaced that process with Ellipsis. That’s becoming more common. On the flip side, teams that might not get as much benefit from our product are often working in languages or frameworks that aren’t well represented in today’s LLM training data. So if you’re a team working in something like Golang or Rust, you might have a slightly harder time with AI-powered developer tools right now.

The middle of the bell curve is definitely Python and JavaScript, and we speed up time to merge there quite a bit. That’s largely because models from the big labs—OpenAI, Anthropic—are really good at those languages. Some of the other metrics we’re tracking now are around developer productivity. We're working on a unit of work measurement where one unit equals an hour of a 50th percentile developer’s time in 2020.

Why did you pick 2020?

We picked 2020 because it was before GitHub Copilot and other AI tools really started making developers more productive. We needed to pick a standard unit of efficiency—kind of like how UNIX epoch time chose 1970. We didn’t want to use today's numbers because these LLMs are improving so fast that the math would get messy. So we went pre-AI to define that unit of work and now we track how much more productive teams get over time. The data is still coming in, but right now, most teams are about 2–3x more productive than the average developer in 2020, which is very exciting.
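As a back-of-the-envelope illustration of that unit (this is my reading of the idea, not Ellipsis’s published formula): if a change would have taken a median 2020 developer ten hours and a team now ships it in four, that team is running at 2.5x the 2020 baseline.

```python
# Hypothetical illustration of a "unit of work" metric, where one unit
# equals one hour of a 50th-percentile developer's time in 2020.
def productivity_multiplier(units_of_work: float, hours_spent: float) -> float:
    """Baseline-2020 hours of work delivered per hour actually spent."""
    return units_of_work / hours_spent


# A change worth 10 baseline units, shipped in 4 hours -> 2.5x the baseline.
print(productivity_multiplier(units_of_work=10, hours_spent=4))
```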

One more useful metric we look at is around the types of code changes being made—specifically, what logical purpose does the merged code have? Is it a refactor, an enhancement, some bug fixes, maybe infrastructure changes? We’ve noticed that teams are spending more time on feature development these days. Refactors and bug fixes happen a lot faster now, thanks to tools like Claude Code, Cursor’s agent mode, and even just developers using ChatGPT to unblock themselves. So that’s really exciting. Measuring developer productivity is something only tools installed into the repo—like Ellipsis—can do. And it’s going to be super important as teams continue experimenting with AI dev tools. They’re going to want to know: is productivity actually going up? Is this a good return on my AI dev tool budget?

Could you share a little bit about how Ellipsis actually works under the hood? What are some of the model architectures and frameworks - agentic or otherwise - that enable the tool to work the way it does?

We have a hierarchical agentic structure. Agents assign work to other agents. We built it this way because when the agents that are lower in the dependency tree improve, many products and functionalities across the surface of Ellipsis improve with them.

An example of that: we have several agents whose sole job is to find the relevant code snippets in the codebase. Some use RAG, some browse the file tree, some rely on string-matching search, and others even look at Slack messages about code. There are a bunch of different ways we pull in data. If any one of those agents—whose main role is codebase understanding or learning the team’s style guide—gets better, a bunch of our features improve with it. Code review gets better because we can understand second- and third-order consequences of a change more deeply. Code generation improves because we can reuse existing code—for example, Ellipsis doesn’t have to re-implement something from scratch if we know a function already exists. We can just import and use that one. And obviously our functionality around answering questions in Slack gets better because we understand the codebase more deeply.
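To give a flavor of that structure, here is a minimal, hypothetical sketch of a hierarchy along the lines described above, where a top-level review agent fans a diff out to retrieval sub-agents and reasons over whatever context they return. The class names are illustrative and this is not Ellipsis’s actual code; the retrievers are stubbed where a vector index, file-tree walk, or LLM call would plug in.

```python
# Minimal, hypothetical sketch of a hierarchical agent layout (not
# Ellipsis's actual code): a top-level review agent delegates retrieval
# to specialized sub-agents, then reasons over the combined context.
from dataclasses import dataclass


@dataclass
class Snippet:
    path: str
    text: str
    source: str  # which sub-agent surfaced it


class RetrievalAgent:
    """Base class for agents whose only job is finding relevant code."""

    def find(self, diff: str) -> list[Snippet]:
        raise NotImplementedError


class EmbeddingSearchAgent(RetrievalAgent):
    def find(self, diff: str) -> list[Snippet]:
        # A real implementation would run a RAG-style vector search over
        # an index of repo chunks; stubbed out here.
        return []


class GrepAgent(RetrievalAgent):
    def find(self, diff: str) -> list[Snippet]:
        # A real implementation would string-match identifiers from the
        # diff against the checked-out repo; stubbed out here.
        return []


class ReviewAgent:
    """Top of the hierarchy: delegates retrieval, then drafts comments."""

    def __init__(self, retrievers: list[RetrievalAgent]):
        self.retrievers = retrievers

    def review(self, diff: str) -> list[str]:
        context = [s for agent in self.retrievers for s in agent.find(diff)]
        # An LLM call would draft review comments here. The key property:
        # improving any single retriever improves every feature that
        # shares this context (code review, code generation, Slack Q&A).
        return [f"Reviewed diff using {len(context)} supporting snippets."]


if __name__ == "__main__":
    reviewer = ReviewAgent([EmbeddingSearchAgent(), GrepAgent()])
    print(reviewer.review("diff --git a/app.py b/app.py"))
```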

So the hierarchical nature of our architecture makes it easy to ship fast improvements when we level up the foundational layers, and that’s something we spend a lot of time investing in. We do a lot of decomposition of problems into sub-agents. We have hundreds of agents with thousands of prompts, and we get very sophisticated about our few-shots, about the logic that these agents can handle, and how the agent hierarchy is structured. It’s a really fascinating applied AI problem. And—we’re hiring. So if you’re interested in applying LLMs to problems that come up in codebases or across the software development lifecycle, you should definitely reach out.

Another important thing about our architecture is that, unlike a lot of tools in the space, we don’t overpromise and underdeliver. We don’t say we’re an “AI software engineer” yet—though that’s absolutely the direction we’re heading. So many people in the industry have already been disappointed by tools that made big promises and fell short. I hear all the time from developers who’ve tried products that are supposed to generate PRs from Slack—they’re frustrated by the real-world results. It goes back to what I was saying about evals - they’re great, but proficiency in real-world codebases is paramount. Something that performs great on a toy problem in a small repo might not scale to a production-level codebase with hundreds of thousands of files and lots of technical debt.

So instead, we’ve chosen to do one thing really well—understanding codebases and making small changes. And as LLMs continue to improve and are able to reason better across large, complex codebases, we’ll expand the scope of what Ellipsis offers. But today we’re careful not to overpromise and underdeliver.

What has the reception from more experienced developers been like, given it’s not simple to add new tools into your workflow after years or decades of coding in a certain way?

The coolest thing about Ellipsis is that it fits into your workflow without requiring any changes. Developers simply publish a pull request, and the first thing Ellipsis does is react with an eyes emoji to indicate that it’s working. It then appends a PR description to whatever you wrote—a quick blurb summarizing the changes—and keeps that updated as you continue pushing. After that, Ellipsis leaves a comment on the PR, either saying “Looks good to me, no problems detected” or “Changes requested.” If it spots a bug, it highlights the line, includes a link, explains the issue, and, if possible, suggests a fix that the developer can accept with a single click.

The way Ellipsis interacts with a PR author is identical to how a human would, which means there’s no learning curve and no need for teams to change how they already work. That’s part of what makes it so compelling. On top of that, Ellipsis reviews 99% of pull requests within three minutes—it’s basically synchronous. This enables super fast feedback loops and nearly immediate time to value after installation. Install, open a PR, see code review. The only hesitation I’ve encountered with Ellipsis is on the security side.

People are sometimes nervous to connect a third-party GitHub app to their codebase. We encountered this early on, so we make some very serious, very important promises to our customers, backed by our SOC 2 Type I certification (we’re working toward Type II). First, we promise that we never persist any customer source code to disk, and we will never train any models on customer code. The only thing that ever sees the code is an ephemeral Docker container in AWS, and only while we’re reviewing it. Second, we’re really quick to share our pen test reports and SOC 2 documentation.

That said, having worked on this type of product for a while now, I’ve noticed that people are getting substantially more comfortable installing tools like this. The appetite is definitely there, and it continues to grow. That’s honestly just because these LLMs are badass now. They can do a lot of really helpful things, they save developers a ton of time, and as a result they’re becoming pretty ubiquitous. Most developers are already using tools like Copilot or Cursor, so it’s an easy sell: two clicks to install, instant code reviews, and the confidence that your code is secure. Why not try it out?

How do you see Ellipsis evolving over the next 6-12 months? Any product developments that your users should be excited about?

So, we think o3 is amazing and good enough to start solving non-trivial engineering tasks from teams’ backlogs. We’re shipping a lot of proactive code generation features. You might open up a GitHub issue and get back a pull request that already has the implementation. That’s very exciting because Ellipsis can actually execute the code that it writes, and as a result, the PRs it generates are very high quality.

We’re excited about this because it means that teams, from a product perspective, will onboard Ellipsis almost like they’re onboarding a new colleague. They’ll help Ellipsis get set up with a Docker container and learn which commands to run, and then they’ll start seeing PRs show up in their repo—to the degree that they want them. That’s very exciting. We might even see product managers being able to make technical changes because of stuff like this.

How would you describe the culture you’re aiming to build at Ellipsis? Are you hiring, and what do you look for in prospective team members?

So, we’re a two-person team, and our goal is to be a 10-person unicorn. We’re aiming to keep the team incredibly small and incredibly talented. We think tools like Ellipsis will allow us to ship a lot of product very quickly. We use our product every day. There’s a culture here of getting a feature request or a Slack DM from a customer and shipping the fix within the hour.

Every developer at Ellipsis is constantly juggling multiple workflows through the various AI developer tools we’re using. There’s a ton of ownership. It’s an exciting codebase and product to work on because there’s so much to build: codebase search, code review, code generation, onboarding flows for our sandboxed code gen, insights and analytics from customer activity. There’s a huge surface area for ownership, and that’s why we’re hiring now.

We raised a $2 million seed round in April 2024 after going through the Y Combinator Winter ‘24 batch. Our customer base has been growing fast, and we’ve finally reached the point where two people just can’t manage it all, so we’re inviting a handful of talented developers to build the future of software development with us.

Ellipsis is looking for a Founding AI Engineer - apply here.


Conclusion

Stay up to date on the latest with Ellipsis and learn more about them here.


If you would like us to ‘Deep Dive’ a founder, team or product launch, please reply to this email (newsletter@cerebralvalley.ai) or DM us on Twitter or LinkedIn.
