E2B is the interpretation layer for AI Agents šŸŽ›

Plus: Founder & CEO Vasek on OSS, code-gen and AI agents...

Published 23 Apr 2024

CV Deep Dive

Today, weā€™re talking with Vasek Mlejnsky, Co-Founder and CEO of E2B.

E2B is a developer tool startup working on an open-source code interpretation layer for AI agents and AI apps. Founded by Vasek and his co-founder Tomas in 2023, the startup's mission is to build the best automated cloud platform for the millions of AI agents that Vasek sees as the future of the AI software ecosystem. E2B supports any LLM and multiple AI frameworks, and provides infrastructure for scalable multi-agent environments as well as testing tools to check an AI agent's work.

E2B's typical user is an 'AI Engineer', a developer natively familiar with AI infrastructure and tooling who builds production-ready applications around code generation, reasoning, and data analysis. Over 200,000 agents have run on their platform. They have raised a $3M pre-seed led by Kaya and Sunflower Capital, together with investors like Guillermo Rauch (CEO of Vercel), Paul Copplestone (CEO of Supabase), Juraj Masar (CEO of Better Stack), and employees from companies like OpenAI, Retool, and Figma.

In this conversation, Vasek walks us through the founding premise of E2B, why open-source agents are the future of AI, and E2B's goals for the next 12 months.

Letā€™s dive in āš”ļø

Read time: 8 mins


Our Chat with Vasek šŸ’¬

Vasek - welcome to Cerebral Valley. Firstly, give us a bit of background on yourself and what led you to co-found E2B?

Hey there - Iā€™m Vasek, CEO of E2B. At E2B, weā€™re building the code-execution layer for AI agents and AI apps, and our goal is to build an automated cloud platform for AI agents. I started E2B with my co-founder Tomas in March 2023, and the idea originally came from our own needs. When GPT-3.5 came out, we started experimenting with it and built an early Devin-like agent. To run it, we needed a cloud environment to start the server and run the AI code, so that we could provide the agent with a feedback loop and workspace.

Our realization was that, in the future, there will be millions or billions of these agents, and they will need a new kind of cloud where the software can build other software. No current offering was a good fit for us, so we started building E2B as a completely open-source solution.

How would you describe E2B to a new AI developer? And who's finding the most value in your platform so far?

With E2B, we give a small computer to each instance of your AI agent. This computer is a small VM (virtual machine) that we can spin up very quickly, built securely from the ground up, and made for running untrusted code. With it, you can basically give sudo-privileges to your AI.
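A concrete way to picture the execution side of this idea: the agent hands over code, the platform runs it in isolation with a time limit, and stdout/stderr flow back as the agent's feedback. Below is a minimal local sketch using a subprocess, which only gives process-level (not VM-level) isolation; it is an illustration of the pattern, not E2B's actual API.

```python
import os
import subprocess
import sys
import tempfile

def run_untrusted(code: str, timeout: float = 5.0):
    """Run a snippet of untrusted Python in a separate process with a hard timeout.

    A toy, process-level stand-in: E2B isolates each agent in its own VM,
    which this local sketch does not attempt to replicate.
    Returns (stdout, stderr, returncode)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, path],
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        return proc.stdout, proc.stderr, proc.returncode
    except subprocess.TimeoutExpired:
        # The child is killed when the time limit is hit.
        return "", "timed out", -1
    finally:
        os.unlink(path)

out, err, rc = run_untrusted("print(2 + 2)")
```

The captured output is what closes the feedback loop: the agent sees exactly what its code printed, or the error it raised.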

Today, our users are mostly AI engineers. This is a new concept, but those are the developers who are using LLMs to either build AI features into their apps or to build applications and software that havenā€™t really been built before.

Youā€™re heavily focused on building infrastructure for the AI agent ecosystem. How would you assess how far along we are in terms of developing agents?

One thing we're seeing is that coding agents are becoming some of the most useful applications of AI agents. This is true even within AI apps that don't necessarily need to write code for end users, but still need to perform reasoning or analysis over underlying data. LLMs are good at generating code, but the "brain" of your AI app needs a stronger reasoning component, and this usually comes in the form of a code interpreter, which improves the LLM's performance. Now we are starting to see the first glimpses of more autonomous systems, where there's not much HITL (human in the loop).
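The code-interpreter pattern described above is simple: instead of asking the model for an answer directly, you ask it for code, execute that code, and return the computed result. A minimal sketch follows, where `fake_llm` is a hard-coded stand-in for a real LLM call, used purely to illustrate the loop:

```python
def fake_llm(question: str) -> str:
    # Stand-in for a real LLM call; a production app would prompt the
    # model to emit code for arbitrary questions.
    return {"What is 17% of 2,348?": "result = 0.17 * 2348"}[question]

def answer_with_interpreter(question: str) -> float:
    # 1. Ask the LLM for code rather than a final answer.
    code = fake_llm(question)
    # 2. Execute the code (in production this runs inside a sandbox,
    #    never via a bare exec() in your own process).
    scope: dict = {}
    exec(code, scope)
    # 3. Return the computed result: exact arithmetic instead of the
    #    model guessing digits token by token.
    return scope["result"]

print(answer_with_interpreter("What is 17% of 2,348?"))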

In the first half of 2024, what's really becoming clear is how much engineering you need to build around the agent itself. It's not just about having a strong LLM architecture, but also about orchestrating LLMs and multi-agent systems and building a complex product around the agent. For example, I think the Devin demo was really compelling because of the UI informing the user about what the agent is doing, including the ability to let it run for hours and come back to it, or to talk to the agent while it's working. We are getting into the production phase of agents and seeing the first glimpses of where the technology actually works. It might take two or three years as LLMs get better and as we figure out the correct form factor and UI. In that scenario, we as a company want to be there for developers, and so now is the right time to be building.

Which kind of applications or use-cases with E2B are you most excited about today? Talk to us about any successful customer stories so far.

Today, we have three core use-cases. The first one is adding a reasoning part of the brain to your LLM agents and AI data analysts. For example, we have customers building GPTs for bioinformatics, where the AI agent is very autonomous and runs a bunch of long-running experiments with R, which is a very specific use-case. We also have a company that is building a GPT internally to increase their productivity. So, a code interpreter is something that really works in production and has already increased the productivity of our users and their employees - I like to call it an OG coding agent.

The most exciting use-cases are on the edge of what's possible. For example, Devin-like agents are often focused on a very specific use-case, like helping with certain kinds of applications, and need a full environment to use a file system, make network requests, download files from the internet, and dynamically install packages. That's definitely where E2B is very helpful. For example, we are collaborating with the creators of a huge open-source project called OpenDevin to integrate E2B, and to help them with evals for the SWE-bench benchmark.

The last use-case, which I'm personally very excited about, is generative UI - and I think this is the least explored of the three. I've seen some very exciting demos where developers combine generative UI with code interpreters to dynamically create dashboards. Imagine something like Retool, but you can actually connect it to your database and ask it questions as you would in ChatGPT. Instead of getting text answers, you get full dashboards that are live, update in real-time, and that you can chat with continuously. Generative UI has huge future potential, especially if you combine it with products like Perplexity - then you could even order from Uber or book a hotel if you build that UI around the user's intention.

The agent space has a lot of different startups operating within that ecosystem. What's unique or different about the approach that E2B is taking relative to other players in the space?

One big difference is that we very intentionally don't go into the agent-framework level. We want to stay low-level - we don't even make calls to the LLM for you! We intentionally want to be completely LLM-agnostic and framework-agnostic, and to work purely as a code-execution layer. The analogy is that you wouldn't build your own server infrastructure for a web app, and similarly, we don't want you to have to build the infrastructure for your agent.
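The separation Vasek describes can be sketched as an interface: the execution layer exposes one `run` operation and never touches the model, so any LLM or framework can sit on top. The names here (`CodeExecutor`, `agent_step`, `LocalExecutor`) are illustrative, not E2B's API:

```python
from typing import Callable, Protocol

class CodeExecutor(Protocol):
    """The execution layer's whole contract: run code, return output.
    Which model wrote the code, and which framework orchestrates the
    agent, are deliberately out of scope."""
    def run(self, code: str) -> str: ...

class LocalExecutor:
    # Trivial in-process executor standing in for a remote sandbox.
    def run(self, code: str) -> str:
        scope: dict = {}
        exec(code, scope)
        return str(scope.get("result", ""))

def agent_step(llm_generate: Callable[[str], str],
               executor: CodeExecutor, task: str) -> str:
    # The executor never calls the model: any LLM can be plugged in.
    code = llm_generate(task)
    return executor.run(code)

# A stub "LLM" standing in for OpenAI, Anthropic, Llama, etc.
stub_llm = lambda task: "result = sum(range(10))"
print(agent_step(stub_llm, LocalExecutor(), "sum the numbers 0..9"))
```

Because the executor side of the interface is model-agnostic, swapping the stub for a real LLM call changes nothing in the execution layer.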

Importantly, the cloud for an AI agent has to be built to handle the fact that you can't predict which operation is going to take place - which is completely different from how previous cloud platforms have been built. The new LLM-powered software is by definition untrusted: you don't know what it will do, but at the same time, you want to give your AI app a lot of tools so it can achieve the required tasks. No current cloud solution has been built for software that builds and deploys other software while giving developers full observability of what's happening. You have to go very deep into the kernel level to achieve that.

How do you keep up with the vast amount of research and product developments taking place on a weekly basis? How does this impact the way you build?

This is difficult because in the AI space, you need to ship fast whilst keeping up with everything. Partly, what helps is what I just mentioned - because we decided to stay lower in the AI stack, when something new comes up, it's easy for us to show how it can work within E2B. Practically, this usually means that if there's a new agent framework, or a new way of prompting LLMs that works better, we can integrate with it quickly and it's very likely going to work with E2B. You can create a variety of tools, built with different tech stacks and frameworks, that use E2B to execute code - that's a big advantage.

Our general principle is not to think about what is possible now, but what will be possible in 1-2 years. You want to focus on where the ball will be, and not where it is right now because that way you would end up building something that a bigger player or an LLM itself will be capable of doing very soon.

Youā€™re fully open-source - can you talk a little bit about how the OSS ecosystem has impacted the way you lead E2B, and how youā€™ve benefited from OSS technology?

We are fully open-source with an Apache 2.0 license, which means that you can build a commercial product on top of us. The goal is for our users to be able to deploy easily on their AWS, GCP, or Azure accounts, or on their own cloud.

I also truly believe in two things with infrastructure companies. Firstly, you should only pay for what you use. Secondly, you should be able to self-host, even though most users probably don't want to self-host and prefer the ease of a cloud offering. Larger customers especially want to self-host and control their infrastructure, and we want to make that as easy as possible for them.

Because we want to build an infrastructure layer for the next billions of agents, we prioritize being very transparent and showing people what's happening - even in the development stage - and making it easy for them to use it on their own in whichever way they like.

What are the main points of focus for E2B over the next 6-12 months, as the agent ecosystem matures?

Our main priority is improving the observability aspect of E2B. Until now, we have spent our time making sure our infrastructure is really stable and the core set of features is there. Now, we can finally start differentiating from other well-known services and offering more tools that are very agent-specific.

One recent improvement is making it easy for developers to know what is happening inside their sandboxes. Another element is keeping these sandboxes running for a very long time. We often have customers running them for 24 hours, because it can take a lot of time for AI to achieve certain tasks - even though that time will likely shrink.
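Observability here means a developer can watch output stream out of a long-running sandbox instead of waiting for it to finish. Below is a simplified local sketch of that pattern, streaming a child process's stdout line by line to a callback; it illustrates the idea and is not E2B's actual SDK:

```python
import subprocess
import sys

def run_with_streaming(code: str, on_stdout) -> int:
    """Run code in a child process and forward each stdout line to a
    callback as soon as it appears, so a long-running job can be
    observed in real time rather than only at the end."""
    proc = subprocess.Popen(
        [sys.executable, "-u", "-c", code],  # -u: unbuffered output
        stdout=subprocess.PIPE,
        text=True,
    )
    for line in proc.stdout:
        on_stdout(line.rstrip("\n"))
    return proc.wait()

lines: list[str] = []
rc = run_with_streaming("for i in range(3):\n    print('step', i)", lines.append)
```

In a real product the callback would feed logs to a dashboard or back to the agent; the same shape also makes long runtimes tolerable, since progress is visible throughout.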

If you look at existing cloud offerings, none of them really provide efficient computing for long runtimes. This is another goal for us - supporting these very long-running use-cases without charging our users a fortune, and making sure they only pay when the code is running.

Lastly, share a little bit about your team dynamic. How would you describe your culture, and what do you look for in prospective hires?

We have a team of two co-founders and two founding members. We recently moved from the Czech Republic to San Francisco because this area has the strongest community around building AI agents and agentic products.

Internally, we are heavily focused on our users - we talk to them daily, prioritize what they need, and work with them very closely, especially if they are near San Francisco. As an early startup, you have a lot of work to be done, but we enjoy working long hours, or even doing an internal hackathon over the weekend. For example, when a new LLM such as Llama 3 or Claude launched, we added a code-interpreter example built with it to our cookbook. We are motivated by exciting new things, and we prioritize speed - you have to be able to iterate fast in AI. The opportunity to be part of a technological shift of this importance truly comes once in a lifetime.

We are currently looking for a new engineer to become the fifth member of E2B here in San Francisco. The role is a Product Engineer working on the user-facing part of the product, very closely with me.


Conclusion

To stay up to date on the latest with E2B, follow them on X and learn more at E2B.dev.


If you would like us to ā€˜Deep Diveā€™ a founder, team or product launch, please reply to this email (newsletter@cerebralvalley.ai) or DM us on Twitter or LinkedIn.
