Our chat with OpenAI's Logan Kilpatrick (Pt. 2)

Logan on Agents, Open-Source and the Quest for AGI...

Published 19 Feb 2024

CV Deep Dive

Welcome to Part 2 of our chat with Logan Kilpatrick of OpenAI.

Today, Logan walks us through his thoughts on AI agents, open-source and OpenAI’s quest for AGI. Check out Part 1 here.

Logan is the prominent face of Developer Relations at OpenAI. Since joining in November 2022, he’s been central to the way thousands of developers have interacted with OpenAI’s developer-facing products, including the GPT Series, various model APIs and the Plugins-turned-GPT-Store. He’s also involved in broader conversations around how OpenAI interacts with the developer ecosystem at large.

Let’s dive in ⚡️

Read time: 7 mins


Our Chat with Logan 💬

Logan - agentic systems have been a huge point of interest for devs building AI apps. How far are we from creating a functional agentic system?

The GPTs launch is our first shot on goal for this, but I don't think we've figured out the perfect system or product for agents yet. Plus, there's also apprehension about just building agents and letting them loose in the world, given the safety and policy questions that come up - how is that going to affect users of the Internet? How is it going to affect everyday people?

The Internet isn’t built for 20 billion agents running around loose and clicking on buttons and doing unpredictable things. And so I do think there will need to be a rethinking of core Internet infrastructure that enables this agentic future that we're heading towards. This is also a major factor in the robustness and reliability piece.

I also think we need a new model iteration in order for these use-cases to become more reliable, since things just don't work on the Internet sometimes - even for humans. And today’s models can work around some issues, but not to the extent that humans can. Hopefully the next iteration of models will be much better at dealing with the ambiguity, failures and edge-cases that will be common in the age of agents.

Like I said, new Internet infrastructure will need to be built, and there are people building towards that - for example, giving agents an entry door to a website that's different from the door human users use. I think that's going to be super important.
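As a toy illustration of that "separate door" idea - a purely hypothetical sketch, not anything OpenAI or Logan has specified - a site could serve a machine-readable action manifest alongside its human-facing HTML, so agents call structured endpoints instead of clicking buttons. All endpoint and field names below are invented:

```python
# Hypothetical sketch: a site exposing a separate "entry door" for agents.
# Humans get the usual HTML page; agents get a structured description of
# the actions the site supports. Endpoint names are invented for illustration.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/")
def human_door():
    # The normal page, built for people and browsers.
    return "<html><body><h1>Book a table</h1><form>...</form></body></html>"

@app.route("/.well-known/agent.json")
def agent_door():
    # A machine-readable manifest an agent can call directly, instead of
    # scraping buttons and forms that were never designed for it.
    return jsonify({
        "actions": [
            {
                "name": "book_table",
                "method": "POST",
                "endpoint": "/api/bookings",
                "params": {"date": "ISO 8601 date", "party_size": "int"},
            }
        ]
    })

if __name__ == "__main__":
    app.run()
```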

How do you think about OpenAI’s place in the DevTool ecosystem, with tools like LangChain and LlamaIndex that are widely used by so many AI developers?

Historically, we haven't had the bandwidth to build any of these tools - and in the absence of that, teams like LangChain, LlamaIndex and Haystack have built great products for developers and are providing a ton of value. I also think the opportunity for those players is that they're agnostic to model providers, which is exactly what developers are interested in today. Again, you can use LlamaIndex or LangChain as that interface so that you aren't locked into a single provider.
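As a minimal sketch of that model-agnostic pattern (assuming recent versions of the langchain-openai and langchain-anthropic packages; the model strings are just placeholders), swapping providers behind LangChain's shared chat-model interface is a one-line change:

```python
# Minimal sketch: LangChain's chat-model interface abstracts the provider,
# so application code isn't locked into a single vendor.
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

llm = ChatOpenAI(model="gpt-4")  # OpenAI today...
# llm = ChatAnthropic(model="claude-3-opus-20240229")  # ...another provider tomorrow

# The rest of the application only sees the common .invoke() interface.
response = llm.invoke("Summarize why model-agnostic tooling matters.")
print(response.content)
```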

Also, developers have had to go and build a lot of things themselves that we haven't provided, and there are a lot of great examples of this - such as front-end SDKs to handle streaming in TypeScript and Python. It's kind of a pain to do that stuff yourself, and in the future, as we scale the team and have more bandwidth, we'll probably provide those abstractions for developers.
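For a sense of the plumbing involved, here's what raw streaming looks like with the OpenAI Python SDK (v1.x) - the token-by-token rendering on top of this is the part a front-end SDK would abstract away:

```python
# Raw streaming with the OpenAI Python SDK (v1.x).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Explain streaming in one paragraph."}],
    stream=True,
)

# Each chunk carries an incremental delta; rendering these fragments
# in a UI is what developers have had to wire up themselves.
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```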

That said, I still don't think that's going to fundamentally change much for the existing players, since there's so much value in being model-agnostic - but there are also a lot of people who don't care about that. For them, I think it'll be nice that we can provide tools to make their lives easier as they're building with our API.

Cost has been a major bottleneck for developers who are worried about footing five-figure bills when their AI app goes viral. How are you thinking about this internally?

There are really two solutions to this problem. The first is that we'll keep doing what we're doing now, which is driving down costs. I saw a thread from @swyx (Latent Space) pointing out how significantly the cost of GPT-3.5-turbo has come down since we originally released it. We're going to continue doing those things.

Our imperative isn't to make the most money possible - we want to enable developers to actually use these models in production, and we're going to continue to do as much as we can to drive costs down, because the cheaper it is, the more people can use it.

Hopefully, we have a good track record of this - and embeddings are a great example, where the new model lets you embed 62,000 pages of text for $1, which is just incredible. It works out that embedding all of English Wikipedia would cost less than a few hundred dollars, which is crazy to think about. So we're going to continue doing that, and folks should hold us accountable in making sure we keep driving down those prices for developers.
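As a rough back-of-the-envelope check on those figures (the per-token price, pages-to-tokens ratio and Wikipedia size below are our assumptions based on text-embedding-3-small's launch pricing, not numbers from the interview):

```python
# Back-of-the-envelope check on the "62,000 pages for $1" figure.
# Assumptions (not from the interview): text-embedding-3-small at
# $0.02 per 1M tokens, and roughly 800 tokens per page of text.
price_per_million_tokens = 0.02   # USD
tokens_per_page = 800             # ~600 words

tokens_per_dollar = 1_000_000 / price_per_million_tokens   # 50M tokens
pages_per_dollar = tokens_per_dollar / tokens_per_page
print(f"{pages_per_dollar:,.0f} pages per dollar")          # ~62,500

# English Wikipedia is very roughly ~4-5B tokens, so:
wikipedia_tokens = 4.5e9
print(f"~${wikipedia_tokens / tokens_per_dollar:,.0f} to embed Wikipedia")  # ~$90
```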

The other piece of this is developers getting a surprise bill when their usage spikes. The idea of “sign in with OpenAI” has been thrown around a lot, and that's something that makes a ton of sense to me. I'm hopeful we'll do something like that, because it's going to enable builders to build without having to worry about a $10,000 bill when users find them.

tldraw is a good example of this - they went crazy viral with all those demos and I can imagine a ton of people showed up to their platform - and now they're kind of stuck holding the bill. It would be great if usage was not actually super expensive on a per-user basis.

As a user, I'd be happy to try this tool if it cost me on the order of a few cents or maybe a dollar. So I think we need to do something like that. We've talked about it for a long time, so again, hopefully we'll land it. Hold us accountable if we don't.
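To make the idea concrete - and to be clear, this is a purely hypothetical sketch of a feature that doesn't exist - “sign in with OpenAI” would mean each user's usage is billed to a credential they bring (say, via an OAuth-style flow), rather than to the developer's API key:

```python
# Purely hypothetical sketch of the "sign in with OpenAI" pattern.
# No such feature exists; this only illustrates the idea that each
# user's own credential funds their usage instead of the developer's key.
from openai import OpenAI

def handle_request(user_token: str, prompt: str):
    # Instead of one developer-owned key footing the whole bill, the
    # client is constructed with a per-user token, so usage is metered
    # against the signed-in user's own account.
    client = OpenAI(api_key=user_token)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```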

Sam Altman has mentioned several times that the path to AGI requires careful iterative releases in order to help society adjust to a “new normal”. Is that the main reason OpenAI has recently productized so much of its research?

That’s definitely a huge element of it. The other element from a product and business standpoint is that, at the end of the day, it's going to cost a lot of money to train and build systems for AGI. So we currently have ChatGPT and the API as business mechanisms, which allow us to make money and then buy GPUs to ultimately train what will become AGI.

So, the products are essentially acting as those two mechanisms - one, the ability for us to generate revenue so that we can purchase GPUs, and two, the ability to make sure that those products are distributed widely. We want to make sure these actually benefit everybody, and are released in a way that is upholding our mission and the safety standards we have.

Those two things in conjunction make our focus on product super important. It'd be much less likely that we'd achieve our mission if we didn't release these products. I know Sam has said before that we could sit in a lab behind closed doors for the next 5 years and then make AGI. That's maybe true, but I actually think the likelihood of success goes up by an order of magnitude with us having these API products and ChatGPT that we can release to the world.

There’s a big debate over whether open-source is the way forward for deploying AI safely. OpenAI has a particular point of view on this - could you speak to that, and share your personal views as someone who previously worked in open-source?

I always like to remind people that I come from the open source world. Before I joined OpenAI, I spent many years helping lead the Julia programming language, which is an open source language for scientific computing. I also spent a year plus advising NASA on their open science strategy. So I’ve done a ton of this stuff - open source runs deep in my blood.

Also, open source is one of the most transformative ways in which technology has progressed - and in many ways, it's now the prevailing one. If you think back 20 years, the scientific and programmatic tools people were using were closed source, and you have to think about how that's transitioned. Every developer is building on open source now, and it's created an immense amount of value for the world.

In the context of AI, LLMs, AGI and super-intelligence, though, it's a different ballgame. People like to simplify things and assume it's all the same, but this technology is fundamentally different. The risk profile for open-sourcing PyTorch or Jupyter is non-existent - there's not really any downside. On the other hand, if you open-source extremely capable, highly performant models, like a GPT-4, you can expect that people will do bad things with them. OpenAI's position is simply that we're not going to do that unless it can be done safely.

If there was a world in which people couldn't undo our safety and alignment work on GPT-4, that would be positive - and maybe that’ll be possible in the future. But, my intuition today is that's not possible - people can still undo the safety work that we've done and create a model that’s inherently against our mission, which is to make AI that benefits everybody.

Again, as model capabilities continue to ramp up, I think this problem becomes more and more difficult. I'm excited for the people doing open source work today, and hopeful there won't be negative consequences. But I personally have a hard time seeing a world in which there isn't an explosion of use cases where people are using open source models to do bad things - I think that's a very likely outcome. And so I'm hopeful that the open source companies working on this will figure out ways to put the right guardrails in place.

It's a tricky trade-off, and I think there's a lot of absolutism in the conversations about open vs. closed source. My perspective, after spending such a long time in open source, is that there's a very wide spectrum of what it means to be open. In many ways, our platform is open because we make it accessible to developers through our API, and we're constantly driving down costs and trying to provide a better service to people. That's our flavor of openness in many ways - being open with the technology that we're providing.

Have you been surprised at how quickly OpenAI has captured the attention - positive or negative - of wider society since ChatGPT launched? Governments, multinationals, unions - it feels like there are a lot of eyeballs on you right now.

It's really important that this happened because, as Sam has said a few times, nobody was taking the conversation about AGI seriously 3 years ago. Today, people are taking that conversation very seriously, and you've definitely seen this from regulators in Europe and the US. So I think that's a positive thing - just making sure the world is ready for this technology when it arrives. On the flip side, there's naturally more scrutiny from everybody, including developers who expect the world from us. Realistically, we're not always perfect, and we don't always release exactly what people are looking for. So it's tough.

We have so many stakeholders, and I get exposed to a lot of those conversations because I do a decent amount of public-facing work, like talking to and working with developers - so it's natural to be brought into other conversations. Really, there's such a thirst for knowledge about this technology and what people should be thinking about that I actually enjoy it. Outside of the developer persona, I spend a lot of time talking to businesses about how they should be thinking about AI. And it's super important to me that we continue this work.

We were talking off-camera about how I don’t live in San Francisco, and part of what I'm really excited about is making sure that people are building with this technology in places that aren't just there. Again, our mission is benefiting humanity, and humanity is not just in one city. If we over-index on certain areas, we’ll end up unable to fulfill our mission.

So again, the real challenge with all of this is that there's an infinite amount of demand for the services we're providing - for the time, for the knowledge - and we're comparatively not a very large company. I think Google has something like 200,000 employees, and we have like 800. So it's just really difficult to service all the demand and answer all of the questions that people have.


QuickFire Round

What’s your favorite AI tool, aside from ChatGPT?

Great question. I definitely use ChatGPT a lot. I've been trying a lot of the browsers, like Arc. I'm a Google customer, so I've used a lot of the Google stuff before. That said, I do think ChatGPT is definitely the daily driver for me just because I'm always trying to build empathy with what people are doing.

Aside from that, Superhuman is what I just started using for email, and it's been super helpful since I'm always swamped with emails and can never catch up. I need something like it for my Twitter DMs too, because there are so many good conversations there, and at that volume it's basically impossible to keep up. So, Superhuman team - please build Superhuman for my DMs as well!

What are the top AI developments you’re most excited for in 2024?

Firstly, I think ChatGPT is going to become incredibly more useful than it is today, and the number of use-cases is going to expand. I'm super excited to watch that happen, just as a consumer and as somebody who uses ChatGPT every day. I want it to do more, and I'm excited that our team is going to build more inside of it.

I also think multi-modality is going to be huge, as well as this vision of agents that resonates with so many people. I'm super excited to see us do that with GPTs - and once you get some of those workflows going, I think it's going to be a game changer.

And finally - describe the OpenAI culture in 3 words.

Extremely high agency.


Conclusion

That’s a wrap for our sixth Deep Dive of 2024! Follow Logan on X and LinkedIn to keep up with his work at OpenAI.


If you would like us to ‘Deep Dive’ a founder, team or product launch, please send us an email at newsletter@cerebralvalley.ai or DM us on Twitter or LinkedIn.
