Lightcap focused on how AI technology can bring differentiated value to enterprises and enhance their customer experience and capacity to innovate. He also discussed OpenAI's close partnership with Microsoft and the latest progress on Sora. The following is the full conversation:

host: Brad, when you joined OpenAI as CFO in early 2018, OpenAI had virtually no products, let alone a business model. Can you tell us a little about how you helped transform the company from what you called at the time a "sleepy research organization" into a real business?

Brad Lightcap: I joined OpenAI in 2018, which feels like ancient history now. We weren't even into modern AI yet, meaning what we now call Transformer-based architectures, the foundation of the LLMs we use every day. That wasn't even on the radar. We were focused on a very different kind of AI called reinforcement learning, where you train agents to surpass human capabilities at a given task. We were trying to apply that to beating humans at video games. We actually had a project where an agent played against the best Dota players in the world, if anyone is familiar with Dota. We applied the same principles to robotics research to see if we could teach a robotic hand to manipulate a Rubik's Cube. So here I am in 2018, trying to figure out what kind of business we're going to be, and it looked like somewhere between a gaming company and a robotics company. But Reid Hoffman was actually very helpful in thinking this through early on, the real Reid, not the AI digital Reid. So we kept looking in that direction, but it still felt very early. Later, when the Transformer came out and LLMs started to play a role, it felt from the beginning like that was going to be something important.

host: OpenAI's mission is to build AGI, right? A system that may one day be even more intelligent than us humans.
What would you say to the many people, including potential clients, who are still pretty freaked out about this?

Brad Lightcap: Yeah, AGI is obviously an ambitious goal. I don't think anybody has a perfect definition of AGI. We joke sometimes that maybe we'll only know it when we see it, but we try to define it as a system that can perform most tasks. Give it some arbitrarily complex or difficult task and it can go about solving it: it can reason about what tools it needs, what data it needs, what follow-up questions to ask. Just as humans have this general reasoning ability, we think AI systems will have it at some point too. It's scary, right? It's a pretty bold thing to have a system out in the world with some level of agency and autonomy, one that can make decisions, use tools, access the internet, and write code. So OpenAI was founded as, I think, a very mission-driven company where safety and broad benefit are core to how we operate. We tried to come up with a structure that really reflects that, so we would never have to prioritize things we thought were unsafe over things we thought were safe. We did our best to understand how people were using the technology by deploying it at an early scale. Understanding how people use systems like GPT-4 helps us think about how the next system might be used, so we can improve our safety systems and technology. I think if we take this iterative approach and can have a global conversation with people about what they want this technology to do, both in terms of new capabilities and the areas where safety has to come first, we'll get there and be very happy with the results.

host: Sam said that today's AI models are laughably stupid compared to what's coming. You've said today's AI will look laughable in a year, and then AI will be used for more complex jobs. So how will AI get smarter, more capable, and better?
Brad Lightcap: Yes, we think the systems are going to get better, and saying today's system is the dumbest it will ever be means we think the next system is definitely going to be better. I think we need to understand what that means. But it's also really a commentary on the current systems, which, if you think about it, are not that capable, right? You can ask these systems questions through this oracle-like interface, they give you an answer, and that's about the extent of their capabilities. They can mimic a certain level of intelligence, even emotional intelligence if you ask them to be funny. I've heard people describe them as, in many ways, just the best magic tricks in the world. That's not quite the case in reality: GPT-4 actually has amazing applications in the enterprise, which has been very surprising to us. We can come back to that later if you want. But we do think the next system is going to have to cross a kind of chasm of usefulness and capability, to be able to go out and help you actually do things, to be truly helpful to you. To achieve the goal of AGI, we need to work towards that.

host: Would you say that in the future it will become more of an agent rather than just a chatbot, or will we use your technology in other forms?

Brad Lightcap: I feel like we tend not to have the vocabulary to describe these things until we have them. The vocabulary tends to evolve with the current technology. For example, an interesting question is whether there will still be what are called "prompt engineers" in, say, 2026 or 2027, people who tweak prompts to make them better. You wouldn't tweak a prompt for your friend. You might tweak a prompt for your child, but you and I don't have to sit here exchanging drafts of prompts to make each other understand. So this seems to be something specific to this era.
I think when we look at the next systems, the way we describe what they can do will be specific to their capabilities and their flaws.

host: ChatGPT became one of the fastest-growing consumer apps of all time right out of the gate, which is hard to top internally, right? Do you think the novelty is wearing off for some people? How do you continue to grow after such a viral start?

Brad Lightcap: Yeah, we look at this with a research-first attitude. If we believed GPT-4 was the best these systems would ever get, then I think we would sit here and say, okay, let's really go figure out how to maximize GPT-4 for scaled production. Instead, we're more concerned with learning how people use this technology. One of the exercises we tried in the early days of ChatGPT, in classic business-operations style, was to qualify all of the use cases and understand whether we had found a common use case we could improve the next version of the product around. But the people we talked to said, I don't have a use case. I use it to help me plan my kid's birthday party, the next second I use it to help me write code, and the next second I use it to help my elderly parents navigate their healthcare. You can see it's just the diversity of use cases, which makes it hard to know how to improve it other than by making the model smarter and better.

host: Okay. You've been really aggressively growing your enterprise business, and I know there are some very exciting applications on the enterprise side. You had a recent deal with Moderna, and I know you have partnerships with a few other companies as well. We can talk about more details on that.
But I also wanted to ask: when ChatGPT made its leap forward, there was a lot of early anticipation and promise about the potential of AI. I think it's also fair to say that, for all that promise, in the year and a half since ChatGPT came out we haven't seen it completely reshape our economy or most people's day-to-day work. What do you say to people who are skeptical that AI will have as big an impact as some have hyped?

Brad Lightcap: Well, a year and a half to reinvent the economy; it's a huge undertaking. I said before that we've been really blown away by enterprise adoption and the utility of these tools in the enterprise. Building on that, even if we went home today and stopped everything we're doing, as a student of technology I'm very confident we would still see a 10-to-20-year period of GPT-4 or similar technology diffusing through the economy. At some point we'll have a better system, we'll have to migrate everyone to it, and then start working out how it fits in. So we're very encouraged by that. What we look at is how these systems can make a difference in the enterprise: helping enterprises understand their customers better, build more personalized relationships with users, create new product experiences, and really do things they couldn't do otherwise. These are all examples of people using this technology. So our job, I tell our team, is basically to push the boundary between what's possible and impossible today and what might be possible tomorrow.

host: You've been in charge of partnerships with other companies at OpenAI. This is a very interesting part of your job because you're currently working very closely with the media industry. Recently, you've inked deals with companies like the Financial Times and IAC's Dotdash Meredith.
You're also in talks with dozens of other publications about integrating ChatGPT into their products and licensing content. How many deals can we expect? I know this is a bit of a joke, but how many publishers do you think will eventually be part of the ChatGPT ecosystem?

Brad Lightcap: Yeah, we think there are amazing applications for this technology in this space. If you boil it down to first principles, publishing is really the business of getting information to an audience and letting that audience engage with the topics they care about, in a way that satisfies what they want or should know. When you phrase it like that, almost in plain language, you realize these AI systems are almost tailor-made to enhance that experience. How do you help someone better understand a topic? How do you empower journalists to report on something? How do you give people more exposure to data and the ability to interact with it? We live in a data-driven world. That's the opportunity we see in publishing, and that's what we're excited about. We still have a long way to go, and I think we need to build these tools and start working with the industry to get there. But we're committed to doing this work, and that's really the foundation of our collaborative effort: being able to bring information into the ChatGPT experience as a source of truth. People have a somewhat misguided idea of what these models can and should be used for. They're not actually databases. If you use them as a database to store information, they're not 100 percent accurate, and they're very expensive; we've already built much better database technology. These models are not meant for recalling facts. They're meant for reasoning about new information.
So you can feed them more information, and that's what we think about when we work with publishers: bringing more information into people's field of view that we think is useful to them. Ultimately, we think that's good for the world.

host: You also recently went to Hollywood to talk to producers, directors, and creatives about Sora, your text-to-video generator. How did those talks go? I remember we covered it with a headline like "AI Goes to Hollywood." Tell me what that was like.

Brad Lightcap: Yeah, very positive. We announced Sora, our text-to-video model, a few months ago, but we didn't release it. What's interesting is that this is actually an example of how we think about iterative deployment: putting something out in the world so people start to appreciate what it is and talk about how it might be used, which also gives us an opportunity to get feedback on how to improve it and to communicate with the people it will really affect. The feedback here has been really helpful. We've learned a lot of things we didn't know about how video content is made, from creatives in the industry, and now we can go back and think about how to incorporate those things into the model. It turns out that a lot of creative people, especially at the high end of filmmaking, really care about things like whether the camera angle is 5% higher or lower than it was supposed to be, and wanting to reshoot or tweak a shot a little, and that's an interesting research question. So having that conversation, I think, is critical. Our hope is that if you can get the cost of production down, whether for a full movie, which we're not close to, or even a portion of a movie, one thing that's been consistent in the feedback is that you'll see more being made. There's a lot of material that's been put on hold because it's too expensive.
So if you can create some deflationary effect in the industry, more will get made: epics, westerns, and so on, which you don't see very often now. We think that's the promise of the technology.

host: So do you think we're months or years away from this being a real product you can take out and sell to Hollywood or anyone else?

Brad Lightcap: What we have now is what we have. We think we can make it better. So we're still going to treat it as a research problem, but it's going to be collaborative, and we want creative input in the process.

host: Speaking of Sora, as you may have seen, there has been a lot of discussion about the training data used to train the model. Can you clarify once and for all whether Sora was trained on YouTube data?

Brad Lightcap: Yeah, I mean, the discussion around data is really important. We obviously need to know where the data is coming from. We actually published a post this week on this topic, basically about the need for AI to have something like a content ID system that lets creators understand where their content goes as it's created and who's training on it, and to opt in or out of training and of having their content used. And then, on the other side, being able to actively allow your content to be put into the model or accessed by the model, because there could be other economic opportunities on that end. That's what we're exploring: how to have a completely different social contract with the network, with creators and publishers. To the degree that value is created when these models go out into the world and perform useful tasks by referencing and integrating content from the network, there should be ways for people to benefit from that. We're thinking about this. It's really hard, and we don't have all the answers yet. If you have any ideas, we'd like to hear them. But it's a big question. So no answer on YouTube yet.
host: The last partnership I want to talk about is probably the most important one: the partnership with Microsoft, your investment partner. How do you navigate that relationship with a partner that might sometimes be viewed as a competitor, one that is also selling enterprise products similar to yours?

Brad Lightcap: Yes, they're an amazing partner. I don't think there are many companies in the world that could work at the scale we do with Microsoft to build these systems and still tolerate us. We move fast and we're demanding, and they see it as an opportunity to get better at building systems and to understand AI better. We think the market is huge. So the way we look at it is: we're a small company. We create technology, and we have a view on how and where we want to create and deploy it. They're a company with their own set of products and their own set of customers. Ultimately, as you mentioned earlier, if we're going to have any real economic impact, I don't think it will come from OpenAI alone, going out as an independent company and pressing buttons. Working with our partners is very important to us. That's where I'm focused from a go-to-market perspective, and we see them as absolutely critical in that regard, along with, of course, all the other partners we work with. So that's been very positive, and I think there's a lot of work to do in the future.