Every day, the business potential of Generative AI is expanding. And in response, both new and existing technology companies are looking for ways to capitalize on that potential. At the 2023 G1 Global Conference, GLOBIS Capital Partners’ Emre Yuasa joined panelists Jerry Chi, Karim Lakhani, and Wakana Tanaka to discuss how firms are competing and cooperating in the burgeoning Generative AI industry.
Competition between Types of AI Models
Jerry Chi: This is basically what I think about and work on every day. We have a research and development team in Japan as well. We’re deciding which kind of models we want to build.
Do we want to build bigger text models? Bigger language models? A text model for just the advertising industry? There are so many different types of models you could build.
Also, the competitive landscape is changing quickly. But I think we will not just have a world where everyone is only using one model or three models. I think that’s too extreme.
Not all of these foundation model companies will survive in their current form. We don't need thousands of companies building foundation models, because the models are general enough, with enough knowledge built in, that most companies can build on top of existing foundation models instead of building their own.
I think you could have different foundation models that do certain things well. For example, you could fine-tune a foundation model for the marketing industry or the online learning industry.
I think a big problem right now is cost and speed. Some people ask, “Why don’t you just use GPT-4 for everything? You don’t need to think about other models.”
But when a lot of companies try that, they run into various problems. One, it is too slow. Two, it is not as good in Japanese as it is in English. Three, it is too expensive.
If you have an app that has millions of daily active users and each of your users is calling that AI every day, that could end up with you paying $1 million every day. You don’t want to do that.
You want to pick another model that doesn’t have to have all the knowledge in the world. It only has to have knowledge necessary for that specific use case, or some other basic knowledge.
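The scale Chi describes is easy to verify with back-of-the-envelope arithmetic. A minimal sketch in Python, using hypothetical usage figures and an illustrative per-token price (none of these numbers come from the panel):

```python
# Rough daily API cost estimate for a consumer app.
# All figures below are illustrative assumptions, not quoted prices.
daily_users = 2_000_000        # hypothetical daily active users
calls_per_user = 10            # hypothetical AI calls per user per day
tokens_per_call = 1_000        # hypothetical tokens consumed per call
price_per_1k_tokens = 0.05     # USD per 1,000 tokens, illustrative only

daily_cost = (daily_users * calls_per_user
              * tokens_per_call / 1_000
              * price_per_1k_tokens)
print(f"${daily_cost:,.0f} per day")  # at this scale, model choice dominates cost
```

At these assumed numbers the bill comes to $1,000,000 per day, which is why a smaller, narrower model can be the economical choice for a single use case.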
Another big need is for security and privacy. There was a survey in Japan that said 94 percent of users and enterprises are concerned about their data leaking out when they use a generative AI. This is a problem when the AI model you’re using is only available via an external black box API.
With open models, such as the models we create at Stability AI, you can download them to whatever location you want. You can put them in your own predefined environment within AWS or Google Cloud. You can put them on your own servers. You can put them anywhere you want.
That allows enterprises to meet their privacy and security requirements without their data leaking out somewhere else.
Open models also have the benefit that anybody can do research on them. Anybody can analyze the internals of the model and understand what is going on.
The Stable Diffusion image generation ecosystem is very strong because, by default, it's what people benchmark against and what people build further image generation research on.
Personalization of Large Language Models
Karim Lakhani: I think the world is not going to be dominated by just a few models. We see a real need for many models.
I imagine a world where each one of us will have our own personalized language model, depending upon our own preferences and tastes. The work we’re doing at Mozilla is trying to push forward on that.
We also saw something interesting in an experiment from a paper we recently published. What we found was that if many people use the same generative AI system for the same use case, the solutions tend to converge. And we actually want diversity.
Just for that matter, I think we’ll see companies wanting to adopt multiple large language models.
The other thing I was going to say, going back to your earlier comment: today the cost is high, right? Lots of data and lots of compute are needed.
But it reminds me of something you see if you study the history of technology: when things come out of the labs and start hitting commercial use, the technology is fragile, expensive, and hard to use. But over time, with consistent engineering effort, it gets cheaper, faster, and better.
And I think we are just months into this stuff getting out of the labs and into industrial settings.
And I expect compute costs to decline, and data needs as well, quite a bit as time goes on.
Types of Applications Currently Being Developed Using Generative AI Models
Wakana Tanaka: LinkedIn just announced two new features using generative AI last week. One is a LinkedIn coach for learning. You can ask [the AI], "I'm in a meeting with my CEO. He asked me to give him feedback. What do I say?" It can tell you what to say, and if you want to prepare, it can suggest courses to take.
You can also say, "I have a big negotiation coming up with my biggest partner. What do I do to get the most out of it?" And it can give you all this advice on how to prepare.
The other is called LinkedIn Recruiter 2024. It draws on data from LinkedIn's roughly 950 million members globally to consolidate talent information and give you the best possible view of talent availability.
So those are two features, but there are many others coming, and we've seen the data. If a job post mentions ChatGPT or generative AI, there is a 17% increase in applications. People are looking for jobs at companies that are already using AI because they see the future there. And so professionals are learning AI at an exponential speed.