

“This is gonna change everything”: A very short history of generative AI

Jerry Kaplan, Ph.D., is a serial entrepreneur and author of the new book, Generative Artificial Intelligence: What Everyone Needs to Know.

Silicon Valley inventor Jerry Kaplan talks with Transform’s Gavin Allen about the rise of generative AI.

Gavin: Can I start with just the most basic of questions: what is generative AI?
Jerry: Most people are familiar with artificial intelligence in the applications of things like facial recognition, self-driving cars, or language translation. 

But some recent technical advances have produced a whole new class of systems that are far more capable than these earlier ones.

You take one of these systems and feed in essentially everything that's ever been written, trillions of words, and the system trains itself and learns the connections between those words. It becomes, in a way, generally intelligent. You can talk to these systems in ways that seem utterly, astonishingly natural in the way that they respond; and the breadth of knowledge they have is encyclopedic.

Gavin: But it’s not just words, it’s images as well. 
Jerry: Right, there's also another form of generative AI that is visual. There are new systems where you say, please paint me a picture in the style of Degas, of two children playing on a swing, on the moon. These systems will generate all kinds of beautiful images of that sort. 

We're going to encounter AI in a very different way in the future, in a much more natural way, in a way that makes it possible for anybody to get all kinds of information and have things explained to them in their own language. That's gonna change the way we work and live.

Gavin: What has surprised you about generative AI? 
Jerry: One of the most surprising things is that these systems not only know a lot but are, for lack of a better word, creative. They are also predictive, in the sense that they can fill in the blanks at a very deep level. They take in what you say. Internally, it is translated into a representation of the meaning of what you say, in the context of all of the knowledge of humanity.

Then the AI figures out what to say next, and what's appropriate to say next as a response. The results are astonishing. These are polymaths, experts in almost every subject. They can give you advice, draft documents, write poetry. Just say, write me a poem for my birthday, and it will go ahead and do that.

Gavin: So you don’t think AI has been over-hyped? 
Jerry: No. This is just like the internet. We went through the same thing: a period when everybody was throwing money at it. But "hype" isn't an accurate description of what's going on, because this is going to change everything. 

It is quite possible that generative AI will prove to be the single most important invention in human history. These systems will discover new drugs and help us address major problems like climate change. They will provide advice of every conceivable nature. In the future, when you want the most objective, reliable, accurate information, you're not going to go to a human being; you're going to ask a machine.

Gavin: So why do you think so many people, including the media, default to a sort of anxiety? Why are we not galvanized by AI’s possibilities? 
Jerry: Most people don't realize that this is just the tip of the iceberg. They’re thinking that it's like teaching a bear to ride a bicycle. But these systems will be managing our institutions and our organizations. They'll create business plans for you. 

You can't just look at what's available today. This is the leading wave of an incredible sequence of improvements that are going to take place over the next five to 10 years. 

But when you have a new tool of this power, it is scary because you don't know how it's really going to affect things. Obviously, it has tremendous potential to make our lives better: to eliminate poverty, to increase our standard of living, to improve our communications, to streamline all kinds of business processes. But it also has a number of negative effects, including an ability to generate disinformation. 

Still, I think this technology is going to be very important. When you query these systems, you're not asking a thing; you're asking a question of the accumulated knowledge of mankind. It's a new kind of tool for querying all the knowledge and information that's out there. Even though it looks like it's talking to you, that's just the interface. 

It’s a tool that can use tools. It's an invention that can invent.

Gavin: Do you have ethical concerns about misinformation or other issues?
Jerry: There's a whole raft of problems that are going to be made worse by this technology, and we're going to have to find the best ways to mitigate those risks while not cutting out the opportunities that we also need. 

A new problem that I think is going to be very serious: when you have children, or adults for that matter, who have been brought up being tutored by these infinitely patient, infinitely attentive systems, you're naturally going to have an instinctual desire to have some kind of relationship with them, to trust them, to get emotional support from them.

I imagine people coming home from work who are lonely or old, people who don't have enough interaction with other human beings. You'll come home at the end of the day, sit down, and tell a machine all about your problems. Instead of getting the kind of comfort you want through direct human interaction, you're getting it from a machine. That may disconnect people: rather than meeting your emotional need for comfort and friendship through other human beings, you're going to meet it through a machine.

And I think that's gonna be a very big issue.

Gavin: One of the other big issues that’s been talked about a lot recently is governance and regulation. What's your view on there being an “AI referee”?
Jerry: The problem is, we don't know what the game is yet or what the rules are. 

But when you get to be an old guy like me, you've seen this movie before.

Now, I'm not that old, but the automobile is an interesting example. Streets used to be like public parks, and there were horse-drawn carriages. The big problem in places like New York was that there was so much horse dung, to put it politely, that it was unsanitary. Also, there were accidents.

In response, we modified the infrastructure. We got lane markings, streetlights and stop signs. In the end, we banned horses and people from walking in the street the way they used to. That was a transition in which there was a great deal of conflict and some violence, believe it or not. That's just one example.

Gavin: Since we're still not clear on who the referee is and what the rules of the game are, it's inevitable that the big technological companies will drive the rules for now. Is that a concern? 
Jerry: Obviously, we have to look to the companies because they're the ones who understand the technology. 

But this took all those companies by surprise. You might think they'd been planning this and working on it for years, that it was all by design. No. There were a couple of technical advances, and when those advances were combined and the systems were scaled up in computing power, they suddenly started to make sense. 

The companies are still coming to terms with what that means. In my view, it raises some fascinating philosophical questions like, ‘What is intelligence?’ ‘Could a system like this ever be conscious?’ I think over the next 30, 40, 50 years, we're going to live in a very different world and have a very different view of what a computer is and what it can do.

Gavin: What do you make of this call for a pause in AI development to make sure we’re not going too fast? What if it inadvertently slows down progress on lifesaving drugs?
Jerry: That's a good question. I'll give you an unusually direct answer: It's a big mistake. There's no sense in pausing this. First of all, you can't do it. The rollout of new technologies happens at its own pace based on the value that people who are adopting them see. 

This call came from a group of people who were worried that AI was somehow going to come alive and take over or wipe out humanity. That's silly. There's no fundamental basis for it. 

That's not to say that we may not build dangerous tools, and we do need to have control over those tools. But the pause is not going to happen. 

Gavin: So, don't stop and don't panic.
Jerry: Exactly. This has happened over and over in the past. 

This is a major new wave. It's almost like the domestication of electricity. That's the scale of the change that we're talking about here. The world runs on electricity today. I think that the world will run on some future version of generative AI in the future.
