

“Slow but spectacular”: A different slant on AI

Ken MacLeod will be a Guest of Honor at the Glasgow Worldcon in 2024.

An award-winning science fiction writer talks about what excites him—or not—about artificial intelligence.

Q: Are you excited by AI, or is it more “let's wait and see”?
A: A bit of both. To tell you the truth, the science fiction writers of the 90s really bought into the hype about AI. We bought into the hype about nanotechnology too, and we oversold the near-future possibilities of both. And that's really the way that science and technology advance. In real life, there is a very recognizable “hype cycle” for any new technology, where they promise the earth at the beginning, and lots of investors rush in, and you get funding, and so on. And then things eventually turn out somewhat less spectacular than originally hoped.

Q: I always thought it tended to be slower than people thought, but bigger than people thought. So, it will be massive, but just not as immediate?
A: That's a very good way of putting it. I had not expected ChatGPT, for instance. And the real advances have been slow but spectacular. To take a very simple example: Google Translate. I tried it once 10 years ago for a laugh. I had given a remote talk in Russia that was interpreted into Russian, and then I ran the Russian back through Google Translate. The result was very funny and very clunky. Nowadays, you go to a Chinese site, click on Google Translate, and you get almost seamless idiomatic English right in front of you in a second or two.

Q: So, is there going to be a Worldcon [The World Science Fiction Convention] in five or 10 years, or is AI going to be able to write the sci-fi that humans currently write? 
A: I don't think AI can yet write believable fiction. It can write believable lies. That is one of the AI hazards that nobody had expected. If you do a search on a certain company's search engine and use the little chatbot, you will find loads of very bad results very quickly. This is shocking because it's filling the internet with rubbish. 

Q: So, what do you make of Elon Musk’s call for a referee for AI development—the need for it to be contained, to have guardrails? 
A: It is really striking that the big AI developers keep telling us that AI could pose an existential threat to the human race, and yet they keep developing it. What's really going on here? I think we can only approach this with a degree of wariness.

Q: They would say you can't put the genie back in the bottle, and therefore we have to keep developing it. And it's not for us to do the guardrails. It's for regulators and governments. 
A: Yes, I think there needs to be regulation for sure, but again, you have to be wary of stepping on tender shoots there because the development of AI could be slowed down in open and liberal societies, and it could therefore be moved to less responsible actors. There's a balance to be struck, which probably requires a rather higher level of statesmanship than we currently enjoy. 

Q: Also, the problem, of course, is that most regulators and statesmen don't have the skills or the technological know-how to regulate it properly.
A: Indeed, they don't. And politicians often do a very bad job of even the simple thing of regulating the internet. And they come out with statements of terrifying ignorance—one British Member of Parliament was going around saying, “Why don't we just ban algorithms?” 

Q: How would you hope that AI does change society? 
A: What I want to see is AI replacing an awful lot of routine, repetitive, and stultifying work, and to a degree, it's already doing that. Though, as often with these things, it's not noticeable. You don't notice what isn't there. And what isn't there any more, since the advent of desktop computers, is typists and dispatch clerks. I was a dispatch clerk for a few years, back when I was still trying to be a scientist part-time. It was a good, steady job, but it was intensely boring, because it was basically typing the same invoices over and over again with different dates, and so on. Nobody has to do that now.

Q: So what is the role, do you think, of technology companies such as Huawei in trying to sell the benefits of technology? The world—and the media—seem pretty skeptical about it and tend to go for the fear element before the enthusiasm.
A: You can see so many things that it would be useful to have AI doing rather than people—or that simply aren't being done at all: monitoring the health of every tree in that park, keeping the lake clean, designing the structure behind us, and so on. The only thing—and the best thing—that companies that want to be responsible can do is be honest. I think trust is very hard to recover once lost.

