Davos, Switzerland
January 21, 2020
Zanny Minton Beddoes, Editor-in-Chief of The Economist: I tried to find things you have in common, and I think it is a love of history. One of you is obviously a professional historian. Mr. Ren, I would say that you perhaps are an excellent amateur historian; you have focused a lot on the lessons of history. So I think you're both extremely well equipped to tell us what this future is going to look like. And we're going to shape the next half hour by trying to answer three broad questions.
One is what is at stake? How much does it matter to humanity, to the world that we have this tech arms race? Is it a question simply of market dominance, or are there deeper questions about the future of market systems, the future of our democracies, the future of who has global dominance? What is at stake?
Secondly, what are the consequences of the tech arms race? What happens? Do we split into a two-ecosystem world? And what does that mean?
And thirdly, what do we do to avoid the worst outcomes? That's a Davosian attempt to end on an upbeat note. So I'd like you to tell us exactly how we make sure we get the best outcomes.
So I'm going to start, Professor Harari, with you, to frame things: what is at stake? And I want to start with a quote from one of your books. You wrote that humans will change more in the next hundred years than in all their existence before, and that AI and biotech could undermine the idea of individual freedom, making free markets and liberal democracy obsolete. Democracy in its current form, it went on to say, cannot survive the merger of biotech and infotech. So would it be fair to say that you think a huge amount is at stake here, and why?
Yuval Noah Harari: Yeah, very much so. On one level, the more shallow level, it would be a repeat of the 19th-century Industrial Revolution, when the leaders in industry basically had the power to dominate the entire world economically and politically. That can happen again with the AI revolution and biotech revolution of the 21st century. I understand the current arms race as an imperial arms race, which may lead very soon to the creation of data colonies. You don't need to send in the soldiers if you have all the data for a particular country. But from a much broader and deeper perspective, I think it really is going to shape the future of humanity and the future of life itself, because with the new technologies we are soon giving some corporations and governments the ability to hack human beings.
There is a lot of talk about hacking computers, smartphones, emails, bank accounts, but the really big thing is hacking human beings. To hack human beings, you need a lot of biological knowledge, a lot of computing power, and especially a lot of data. If you have enough data about me and enough computing power and biological knowledge, you can hack my body, my brain, my life. You can reach a point where you know me better than I know myself. And once you reach that point, and we are very close to that point, then democracy, the free market as we… actually, all political systems, even authoritarian regimes: we have no idea what happens once you pass that point.
Zanny Minton Beddoes: Do you think that China, which in many ways is further ahead on this in terms of being a surveillance state, is a harbinger of where things are going?
Yuval Noah Harari: I think that at present, we see competition between state surveillance in China and surveillance capitalism in the US. So it's not like the US is free from surveillance. There are also very sophisticated mechanisms of surveillance there. I think in the competition at present, there is no serious third player in this arms race. And the outcome of the arms race is really going to shape how everybody on the planet is going to live in twenty to fifty years: humans, other animals, new kinds of entities.
Zanny Minton Beddoes: So Mr. Ren, you heard that. Do you share Professor Harari's assessment of the stakes, that the very future of humanity and political systems is at stake?
Ren: I've read Professor Harari's Homo Deus: A Brief History of Tomorrow and 21 Lessons for the 21st Century. I agree with many of his views on the rules that govern human society, the conflict between technology and future social structures, and changing ideologies.
First, we must understand that technology is good. Technological development is not bad; it's good. Humanity has a long history of development. For thousands of years, technological advancement was very slow, very much in sync with biological evolution, so people didn't panic. When textile machines, steamships, and trains appeared, people had some fears. However, as industrial society progressed, these fears disappeared.
After we entered the information society, the intervals between technology booms became even shorter. Now, we have made great breakthroughs in electronic technologies. Although Moore's law still constrains the development of electronics, we are sure that we will be able to scale chips down to two or three nanometers.
Second, due to great improvements in computing power, information technologies are like seeds spreading everywhere. Breakthroughs in biotech, physics, chemistry, neurology, and mathematics, as well as interdisciplinary and cross-domain innovations, have built significant momentum for humanity's advancement. When that momentum hits its tipping point, it will lead to an explosion of intelligence. This great technological explosion may scare people. Is such an explosion good or bad? To my mind, it's good.
I think humans have always been able to use new technology to benefit society, rather than to destroy it. That's because most people aspire to a good life, rather than a miserable one.
Just after I was born, the atomic bomb was dropped on Hiroshima. When I was around seven or eight, people's biggest fear was the atomic bomb; people around the world were afraid of it. However, when we take a long-term view of history, we realize that atomic technology can be used to generate power for the benefit of society. Its applications in radiation therapy and other fields have also benefited mankind. Because of this, there's no need to panic about AI today. While atomic bombs may hurt people, the development of AI today can't cause anywhere near as much harm.
Of course, our company is just studying weak AI, which is limited to closed systems, clear rules, and complete sets of information. It still requires certain conditions and the support of data to drive industrial, agricultural, scientific, and medical advancements. That means its application has boundaries. There are boundaries in many applications, including autonomous driving, mining, and pharmaceutical technologies. With the improvement of AI within these boundaries, huge wealth will be created.
Some say, "Many people would lose their jobs in the process of wealth creation." This is a social problem, but creating more wealth is better than creating less. In today's society, even the poor have greater absolute wealth than they did a few decades ago. The widening gap between the rich and the poor doesn't mean the poor are sliding into more severe absolute poverty. Resolving the conflicts caused by the widening wealth gap is a social issue, not a technological one. How to fairly distribute wealth is a matter of policy and law; it's a challenge for social governance.
Zanny Minton Beddoes: Thank you. You raised a huge number of really interesting issues. I want to focus on two of them and ask Professor Harari to respond. One is the comparison between the atom bomb and atomic energy more broadly. Is that an appropriate analogy? I think it is a very interesting one in the context of this discussion about the technology arms race. I'm sure everybody in this room, Mr. Ren, would agree that there are huge benefits to be had from technology; I'm sure Professor Harari would agree with that too. But, and I'm back to asking you again, Professor Harari, do you think there is something fundamentally different about the nature of AI and biotech, which means that they are significantly more dangerous than previous technological breakthroughs?
Yuval Noah Harari: Yeah, I mean, the comparison with the atom bomb is important. It teaches us that when humanity recognizes a common threat, then it can unite, even in the midst of a Cold War, to lay down rules and prevent the worst, which is what happened in the Cold War.
The problem with AI compared with atomic weapons is that the danger is not so obvious, and at least some actors see an enormous benefit in using it. With the atom bomb, the great thing was that everybody knew that if you use it, it's the end of the world. You can't win an all-out nuclear war. But many people think, and I think with some good reason, that you can win an AI arms race. And that's very dangerous, because the temptation to win the race and dominate the world is much bigger.
Zanny Minton Beddoes: I'm gonna really put you on the spot there. Do you think that is a mindset more in Washington or in Beijing?
Yuval Noah Harari: I would say Beijing and San Francisco. Washington… they don't fully understand the implications of what is happening. I think at present that the race is really between Beijing and San Francisco, but San Francisco is getting closer to Washington because they need the backing of the government on this. So it's not completely separate. So that was the one question, what was the other?
Zanny Minton Beddoes: The second question was about AI. You've answered it broadly, and I actually want to go back to Mr. Ren to respond, because you're clearly… the target of much American concern… Given what we've just been talking about, do you understand why the Americans are so concerned? Is it reasonable to be concerned that China, an authoritarian regime, should be at the cutting edge of technologies that can, as Professor Harari said, possibly shape future societies and individual freedom? Is it a reasonable concern for them to have?
Ren: Professor Harari said the US government doesn't really understand AI. I think the Chinese government might not understand it either. If the two countries really want to develop AI, they should invest more in basic education and basic research. China's education is still stuck in the industrial era, with an education system focused on cultivating engineers. Therefore, it is impossible for AI to grow quickly in China. Developing AI takes a lot of mathematicians, physicists, biologists, chemists, and so on, as well as a great deal of supercomputing, super connections, and super storage. China is just a toddler in these areas. So I think the US is worrying a bit too much. It has gotten used to being the reigning champion, and it thinks it should be the best at everything. If someone else does well at something, it might feel uncomfortable. However, what the US thinks will not change global trends.
I think eventually humanity should make good use of AI and learn how to use it to benefit us all. As Mr. Harari said, rules should be developed to regulate what we can and cannot research, so that we can control how it develops. There are also ethical problems with these technologies. In my opinion, Mr. Harari's idea of electronics infiltrating our minds will not come true in the next 20 to 30 years, or even after that. However, AI will first transform production, improve productivity, and create more wealth. As long as there is more wealth, the government can distribute it to ease social conflicts.
In my recent article in The Economist, I had included the sentence, "What would happen if semiconductors were integrated with genetics?" But the editors took it out because it would have sparked a debate. When they told me it had been deleted, I immediately agreed, because I know it is a complicated issue.
Zanny Minton Beddoes: Let me follow up there by asking: the US may not understand, and in your view may overrate what it sees as the threat from China. But what are the consequences of this current tech arms race? And what are the consequences of the US's blacklisting of Huawei? Are we seeing the world split into two tech ecosystems? Is that what's going to happen?
Ren: Huawei, as a company, used to be a fan of the US. An important reason for Huawei's success today is that we learned most of our management practices from US companies. Since Huawei was founded, we have hired dozens of US consulting firms to teach us how to manage the company. Now our entire management system is very similar to those of US companies. So the US should be proud, as US companies have contributed to our development. We are a model of how successfully the US can export its management practices.
Therefore, from this perspective, I don't think the US needs to worry too much about Huawei's position and growth in the world. Being placed on the US Entity List last year didn't have much impact on us. We have basically been able to withstand the attacks, as we started making preparations over 10 years ago. This year, the US may step up its attacks on us. We will be affected, but not significantly. More than a decade ago, Huawei was a very poor company. Twenty years ago, I didn't have my own house; I rented a small apartment of only about 30 square meters. Where was our money? All of it was invested in Huawei's research and development. If we had felt safe relying on the US, we wouldn't have made our plan B. But we didn't feel that way. That is why we spent hundreds of billions of yuan making preparations. As a result, we withstood the first round of US attacks last year. As for the second round of attacks this year, with the experience we gained and the lessons we learned last year, we are confident that we will be able to withstand them too.
Will the world be split into two tech ecosystems? I don't think so. Because science is about truth, and there is only one truth. When any scientist discovers the truth, it will be spread to the whole world. The basic theories of science and technology are unified across the world, whereas there can be a diversity of technological inventions, representing different applications of science. For example, there are various models of automobiles competing with each other, and this competition is conducive to social progress. So it's not that society must promote only one set of technical standards. Will the world be divided? No, as the foundation of science and technology is unified.
Zanny Minton Beddoes: Professor Harari, what's your take on that? I want to quote back to you something you wrote, in The Economist in fact: "An AI arms race or a biotech arms race almost guarantees the worst outcome. The loser will be humanity itself."
Yuval Noah Harari: Yes, because once you're in an arms race situation, there are technological developments and experiments that are dangerous, and everybody may recognize that they are dangerous and that you don't want to go in that direction, at least not now. But the thinking is: well, we don't want to do it, we're the good guys, but we can't trust our rivals not to do it. The Americans must be doing it. The Chinese must be doing it. We can't stay behind, so we have to do it. That's the arms race logic.
And a very, very clear example is autonomous weapon systems, which is a real arms race. You don't need to be a genius to realize this is a very dangerous development, but everybody is saying the same thing: we can't stay behind. And this is likely to spread to more and more areas. Now, I agree that we are unlikely to see computers and humans merge into cyborgs in the next twenty or thirty years.
There are many areas in which we will see AI develop over the next two decades. But the most important point to focus on is what I mentioned as hacking human beings: the point when you have enough data on people and enough computing power to know them better than they know themselves.
Now I would like to hear your thoughts, and those of the people in the hall. I'm not a technologist, but for the people who really understand: are we close to, or at, the point when Huawei or Facebook or the government or whoever can systematically hack millions of people, meaning they know them better than they know themselves? They would know more about me than I know about myself: about my medical condition, about my mental weaknesses, about my life history. Once you reach that point, the implication is that they can predict and manipulate my decisions better than me. Not perfectly; it's impossible to predict anything perfectly. They just have to do it better than me.
Zanny Minton Beddoes: Shall we ask Mr. Ren, do you think Huawei is at that stage yet? Do you know people better than they know themselves?
Ren: We are not sure whether the science and technology Mr. Harari is imagining will become a reality or not, but I will not dismiss his imagination. As an enterprise, we must have a deeper understanding of our customers and their data and information. For example, is it possible for mining to rely solely on AI, without any manual labor? I think it's possible. Remote mining from several thousand kilometers away has become a reality. If a mine is located in a frozen or high-altitude region, AI will prove its worth there. In the future, top mines, like those in Brazil, may adopt this remote mining model. However, this requires us to have an in-depth understanding of mines. To better understand mines, tech experts need to work with mining experts. Similarly, telemedicine is only possible when doctors and electronic devices are integrated. Therefore, this understanding of humanity is a gradual process.
Mr. Harari said that embedding electronic devices in humans will make us gods. I don't think we have to worry about that, because we humans may die at 80 and our souls cannot just continue. That's why I don't think humans will ever become gods.
Zanny Minton Beddoes: What about the other subject Professor Harari raised, autonomous weapons? Because that does seem to be an area where we are already there; military systems have them. What is your view of that? Do you think they are as dangerous as Professor Harari says? And how do you stop the logic of mutually assured destruction from autonomous weapons?
Ren: I don't know much about military affairs, nor am I a military expert. But if everyone can create weapons, weapons will no longer be weapons, but will be just like sticks.
Audience: I just want to ask Professor Harari: why do you think there's an AI arms race between China and the US? At least from what one sees, the applications in China are all for civilian use, and there seems to be no intention of really competing. Is there an arms race?
Yuval Noah Harari: Well, by arms race, I don't necessarily mean developing weapons. Today, to conquer a country, you don't necessarily need weapons.
Audience: What I meant was, what's the difference between the usual commercial competition versus what's state, you know, the state …?
Yuval Noah Harari: There is no clear border there. That happened in the 19th century and earlier with European imperialism: there is no border between commercial imperialism and military or political imperialism. Now with data, we see a new phenomenon of data colonialism to control a country, let's say in Africa, South America, or the Middle East. Just imagine the situation 20 years from now, when somebody, maybe in Beijing, maybe in Washington or San Francisco, knows the entire personal medical and sexual history of every politician, judge, and journalist in Brazil or in Egypt. It's not weapons. It's not soldiers. It's not tanks. It's just the entire personal information of the next candidate for the Supreme Court of the US, or of somebody who is running for president of Brazil. They know their mental weaknesses. They know something they did in college, when they were 20. They know all that. Is it still an independent country, or is it a data colony? So that's the arms race…
Audience: I'm a Global Shaper from the young community of the World Economic Forum, so my question is for both of you. Worldwide, governments and big companies are so powerful that they are able to shape the lives of consumers. What power is actually left to normal people? I'm a technician, so I have my own opinion about information security. But what power is left to normal customers?
Ren: As technical exchanges become easier, humans will understand things better and become increasingly smarter. Actually, this is already happening. For example, we may not understand the textbooks of today's elementary school students. Why do they learn these things? Courses we used to take at university are now taken in middle school. This shows the progress we have made in the information age. However, we still need to master new knowledge. Different people have varying degrees of knowledge, and may therefore have different jobs. People will still have the initiative, rather than being enslaved.
Zanny Minton Beddoes: So you would say that technology is giving individual people more agency and more power.
Ren: Yes.
Yuval Noah Harari: I think that technology can work both ways, both to limit and to enhance individual abilities and agency. And what individuals, especially technicians and engineers, can do is design a different kind of technology. For instance, a lot of effort now goes into building surveillance tools that surveil individuals in the service of corporations and governments. But some of us can decide to build the opposite kind of technology; the technology is neutral on this. You can design a tool that surveils the government and big corporations in the service of individuals. If they like surveillance so much, they shouldn't mind the citizens surveilling them. For instance, if you're an engineer, build an AI tool that surveils government corruption. Or, just as you build an anti-virus for the computer, you can build an anti-virus for the mind that alerts you when somebody is trying to hack you or manipulate you. So that's up to you.
Zanny Minton Beddoes: We've run out of time, I'm afraid. But that is an appropriately upbeat note to end on: create tools that can empower the individual. Thank you both very much for a fascinating discussion.