William Lehr, Economist and Research Associate in the Computer Science and AI Laboratory at Massachusetts Institute of Technology
We’re at the foothills of smart manufacturing. MIT’s Bill Lehr scans the horizon for signs of progress.
LEHR: I don't think the economic impact and potential of 5.5G is sufficiently understood. 5.5G is part of a trajectory – it's a vision. Robert Browning wrote that a man's reach should exceed his grasp. The technology of this whole industry – wireless, digital transformation – is a bit like a shark: if it doesn't keep moving, it dies. But part of the issue is that the value case for the specific applications along that evolutionary path is a bet on the future. It's an incremental thing that's happening in a bunch of different steps – a vision of where we're going. And because different people disagree about what those incremental steps are, it still isn't adequately understood. There are also a lot of problems confronting the industry.
Q: Such as what?
A: Well, to deliver the kind of capabilities that 5.5G aspires to deliver, advances are needed in a bunch of areas. You need a next generation network, and a lot of the policy folk seem to think all that means is broadband and fiber connectivity. But no. If you want the capabilities to do the things that are really interesting – virtual reality, autonomous vehicles, extended reality, digital twins – then you need computing resources. Those resources have to be integrated with the connectivity. And the kind of connectivity you need is very different in a factory setting where you have little widgets talking to other widgets that may eventually be talking to a human in some more complicated way.
Everybody in a digital economy needs to be digitally enabled, and all business functions and processes will be impacted. AI is really about the capability to deliver smart applications – and to let the great IT that's already out there, much of which is not AI, be configured and introduced into markets and contexts where smart digital applications were previously not economically viable because the cost of adoption was too high. AI can help solve those problems. And if this stuff becomes infrastructure, as it will need to in a digital economy, then it'll become the thing you're not aware of until it doesn't work. And when it doesn't work, you'll be angry.
Q: So are you a hope or a hype man when it comes to AI?
A: I'm both. The GDPR in Europe was basically addressing how we deal with privacy and basic human values in a digital world. That's a hard enough problem. But AI is about everything. And I think the hype added too much heat to the fire.
There are two stories I love from childhood. One is Chicken Little. The other is the Boy Who Cried Wolf. Chicken Little runs around saying, "The sky's falling, the sky's falling." It doesn't help when Sam Altman and Elon Musk say the sky is falling, since they're the very people the rest of us were counting on to lead AI development. Now they're saying AI poses an existential risk to the planet.
I do not dispute that the potential for creating a super-intelligence is real, and that it could pose an existential risk in the longer term. But we should be so lucky as to live that long. Our problems are much more immediate – we're probably going to die from climate change before super-intelligence kills us. I don't believe AI alone is enough to save us from climate change or the other geopolitical threats that challenge us today and in the immediate future, but I believe AI and digital technologies have to be part of the solution...
And, despite Chicken Little, the sky's not falling. The boy who cried wolf assumed that whenever he sounded the alarm, everyone would come running. But push the alarm button too many times, and nobody comes – folks may decide there are no risks and no need for policy. I believe we do need AI policy, and as a baseline, let's start with the idea that nothing our technology adds to the world should be able to do things that would get a human in trouble if a human did them. If a human screws around with the financial markets, we've got a whole infrastructure for dealing with that – a really complicated set of rules. The same is true of healthcare and criminal activity: problem domains with existing frameworks for addressing those problems. Those are the places to start. The problem with AI policy is that you're trying to regulate the future, and people are moving too fast and not listening enough to each other. We need to keep a dialogue going.
Q: Is this a high-wire act – are you worried we're about to tip off one side or the other?
A: Well, we've been here before. Consider the original net neutrality debates, which had folks arguing over whether every bit [of data] should be treated the same – an idiotic misconception of what network management is all about. Obviously we do not want every bit treated the same: non-neutral traffic management that blocks bad bits (such as malware) while supporting good bits is exactly what you want.
Those debates took a decade to mostly resolve themselves. With AI, I think the situation is more like the challenge of supervising a kindergarten: hopefully there aren't any toys out there that are really going to hurt anybody, and there isn't any kid who bullies the others, monopolizes the toys, or whacks some other kid in the head. When the kids need to cross the street or go to the playground, the supervisor needs to get them there safely. That's what I think these 5.5G standards efforts are trying to do – maintain an informed dialogue and establish guardrails and consensus about how things ought to work, so that the new toys can be used safely. But how you do that is a challenge, and it's going to require more coordination.
Q: So is there currently a very fragmented, rather than coordinated, approach to this?
A: There are a number of tipping points. People worry about all this automated digital stuff that's constantly observing you and automating things, and ask whether humans will lose control, and so on. To understand AI, look at the world before the First Industrial Revolution, when all anybody had was horses – no steam engines. Then look at the world after. Only then, 100 years later, can you really grasp the scale of the change and realize, "Oh, the world's totally different now."
Now, we’re in the early stages of a Fourth Industrial Revolution, and again, the change is not going to happen overnight, and it's not going to happen evenly. There are going to be winners and losers at every level, within and across industries. There’ll be fast-adopting sectors and slower sectors. And as you push the technology more into people's lives, you're repeating something that happened when we went from mainframes to PCs. That totally changed the way businesses operated, with computers on everybody's desk, which was wonderful in many ways. But the life cycle costs of managing that situation weren't necessarily better. You know, we didn't immediately get it completely right. And the same thing's going to be true of AI. So, we have to build a measurement ecosystem. You have the users, the applications, the economy, national and regional governments, international coordination. All those things are moving at different levels, and we need all of them to be talking, and to have measurements that work. A technical person may say: "This tech protocol is better because it has lower latency." But an economist goes: "Yeah, but at what cost?" And with all these things, you have to ask: What’s the trade-off?
Q: Will AI fundamentally transform smart manufacturing processes or simply make them more efficient?
A: It potentially has a really transformative effect, and that transformation has significant economic implications for how work is organized, what it means for jobs, and who benefits. The evidence says no job is safe from potentially being replaced by AI. But that hasn't happened yet, and preliminary evidence suggests it is unlikely to lead to mass unemployment. It's more likely to change everybody's jobs.
So, it won't get rid of lawyers, but lawyers who can't work with, or understand how to operate in, digital businesses are probably going to be out of jobs. You'll need to adapt. It really is a global world, and we've got too many people on the planet, with too many problems that we absolutely cannot solve unless we have AI. On climate change: if Africa follows the same trajectory of per-capita GDP growth and per-capita energy consumption that the developed world followed through the First and Second Industrial Revolutions, then we're all sunk. The only solution is a more renewable world, and that's going to take a lot of information technology. That issue is replicated across every domain.
With smart manufacturing, one of the questions is: does it allow more onshoring or offshoring of production? The answer is ambiguous. Adopting smart factory production technologies can help companies in the developing world get into new markets and scale up what they do. But it also means that if you have production in the US, where labor costs are higher, you can use smart production technology to better manage those labor costs and retain manufacturing onshore. AI makes labor and digital technology both closer substitutes and stronger complements: substitutes, because AI enables more flexible management of factor inputs; complements, because AI can augment the productive potential of other factor inputs.
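A minimal sketch of that duality, assuming a standard CES production function (the functional form and symbols are illustrative, not a model from the interview): output $Y$ combines labor $L$ and digital technology $D$ as

$$
Y = \Big[\alpha\,(A_L L)^{\rho} + (1-\alpha)\,(A_D D)^{\rho}\Big]^{1/\rho},
\qquad \sigma = \frac{1}{1-\rho},
$$

where $\sigma$ is the elasticity of substitution between the two inputs. AI that makes factor inputs easier to reconfigure effectively raises $\sigma$, pushing labor and digital technology toward being substitutes; AI that raises the labor-augmenting term $A_L$ means digital technology lifts labor's productivity, which is the complements channel.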
Q: And are we still at the foothills of smart manufacturing?
A: There's actually been a lot done already to augment manufacturing with digital technology, and much of it has not involved AI – although AI is now finding many more applications in manufacturing processes. For example, AI can enhance robotic process automation technologies and make them easier to deploy in new environments. AI can help train people who weren't appropriately skilled, and access to such training can be expanded to much wider audiences, bypassing the expense of flying trainees to the U.S. or wherever training was managed in pre-AI, pre-connected times.
Today, AI applications can help upskill local workers faster and at lower expense, which makes more markets attractive for deploying manufacturing processes. What's even more interesting is IoT and the ability of AI to take in a continuous flow of information in ways humans simply can't. We have five senses, and we don't really even understand how they work. Now we can use XR (Extended Reality) to augment them. We can look three blocks ahead, understand what the traffic looks like there – even though nobody can physically see it – and feed that back: as a result of what's happening three blocks away, you want to drive differently here, now. That's adding additional senses.
Now, expanding human capabilities can make a bully worse – you don't want to give the bully a baseball bat. But it can also help the other guy defend himself better. Ideally what AI offers are expanded and better choices: the kindergartners are happier, they're progressing faster, they're getting across the street safer. The challenge for regulation is managing the markets and keeping the kindergarten safe, but relying on markets doesn’t mean we don’t need regulation.
Q: It's about devising the necessary regulation.
A: Yes. One of the metaphors that comes up in discussing the European Union's AI Act is drawn from the Prohibition era in the United States. Banning whisky was championed by the Baptists, social do-gooders who argued that alcohol should be banned because it was harming people. But the gangsters were also in favor of banning alcohol: they saw it as a license to steal – which, of course, it was. And there's no evidence that Prohibition actually did anything to stop harmful drinking in the US. The point is not that alcohol should be unregulated, but that bad regulation can be worse than the problem it was supposed to address.
Q: So, be careful what you regulate?
A: Yeah. Not that you shouldn't do it – you just shouldn't do it badly. And there are aspects of what they've done in Europe around regulating AI that I would rather they hadn't done. There's been a rush to do something, not helped by people who should have known better and who have been irresponsible in the public policy debate. But if I were a betting man, I'd suggest you're not ultimately going to see anything that looks like the initial proposals. It'll hopefully be better.
And to detect what’s needed, we need to build the right apparatus. That means multidisciplinary capacity. We need to have economists and other social scientists talking more closely with engineers, working together, just as I’m trying to do at MIT.