
The AI risks we tend not to think about

John Higgins, Chair of Global Digital Foundation

Poorly designed economic regulation could throttle AI’s potential, says this expert.

What are the main risks we face with AI?
It's convenient and useful to divide the risks into business risk, societal risk, and then maybe individual risk. 

For the business risk, it's the risk of not taking advantage of the benefits that AI can deliver to you and your customers. So, if you just miss out, you don't gain a competitive edge.

But probably the bigger risk as a business is that, if you get it wrong and use it unwisely, you can do all sorts of damage to your reputation. Most business executives understand what they're doing and can make judgments about it. But when it comes to AI, they're not so well equipped to make those judgments.

The risk for the consumer includes believing things that just aren't true because they've been generated in some sort of AI world. For society as a whole, there's the risk of opinions forming into silos because they're based on AI-powered social media.

Which of those do you feel is the most immediate or the most worrying? 
It depends on which perspective you take. But I think the one that's less well thought about is the business risk. So yeah, we can all get captured by the things we see as individuals, as consumers, or think about from our children's perspective. When we read the papers or watch TV, we might also get caught up in the societal risk.

But I think the business and economic risks are the ones most of us don't think about. And yet, in some ways, they could have the biggest impact of all. If we regulate too quickly, we potentially lose out on some of the many benefits AI could deliver, whether in healthcare, tackling climate change, or some other area.

So where do you stand on that spectrum of “not too fast, not too slow”?
We should think in terms of regulating for societal protection, if you will. And businesses like a common regulatory environment; you don't want one set of rules here and another there.

Right now, in Europe, we take a policy approach based on the “precautionary principle,” meaning that, if it could do some harm, we'd better put a regulation in place. That differs from the Anglo-American approach, which is more, “Let the market have a go at it, and then we'll fill the gaps with regulation afterwards if we identify market failures.”

It would be great to have one approach. From a business perspective, the closer you could get to that, the better. 

In the world of regulatory approaches to AI, there is this convergence of opinion about what we need to do. We know we want AI to be safe and reliable; we know we want it to be secure. If it's appropriate, we know we want human intervention to be enabled at the right point. We know we want to use data that we've acquired legally, and that doesn't have loads of dodgy biases in it. 

So, I think we're beginning to get a common set of understandings about the sort of expectations we have for AI. This will help companies operate with the degree of business certainty that they want.

You sound quite positive about it, although there are clearly still various challenges to overcome.
I am. There's still a long way to go, but I am encouraged. It's a funny thing, but through the way people are working together, a set of concrete things is emerging that I think will serve our needs: the beginnings of a framework.

I'm not saying it will solve all our problems. But I am quite confident that a common set of safeguards, the guardrails people refer to, is beginning to take shape and solidify.

But there's no turning back now. You can't put the genie back in the bottle.
Absolutely. Nor would we want to, if you think about the fantastic advances that technology has enabled us to achieve. What we've got to do is deal with it as best we can. I think we're making good progress.

