
We need to bring as many voices to the table as possible.

To make AI truly inclusive, don’t leave it to the experts (says this expert)

Frederic Werner, Chief of Strategy and Operations, AI for Good, International Telecommunication Union (ITU)

Frederic Werner is Chief of Strategy and Operations at the ITU. In an interview with Transform Editor-in-Chief Gavin Allen, he talks about the ITU’s AI for Good initiative, and what “inclusive AI” really means.

What is AI for Good? How did it first come about?
AI for Good was built on the premise that AI can advance many of the UN Sustainable Development Goals, from health care and climate change to education and gender equity, as well as higher-tech solutions like autonomous driving and smart cities.

We have to be mindful of the unintended consequences of AI. Top of mind is job loss due to automation, but there are others, like bias and unfairness. There are also issues of privacy, transparency, accountability, and the digital divide.

We must figure out how to scale solutions for global impact. We have virtually the entire UN system, 40 agencies, as partners of AI for Good. And it's organized by the ITU, where I work.

But even the experts would say that “AI is too important to leave to the experts.” So, we bring in industry, academia, civil society, member states, and different NGOs, artists, athletes, and creatives.

Our thinking is that we need to bring as many different voices to the table as possible, so that we can have a proper, inclusive dialogue on how AI might benefit humanity.

What are some of the positive outcomes that “AI for Good” has already achieved? 
Use cases come across my desk every day. For example, using a mobile phone to detect skin cancer, where even in developed countries, you sometimes need to wait a year to see a dermatologist. 

Likewise, in education, you can do customized e-learning. You're making learning accessible in settings where there's one teacher for a hundred students. Tech can be used to create customized learning plans for students.

Another example is combining satellite imagery with big data analytics and machine learning to help predict weather patterns or natural disasters, or to optimize crop yields. 

But I think the biggest challenge is ensuring that these high-potential use cases work equally well for men, women, children, the elderly, people of different skin colors, and people with disabilities, especially in low-resource settings where basics like electricity and connectivity are still issues.

These are things that don't occur naturally in the fast-moving tech industry and startups. I think the approach up until now has been “Build it, and we'll figure all of that out later.”

But these are the things we focus on at AI for Good. And that's really important if you're going to scale AI for Good globally.

How do we ensure that when we talk about AI, it is good for all? That it is, as you say, inclusive, rather than good only for a select few?
It's a very good question. You and I could spend all day arguing that what's good for me might not be good for you, or for different cultures or countries.

But luckily, we don't have to start from scratch, because we have the Sustainable Development Goals to guide us. There are 17 goals and 169 targets acting as a framework for decision-making on where we put our efforts. Without that framework, we'd be starting over every time we discussed what "good" means. The SDG framework guides our strategy and gives us something that's implementable and measurable.

AI for Good is presented as an annual summit in Geneva, but it's also a year-round online platform where we run about 150 online events per year, reaching thousands of people from 183 countries. I like to think we're more than just a talking shop. Through these activities and talks, there's a lot of knowledge sharing: best practices, discourse, opinions, expertise, and so on. From these collaborative efforts come what I would call the building blocks of AI for Good.

For example, we have what we call focus groups, which are pre-standardization efforts. We have AI and Health with the World Health Organization (WHO), AI and Natural Disaster Management with the World Meteorological Organization (WMO), and AI and Digital Agriculture with the Food and Agriculture Organization (FAO). We also have focus groups on autonomous driving and 5G.

Even though these topics are different, we have the ITU, whose mandate is telecommunications and ICT, working with member states, private companies, and academia on the building blocks of AI for Good.

So you’re trying to lay the groundwork for common AI standards? 
Yes, these groups are working on gap analysis: what standards exist, and which ones don't? Is there any overlap between existing standards? That's a good starting point for figuring out what to work on. But more than that, they're identifying high-potential use cases. One challenge, for instance, is developing a framework for testing and evaluating AI-for-Health applications. Imagine you're a mayor, or running a hospital, and you're evaluating these high-potential applications. And there are hundreds, if not thousands. How do you know which ones are any good?

In the past, there was no way to compare apples with apples. So they've created a system where you can actually test the performance of these algorithms, and then use that information to make informed decisions.

These are the bottlenecks that are really holding back AI-for-Good solutions. The ITU acts as a funnel, collecting these requirements and problems from a wide range of stakeholders. They then go into the pre-standardization mechanism of the focus groups, and eventually find their way into the international standardization process.

What are the challenges with respect to AI governance? 
I think no one could have anticipated the advent of generative AI and what it would do to the world. The ITU co-leads, with UNESCO, the UN Inter-Agency Working Group on Artificial Intelligence, which coordinates all AI efforts across the UN. UNESCO has ethical guidelines on AI, and different partners bring different pieces to the table.

When it comes to governance, we know what the problems are. How do we handle bias? How do we make AI ethical, safe, transparent, accountable, sustainable, and inclusive? 

We also know that we want agile regulation and governance. I think it's quite similar to standardization, where you have that sweet spot: standardize too early and you stifle innovation. Do it too late, and you might get negative consequences.

So, we want AI to be inclusive. Great efforts have been made to bring developing countries to the table and to involve multiple stakeholders, including industry, academia, NGOs, civil society, and governments. Again, we can't just leave it to a handful of people.

How important is global coordination in AI governance?
There are more than 700 guidelines on AI policy and governance, measuring different indicators. In Europe, they lean towards consumer protection, while the US takes more of a free-market approach, so there are differences in philosophy.

But there are more commonalities than differences. No one wants unethical AI, no one wants to make bad decisions based on flawed data sets, and no one wants autonomous cars that are unsafe. So a lot of consensus exists.
