AI: The reality and the hype
In the past, overhyping artificial intelligence has led to two AI winters. Edinburgh University's Professor Jon Oberlander gives his view on where we're at now.

By Gary Maidment
Artificial Intelligence (AI) is a pervasive technological force that’s impacting individuals, businesses, and society. While another AI winter seems unlikely thanks to this decade’s advances in deep learning, it’s important to separate fact from fiction so that governments can regulate AI in a way that doesn’t stifle its potential, play up to public fears, or create a climate of overhype. Edinburgh University’s Professor of Epistemics Jon Oberlander gave us his thoughts on the current state of play of this game-changing technology.
Probably a better driver than you are
According to Oberlander, the answer to whether AI is overhyped is a “very firm yes and no,” meaning that the technology is viable, but that the obstacles lie outside the technology itself. He uses driverless vehicles as an example: “I think [they] are not quite as close as we might imagine…The reasons aren’t technical, they’re regulatory.”
The first issue with regulating driverless cars is ethical. Imagine a child running into the road after a ball, where avoiding the child would force the car either to swerve into an elderly couple or to injure its own passenger – the AI would need to make its choice in a split second. And where would insurance and the law sit in this type of scenario?
A linked second issue is accountability: Who’s responsible if a driverless car crashes? The manufacturer, tech vendor, or passenger-driver? In the blurry worlds of semi-autonomous vehicles and the impending mix of autonomous and human-driven vehicles, the liability question gets even more complex. According to Oberlander, “It’s the designers or the owners…of the machines, the self-driving cars, who should be responsible for all of the actions of their tools.” Manufacturers are divided: Volvo, for example, made the news in 2015 as the first car maker to say it would accept full liability for its vehicles, whereas Tesla CEO and founder Elon Musk believes the occupant’s insurance should take the hit for faults unrelated to design.
Distrust of AI
On the perception of driverless vehicles, surveys in both 2016 and 2017 by the motoring association AAA reveal that “Three-quarters of U.S. drivers report feeling afraid to ride in a self-driving car.” Research by MIT in 2016 shows similar results: “The trust to adopt these technologies is not yet here for many potential users and may need to be built up over time.” Another MIT survey holds that 48 percent of respondents wouldn’t buy a fully autonomous car.
Oberlander believes this mix of public trepidation and unclear regulation is why there’s “a whole lot of arguments that the AIs being developed now are not quite ready to be socially acceptable.”
It’s not just cars
A 2016 survey by the British Science Association found similar reluctance in other scenarios: 53 percent of respondents wouldn’t trust AI to perform surgical procedures, and 62 percent wouldn’t trust it to fly commercial aircraft.
However, this hides the fact that AI is alive and kicking in both fields. In healthcare, the tele-operated Da Vinci system has performed more than 3 million operations to date, and AI is already helping radiologists check scans for tumors. As for aircraft, the tech magazine Wired addresses the public perception issue in the title of an article, “Don’t freak out over Boeing’s self-flying plane – robots already run the skies.” Reporting on Boeing’s plan to take pilots out of the equation completely by extending more decisions to AI, the writer points out that this isn’t far from what’s already happening.
According to Oberlander, though, many AIs are “not doing quite the things that you might think of as being really ‘AI-ish’ just yet.” This is a key point. While narrow AI abounds in various fields, with systems performing very specific tasks outstandingly well, the public’s perception of what AI does is murky because AI itself is hard to define. Many people therefore have mixed feelings towards it, although few believe in the movie trope of robot overlords.
Nevertheless, we might be going in the wrong direction if regulations are influenced by a collective misunderstanding of AI.
AI’s tech enablers
For those in the industry, the technological side of AI is less overhyped than the anticipation of the sci-fi-esque ways it’ll be applied. Its major technology enablers are beginning to fall into place, including broadband connectivity, data centers, cloud, big data and analytics, and IoT.
How do they slot together? Broadband connects the data centers that provide cloud services like computing, storage, and XaaS, including AI-as-a-Service. Thanks in large part to the cloud, computing and GPU power have recently become cheap enough to support massive-scale parallel processing, enabling deep learning. IoT and its potentially billions of sensors yield the big data that AI needs for its algorithms to perform deep learning and analytics. However, Oberlander points out a current issue with AI’s dependence on big data, “On the one hand we have a surfeit of data…But, a lot of data is not labeled, and so to use some of the most powerful techniques, supervised learning techniques, you need to label that data.”
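To make the labeling problem concrete, here’s a minimal sketch, using scikit-learn and synthetic toy data rather than any system discussed above, of why labels matter: the supervised model consumes both the data and its labels, while unlabeled data alone only supports unsupervised techniques such as clustering.

```python
# Minimal sketch: supervised learning needs labels; unlabeled data doesn't
# support it. Uses scikit-learn with synthetic (toy) data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy stand-in for "big data": 1,000 samples, 20 features, with labels y.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Supervised learning: the classifier is trained on (X, y) pairs.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"accuracy with labels: {clf.score(X_test, y_test):.2f}")

# Without y, only unsupervised methods apply: clustering finds structure
# in X, but can't learn the specific task the labels define.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```

In practice, the expensive step Oberlander alludes to is producing the labels in the first place, which for real-world data usually means human annotation.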
Going deep
In the area of deep learning applied to computer vision, big data and improved processing power helped Google’s Andrew Ng make a breakthrough in 2012 by bombarding a vast neural network with 10 million video thumbnails from YouTube over three days. In an unsupervised learning scenario using unlabeled data, the system was given a list of 20,000 items without being instructed on how to distinguish between them. Over the course of the experiment, it began to detect human faces, human body parts, and cats with 81.7 percent, 76.7 percent, and 74.8 percent accuracy, respectively. “There’s genuine excitement particularly in areas around neural networks and deep learning, where there’s been dramatic progress,” says Oberlander.
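Google’s network was vastly larger, but the core idea of unsupervised feature learning can be sketched with a toy autoencoder: trained only to reconstruct unlabeled inputs, its hidden layer ends up acting as a learned feature detector. A minimal PyTorch sketch, which assumes nothing about the original system’s architecture:

```python
# Toy unsupervised feature learning: a tiny autoencoder in PyTorch.
# No labels are used anywhere; the input itself is the training target.
import torch
import torch.nn as nn

autoencoder = nn.Sequential(
    nn.Linear(784, 64), nn.ReLU(),    # encoder: compress to 64 features
    nn.Linear(64, 784), nn.Sigmoid()  # decoder: reconstruct the input
)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

unlabeled = torch.rand(256, 784)  # stand-in for unlabeled image data
for _ in range(100):
    recon = autoencoder(unlabeled)
    loss = loss_fn(recon, unlabeled)  # reconstruction error, no labels
    opt.zero_grad()
    loss.backward()
    opt.step()
# After training, autoencoder[0] maps inputs to learned features.
```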
Another exciting field is probabilistic machine learning in natural language processing, which according to Oberlander “uses Bayesian Inference for unsupervised language acquisition; basically, just throwing the machine in the deep end.” With Bayesian inference, there are no labeled target examples to drive statistical learning; instead, the system starts from prior assumptions and updates its beliefs as it observes data. Oberlander explains how his colleague from the University of Edinburgh’s School of Informatics, Dr. Sharon Goldwater, used Bayesian inference “to explain how you can build automatic speech recognition from first principles.”
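For readers who want the machinery spelled out, the engine here is ordinary Bayes’ rule (the general form, not Goldwater’s specific model): the posterior belief in a hypothesis, say a candidate segmentation of speech into words, is proportional to its prior probability times the likelihood of the observed data under it.

```latex
% Bayes' rule: \theta is a hypothesis (e.g., a candidate word segmentation),
% d is the observed data (e.g., a stream of unsegmented speech sounds).
P(\theta \mid d) = \frac{P(d \mid \theta)\,P(\theta)}{P(d)}
                 \propto P(d \mid \theta)\,P(\theta)
```

The prior encodes the “first principles” Oberlander mentions; learning is simply the repeated updating of this posterior as more unlabeled data arrives.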
Oberlander also mentions deep reinforcement learning, a crossover point between cognitive science and deep learning that takes a reward-and-punishment approach to AI learning. Talking of Google DeepMind’s success at learning several Atari games by retaining past experience rather than following separate programming for each game, Oberlander says, “There’s a very clear reward function….The numbers that constitute the reward, I think, are what the systems themselves discover.”
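DeepMind’s Atari player pairs this idea with a deep neural network and replayed past experience, but the reward-punishment loop itself is easiest to see in plain tabular Q-learning. A minimal sketch, with a deliberately invented toy environment (the step function below is hypothetical, purely for illustration):

```python
# Tabular Q-learning: the agent improves its action-value estimates Q
# purely from observed rewards, with no per-task programming.
import random

n_states, n_actions = 16, 4
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.99, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Hypothetical toy environment: returns (next_state, reward)."""
    next_state = (state + action) % n_states
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

state = 0
for _ in range(10_000):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < epsilon:
        action = random.randrange(n_actions)
    else:
        action = max(range(n_actions), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # Update: nudge the estimate toward reward + discounted future value.
    Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
    state = next_state
```

The “reward function” Oberlander refers to is the scalar reward here; in the Atari work it was simply the change in game score.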
Artificial General Intelligence (AGI)
While there’s clearly a lot of excitement about the cutting edge of AI research, Oberlander isn’t particularly bullish about AGI, believing we’re still “a long way off” from the theoretical singularity, the point at which artificial intelligence equals human intelligence across the whole spectrum of human intellect. Despite DeepMind’s skill at Atari games, which ostensibly implies some sort of general intelligence, aka AGI, Oberlander believes that “pulling together the narrow intelligences we have now isn’t necessarily the route to that destination.”
He takes a pragmatic view of what’s likely to happen over the coming decade, “My feeling is that there’ll be a lot more AI there, but you won’t necessarily notice it.”
AI ubiquity, therefore, may arrive without much fanfare, while regulation could well push back against how fast exciting applications like driverless vehicles and robot assistants become socially acceptable. In July 2017, The Guardian reported on researchers’ calls for robots to be fitted with an “ethical black box” to explain an AI’s decisions if accidents happen in scenarios like healthcare, security, customer service, and driverless vehicles.
The excitement in the industry is thus tempered by a lack of clear regulations, not just on liability should an accident occur, but also on transparency in AI research and on releasing open-source code, which some companies already do. Astro Teller, who participated in Stanford University’s One Hundred Year Study on Artificial Intelligence, wrote in his blog that, “For that last reason (regulations), it is imperative to ensure that the basics of AI (what it is and how it works and what it can and can’t do) become critical knowledge pieces for the government of any high functioning developed nation.”
Equally, Oberlander strongly believes in the responsible development of AI, “We need to start thinking about the implications of the technology now if we want to be able to control that technology and deliver the right kinds of social benefits in the longer term.”
And public-private partnerships are one way to promote the responsible application of AI. In a collaboration announced this June, the University of Edinburgh and Huawei are setting up a joint lab, which will be housed in the university’s School of Informatics. The partners are focusing on distributed data management and processing, NLP, general inference in neural networks, and machine learning on huge data sets.
Getting down to business
With robust regulations in place, AI can flourish in a transparent environment – one that brings huge benefits to society, keeps the public well informed, and fuels the digital economy.
Business is one area where AI’s value is destined to match the hype. Research by Accenture suggests that AI will double economic output by 2035 in the 12 developed economies it studied, and increase labor productivity by up to 40 percent.
Cloud computing will enable AI-as-a-Service and put innovation potential into many more hands across the globe. Continued advances in robotics, big data, IoT, deep learning, and predictive analytics will produce actionable insights across all industry verticals, delivering a goldmine of efficiency and productivity – something that’s worth getting really excited about.