Artificial intelligence: Friend or frenemy?
AI is poised to be an overwhelming force for good, but we're still dipping our collective big toe in its relatively uncharted waters. A robust approach is required to maximize the value of AI.

By Gary Maidment
“It's likely to be either the best or worst thing ever to happen to humanity,” warned physics legend Professor Stephen Hawking in 2015, neatly summing up the dichotomy that artificial intelligence (AI) inspires in even the planet’s greatest minds. Major public concerns include safety and security, the prospect of clever machines “liberating” people from their jobs, and AI deployed in warfare.
In safe hands?
While a Hollywood-style robot rebellion remains the stuff of CGI, safety fears are tempering the excitement and interest surrounding AI with a large dose of skepticism.
Driverless cars, for example, are just around the corner, with 10 million such vehicles expected to be on the roads by 2020. But a 2016 survey by insurance provider AAA revealed that 75 percent of Americans are afraid to ride in one. Equally telling, a survey by the British Science Association in March of this year revealed that 53 percent of Brits wouldn’t fancy going under the knife of a robot surgeon. Notably, though, the public doesn’t seem to have the same reservations about applications that are ostensibly less likely to hurt or kill them, such as unmanned flight and domestic robot helpers.
Much as it initially feared using credit cards online, society remains a little wary of transferring the burden of personal safety to the virtual or robotic hands of AI. But as history shows, the tipping point of acceptance is likely to pass without much fanfare, not least because today’s open-minded generations are growing up with Siri, Alexa, Cortana, and the like.
Get ready to be a backseat driver
Another compelling reason for the widespread acceptance of AI will be the mounting evidence that it’s safer than humans. In the case of getting from A to B, AI doesn’t get tired, drunk, or distracted. Each year, 1.3 million people are killed on the world’s roads, with more than 90 percent of all crashes attributed to human error.
Audi, Mercedes-Benz, and Google are all testing technologies like LIDAR, with Google having already clocked up 1.5 million miles on the road to commercializing its driverless vehicles by 2020. Although AI has some ground to cover before it can read road signs and hand signals, negotiate snow, and act fully without human intervention, the gradual arrival of driverless vehicles is a case of when, not if.
Several measures are essential to fostering peace of mind in Joe Public, many of which apply to AI in general. Coherent regulations and policies are needed to safely introduce driverless vehicles alongside human drivers, while strong cyber security measures must be developed to minimize hacking risks and future threats like ransomware. Interactive interfaces are required to engender trust between human and machine, and robustness must be designed into AI systems from the outset. Equally important, the technology must be rolled out when it’s ready and not before, with clear liability policies in place if something goes wrong.
A better bedside manner
Despite the British public’s reluctance to embrace robot surgery, AI will inevitably add more brains to the brawn of surgical technologies like the da Vinci System, which broke new ground in 2000 as the world’s first FDA-approved, all-inclusive teleoperated surgical robot. It has since been used to perform procedures on more than 3 million patients.
Fast-forward to early 2016, when STAR (Smart Tissue Autonomous Robot) stitched up a pig’s small intestine using its own vision, tools, and intelligence. Crucially, the surgical bot performed better than human surgeons tasked with the same procedure.
Although STAR doesn’t herald the arrival of fully autonomous surgery, it represents a huge breakthrough in supervised autonomy for soft tissue procedures – an area that’s far harder to automate than rigid procedures like knee surgery, because soft tissue is slippery and shifts unpredictably.
In 2015, Google and Johnson & Johnson started working on applying machine vision, image analysis, augmented reality, and analytics to assist surgeons. AI in these contexts isn’t necessarily designed to replace humans; it’s more about working alongside people to improve diagnostics, predictions, and precision. Bots like STAR will let surgeons concentrate on higher-value work while offloading repetitive, precision-critical steps to the machine.
Employment blues
Proving the safety credentials of autonomous cars and robot surgeons will largely amount to showing that AI can outperform people – a trend that doesn’t bode well for taxi and truck drivers or specialist surgeons.
The AI revolution has indeed put a new spin on technological unemployment in that it’s starting to affect white collar workers, and highly skilled ones at that. Architects, pharmacists, financial advisers, translators, lawyers, and judges are all on a long list of skilled jobs stamped “at risk”, not to mention the new wave of manual jobs being automated right now – Foxconn, for example, has replaced 60,000 workers in China with precision robots capable of completing phone assembly tasks previously only possible with a human’s nimble fingers.
This year, leading computer scientist Moshe Vardi warned the American Association for the Advancement of Science that half of all jobs could be at risk within 30 years – around the same time many scientists believe AI will achieve human-level intelligence. Tom Goodwin, Senior VP for Havas Media US, told us in a recent interview that the employment issue could also pose something of an existential conundrum, despite AI’s huge potential for productivity, health, and happiness gains: “If we’re outsourcing productivity to machines, then ethics and the very role of humanity come into question.”
When it comes to unemployment, there’s no way around it: transitioning into an increasingly automated world will require a long-term rethink of the global economic paradigm. Avenues being considered include maximizing the consumer pool; revamping education programs to support life-long learning; creating an unconditional basic income to stimulate economic activity; ensuring that people have the freedom to innovate in all areas of life, including ideas, business, products, services, and the arts; and encouraging investment to capitalize on this new wave of innovation.
While the employment landscape may be vastly different 30 years from now, technology will increasingly democratize innovation, and a clear economic gain is likely when the time freed from repetitive tasks is channeled into creative thought. Autodesk VP Pete Baxter told The Guardian’s Tom Meltzer earlier this year that the kind of software his company works on will put architecture in the hands of the little guy: “A one-man designer, a graduate designer, can get access to the same amount of computing power [on the cloud] as these big multinational companies. So suddenly there’s a different competitive landscape.”
The same article holds that the legal profession will see a slew of new jobs that read like an IT roll call: legal knowledge engineer, legal technologist, project manager, risk manager, and process analyst. This will be part of a trend that sees traditional professions split into different specialist areas, some of which don’t exist now. Because the prices of services from architects, doctors, lawyers, and other specialist fields will probably drop thanks to technology, many more people will start using these services, which will in turn stimulate the supply of jobs.
Sorry to keep droning on
The issues of safety, security, and employment are just the tip of the algorithmic iceberg that’s floating in the public consciousness. Heavy hitters like Hawking, Elon Musk, and Bill Gates joined more than 1,000 AI researchers in signing the now famous open letter released last year, which pointed out the risks of autonomous weaponry and the potential disaster of a global AI arms race.
With 40 countries researching AI-equipped weapons, Hawking and Musk have respectively warned that AI could “spell the end of the human race” and pose “our biggest existential threat”. The sentiment is echoed, in less specific form, by more than a third of Britons: in this year’s British Science Association survey, 36 percent of respondents said they believe AI could threaten the long-term survival of humanity.
Britain is also home to Taranis, billed by BAE Systems as the most advanced aircraft ever built by British engineers. Though controlled by a human operator on the ground, Taranis has the technical capability to operate autonomously at 700 mph, using stealth for tasks like marking targets, gathering intelligence, and carrying out ground strikes.
In a similar vein, AI showed off some serious skills in aerial combat across the Atlantic in June 2016. The pilot AI ALPHA consistently bested veteran US Air Force Colonel Gene “Geno” Lee, evading him and shooting him down in every simulation. “I was surprised at how aware and reactive it was … reacting instantly to my changes in flight and my missile deployment,” Lee told Popular Science in an interview about the simulation. Based on fuzzy logic, ALPHA’s genetic fuzzy tree system approaches complex problems the way a human does, but reacts 250 times faster than a human can blink.
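To give a flavor of the fuzzy logic underpinning systems like ALPHA (whose actual genetic fuzzy tree design is far larger and not public in detail), here’s a minimal sketch of fuzzy inference in Python. The input variable, fuzzy sets, rules, and numbers below are hypothetical illustrations: a crisp input is mapped into overlapping fuzzy sets, each rule fires to a degree rather than all-or-nothing, and the weighted result is converted back into a crisp action.

```python
# A minimal sketch of fuzzy inference, the basic building block behind
# genetic fuzzy tree systems like ALPHA's. The input variable, fuzzy sets,
# and rules here are hypothetical illustrations, not ALPHA's actual logic.

def falling(x, start, end):
    """Membership in a 'low' fuzzy set: 1.0 below start, ramping to 0.0 at end."""
    if x <= start:
        return 1.0
    if x >= end:
        return 0.0
    return (end - x) / (end - start)

def rising(x, start, end):
    """Membership in a 'high' fuzzy set: the mirror image of falling()."""
    return 1.0 - falling(x, start, end)

def evasive_turn_rate(threat_distance_km):
    """Two fuzzy rules, blended by weighted-average defuzzification:
         IF threat is CLOSE THEN turn hard   (9 deg/s)
         IF threat is FAR   THEN turn gently (2 deg/s)
    """
    close = falling(threat_distance_km, 2, 10)  # fully "close" inside 2 km
    far = rising(threat_distance_km, 5, 15)     # fully "far" beyond 15 km
    total = close + far
    return (close * 9.0 + far * 2.0) / total if total else 0.0

# Inputs in the overlap zone fire both rules to a degree, producing a
# smooth, human-like blend rather than an abrupt switch between behaviors.
print(evasive_turn_rate(3.0))   # 9.0 deg/s: only the "close" rule fires
print(evasive_turn_rate(7.0))   # ~6.6 deg/s: both rules partially active
print(evasive_turn_rate(12.0))  # 2.0 deg/s: only the "far" rule fires
```

The “genetic” part of a genetic fuzzy tree comes from using a genetic algorithm to evolve the set boundaries and rule outputs above, organized into a tree of many small fuzzy systems – which is what lets a controller of this kind run in real time on modest hardware.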
What does this mean for war? The UN hasn’t yet defined where the boundaries of autonomy lie, so it’s unclear where humans will fit in the decision-making chain for weapons that evolve from demo tech like Taranis. The new breed of weaponry will be able to select and engage targets based on pre-defined criteria without human intervention, or take the form of defensive systems that operate without us simply because they must react faster than we can.
Hawking, Musk, and their co-signatories want to keep a human finger on the trigger in any future war, because full autonomy could spell disaster. Autonomous weaponry was the central theme when the UN met in Geneva on April 15, 2016, scheduling six weeks across 2017 and 2018 for a group of UN-appointed government experts to consider the implications of AI deployed in weapons.
Connecting the AI dots
A Better Connected World isn’t just about nations coming together through technology – it also means scientists, private enterprises, governments, and the public working towards a common vision of how we enter the Augmented Innovation era and where we go from there. It means creating AI systems that work with people, for people, rather than replacing us or being used against us. And it means a holistic approach that’s smarter than either people or machines alone, where robust AI serves humanity as a trusted partner that makes innovation easier and life better for all.