Keeping a close eye on AI
The rise of big data coupled with breakthroughs in machine learning has fueled interest in AI and raised expectations. In robotics, we're at once fascinated and threatened by I, Robot and Terminator-style scenarios. Though robot overlords are unlikely, AI does pose real safety and security risks, and we must approach them realistically.
I think, therefore I am
Two basic questions need to be considered when it comes to AI. The first concerns scientific capability: can we build AI that becomes self-aware? The second is ethical: if we can build consciousness into AI, should we?
AI technology is, in essence, computer technology: integrated circuits built on silicon transistors, plus the hardware and software that run on them. Building AI with consciousness would require far deeper research into the brain and biotechnology, and we're not there yet.
Moore's Law holds that the number of transistors in a dense integrated circuit doubles roughly every two years. Many believe that when the number of transistors in a chip exceeds the number of neurons in the human brain, computers will become the more powerful system. Nevertheless, just as a billion ants are no match for a person in terms of brain power, quantity doesn't necessarily equal quality.
Consider the current state of play: supercomputers like China's Tianhe machines run at petaflop speeds, but they don't possess intelligence. Equally, Google's search engine can access nearly all of the content on the Internet, but that doesn't represent a significant leap in AI.
Today's computers can barely comprehend human language or detect objects with accuracy. While the development of bio and quantum technologies will likely produce computing breakthroughs that benefit society, our fears about AI are perhaps overblown.
Doing some good
In the coming decades, service robots and other intelligent machines that perform all sorts of mundane tasks will continue to improve life and free us from repetitive and onerous work.
Self-driving vehicles are currently the most far-reaching robotics pilot, but are we paying enough attention to the potential risks? We're essentially putting our lives in the hands of a car we're not controlling. What happens if it malfunctions or is hacked? The destruction wrought by machine failures in disasters like plane crashes is well documented.
There’s still a long way to go before self-driving cars become mature enough to appear on roads, meaning there’s enough time for issues to be discovered and solved.
Stephen Hawking's concerns focus mainly on the misuse of technology. AI applied to weaponry, for example, could prove devastating. There are two take-home messages here: First, as the technology matures, we will gradually learn how to use and control self-driving cars, service robots, and other AI applications for the betterment of society. In this sense, we have little to fear. Second, we must use technology in a controllable and responsible way, especially in the weaponry, biotech, and transgenesis fields.
We need to keep a close eye on new AI to ensure it’s controlled and not misused. Then, the rewards will continue to vastly outweigh the risks.