How intelligent will AI get?
Will AI ever reach human levels of intelligence? Opinions vary, but one thing is certain: AI is far from easy.

By An Wei & Zhang Baofeng
A survey in 2013 by Vincent C. Müller and Nick Bostrom asked hundreds of scientists when they believe machines will achieve artificial general intelligence (AGI), meaning human-level intelligence. The median years for a 10, 50, and 90 percent probability of reaching AGI were 2022, 2040, and 2075, respectively. But there are still many challenges to reaching human-level intelligence.
Current limits to achieving AGI
The first is domain limitation. Today’s artificial intelligence primarily applies a mathematical approach that can solve a finite set of statements for a finite set of terms under a finite set of rules. Marvin Minsky, widely regarded as the father of AI, dismissed deep learning as a fad and a poor model of intelligence because it mostly models bottom-up perception. Quite often, the potential scenarios and the number of parameters are infinite, and the only way around this is to limit how the AI can be applied.
The second is causality. Current state-of-the-art technology is built on complex models that require massive computing power and huge volumes of data to simulate relatively weak patterns and correlations. However, it cannot easily solve causality. An analogy is WWII bombers, which were vital to the war effort but easy targets. On returned bombers, the wings were the most common location for bullet holes, so the knee-jerk response was to fit them with armor. But the Jewish mathematician Abraham Wald saw things differently: it wasn’t the most bullet-riddled places that needed armor, but the place with the least – the engine. Why? Planes hit in the engine never returned. Data analysis without human abstraction doesn’t reveal this causality.
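Wald's insight – survivorship bias – can be illustrated with a toy simulation (the sections, hit counts, and loss rule below are invented for illustration, not historical data). Planes take hits at random, but only planes that survive can be inspected, so the inspected sample shows no engine damage at all:

```python
import random

random.seed(0)
SECTIONS = ["wings", "fuselage", "tail", "engine"]

def fly_mission():
    """One plane takes 1-5 hits at random; any engine hit downs the plane."""
    hits = [random.choice(SECTIONS) for _ in range(random.randint(1, 5))]
    survived = "engine" not in hits
    return survived, hits

# Tally bullet holes, but only on planes that made it back.
returned_holes = {s: 0 for s in SECTIONS}
for _ in range(10_000):
    survived, hits = fly_mission()
    if survived:  # we can only inspect planes that return
        for h in hits:
            returned_holes[h] += 1

print(returned_holes)  # "engine" is 0: the biased sample hides the true risk
```

Naive correlation on the returned-plane data says the engine is the safest place to be hit; only the causal model (engine hit implies no return) explains why it never appears in the sample.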
The brain: A force to be reckoned with
The third challenge complicating AGI is that transistors and neurons are not born equal. Equating intelligence to computing power is likely a misguided over-simplification. For one thing, transistors have already far surpassed neurons in speed – the brain processes a single lexical decision task at no faster than 60 bps – yet transistors are far less efficient at extracting information.
Computers also lack the punching power of the brain, which can perform an estimated 20 petaflops (quadrillions of floating-point operations per second) compared with the 91 gigaflops of a high-end desktop PC. Parity won’t be reached for some years yet – not until around 2041 according to cognitive neuropsychologist Chris Westbury, which ties in with the predicted arrival of AGI.
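The gap can be put in perspective with a simple doubling-time extrapolation using the two figures above. This is an illustrative sketch only: the doubling times are assumptions, and the arrival year swings heavily with whichever one you pick.

```python
import math

BRAIN_FLOPS = 20e15    # ~20 petaflops, the brain estimate cited above
DESKTOP_FLOPS = 91e9   # ~91 gigaflops, high-end desktop PC

def years_to_parity(doubling_time_years):
    """Years until the desktop matches the brain, assuming computing
    power doubles once every `doubling_time_years` years."""
    doublings = math.log2(BRAIN_FLOPS / DESKTOP_FLOPS)
    return doublings * doubling_time_years

# About 18 doublings are needed; the arrival year hinges on the doubling time
for dt in (1.0, 1.5, 2.0):
    print(f"doubling every {dt} yr -> parity in ~{years_to_parity(dt):.0f} years")
```

Roughly 18 doublings separate the two machines, so an assumed doubling time anywhere between one and two years lands parity somewhere in the 2030s to 2050s – consistent in spirit, if not in detail, with Westbury's 2041 estimate.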
An Elon Musk-funded AI project headed by two PhD students compares the human brain and computers based on a measurement they developed specifically for the purpose: Traversed Edges Per Second (TEPS). Their work suggests that our gray matter is 30 times more powerful than IBM’s number cruncher, Watson. Current TEPS prices mean that an hour of computing time at the brain level would cost up to US$170,000, a figure that’s expected to drop to US$100 in the next 7 to 14 years.
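The implied rate of that price decline can be worked out directly from the two figures above (the calculation itself is an illustration, not part of the cited projection):

```python
# Implied annual price decline for brain-equivalent computing,
# from the figures cited above: US$170,000/hour falling to US$100/hour.
START, END = 170_000, 100

def annual_decline(years):
    """Fraction by which the price must fall each year to go
    from START to END over the given number of years."""
    return 1 - (END / START) ** (1 / years)

for years in (7, 14):
    print(f"over {years} years: ~{annual_decline(years):.0%} drop per year")
```

Dropping by a factor of 1,700 requires prices to fall by roughly 65 percent per year on the 7-year timeline, or roughly 41 percent per year on the 14-year one – steep, but in line with historical declines in the cost of computing.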
Marshalling the same level of computing power isn’t impossible now, but it’s clearly expensive, and we still can’t get computers to do the things our brains do so effortlessly. Still, it’s worth noting that an efficient AGI design might surpass human-level performance using far less computing power than the human brain does. One thing seems inevitable: the rate of growth in computing power will remain exponential.
Two sides of the same coin
Our brains may be superior at the moment, but machines and humans still work in different ways to solve different sides of the same problem, with machines adept at rapid calculations and correlations, and the human brain more skilled at finding causality, abstracting, and applying common sense.
Many people underestimate how difficult AI is, which doesn’t help the industry. Most optimists base their estimates on a linear path towards AGI and non-linear progress in IT. The logic here is flawed, and in fact is the kind of thinking that led us into two past AI winters, periods when interest in AI and funding from governments and venture capitalists evaporated. The fact is we’re not even close to understanding human intelligence in all its multi-faceted glory: reasoning, abstraction, generalization, consciousness, dreams, memory, imagination, quantum waves in our brains – there are so many questions that we’ve yet to answer. Replicating something we don’t really understand is far from easy, and is perhaps not the best avenue to explore when it comes to AI.
Popular areas in the field, such as machine self-awareness and passing the Turing test, aren’t necessarily practical and don’t help solve real-world problems. Educated AI, however, puts more emphasis on applied intelligence, with the goal of allowing intelligent technologies to serve society – the purpose of all tech. The principles of Educated AI in a world of Augmented Innovation can result in a positive symbiosis of collaboration between man and machine, in which the whole is greater than the sum of the parts.