Using AI for the betterment of humanity: Keeping it real with Educated AI
Educated AI is an area of artificial intelligence that is application-specific, human-centric, user-educated, self-learning, and personalized.

By An Wei & Zhang Baofeng, Huawei 2012 Labs
Intelligence has always been a marker that defines humanity. Now, it’s also defining machines and how they interact with us and the world. Machines still can’t do the things we can do with our adaptable intelligence, and for the time being artificial intelligence (AI) remains confined to fairly narrow and specific tasks. Educated AI is a field of AI that adopts a machine learning approach that is dynamic and responsive to the environment, but tailored to specific applications and tasks. It’s an intelligence that learns by trial and error to form a practical approach to solving real-world problems, making life better and more efficient.
AI, specifically artificial narrow intelligence (ANI) – AI that excels at a single task – has become an extension of human intelligence. Our smartphones handle routine tasks like backing up people’s memories as stored photos, getting us places with maps and GPS, recommending books and music, and tracking our preferences. With network technologies and Internet prevalence cementing this trend, other ANI tech like driverless vehicles and domestic robots will see the worlds of human and machine intelligence further intertwine. When applied to practical problems, Educated AI is an enabler of ANI.
Educated AI doesn’t seek to reproduce human intelligence, and instead is bound under five parameters:
Application-specific: It uses different intelligent systems for different applications and tasks, and intelligence is measured only by the capability to complete those target tasks. An AI home management system or an AI tutor is only intelligent in its own field; a child asking the former for help with their math homework might be met with a recommendation for an upgrade or a different system. Application-specific AI greatly reduces the chance of mistakes.
Human-centric: Usable intelligence needs to be understandable and predictable for a human. Equally, the AI has to share a person’s values; for instance, the technology needs to comprehend how people will react to its actions.
User-educated: The system is relatively autonomous, but users can quickly teach it about new environments; for example, familiarizing a smart machine with your house lets the AI brain memorize the layout, after which the system can decide what tasks it needs to do without further instruction.
Self-learning: Educated by a user who’s in the machine learning loop, the system learns both commands and patterns. It can correct errors, make judgments according to its environment, and provide the user with recommendations and reminders.
Personalized: Educated AI is designed to improve the user experience in specific applications. While not designed to reproduce human abilities, it can make decisions in a dynamic environment rather than just perform repetitive tasks; for example, an AI home manager can decide whether to adjust the temperature or close a window by analyzing sensor data and the user’s habits, as sketched below.
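To make that home-manager example concrete, here is a minimal Python sketch of the decision logic described above. It is an illustration only, not part of any real product: the sensor inputs, the 0.5°C tolerance, and the simple per-hour habit model are all assumptions made for this sketch.

```python
from statistics import mean

class HomeManager:
    def __init__(self):
        # Preferred indoor temperature (°C) for each hour of the day,
        # learned from the user's own thermostat adjustments.
        self.preferred_temp_by_hour = {h: [] for h in range(24)}

    def record_user_adjustment(self, hour, temp):
        # The "user-educated" step: remember what the user chose and when.
        self.preferred_temp_by_hour[hour].append(temp)

    def decide(self, hour, indoor_temp, outdoor_temp, window_open):
        # Choose between doing nothing, closing the window, or adjusting heating.
        history = self.preferred_temp_by_hour[hour]
        target = mean(history) if history else 21.0  # default before any learning
        if abs(indoor_temp - target) < 0.5:
            return "do nothing"
        too_cold = indoor_temp < target
        # If the window is open and the outdoor air is pulling the room away
        # from the user's preferred temperature, closing it is the cheaper fix.
        if window_open and (outdoor_temp < target) == too_cold:
            return "close window"
        return "raise heating" if too_cold else "lower heating"

manager = HomeManager()
manager.record_user_adjustment(hour=8, temp=22.0)  # the user "educates" the system
print(manager.decide(hour=8, indoor_temp=19.0, outdoor_temp=5.0, window_open=True))
# -> close window
```

The point is the division of labor: the user educates the system simply by adjusting the thermostat, and the system then chooses its own action in a changing environment.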
In the Augmented Innovation stage, better results will be produced when the human brain works alongside an Educated AI system that can interact with its environment. Professor Pieter Abbeel from UC Berkeley provided an example when he trained the robot BRETT in a series of motor tasks, including putting a clothes hanger on a rack, assembling a toy plane and Lego blocks, and screwing a cap on a water bottle. These tasks were previously Herculean for a computer, yet the robot accomplished them without pre-programming, instead applying the human fail-safe approach: trial and error. Professor Abbeel told the Berkeley News, “The key is that when a robot is faced with something new, we won’t have to reprogram it. The exact same software, which encodes how the robot can learn, was used to allow the robot to learn all the different tasks we gave it.”
In some ways, this mirrors Professor Andy Clark’s concept of the extended mind, where human cognition is embedded in interaction with what’s around us. The Edinburgh University professor used the example of a child doing arithmetic with the help of their fingers, which is in effect part of the cognitive process. Cognition is thus not bound by three pounds of brain tissue; rather, it flows out into the environment. In Professor Abbeel’s project, BRETT likewise interacts with its environment, learning a range of tasks by trial and error using a single artificial neural network.
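The BRETT result itself relies on deep reinforcement learning; the snippet below is only a toy illustration of the trial-and-error idea, not the Berkeley software. The tiny two-output “network,” the invented reward function, and the hill-climbing update are assumptions chosen to keep the sketch short, but the shape of the idea is the same: one learning routine is reused, unchanged, across different tasks.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward(weights, target):
    # A stand-in task: drive a 2-D "hand position" toward a target point.
    position = np.tanh(weights @ np.ones(4))   # a tiny one-layer "network"
    return -np.sum((position - target) ** 2)   # higher is better

def learn_by_trial_and_error(target, trials=500, noise=0.1):
    # The same learning code serves every task; only the target changes.
    weights = rng.normal(size=(2, 4))
    best = reward(weights, target)
    for _ in range(trials):
        candidate = weights + noise * rng.normal(size=weights.shape)  # a trial
        score = reward(candidate, target)
        if score > best:                       # keep what worked, drop the error
            weights, best = candidate, score
    return best

# The identical routine learns different tasks without being reprogrammed.
for task in (np.array([0.5, -0.3]), np.array([-0.8, 0.2])):
    print(f"target {task}: final reward {learn_by_trial_and_error(task):.4f}")
```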
Another example is driverless cars – it might take several months to build a car that drives itself, but it will probably take years, if not decades, to perfect the autonomous tech, because it’s impossible to exhaust every possible scenario in traditional programming. A more efficient way is to teach the car to drive by giving it a huge number of examples and letting the machine generalize patterns, rather than relying on an “if-then” model that fails in the face of infinite scenarios.
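A toy sketch of that contrast, purely for illustration: the “logged examples” below are synthesized from a simple time-to-collision threshold so the code is self-contained, and the nearest-neighbor vote stands in for the far more powerful models a real car would use. The point is only that the learned function covers situations the hand-written rule never anticipated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical logged examples: [gap to the car ahead (m), closing speed (m/s)].
# Labels are synthesized from a time-to-collision threshold (< 3 s => brake)
# so the sketch is self-contained; real examples would come from driving logs.
X = rng.uniform([5.0, 0.0], [100.0, 30.0], size=(500, 2))
y = (X[:, 0] / np.maximum(X[:, 1], 0.1) < 3.0).astype(int)

def rule_based_brake(gap, closing_speed):
    # A brittle hand-written rule: it covers only the cases its author imagined.
    return gap < 20 and closing_speed > 10

def learned_brake(gap, closing_speed, k=15):
    # Generalize from examples: vote among the k most similar logged situations.
    scaled = (X - [gap, closing_speed]) / [100.0, 30.0]
    nearest = np.argsort(np.linalg.norm(scaled, axis=1))[:k]
    return y[nearest].mean() > 0.5

# A long gap but a very high closing speed: roughly two seconds to impact.
print(rule_based_brake(60, 28))   # False -- the rule never anticipated this case
print(learned_brake(60, 28))      # True  -- the pattern generalizes from the data
```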
A third example is natural language processing (NLP), which enables computers to use language as well as humans do. NLP is extremely difficult for two reasons: First, language comprehension theoretically requires reasoning ability and extensive knowledge. Second, everything a computer does must be expressed as a mathematical model. The key problem is how to represent the knowledge of a language in a way that allows the program to reason with it and apply it in other areas.
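One very simple way to see what “expressing language as a mathematical model” means is sketched below: each word is represented as a vector of co-occurrence counts, and the program “reasons” about relatedness by comparing those vectors. The four-sentence corpus and the one-word window are assumptions made to keep the sketch tiny; real NLP systems learn far richer representations from vastly larger corpora.

```python
from collections import Counter, defaultdict
import math

corpus = [
    "the driver stops the car at the red light",
    "the car waits at the traffic light",
    "the robot stacks the toy blocks",
    "the robot assembles the toy plane",
]

# Count which words appear next to each other (a window of one word either side).
cooc = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in (i - 1, i + 1):
            if 0 <= j < len(words):
                cooc[w][words[j]] += 1

def cosine(a, b):
    # Similarity between two words' co-occurrence vectors.
    shared = set(cooc[a]) & set(cooc[b])
    dot = sum(cooc[a][w] * cooc[b][w] for w in shared)
    norm = math.sqrt(sum(v * v for v in cooc[a].values())) * \
           math.sqrt(sum(v * v for v in cooc[b].values()))
    return dot / norm if norm else 0.0

# Words used in similar contexts end up mathematically close, others do not.
print(round(cosine("car", "robot"), 2), round(cosine("car", "blocks"), 2))
```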
In each case, the AI is designed to excel at a particular task through deep learning and interaction with its environment, with the goal of improving our lives.