Information overload is a fact of modern life, creating a tyranny of choice that often results in analysis paralysis. Too many options make it difficult to understand any one option deeply enough to make an informed decision. That’s a problem, because the whole point of Ubiquitous Computing – a term coined at Xerox PARC in the late 1980s to capture a dream of the time – was to make our lives and businesses better and more efficient.
Thankfully, amid this flood of data lies the means to manage it. Our computing infrastructure has reached a level of data and connectivity where computers can recognize patterns and prioritize options, making it easier for humans to make informed decisions.
One of the capabilities of human intelligence is recognizing patterns of varying complexity: from the primal shapes of edible plants, prey, and predators to the more abstract symbols used in mathematics, language, and philosophy. We’re able to perceive these patterns through senses that detect light, sound, and other stimuli. Our brains intuitively recognize recurring shapes, events, and the outcomes of our encounters – we give these things labels so we can communicate our accumulated knowledge to others.
Today, computer algorithms are emulating that capability in a set of technologies broadly called Machine Learning (ML). In ML, which is related to big data and data mining, large data sets are labeled by humans with the names of the objects within them: people, places, things, events, patterns, and the sequences of those objects. This labeled data set, called the ground truth, is then fed to a set of algorithms that analyze it for statistical correlations, probabilities, patterns, and other features that signal the presence of the labeled objects. We call this the training phase; it isn’t unlike parents and teachers interacting with children for thousands of hours, showing them examples and practicing skills.
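As a minimal sketch of this training phase (not any particular library or the author’s method), consider a nearest-centroid classifier: humans supply a small labeled ground truth, training summarizes each label statistically, and the model then labels new inputs. The feature names and numbers below are hypothetical.

```python
# Minimal sketch of supervised learning: a nearest-centroid classifier.
# The labeled examples below are hypothetical "ground truth" supplied by humans.

def train(ground_truth):
    """Training phase: summarize each label's examples as a centroid."""
    sums, counts = {}, {}
    for features, label in ground_truth:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            s[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in s] for label, s in sums.items()}

def predict(model, features):
    """Label a new example by the closest learned centroid."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: dist(model[label]))

# Hypothetical ground truth: (height_cm, leaf_width_cm) -> plant label
ground_truth = [
    ([30.0, 2.0], "herb"), ([35.0, 2.5], "herb"),
    ([300.0, 10.0], "tree"), ([280.0, 12.0], "tree"),
]
model = train(ground_truth)
print(predict(model, [32.0, 2.2]))  # -> herb
```

Real ML systems extract far richer statistical features, but the shape is the same: labeled data in, a reusable model out.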
In fact, a currently popular form of ML uses a technique called a deep neural network (DNN) that operates in a way loosely analogous to how neurons in a human brain operate: signals fire across a web of nodes, and recurring paths are reinforced. Like a human brain, the DNN associates these reinforced paths with words or symbols, and they represent knowledge that’s triggered when the network encounters similar patterns in the future.
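The “reinforced paths” idea can be sketched with a single artificial neuron, the building block a DNN stacks into many layers. This is an illustrative toy, not a real deep network: connection weights strengthen as training repeatedly corrects the neuron toward the right answers, here on a simple logical-OR task.

```python
import math
import random

def sigmoid(z):
    """Squash a signal into a firing strength between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

def train_neuron(examples, epochs=2000, lr=0.5):
    """Reinforce connection weights that reduce the error on each example."""
    random.seed(0)
    weights = [random.uniform(-0.1, 0.1) for _ in range(len(examples[0][0]))]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            out = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
            error = target - out
            # Strengthen (or weaken) each connection in proportion to
            # how much it contributed to the error.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Hypothetical task: fire (1) when either input is active (logical OR).
examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
weights, bias = train_neuron(examples)
out = sigmoid(weights[0] * 1 + weights[1] * 1 + bias)
print(out > 0.5)  # the neuron fires for input [1, 1]
```

A real DNN chains thousands or millions of such units through many layers, but the learning mechanism – nudging weights so useful paths are reinforced – is the same in spirit.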
ML has created a breakthrough in artificial intelligence because it emulates the way humans learn and simplifies how we build an AI system. In traditional AI systems, knowledge had to be programmed into a logic framework that the computer could execute. Such knowledge would come from interviewing experts in a domain (for example, doctors, lawyers, and accountants), and then programming that knowledge into an expert system. Unfortunately, the programming was labor-intensive and never-ending, and experts sometimes cannot articulate what they know, or are not even aware of their own knowledge.
In the end, expert systems can be flat wrong at worst and incomplete at best. In ML systems, the statistical features of the data may be barely recognizable to humans, yet easily discernible to a computer. Now, attributes of knowledge that humans don’t realize they know, and that an expert system programmer would have overlooked, are detected and incorporated into the machine’s model of knowledge. Old-school AI hasn’t gone away, though, and the best AI systems today use a hybrid of programmed expert-system knowledge, machine learning, and search.
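A hybrid system of the kind described above can be sketched in a few lines: hand-written expert rules answer first, and a learned model handles the cases the rules don’t cover. Everything here is illustrative – the rule threshold, the feature names, and the stand-in “learned” scoring function are all hypothetical.

```python
# Minimal sketch of a hybrid AI system: explicit expert rules backed
# by a learned model. All names and thresholds are illustrative.

def expert_rules(features):
    """Programmed knowledge: explicit, human-authored conditions."""
    if features.get("temperature_c", 0) > 40:
        return "fever"
    return None  # the rules don't cover this case

def learned_model(features):
    """Stand-in for an ML model trained on labeled ground truth."""
    score = 0.8 * features.get("cough", 0) + 0.6 * features.get("fatigue", 0)
    return "flu" if score > 1.0 else "healthy"

def diagnose(features):
    # Try the expert rules first; fall back to the learned model.
    return expert_rules(features) or learned_model(features)

print(diagnose({"temperature_c": 41}))       # rule fires -> fever
print(diagnose({"cough": 1, "fatigue": 1}))  # model fires -> flu
```

The design choice is the point: rules encode knowledge experts can articulate, while the learned component picks up the patterns they cannot.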
Machines are also learning how to see and hear better than ever using ML techniques. In old-school AI, the input data had to be hand-coded and entered by humans. The humans acted as sensors, encoding the state of the world and inputting it into the machine logic. Today, through computer vision and audio intelligence technologies, machines can see and hear nearly as well as, and sometimes better than, humans. Like humans, machines can now make sense of their environment, allowing them to improve and learn for themselves.
The secret to the success of ML is that the amount of labeled data has exploded. Online services offer many ways for people to upload their data, tagging a few items as they do so, and then the service can make sense of future data. As we increasingly make speech queries to search engines, tag photos of our family and friends on social networks, upload our finances to online accounting services, and conduct our lives online, the machines can see more of what’s important to people. At the same time, networking and cloud-based data centers make it possible to create clusters of machines with enough power to churn through these enormous data sets.
Artificial intelligence has been coming for decades, but there’s a difference now. Until recently, the technology had to be contained in a single machine, but now the data and algorithms are distributed across the network. This provides richer knowledge bases and also puts AI into more hands. As data, networking, and cloud processing proliferate, the benefits of machine learning technologies will reach into more parts of our lives, propelling us into an age of Augmented Innovation.
At one time, we cursed our computers for being too dumb to detect the most obvious things in our lives, but I hear the phrase “stupid computer” much less these days. Now, we’re delighted by how clever they can be and are almost surprised when they fail to understand something about us.
These and other as-yet-unpredicted applications of machine intelligence will change how we live and work. The combination of machine and human intelligence will open up new categories of Augmented Innovation.