The Man Behind the Google Brain: Andrew Ng and the Quest for the New AI

Andrew Ng. Photo: Ariel Zambelich/Wired

There's a theory that human intelligence stems from a single algorithm.

The idea arises from experiments suggesting that the portion of your brain dedicated to processing sound from your ears could also handle sight for your eyes. This is possible only while your brain is in the earliest stages of development, but it implies that the brain is -- at its core -- a general-purpose machine that can be tuned to specific tasks.

About seven years ago, Stanford computer science professor Andrew Ng stumbled across this theory, and it changed the course of his career, reigniting a passion for artificial intelligence, or AI. "For the first time in my life," Ng says, "it made me feel like it might be possible to make some progress on a small part of the AI dream within our lifetime."

>'For the first time in my life, it made me feel like it might be possible to make some progress on a small part of the AI dream within our lifetime.'

Andrew Ng

In the early days of artificial intelligence, Ng says, the prevailing opinion was that human intelligence derived from thousands of simple agents working in concert, what MIT's Marvin Minsky called "The Society of Mind." To achieve AI, engineers believed, they would have to build and combine thousands of individual computing modules. One agent, or algorithm, would mimic language. Another would handle speech. And so on. It seemed an insurmountable feat.

When he was a kid, Andrew Ng dreamed of building machines that could think like people, but when he got to college and came face-to-face with the AI research of the day, he gave up. Later, as a professor, he would actively discourage his students from pursuing the same dream. But then he ran into the "one algorithm" hypothesis, popularized by Jeff Hawkins, an AI entrepreneur who'd dabbled in neuroscience research. And the dream returned.

It was a shift that would change much more than Ng's career. Ng now leads a new field of computer science research known as Deep Learning, which seeks to build machines that can process data in much the same way the brain does, and this movement has extended well beyond academia, into big-name corporations like Google and Apple. In tandem with other researchers at Google, Ng is building one of the most ambitious artificial-intelligence systems to date, the so-called Google Brain.

This movement seeks to meld computer science with neuroscience -- something that never quite happened in the world of artificial intelligence. "I’ve seen a surprisingly large gulf between the engineers and the scientists," Ng says. Engineers wanted to build AI systems that just worked, he says, but scientists were still struggling to understand the intricacies of the brain. For a long time, neuroscience just didn’t have the information needed to help improve the intelligent machines engineers wanted to build.

What's more, scientists often felt they "owned" the brain, so there was little collaboration with researchers in other fields, says Bruno Olshausen, a computational neuroscientist and the director of the Redwood Center for Theoretical Neuroscience at the University of California, Berkeley.

The end result is that engineers started building AI systems that didn't necessarily mimic the way the brain operated. They focused on building pseudo-smart systems that turned out to be more like a Roomba vacuum cleaner than Rosie, the robot maid from The Jetsons.

But now, thanks to Ng and others, this is starting to change. "There is a sense from many places that whoever figures out how the brain computes will come up with the next generation of computers," says Dr. Thomas Insel, the director of the National Institute of Mental Health.

What Is Deep Learning?

Deep Learning is a first step in this new direction. Basically, it involves building neural networks -- networks that mimic the behavior of the human brain. Much like the brain, these multi-layered computer networks can gather information and react to it. They can build up an understanding of what objects look or sound like.

>With Deep Learning, Ng says, you just give the system a lot of data 'so it can discover by itself what some of the concepts in the world are.'

In an effort to recreate human vision, for example, you might build a basic layer of artificial neurons that can detect simple things like the edges of a particular shape. The next layer could then piece together these edges to identify the larger shape, and then the shapes could be strung together to understand an object. The key here is that the software does all this on its own -- a big advantage over older AI models, which required engineers to massage the visual or auditory data so that it could be digested by the machine-learning algorithm.
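To make that layered idea concrete, here's a minimal sketch in Python. This is a toy illustration, not Ng's actual system: the layer sizes are made up, and the random weights stand in for what a trained network would learn from data. The point is only that each layer transforms the previous layer's output, so simple features can compose into more abstract ones.

```python
# A minimal sketch of the layered idea -- not Ng's actual system.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights):
    # One layer of artificial neurons: weighted sum + nonlinearity.
    return np.tanh(weights @ x)

# Made-up sizes: a 64-pixel image feeding three successively more
# abstract layers of "neurons". In a real system these weights are
# learned from data, not drawn at random.
w_edges   = rng.normal(size=(32, 64))   # pixels -> edge detectors
w_shapes  = rng.normal(size=(16, 32))   # edges  -> shape detectors
w_objects = rng.normal(size=(8, 16))    # shapes -> object detectors

image   = rng.normal(size=64)           # stand-in for raw pixel data
edges   = layer(image, w_edges)
shapes  = layer(edges, w_shapes)
objects = layer(shapes, w_objects)
print(objects.shape)                    # (8,) -- high-level features
```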

With Deep Learning, Ng says, you just give the system a lot of data "so it can discover by itself what some of the concepts in the world are." Last year, one of his algorithms taught itself to recognize cats after scanning millions of images on the internet. The algorithm didn't know the word "cat" -- Ng had to supply that -- but over time, it learned to identify the furry creatures we know as cats, all on its own.

This approach is inspired by how scientists believe humans learn. As babies, we watch our environments and start to understand the structure of the objects we encounter, but until a parent tells us what something is called, we can't put a name to it.
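For the curious, here's a toy version of that label-free learning in Python: a tiny autoencoder, loosely in the spirit of the cat experiment rather than the actual Google system. The data, layer sizes, and learning rate are all invented for the sketch; the point is only that the network finds structure in unlabeled input on its own.

```python
# A toy take on learning without labels -- a tiny autoencoder, not
# the actual Google system. It learns to compress and rebuild its
# inputs; whatever the hidden layer encodes is a "concept" that no
# human labeled.
import numpy as np

rng = np.random.default_rng(1)

# Made-up unlabeled data with hidden structure: 4 latent "concepts"
# mixed into 16 observed values per example.
latent = rng.normal(size=(200, 4))
mixing = rng.normal(size=(4, 16))
data = np.tanh(latent @ mixing)

w_enc = rng.normal(scale=0.1, size=(4, 16))  # inputs -> hidden code
w_dec = rng.normal(scale=0.1, size=(16, 4))  # hidden -> reconstruction
lr = 0.01

for _ in range(200):                         # plain gradient descent
    for x in data:
        h = np.tanh(w_enc @ x)               # hidden "concept" code
        err = w_dec @ h - x                  # reconstruction error
        dh = (w_dec.T @ err) * (1 - h**2)    # backprop through tanh
        w_dec -= lr * np.outer(err, h)
        w_enc -= lr * np.outer(dh, x)

recon = np.tanh(data @ w_enc.T) @ w_dec.T
print(np.mean((recon - data) ** 2))          # small: structure was found
```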

Ng's deep learning algorithms aren't yet as accurate -- or as versatile -- as the human brain. But he says this will come.

Andrew Ng's laptop explains Deep Learning. Photo: Ariel Zambelich/Wired

From Google to China to Obama

Andrew Ng is just part of a larger movement. In 2011, he launched the Deep Learning project at Google, and in recent months, the search giant has significantly expanded this effort, acquiring the artificial intelligence outfit founded by University of Toronto professor Geoffrey Hinton, widely known as the godfather of neural networks. Chinese search giant Baidu has opened its own research lab dedicated to deep learning, vowing to invest heavily in the area. And according to Ng, big tech companies like Microsoft and Qualcomm are looking to hire more computer scientists with expertise in neuroscience-inspired algorithms.

Meanwhile, engineers in Japan are building artificial neural nets to control robots. And together with scientists from the European Union and Israel, neuroscientist Henry Markram is hoping to recreate a human brain inside a supercomputer, using data from thousands of real experiments.

>'Biology is hiding secrets well. We just don’t have the right tools to grasp the complexity of what’s going on.'

Bruno Olshausen

The rub is that we still don't completely understand how the brain works, but scientists are pushing forward on this front as well. The Chinese are working on what they call the Brainnetome, described as a new atlas of the brain, and in the U.S., the Era of Big Neuroscience is unfolding with ambitious, multidisciplinary projects like President Obama’s newly announced (and much criticized) Brain Research Through Advancing Innovative Neurotechnologies Initiative -- BRAIN for short.

The BRAIN planning committee had its first meeting this past Sunday, with more meetings scheduled for this week. One of its goals is the development of novel technologies that can map the brain's myriad circuits, and there are hints that the project will also focus on artificial intelligence. Half of the $100 million in federal funding allotted to this program will come from Darpa -- more than the amount coming from the National Institutes of Health -- and the Defense Department's research arm hopes the project will “inspire new information processing architectures or new computing approaches.”

If we map out how thousands of neurons are interconnected and "how information is stored and processed in neural networks," engineers like Ng and Olshausen will have a better idea of what their artificial brains should look like. The data could ultimately feed and improve the Deep Learning algorithms underlying technologies like computer vision, language analysis, and the voice recognition tools offered on smartphones from the likes of Apple and Google.

"That’s where we’re going to start to learn about the tricks that biology uses. I think the key is that biology is hiding secrets well," says Berkeley computational neuroscientist aid Olshausen. “We just don’t have the right tools to grasp the complexity of what’s going on."

What the World Wants

With the rise of mobile devices, cracking the neural code is more important than ever. As gadgets get smaller and smaller, we'll need new ways of making them faster and more accurate. The more you shrink transistors -- the fundamental building blocks of our machines -- the more difficult it becomes to make them accurate and efficient. Making them faster, for instance, requires more current, and more current makes the system noisier -- i.e., less precise.

>'If we could figure out how biology naturally deals with noisy computing elements, it would lead to a completely different model of computation.'

Bruno Olshausen

Right now, engineers design around these issues, says Olshausen, so they skimp on speed, size, or energy efficiency to make their systems work. But AI may provide a better answer. "Instead of dodging the problem, what I think biology could tell us is just how to deal with it. ... The switches that biology is using are also inherently noisy, but biology has found a good way to adapt and live with that noise and exploit it," Olshausen says. "If we could figure out how biology naturally deals with noisy computing elements, it would lead to a completely different model of computation."

But scientists aren't just aiming for smaller. They're trying to build machines that do things computers have never done before. No matter how sophisticated algorithms are, today's machines can’t fetch your groceries or pick out a purse or a dress you might like. That requires a more advanced breed of image intelligence and an ability to store and recall pertinent information in a way that’s reminiscent of human attention and memory. If you can do that, the possibilities are almost endless.

“Everybody recognizes that if you could solve these problems, it’s going to open up a vast, vast potential of commercial value,” Olshausen predicts.

That financial promise is why tech giants like Google, IBM, Microsoft, Apple, Baidu, and others are in an arms race to develop the best machine learning technologies. NYU's Yann LeCun, an expert in the field, expects that in the next two years, we'll see a surge in Deep Learning startups, and many will be snatched up by larger outfits.

But even the best engineers aren't brain experts, so having more neuro-knowledge handy is important. "We need to really work more closely with neuroscientists," says Baidu's Kai Yu, who is toying with the idea of hiring one. "We are already doing that, but we need to do more."

Ng's dream is on its way to reality. "It gives me hope -- no, more than hope -- that we might be able to do this," he says. "We clearly don’t have the right algorithms yet. It’s going to take decades. This is not going to be an easy one, but I think there’s hope."