When it comes to wacky experiments, Google’s X Lab is the place to go. Scientists with the lab have recently managed to simulate a human brain using 16,000 computers. What did they ask the 16,000 computers to do? Well, for starters, the simulated human brain had to identify a cat.
It doesn’t sound like the first computing job you’d give a supercomputer. But the human brain simulated by Google had to browse YouTube’s huge collection of videos and select those that contained cats. In other words, the neural network running on 16,000 computers had to identify a cat without ever being told what a cat is.
Dr. Jeff Dean led the team of scientists that created the neural network. The aim was to connect the human brain simulation to the Internet and test it. Identifying a cat doesn’t seem that difficult; after all, millions of humans do it every day. But for Google’s human brain simulator, the task was not that easy to start with. Even so, the scientists say the results are far better than they expected.
“We never told it during the training, ‘This is a cat.’ It basically invented the concept of a cat,” said Dr. Dean. “Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not,” the researchers explained.
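To get a feel for how a network can learn features from unlabeled data, here is a minimal toy sketch of the general idea: a tiny autoencoder that learns to compress and reconstruct inputs with no labels at all. This is not Google’s actual system (which used a far larger, deeper network trained on millions of YouTube frames); every size, hyperparameter, and the synthetic data below are illustrative assumptions.

```python
import numpy as np

# Toy sketch of unsupervised feature learning: a one-hidden-layer
# autoencoder trained on unlabeled data. All sizes and data here are
# illustrative assumptions, not the architecture from the article.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Unlabeled "images": 200 flattened 8x8 patches with hidden low-rank
# structure, so there are actual features for the network to discover.
Z = rng.random((200, 8))
X = sigmoid(Z @ rng.normal(0, 1, (8, 64)))

n_hidden = 16
W1 = rng.normal(0, 0.1, (64, n_hidden))   # encoder weights
W2 = rng.normal(0, 0.1, (n_hidden, 64))   # decoder weights
lr = 0.5

def reconstruction_error(X, W1, W2):
    H = sigmoid(X @ W1)        # hidden features
    X_hat = sigmoid(H @ W2)    # reconstruction
    return np.mean((X_hat - X) ** 2)

initial_error = reconstruction_error(X, W1, W2)

for _ in range(200):
    H = sigmoid(X @ W1)
    X_hat = sigmoid(H @ W2)
    # Backpropagate the mean squared reconstruction error.
    d_out = (X_hat - X) * X_hat * (1 - X_hat)
    d_hid = (d_out @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ d_out / len(X)
    W1 -= lr * X.T @ d_hid / len(X)

final_error = reconstruction_error(X, W1, W2)
# The network improved at reconstructing the data without ever seeing
# a label -- the hidden units now encode learned features.
print(final_error < initial_error)
```

The point of the sketch is only that the training signal comes from the data itself (reconstruct your input), not from any human-supplied label; Google’s network scaled this style of learning up until a unit emerged that responded to cat faces.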
Moreover, Google’s scientists said their human brain simulation was found to be “sensitive to other high-level concepts” too. Besides identifying cat faces, their brainy computers were able to identify human bodies and body parts. “Starting with these learned features, we trained our network to obtain 15.8% accuracy in recognizing 22,000 object categories from ImageNet, a leap of 70% relative improvement over the previous state-of-the-art.”
Google’s experiment with the human brain simulation was described in a research paper published earlier this year. The scientific community has received the results with enthusiasm. David A. Bader of the Georgia Tech College of Computing said the research “pushes the envelope on the size and scale of neural networks by an order of magnitude over previous efforts.”