Self-Taught AI May Have a Lot in Common With the Human Brain

For a decade now, many of the most impressive artificial intelligence systems have been taught using a huge inventory of labeled data. An image might be labeled “tabby cat” or “tiger cat,” for example, to “train” an artificial neural network to correctly distinguish a tabby from a tiger. The strategy has been both spectacularly successful and woefully deficient.

Such “supervised” training requires data laboriously labeled by humans, and the neural networks often take shortcuts, learning to associate the labels with minimal and sometimes superficial information. For example, a neural network might use the presence of grass to recognize a photo of a cow, because cows are typically photographed in fields.

“We are raising a generation of algorithms that are like undergrads [who] didn’t come to class the whole semester and then the night before the final, they’re cramming,” said Alexei Efros, a computer scientist at the University of California, Berkeley. “They don’t really learn the material, but they do well on the test.”

For researchers interested in the intersection of animal and machine intelligence, moreover, this “supervised learning” might be limited in what it can reveal about biological brains. Animals—including humans—don’t use labeled data sets to learn. For the most part, they explore the environment on their own, and in doing so, they gain a rich and robust understanding of the world.

Now some computational neuroscientists have begun to explore neural networks that have been trained with little or no human-labeled data. These “self-supervised learning” algorithms have proved enormously successful at modeling human language and, more recently, image recognition. In recent work, computational models of the mammalian visual and auditory systems built with self-supervised learning have shown a closer correspondence to brain function than their supervised-learning counterparts. To some neuroscientists, it seems as if the artificial networks are beginning to reveal some of the actual methods our brains use to learn.

Flawed Supervision

Brain models inspired by artificial neural networks came of age about 10 years ago, around the same time that a neural network named AlexNet revolutionized the task of classifying unknown images. That network, like all neural networks, was made of layers of artificial neurons, computational units that form connections to one another; each connection can vary in strength, or “weight.” If a neural network fails to classify an image correctly, the learning algorithm updates the weights of the connections between the neurons to make that misclassification less likely in the next round of training. The algorithm repeats this process many times with all the training images, tweaking weights, until the network’s error rate is acceptably low.
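That error-driven weight-update loop can be reduced to its simplest form: a single artificial neuron trained with the classic perceptron rule. The toy data, learning rate, and function names below are illustrative, not from any system described in the article, but the logic is the same: on each mistake, nudge the weights so that mistake becomes less likely next time.

```python
# A minimal sketch of supervised training: one artificial neuron
# learns to separate two labeled classes by adjusting its weights
# whenever it misclassifies an example (the perceptron rule).

def predict(w, b, x):
    # Fire (output 1) if the weighted sum of inputs exceeds 0.
    return 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0

def train(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input feature
    b = 0.0         # bias term
    for _ in range(epochs):
        for x, label in examples:
            error = label - predict(w, b, x)
            if error != 0:
                # On a mistake, shift weights toward the correct answer.
                w[0] += lr * error * x[0]
                w[1] += lr * error * x[1]
                b += lr * error
    return w, b

# Toy labeled data: points near (0, 0) are class 0, near (1, 1) class 1.
data = [((0.0, 0.0), 0), ((0.2, 0.1), 0),
        ((1.0, 1.0), 1), ((0.9, 0.8), 1)]
```

Repeating the loop over the whole data set many times is exactly the "tweaking weights until the error rate is acceptably low" described above, just at a scale of two weights rather than millions.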


Around the same time, neuroscientists developed the first computational models of the primate visual system, using neural networks like AlexNet and its successors. The union looked promising: When monkeys and artificial neural nets were shown the same images, for example, the activity of the real neurons and the artificial neurons showed an intriguing correspondence. Artificial models of hearing and odor detection followed.

But as the field progressed, researchers realized the limitations of supervised training. For instance, in 2017, Leon Gatys, a computer scientist then at the University of Tübingen in Germany, and his colleagues took an image of a Ford Model T, then overlaid a leopard skin pattern across the photo, generating a bizarre but easily recognizable image. A leading artificial neural network correctly classified the original image as a Model T, but considered the modified image a leopard. It had fixated on the texture and had no understanding of the shape of a car (or a leopard, for that matter).

Self-supervised learning strategies are designed to avoid such problems. In this approach, humans don’t label the data. Rather, “the labels come from the data itself,” said Friedemann Zenke, a computational neuroscientist at the Friedrich Miescher Institute for Biomedical Research in Basel, Switzerland. Self-supervised algorithms essentially create gaps in the data and ask the neural network to fill in the blanks. In a so-called large language model, for instance, the training algorithm will show the neural network the first few words of a sentence and ask it to predict the next word. When trained with a massive corpus of text gleaned from the internet, the model appears to learn the syntactic structure of the language, demonstrating impressive linguistic ability—all without external labels or supervision.
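The "labels come from the data itself" idea can be sketched with the simplest possible next-word predictor: a bigram counter. Every adjacent word pair in the corpus is a free (input, label) training example, so no human annotation is needed. This is only an illustration of the self-supervised objective; real large language models use deep neural networks rather than counts, and the corpus here is a made-up toy.

```python
# Hedged sketch: self-supervised next-word prediction via bigram counts.
# Each word in the text acts as the training label for the word before it.
from collections import Counter, defaultdict

def fit_bigrams(text):
    words = text.split()
    followers = defaultdict(Counter)
    # Every adjacent pair (w, next_w) is a self-generated training example.
    for w, next_w in zip(words, words[1:]):
        followers[w][next_w] += 1
    return followers

def predict_next(followers, word):
    # "Fill in the blank": return the most frequent continuation
    # observed during training, or None for an unseen word.
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept on the mat"
model = fit_bigrams(corpus)
```

Asked what follows “on,” the model answers “the,” because that is the continuation its own training text supplied most often; no one ever labeled the data by hand.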
