New York: Computer-based artificial intelligence (AI) can function more like human intelligence when programmed to use a much faster technique for learning new objects, researchers say. In the journal Frontiers in Computational Neuroscience, the researchers explain how the new approach vastly improves the ability of AI software to quickly learn new visual concepts.
“Our model provides a biologically plausible way for artificial neural networks to learn new visual concepts from a small number of examples,” said researcher Maximilian Riesenhuber, professor of neuroscience at Georgetown University in the US.
“We can get computers to learn much better from few examples by leveraging prior learning in a way that we think mirrors what the brain is doing,” Riesenhuber added.
Humans can quickly and accurately learn new visual concepts from sparse data — sometimes a single example.
Even three- to four-month-old babies can easily learn to recognize zebras and distinguish them from cats, horses, and giraffes. But computers typically need to “see” many examples of the same object to know what it is, the researcher explained.
The key change, he added, was designing software to identify relationships between entire visual categories, instead of taking the more standard approach of identifying an object using only low-level and intermediate features, such as shape and color.
The researchers found that artificial neural networks, which represent objects in terms of previously learned concepts, learned new visual concepts significantly faster.
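The idea of leveraging prior learning can be sketched as a prototype classifier that operates on a fixed, previously learned embedding rather than on raw inputs: one example per class defines a prototype in the embedding space, and new inputs are assigned to the nearest prototype. This is a minimal illustration of the general technique, not the authors' model; the `embed` function, the synthetic class data, and all parameters below are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "prior learning": a fixed embedding (here a random
# projection plus tanh) standing in for features learned on earlier tasks.
dim_raw, dim_embed, n_classes = 64, 32, 3
W = rng.normal(size=(dim_raw, dim_embed))

def embed(x, W):
    """Map raw inputs into the previously learned concept space."""
    return np.tanh(x @ W)

# Synthetic class centers in raw space (stand-ins for e.g. zebra/horse/cat).
class_means = rng.normal(scale=2.0, size=(n_classes, dim_raw))

def sample(cls, n):
    """Draw n noisy examples of a given class."""
    return class_means[cls] + rng.normal(scale=0.5, size=(n, dim_raw))

# Few-shot learning: a single example per class defines its prototype
# in the embedding space.
prototypes = np.stack(
    [embed(sample(c, 1), W).mean(axis=0) for c in range(n_classes)]
)

def classify(x):
    """Assign a raw input to the class with the nearest prototype."""
    dists = np.linalg.norm(embed(x, W) - prototypes, axis=1)
    return int(np.argmin(dists))

# Evaluate on fresh samples from each class.
results = [(classify(sample(c, 1)), c)
           for c in range(n_classes) for _ in range(50)]
accuracy = float(np.mean([pred == true for pred, true in results]))
```

Because the embedding already separates the underlying concepts, one example per class is enough for the prototypes to generalize well on this toy data; training the same classifier on raw pixels would typically need far more examples.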
The brain architecture underlying human visual concept learning builds on the neural networks involved in object recognition.
The anterior temporal lobe of the brain is thought to contain “abstract” concept representations that go beyond shape. These complex neural hierarchies for visual recognition allow humans to learn new tasks and, crucially, leverage prior learning.