MIT Researchers Discover That Deep Neural Networks Don’t See the World the Way We Do

by Klaus Müller

A recent study by neuroscientists at MIT has revealed a significant disparity between the way deep neural networks perceive the world and the way humans do. While these networks excel at recognizing a wide range of images and sounds, they frequently err by identifying nonsensical stimuli as familiar objects or words. This suggests that the models develop distinct, idiosyncratic “invariances” that differ from human perception. The study also indicates that adversarial training could offer one avenue for improving computational models of sensory perception.

In the human sensory system, the ability to recognize objects or words remains robust even when an object is presented upside down or a word is spoken by an unfamiliar voice. Computational models, specifically deep neural networks, can be trained to perform similarly, correctly identifying objects or words despite variations in an object’s color or orientation, or in the pitch of a speaker’s voice. However, the MIT study reveals that these models often respond in the same way to images or words that bear no resemblance to the intended target.

When these neural networks were tasked with generating images or words that they categorized similarly to specific natural inputs, such as a picture of a bear, the majority of the generated content was unrecognizable to human observers. This phenomenon suggests that these models develop their unique “invariances,” causing them to respond identically to stimuli with vastly different characteristics.
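The optimization described above can be sketched in miniature. The following is a toy illustration only, not the study’s method: the “model” here is a single fixed linear feature layer (the study’s models are deep, nonlinear networks), and a metamer is synthesized by gradient descent so that the model’s features match those of a reference stimulus, even though the input itself ends up far from the reference.

```python
import numpy as np

# Toy sketch of metamer synthesis, assuming a single fixed linear
# feature layer in place of a real deep network.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))     # fixed feature weights: 8 features, 16-dim input

reference = rng.standard_normal(16)  # the "natural" stimulus
target = W @ reference               # feature response to match

x = rng.standard_normal(16)          # start the metamer from noise
lr = 0.01
for _ in range(2000):
    err = W @ x - target             # feature mismatch
    x -= lr * (W.T @ err)            # gradient step on ||W x - target||^2

feature_gap = float(np.sum((W @ x - target) ** 2))
input_gap = float(np.sum((x - reference) ** 2))
print(feature_gap)  # essentially zero: the model responds to x and reference alike
print(input_gap)    # clearly nonzero: the two inputs themselves differ
```

Because the feature map discards information (8 features from a 16-dimensional input), many different inputs produce identical features; the optimized input lands on one of them. This mirrors, in the simplest possible setting, why a network’s metamers can be unrecognizable to humans.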

The implications of this research extend beyond mere observation. It provides a novel method for researchers to evaluate the degree to which these models replicate human sensory perception. Josh McDermott, an associate professor of brain and cognitive sciences at MIT and a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines, emphasizes that this test should become a standard evaluation tool for computational models.

The lead author of the study, Jenelle Feather PhD ’22, now a research fellow at the Flatiron Institute Center for Computational Neuroscience, highlights that these findings offer insight into the unique invariances developed by different neural network models. The study found that each model developed its own idiosyncratic invariances, suggesting that these patterns are model-specific.

Moreover, the researchers found that adversarial training could make the models’ generated content more recognizable to humans, although it still did not match the fidelity of the original stimuli. The exact mechanism behind this improvement remains an area for future research.

In conclusion, the study’s exploration of model metamers—stimuli that produce similar responses in neural networks—offers a valuable tool for assessing the alignment between computational models and human sensory perception. Understanding the idiosyncratic invariances developed by these models could pave the way for future improvements in the field of artificial intelligence and sensory perception modeling.

Reference: “Model metamers reveal divergent invariances between biological and artificial neural networks” by Jenelle Feather, Guillaume Leclerc, Aleksander Mądry, and Josh H. McDermott, 16 October 2023, Nature Neuroscience. DOI: 10.1038/s41593-023-01442-0.

This research was supported by the National Science Foundation, the National Institutes of Health, a Department of Energy Computational Science Graduate Fellowship, and a Friends of the McGovern Institute Fellowship.

Frequently Asked Questions (FAQs) about Neural Network Perception Discrepancy

What did the MIT study reveal about deep neural networks’ perception?

The MIT study found that deep neural networks, while proficient at recognizing various images and sounds, often incorrectly identify nonsensical stimuli as familiar objects or words. This suggests that these models develop unique “invariances” distinct from human perception.

How do deep neural networks compare to human perception in recognizing objects and words?

Deep neural networks can be trained to recognize objects or words regardless of variations in color, orientation, or pitch of the speaker’s voice, much like human perception. However, the study indicates that these models sometimes respond similarly to unrelated stimuli, unlike human perception.

What are “model metamers” in the context of this study?

Model metamers are stimuli generated by neural networks that produce the same response within the model as an example stimulus given by researchers. They are used to diagnose the model’s invariances and were a focal point in this research.

Why is adversarial training important in this study?

Adversarial training, which introduces slight alterations to images during model training, can make the models’ generated content more recognizable to humans. This technique offers insights into improving AI models but requires further research to fully understand its mechanism.
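A minimal sketch of the adversarial-training loop described above, under toy assumptions: a logistic-regression classifier on two Gaussian clusters stands in for a deep network, and FGSM-style sign perturbations stand in for the “slight alterations to images” mentioned in the study. This illustrates the training scheme only, not the study’s models.

```python
import numpy as np

# Toy adversarial training: at each step, nudge every input in the
# direction that most increases the loss (FGSM-style), then update the
# weights on those perturbed inputs.
rng = np.random.default_rng(1)
n, d, eps, lr = 200, 5, 0.1, 0.1

# Two clusters: class 0 centered at -1, class 1 centered at +1.
X = np.vstack([rng.normal(-1, 1, (n, d)), rng.normal(+1, 1, (n, d))])
y = np.concatenate([np.zeros(n), np.ones(n)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)
for _ in range(300):
    p = sigmoid(X @ w)
    grad_x = (p - y)[:, None] * w          # d(loss)/d(input), per example
    X_adv = X + eps * np.sign(grad_x)      # worst-case nudge of size eps
    p_adv = sigmoid(X_adv @ w)
    w -= lr * X_adv.T @ (p_adv - y) / len(y)  # update on perturbed batch

accuracy = float(np.mean((sigmoid(X @ w) > 0.5) == (y == 1)))
print(accuracy)  # remains high on the clean data
```

Training on the perturbed inputs forces the model to remain correct inside a small neighborhood of each example, which is one intuition for why adversarially trained models in the study produced metamers that were somewhat more recognizable to humans.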

What implications does this study have for AI research and sensory perception modeling?

The study provides a valuable tool for assessing the alignment between computational models and human sensory perception. Understanding the idiosyncratic invariances developed by these models could lead to advancements in artificial intelligence and sensory perception modeling.
