Breakthrough at MIT: The Parallels Between Human Learning and AI Model Training Processes

by François Dupont

Researchers at MIT have discovered that neural networks, when trained through self-supervised learning, exhibit patterns that parallel brain activity. This enhances our comprehension of both artificial intelligence and human cognition, particularly in predicting movement and navigating spaces.

“Self-supervised learning” models, which learn from unlabeled data about their surroundings, show activity patterns comparable to those found in the mammalian brain, according to two studies from MIT.

For successful navigation and interaction with our environment, our brains must inherently understand the physical world, which in turn informs how we process incoming sensory data.

It’s hypothesized that the brain develops this intrinsic comprehension via a mechanism akin to “self-supervised learning.” This method, initially devised to improve computer vision models, enables computational models to learn about visual environments based on their inherent similarities and differences, without any provided labels or additional data.

Neural Network Research Findings

The K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT presents new research indicating that when neural networks are trained with a certain type of self-supervised learning, they produce patterns of activity that closely resemble those observed in the brains of animals engaged in identical tasks.

These findings suggest that these models can learn about the physical world sufficiently to make predictions about it, implying that the mammalian brain may employ a similar approach, according to the researchers.

Neural networks, which are used to process data and make decisions, are computational systems that emulate human brain functions. They consist of multiple interconnected nodes, or neurons, that adapt their connections through a process known as training. By sifting through extensive data, neural networks learn to identify patterns and execute a variety of complex tasks, such as image recognition and language interpretation.
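As a minimal sketch of that training loop (illustrative only, not from the studies): even a "network" of a single weighted connection can adjust its strength by gradient descent until it reproduces its examples.

```python
# A toy "network" of one weighted connection, trained by gradient descent.
# Real networks have millions of such connections, but the loop is the same.

def train(data, lr=0.1, epochs=100):
    w = 0.0  # connection strength, adjusted during training
    for _ in range(epochs):
        for x, y in data:
            pred = w * x               # forward pass
            grad = 2 * (pred - y) * x  # gradient of the squared error
            w -= lr * grad             # strengthen or weaken the connection
    return w

# Learn the mapping y = 2x from examples alone.
w = train([(1, 2), (2, 4), (3, 6)])
print(round(w, 2))  # converges to 2.0
```

The "pattern" the network identifies here is just a multiplier, but the principle scales up: connections change until predictions match the data.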

Aran Nayebi, a postdoc in the ICoN Center, suggests, “The main idea of our research is that AI, while being developed to enhance robotics, also offers a framework to gain a broader understanding of the brain.”

Nayebi is the primary author of one of the studies, alongside Rishi Rajalingham, now at Meta Reality Labs, and senior authors Mehrdad Jazayeri and Robert Yang, both of whom are associated with the McGovern Institute for Brain Research. The other study is led by Ila Fiete, the director of the ICoN Center, with the assistance of Mikail Khona and Rylan Schaeffer.

Both pieces of research are to be presented at the Neural Information Processing Systems (NeurIPS) conference in December 2023.

Developments in Computational Models and Their Significance

Early computer vision models largely depended on supervised learning, requiring vast amounts of labeled data for training. Recently, researchers have pivoted to contrastive self-supervised learning, a method that enables algorithms to classify objects by their similarities without any external labels.
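The contrastive idea can be sketched with made-up two-dimensional embeddings: the loss is small when an "anchor" input sits closer to another view of itself (the "positive") than to unrelated inputs (the "negatives"). The `info_nce` function and the vectors below are illustrative, not the researchers' code.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Contrastive (InfoNCE-style) loss: low when the anchor is more
    similar to its positive view than to any of the negatives."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[0] / sum(exps))

# Two views of the same image embed nearby; a different image embeds far away.
loss_aligned = info_nce([1.0, 0.1], [0.9, 0.2], [[-1.0, 0.3]])
loss_confused = info_nce([1.0, 0.1], [-1.0, 0.3], [[0.9, 0.2]])
print(loss_aligned < loss_confused)  # True: aligned pairs give lower loss
```

Minimizing this loss over many examples is what groups similar inputs together without any human-provided labels.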

Nayebi notes, “This powerful technique allows us to harness large modern data sets, particularly videos, and unlock their potential. Recent AI advancements, like ChatGPT and GPT-4, stem from training models with self-supervised objectives on large-scale data sets, which yields flexible representations.”

These models are neural networks made up of numerous processing units, each connected to the others with varying strengths. As the network analyzes vast amounts of data, those connection strengths change while the network learns to perform its assigned task.

The studies at NeurIPS set out to determine if self-supervised computational models of cognitive functions also bear similarities to the mammalian brain. Nayebi’s team trained models on naturalistic video data to predict future environmental states, aiming to create models that can generalize to various cognitive tasks.
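The predictive objective can be sketched in one dimension: the training "label" for each frame is simply the next frame, so no annotation is needed. The sequence and its dynamics below are invented for illustration, not the actual video data.

```python
# Self-supervision from prediction: the target for each "frame" is just
# the next frame, so the data labels itself. Hypothetical 1-D "video".

def fit_next_step(seq, lr=0.01, epochs=200):
    a = 0.0  # learned dynamics coefficient in the model x[t+1] ~ a * x[t]
    for _ in range(epochs):
        for t in range(len(seq) - 1):
            pred = a * seq[t]
            a -= lr * 2 * (pred - seq[t + 1]) * seq[t]
    return a

# A decaying signal obeying x[t+1] = 0.5 * x[t].
a = fit_next_step([8.0, 4.0, 2.0, 1.0, 0.5])
print(round(a, 3))  # recovers the true dynamics, 0.5
```

A model that can predict future states this way has, implicitly, learned something about how its environment evolves.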

One of the trained models was tested on “Mental-Pong,” a modified version of the game Pong where the ball becomes invisible before reaching the paddle, requiring players to predict its trajectory. This model accurately tracked the ball’s path, with neural activation patterns resembling those in the brains of animals playing the same game, notably within the dorsomedial frontal cortex.
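The internal model such a network must maintain can be caricatured as simple physics extrapolation; the positions, velocity, and wall bounds below are hypothetical, standing in for what the network computes implicitly.

```python
def track_hidden_ball(pos, vel, steps, top=1.0, bottom=0.0):
    """Extrapolate a Pong ball's vertical position after it goes invisible,
    bouncing off the top and bottom walls -- a stand-in for the internal
    model a Mental-Pong player must maintain."""
    y, vy = pos, vel
    for _ in range(steps):
        y += vy
        if y > top:        # bounce off the top wall
            y, vy = 2 * top - y, -vy
        elif y < bottom:   # bounce off the bottom wall
            y, vy = 2 * bottom - y, -vy
    return y

# Ball at y=0.8 moving up 0.15 per step, hidden for 3 steps:
# 0.95 -> 1.10 bounces to 0.90 -> 0.75
print(round(track_hidden_ball(0.8, 0.15, 3), 2))  # 0.75
```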

In the study focusing on grid cells in the entorhinal cortex, which are crucial for spatial navigation, the MIT team trained a contrastive self-supervised model to perform path integration tasks and efficiently represent space. This model learned to code positions based on similarities and differences, producing lattice patterns akin to those in grid cells.
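Path integration itself, stripped of the neural network, is just the accumulation of self-motion signals (heading and distance) into a position estimate. A minimal sketch:

```python
import math

def path_integrate(start, moves):
    """Path integration: update a position estimate from self-motion
    (heading in radians, distance) alone, with no external landmarks."""
    x, y = start
    for heading, dist in moves:
        x += dist * math.cos(heading)
        y += dist * math.sin(heading)
    return x, y

# Walk 3 units east, then 4 units north: end up at (3, 4).
x, y = path_integrate((0.0, 0.0), [(0.0, 3.0), (math.pi / 2, 4.0)])
print(round(x, 2), round(y, 2))  # 3.0 4.0
```

The trained model had to solve this same bookkeeping problem, and the lattice-like code it settled on is what resembles biological grid cells.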

This research, underpinned by the ICoN Center, NIH, Simons Foundation, McKnight Foundation, McGovern Institute, and Helen Hay Whitney Foundation, bridges the gap between computational efficiency and biological plausibility, potentially inching us closer to replicating natural intelligence in artificial systems.

Frequently Asked Questions (FAQs) about self-supervised learning

What is the key finding of the MIT studies on brain and AI learning?

MIT research has discovered that self-supervised learning in neural networks results in activity patterns that closely resemble those found in mammalian brains, particularly in areas such as motion prediction and spatial navigation.

How do self-supervised learning models work?

Self-supervised learning models learn from unlabeled data by identifying patterns based on similarities and differences within the data, without requiring external labels or additional information.

What implications do these MIT studies have for our understanding of AI and the brain?

The studies enhance our comprehension of artificial intelligence and brain cognition by showing that the brain may develop an intuitive understanding of the physical world in a way similar to self-supervised learning in AI.

What are grid cells and how do they relate to the MIT research findings?

Grid cells are specialized neurons that assist in spatial navigation by firing at vertices of a triangular lattice. The MIT research trained AI models using self-supervised learning that emulated grid cell-like patterns, connecting AI learning methods to brain function.
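An idealized grid cell's firing field is often modeled as a sum of three cosine gratings rotated 60 degrees apart, which peaks on the vertices of a triangular lattice. A sketch of that standard idealization (the `scale` parameter is an arbitrary grid spacing, not a value from the studies):

```python
import math

def grid_cell_rate(x, y, scale=1.0):
    """Idealized grid-cell firing rate: three cosine gratings at
    60-degree offsets sum to a field that peaks on a triangular lattice."""
    total = 0.0
    for k in range(3):
        theta = k * math.pi / 3  # grating orientations: 0, 60, 120 degrees
        kx, ky = math.cos(theta), math.sin(theta)
        total += math.cos(2 * math.pi * scale * (kx * x + ky * y))
    return total / 3.0

# Firing peaks at a lattice vertex and falls off away from it.
print(grid_cell_rate(0.0, 0.0) > grid_cell_rate(0.3, 0.2))  # True
```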

Who were the lead researchers in these MIT studies?

Aran Nayebi was the lead author of one study, with Rishi Rajalingham, while the other was led by Ila Fiete with Mikail Khona and Rylan Schaeffer; both studies had senior authors on the MIT faculty.

When and where will the studies be presented?

The studies are scheduled to be presented at the 2023 Conference on Neural Information Processing Systems (NeurIPS) in December.

What makes contrastive self-supervised learning significant according to the MIT researchers?

Contrastive self-supervised learning is significant because it enables models to learn from large datasets, particularly videos, without labeled data, leading to flexible representations that can be applied to various tasks, such as the ones developed for ChatGPT and GPT-4.

What funding sources supported the MIT research?

The research received funding from the K. Lisa Yang ICoN Center, the National Institutes of Health, the Simons Foundation, the McKnight Foundation, the McGovern Institute, and the Helen Hay Whitney Foundation.


4 comments

Emma K. November 6, 2023 - 9:41 pm

So MIT is doing these studies on self-supervised learning, that’s the same tech behind stuff like GPT-4 right? really shows we’re on to something big, but does this mean AI is getting too close to how we think.

John Miller November 6, 2023 - 11:04 pm

just read about MIT’s findings, incredible stuff! I always thought the brain worked like computers in some way. now theres proof, the patterns are the same or so they say.

Alex R. November 7, 2023 - 5:34 am

missed the part about the conference, when is NeurIPS again? Gotta mark it down, seems like this research from MIT’s gonna make some waves, esp in AI and neuroscience fields.

SarahJ November 7, 2023 - 9:40 am

love how they’re linking AI to actual brain cells like grid cells for navigation. I read somewhere that these cells are why we dont just bump into stuff all the time. technology imitating life, isn’t it.

