Researchers at MIT Suggest an AI Model that Never Stops Learning

IBL News | New York

Researchers at MIT presented a model called SEAL (Self-Adapting Language Models) that enables LLMs to generate their own synthetic training data based on the input they receive and to learn from that experience. An AI model that never stops learning moves a step closer to mimicking human intelligence.
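In broad strokes, the loop described above could look like the following Python sketch. It is a minimal illustration, not the authors' implementation: the model name, prompt wording, learning rate, and helper functions are assumptions made here for clarity.

```python
# Minimal sketch of a SEAL-style loop: the model restates an input as
# synthetic training data, then fine-tunes on it. Illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B"  # assumption: any small causal LM would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def self_edit(passage: str) -> str:
    """Ask the model to turn the input into synthetic training data."""
    prompt = f"Rewrite the following as study notes:\n{passage}\nNotes:"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(out[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)

def absorb(text: str) -> None:
    """Fold the self-generated data into the weights with a gradient step."""
    batch = tokenizer(text, return_tensors="pt")
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

passage = "SEAL lets a language model keep learning after deployment."
absorb(self_edit(passage))  # the new information now lives in the weights
```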

Currently, the latest AI models can "reason" by performing more complex inference at test time, but their underlying weights remain fixed. By contrast, the MIT scheme generates new insights and then folds them into its own weights, or parameters.

The system includes “a reinforcement learning signal that helps guide the model toward updates that improve its overall abilities and enable it to continue learning,” the MIT researchers explained to Wired.
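A toy sketch of how such a reward signal could steer learning follows. Everything in it, the helper functions, the stand-in "model," and the acceptance rule, is invented here for illustration and is not taken from the SEAL paper.

```python
# Toy reward loop: try a self-edit, keep the update only if it improves
# performance on a downstream check. All helpers below are hypothetical.
import copy
import random

def generate_self_edit(model, passage):  # hypothetical: model proposes data
    return passage + " (restated as training data)"

def finetune(model, text):               # hypothetical: returns updated weights
    updated = copy.deepcopy(model)
    updated["knowledge"].append(text)
    return updated

def evaluate(model):                     # hypothetical: downstream accuracy
    return min(1.0, 0.5 + 0.1 * len(model["knowledge"])
               + random.uniform(-0.05, 0.05))

model = {"knowledge": []}
passage = "A fact the model should retain."

baseline = evaluate(model)
candidate = finetune(model, generate_self_edit(model, passage))
reward = evaluate(candidate) - baseline  # did the self-edit actually help?

# Keeping only positive-reward updates, over many rounds, steers the model
# toward self-edits that improve its overall abilities.
if reward > 0:
    model = candidate
```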

The researchers tested their approach on small and medium-sized versions of two open-source models, Meta’s Llama and Alibaba’s Qwen. They say that the approach ought to work for much larger frontier models, too.

Researchers noted that SEAL is computationally intensive, and it isn’t yet clear how best to schedule new periods of learning.

“Still, for all its limitations, SEAL is an exciting new path for further AI research, and it may well be something that finds its way into future frontier AI models,” the MIT researchers said.