10/14/2025
MIT’s SEAL enables language models to self-adapt by generating their own finetuning data and update directives.
Highlights:
• +43% improvement in factual accuracy when the model incorporates new knowledge by finetuning on its own generated data
• Outperforms baselines finetuned on GPT-4-generated synthetic data
• 72.5% success rate on held-out abstract reasoning (ARC) tasks
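For readers who want the mechanics, below is a minimal sketch (in Python) of the self-adaptation loop summarized above: the model proposes a "self-edit" (synthetic finetuning data plus update directives such as hyperparameters), the edit is applied as a weight update, and edits that improve downstream performance are reinforced. This is an illustrative sketch, not the authors' code; every name in it (SelfEdit, propose_self_edit, apply_self_edit, evaluate_downstream, reinforce_on) is a hypothetical placeholder.

# Hedged sketch of a SEAL-style self-adaptation loop, NOT the authors' implementation.
import random
from dataclasses import dataclass
from typing import List

@dataclass
class SelfEdit:
    # A model-proposed update: synthetic finetuning text plus update directives.
    synthetic_data: List[str]
    learning_rate: float
    epochs: int

def propose_self_edit(model, context: str) -> SelfEdit:
    # Placeholder: prompt the model to emit its own training data and directives.
    return SelfEdit(
        synthetic_data=[f"Restatement/implication of: {context}"],
        learning_rate=random.choice([1e-5, 3e-5, 1e-4]),
        epochs=random.choice([1, 2, 3]),
    )

def apply_self_edit(model, edit: SelfEdit):
    # Placeholder: supervised finetune (e.g., a lightweight adapter update) on the edit.
    return model

def evaluate_downstream(model, task) -> float:
    # Placeholder: score the (possibly updated) model on held-out questions for this task.
    return random.random()

def reinforce_on(model, successful_edits):
    # Placeholder: train the policy to imitate self-edits that improved performance
    # (rejection-sampling / filtered behavior-cloning style).
    return model

def seal_outer_loop(model, tasks, samples_per_task: int = 4, rounds: int = 3):
    # Outer loop: sample self-edits, keep the ones whose applied updates beat the
    # pre-update baseline, and reinforce the model on those successes.
    for _ in range(rounds):
        kept = []
        for task in tasks:
            baseline = evaluate_downstream(model, task)
            for _ in range(samples_per_task):
                edit = propose_self_edit(model, task["context"])
                updated = apply_self_edit(model, edit)
                if evaluate_downstream(updated, task) > baseline:
                    kept.append((task, edit))  # binary reward: this edit helped
        model = reinforce_on(model, kept)
    return model

# Usage sketch (base_model and task contexts are assumptions):
# adapted = seal_outer_loop(base_model, tasks=[{"context": "New passage of facts ..."}])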
Sources: WIRED, VentureBeat, authors’ blog, and original paper
Full paper: "Self-Adapting Language Models," arXiv (2025)