
Adaption Launches AutoScientist, Automating Fine-Tuning by Co-Optimizing Data and Models
Key Takeaways
- AutoScientist automates fine-tuning, enabling rapid self-improvement with minimal human input.
- It enables AI models to rapidly learn task-specific capabilities.
- Disciplined evaluations are needed to verify task-specific performance claims.
AutoScientist Launch
Adaption introduced AutoScientist on Wednesday, an AI product that automates conventional fine-tuning by improving training data and models together, with the goal of helping systems learn specific capabilities quickly.
“For years, AI researchers have anticipated the moment when systems could improve themselves more efficiently than humans”
TechCrunch quotes Adaption co-founder and CEO Sara Hooker saying, "What’s super exciting about it is that it co-optimizes both the data and the model, and learns the best way to basically learn any capability," as the company positions the tool as a step toward frontier AI training outside major labs.

The launch materials described AutoScientist as building on Adaption’s existing data offering, Adaptive Data, which is designed to make it easier to build high-quality datasets over time and then turn those datasets into continuously improving AI models.
TechCrunch also notes that AutoScientist is designed to be model-agnostic and that conventional benchmarks like SWE-Bench or ARC-AGI aren’t applicable because the system adapts models to specific tasks.
To encourage adoption, Adaption is making AutoScientist free for the first 30 days after its release, with TechCrunch reporting that the company is confident users will see the difference once they try it.
Benchmarks and Claims
AutoScientist’s performance claims are framed around win rates rather than standard benchmark scores, with TechCrunch stating that Adaption boasts AutoScientist has "more than doubled win rates across different models."
The product is described as using an automated approach to conventional fine-tuning that co-optimizes data and model architecture simultaneously, and Bitcoin World quotes Hooker saying, "It suggests we can finally allow for successful frontier AI trainings outside of these labs."

Bitcoin World adds that because AutoScientist adapts models to specific tasks rather than general benchmarks, conventional metrics like SWE-Bench or ARC-AGI do not directly apply.
Startup Fortune says the launch challenges "the scale-first economics of frontier AI" by arguing teams can teach existing models a specific capability faster and at lower cost instead of waiting for a bigger frontier model.
Startup Fortune also points to the TechCrunch report published on Wednesday, in which Adaption claims AutoScientist has more than doubled win rates across different models while acknowledging that familiar benchmarks such as SWE-Bench and ARC-AGI do not neatly capture what the product is trying to do.
Where It Could Go
Adaption’s pitch links AutoScientist to a broader goal of letting the "whole stack" adapt to tasks, with TechCrunch quoting Hooker saying, "Our view at Adaption is that the whole stack should be completely adaptable, and should basically optimize on the fly to whatever task you have."
Bitcoin World frames the tool as a way to allow successful frontier AI trainings outside of labs, and it describes AutoScientist as turning continuously improving datasets into continuously improving AI models.
Startup Fortune says the company introduced AutoScientist on May 13, describing it as a product designed to automate conventional fine-tuning by improving both the training data and the model together.
Zamin.uz, citing TechCrunch, reports that AutoScientist enables rapid training of AI models and their adaptation to specific tasks through an automated approach.
Across the coverage, the immediate next step for users is evaluation during the 30-day free trial window, with TechCrunch saying the tool is free for the first 30 days after its release and Startup Fortune describing the need for disciplined evaluations to verify task-specific performance claims.