
Buckle up for a thrilling tale from the world of AI innovation. Researchers at Stanford and the University of Washington have stirred the pot by creating an AI "reasoning" model, dubbed s1, for under $50 in cloud compute credits. Yes, you heard that right: fifty dollars!
This new model, s1, holds its own against big names like OpenAI's o1 and DeepSeek's R1 on math and coding benchmarks. The secret sauce? A process called distillation, in which the team fine-tuned an off-the-shelf model on answers and reasoning traces drawn from Google's Gemini 2.0 Flash Thinking Experimental.
The s1 model, available on GitHub, raises eyebrows and questions about the future of AI. If a few researchers can replicate a multi-million-dollar model on a shoestring budget, what does that mean for the big players? OpenAI, for one, isn't thrilled, accusing DeepSeek of data shenanigans.
Want to hear more? Join Mal on the Property AI Report Podcast each week!
Access from your preferred podcast provider by clicking here
Despite the drama, s1's creation is a testament to how accessible innovation has become. Using supervised fine-tuning (SFT), the researchers distilled the model from a curated set of just 1,000 questions paired with answers. Training took under 30 minutes on 16 Nvidia H100 GPUs, with impressive results.
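To see what distillation means in miniature: a "student" model is trained purely on the outputs of a "teacher", so the student inherits the teacher's behaviour without anyone hand-labelling data. The toy sketch below illustrates the idea with a simple linear model; it is an assumption-laden analogy, not the actual s1 training pipeline (which fine-tunes a full language model on Gemini-generated reasoning traces).

```python
# Toy illustration of distillation: a "student" learns to mimic a
# "teacher" using only the teacher's outputs as supervision.
# This is a conceptual sketch, not the s1 codebase.

def teacher(x):
    # Stand-in for a capable model (in s1's case, Gemini generating
    # answers and reasoning traces).
    return 3.0 * x + 1.0

# Build a small "curated" training set from the teacher's outputs,
# analogous to s1's 1,000 distilled question/answer pairs.
data = [(x, teacher(x)) for x in [i / 10 for i in range(-20, 21)]]

# Student: a linear model w*x + b, fit by plain gradient descent on
# the teacher-labelled data -- supervised fine-tuning in miniature.
w, b, lr = 0.0, 0.0, 0.05
for _ in range(500):
    gw = gb = 0.0
    for x, y in data:
        err = (w * x + b) - y
        gw += err * x
        gb += err
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

# After training, the student closely reproduces the teacher's
# parameters (w ≈ 3, b ≈ 1) without ever seeing them directly.
print(round(w, 2), round(b, 2))
```

The point of the analogy: the student never inspects the teacher's internals; it only fits the teacher's input/output behaviour, which is exactly why distillation is cheap relative to training a frontier model from scratch.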
While distillation is a cost-effective way to replicate existing models, it doesn't yet create groundbreaking new ones. As giants like Meta, Google, and Microsoft prepare to invest billions in AI, the question remains: can small-scale innovation keep pace?
Stay tuned, as the world of AI continues to evolve, offering both challenges and opportunities for all!

Made with TRUST_AI - see the Charter: https://www.modelprop.co.uk/trust-ai