American AI Startup Poolside Launches Free, High-Performing Open Model Laguna XS.2 for Local Agentic Coding
Poolside, a US AI startup, has released Laguna XS.2, a free, open-source AI model for local agentic coding, and Laguna M.1, a proprietary model optimized for high-consequence enterprise and government environments.

The AI landscape has been evolving rapidly, with major players like Anthropic and OpenAI continually pushing the boundaries of what is possible. Now a new contender has emerged: Poolside, a US-based AI startup founded in San Francisco in 2023. Today, Poolside announced the launch of its Laguna large language models, including Laguna XS.2, an Apache 2.0 open-licensed model designed for local agentic coding tasks, and Laguna M.1, a proprietary model optimized for high-consequence enterprise and government environments.

The Laguna models are built for agentic workflows, in which AI systems do more than chat or generate content: they write code, use third-party tools, and take actions autonomously.
Laguna XS.2 is a 33-billion-parameter Mixture of Experts (MoE) model with 3 billion active parameters, engineered for efficiency and community innovation. The model is designed to be versatile, allowing developers to fine-tune, quantize, or serve powerful agents on a single GPU. Notably, Laguna XS.2 can be downloaded and run on a local machine without an internet connection, providing a private and secure environment.

In addition to the Laguna models, Poolside introduced a coding agent harness called 'pool' and a web-based, mobile-optimized agentic coding development and interactive preview environment called 'shimmer.' The company claims its models were trained from scratch rather than fine-tuned or post-trained on base models from other sources.
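A rough way to see why a 33-billion-parameter MoE is a plausible single-GPU target is simple memory arithmetic: weight storage scales with total parameters and numeric precision, while per-token compute scales with the 3 billion active parameters. The sketch below is illustrative back-of-the-envelope math, not a Poolside specification; the precision choices and GPU capacity are assumptions.

```python
# Back-of-the-envelope memory math for a 33B-total / 3B-active MoE model.
# The parameter counts come from the announcement; the precisions and the
# 24 GB GPU figure are illustrative assumptions.

def weight_memory_gb(num_params: float, bits_per_param: int) -> float:
    """Approximate memory (GB) needed just to hold the model weights."""
    return num_params * bits_per_param / 8 / 1e9

TOTAL_PARAMS = 33e9   # all experts combined
ACTIVE_PARAMS = 3e9   # parameters actually used per token

fp16_gb = weight_memory_gb(TOTAL_PARAMS, 16)  # full half-precision weights
int4_gb = weight_memory_gb(TOTAL_PARAMS, 4)   # 4-bit quantized weights

print(f"fp16 weights: {fp16_gb:.1f} GB")   # ~66 GB: multi-GPU territory
print(f"int4 weights: {int4_gb:.1f} GB")   # ~16.5 GB: fits a 24 GB consumer GPU
print(f"active fraction per token: {ACTIVE_PARAMS / TOTAL_PARAMS:.0%}")
```

The arithmetic shows the two levers at work: quantization shrinks the static weight footprint enough for a single consumer GPU, and the MoE routing means each token only pays compute for roughly 9% of the parameters.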
According to Poolside, the Laguna models were trained in a purpose-built digital environment called the 'Model Factory,' which relies on powerful internal software called Titan and the Muon optimizer to make learning efficient.

The Laguna models' benchmark results are impressive: Laguna M.1 scored 46.9% on the SWE-bench Pro benchmark, nearing the performance of larger models like Qwen-3.5 and DeepSeek V4-Flash. Laguna XS.2, despite its smaller size, scored 44.5% on SWE-bench Pro, surpassing models like Claude Haiku 4.5 and Gemma 4 31B. These results suggest that Poolside's focus on agentic RL and synthetic data curation lets its smaller models 'punch up' into weight classes typically reserved for far denser architectures.

By releasing Laguna XS.2 under the Apache 2.0 license, Poolside is positioning itself as a cornerstone of the open AI ecosystem. The company's leadership believes that 'the West needs strong open-weight models' and that releasing the weights is the fastest way to improve them through community evaluation and fine-tuning. With the Laguna release, Poolside is charting a path to AGI that is as much about how we build as what we build, with a focus on software engineering and a commitment to open weights and novel developer surfaces.
Source: VentureBeat