Poolside AI Unveils Laguna XS.2 and M.1: Breakthrough Agentic Coding Models
Poolside AI releases Laguna M.1 and Laguna XS.2, two agentic coding models achieving 72.5% and 68.2% on SWE-bench Verified, respectively.

Poolside AI has introduced the first two models in its Laguna family: Laguna M.1 and Laguna XS.2. Alongside the models, the company released pool, a lightweight terminal-based coding agent, and a dual Agent Client Protocol (ACP) client-server, marking a significant milestone in the development of agentic coding capabilities. Both Laguna M.1 and Laguna XS.2 are Mixture-of-Experts (MoE) models, which optimize performance by activating only a subset of specialized sub-networks, or "experts," for each token.

Laguna M.1 is a 225B-total-parameter MoE model with 23B activated parameters. Trained from scratch on 30T tokens using 6,144 interconnected NVIDIA Hopper GPUs, it serves as the foundation for the entire Laguna family. Its benchmark results include 72.5% on SWE-bench Verified, 67.3% on SWE-bench Multilingual, 46.9% on SWE-bench Pro, and 40.7% on Terminal-Bench 2.0.

Laguna XS.2, the second-generation MoE and Poolside's first open-weight model, has 33B total parameters with 3B activated per token. Designed for agentic coding and long-horizon work on a local machine, it achieves 68.2% on SWE-bench Verified, 62.4% on SWE-bench Multilingual, 44.5% on SWE-bench Pro, and 30.1% on Terminal-Bench 2.0. The model uses sigmoid gating with per-layer rotary scales, enabling a mixed Sliding Window Attention (SWA) and global attention layout.

The development of Laguna M.1 and XS.2 involved significant investment in three key areas: AutoMixer, which optimizes the training data mix automatically; a distributed implementation of the Muon optimizer; and Async On-Policy Agent RL, a fully asynchronous online RL system. Combined with Poolside's diversity-preserving data curation, these advances yield models with strong performance and efficiency.
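The parameter-count gap between "total" and "activated" comes from MoE routing: a gate scores every expert per token, but only the top few experts actually run. A minimal sketch of sigmoid-gated top-k routing follows; the shapes, top-k value, and weight names are illustrative assumptions, not Poolside's actual implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def moe_route(token, expert_weights, router_weights, top_k=2):
    """Sigmoid-gated MoE routing sketch (illustrative only).

    Each token is scored against every expert with an independent
    sigmoid gate (rather than a softmax over experts), and only the
    top_k experts are activated -- this is how an MoE model keeps
    activated parameters far below total parameters.
    """
    scores = sigmoid(token @ router_weights)       # (num_experts,)
    chosen = np.argsort(scores)[-top_k:]           # indices of the top_k experts
    gates = scores[chosen] / scores[chosen].sum()  # normalize the selected gates
    # Combine only the chosen experts' outputs, weighted by their gates.
    out = sum(g * (token @ expert_weights[e]) for g, e in zip(gates, chosen))
    return out, chosen

rng = np.random.default_rng(0)
d, num_experts = 8, 16
token = rng.standard_normal(d)
router_w = rng.standard_normal((d, num_experts))
expert_w = rng.standard_normal((num_experts, d, d))
out, chosen = moe_route(token, expert_w, router_w, top_k=2)
print(out.shape, sorted(chosen.tolist()))  # only 2 of 16 experts ran
```

With 16 experts and top_k=2, only an eighth of the expert parameters touch each token, mirroring (at toy scale) XS.2's 3B-activated-of-33B-total ratio.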
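The mixed SWA/global layout interleaves layers that attend only to a recent local window with layers that attend over the full context. A mask-construction sketch is below; the window size and the three-to-one layer interleaving are assumptions for illustration, not disclosed details of XS.2.

```python
import numpy as np

def attention_mask(seq_len, window=None):
    """Causal attention mask. If `window` is set, each query may only
    attend to the last `window` keys (sliding-window attention);
    otherwise it attends to the full causal prefix (global attention)."""
    q = np.arange(seq_len)[:, None]
    k = np.arange(seq_len)[None, :]
    mask = k <= q                    # causal: never attend to future tokens
    if window is not None:
        mask &= (q - k) < window     # local: only the last `window` tokens
    return mask

# Hypothetical interleaving: three SWA layers for every global layer.
layout = ["swa", "swa", "swa", "global"] * 2
masks = [attention_mask(6, window=3 if kind == "swa" else None) for kind in layout]
print(masks[0].astype(int))  # SWA layer: banded causal mask
print(masks[3].astype(int))  # global layer: full lower-triangular mask
```

The appeal of this hybrid is cost: SWA layers are linear in context length for a fixed window, so reserving full quadratic attention for a minority of layers keeps long-horizon local inference tractable.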
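The article notes a distributed implementation of the Muon optimizer but gives no internals. Muon's publicly described core is orthogonalizing the momentum matrix with a quintic Newton-Schulz iteration before applying the update; a single-device sketch under that public description follows (the distributed aspects, learning rate, and momentum constant here are assumptions).

```python
import numpy as np

def newton_schulz_orthogonalize(G, steps=5):
    """Approximately map G toward the nearest (semi-)orthogonal matrix
    using the quintic Newton-Schulz iteration from the public Muon
    write-up; coefficients are from that write-up."""
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G / (np.linalg.norm(G) + 1e-7)   # normalize so the iteration converges
    transposed = G.shape[0] > G.shape[1]
    if transposed:
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X
    return X.T if transposed else X

def muon_step(param, grad, momentum, lr=0.02, beta=0.95):
    """One sketched Muon update: accumulate momentum, orthogonalize it,
    then take a step of uniform 'shape' regardless of gradient scale."""
    momentum = beta * momentum + grad
    update = newton_schulz_orthogonalize(momentum)
    return param - lr * update, momentum

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 4))
g = rng.standard_normal((4, 4))
W2, m = muon_step(W, g, np.zeros_like(g))
print(W2.shape)
```

Distributing this at pretraining scale is the nontrivial part Poolside invested in: the orthogonalization operates on whole weight matrices, which are sharded across devices in large training runs.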
Source: MarkTechPost