Anthropic's Month of Mishaps: AI Company Leaks Source Code
Anthropic, known for its careful approach to AI, has suffered two major data leaks in a week, exposing internal files and source code for its Claude Code software.
Anthropic, the AI company that has built its public identity around being careful and responsible, is having a rough month. The company has been vocal about AI risks and employs some of the field's best researchers, yet it has now suffered two major mishaps in quick succession. Last week, Fortune reported that nearly 3,000 internal files were accidentally made public, including a draft blog post about a new model. Then on Tuesday, the company pushed out version 2.1.88 of its Claude Code software package and accidentally included a file containing nearly 2,000 source code files and more than 512,000 lines of code, exposing the full architectural blueprint for one of its most important products.

Security researcher Chaofan Shou noticed the leak and posted about it on X. Anthropic stated that the leak was caused by a release packaging issue stemming from human error, not a security breach.

Claude Code is a command-line tool that lets developers use Anthropic's AI to write and edit code, and it has become a significant player in the market. The leaked code included the software scaffolding around the AI model: the instructions that tell it how to behave, what tools to use, and where its limits are. Developers quickly published detailed analyses of the leaked code, describing it as a production-grade developer experience.

It is unclear whether the leak will have a lasting impact, but competitors may find the architecture instructive. For now, someone at Anthropic is likely wondering about their job security, especially if the same engineer or team was responsible for last week's leak.
Source: TechCrunch