Anthropic's Botched Attempt to Remove Leaked Source Code Sparks GitHub Controversy
Anthropic accidentally took down thousands of GitHub repositories while trying to remove leaked source code for its Claude Code application, sparking controversy and highlighting the company's challenges with execution and compliance.
Anthropic, the company behind the popular Claude AI models, recently found itself in a predicament after accidentally publishing source code for its Claude Code command-line application. AI enthusiasts quickly picked up the leaked code, pored over it for insights into how Anthropic uses large language models (LLMs), and shared it on GitHub. To contain the leak, Anthropic issued a takedown notice to GitHub under the U.S. Digital Millennium Copyright Act (DMCA), requesting that the platform remove repositories containing the offending code. The notice, however, was overly broad: it targeted approximately 8,100 repositories, including legitimate forks of Anthropic's own publicly released Claude Code repository. The move sparked outrage among social media users whose code was inadvertently blocked.
Anthropic's head of Claude Code, Boris Cherny, acknowledged that the takedown was an accident, and the company retracted the majority of the notices, narrowing its request to a single repository and the 96 forks that actually contained the accidentally released source code. "The repo named in the notice was part of a fork network connected to our own public Claude Code repo, so the takedown reached more repositories than intended," an Anthropic spokesperson explained. GitHub has since restored access to the affected forks. The incident raises concerns about Anthropic's ability to manage sensitive information and execute its plans effectively, particularly as the company reportedly prepares for an initial public offering (IPO). The botched cleanup may also have legal implications, as shareholders could take issue with the leak of proprietary source code.
The mishap is another black eye for Anthropic, which appears to be struggling with attention to detail and compliance. As it moves toward an IPO, the company will need to demonstrate far greater competence in managing its intellectual property and navigating complex regulatory issues. The episode is also a reminder of how difficult it is to balance openness and collaboration with the protection of sensitive information, a balance Anthropic will have to strike more effectively as it continues to develop its AI models and applications.
Source: TechCrunch