
AI Safety · Anthropic · Cybersecurity · Regulatory Risk
Anthropic has unveiled "Project Glasswing," granting a select group of 40 companies, including Amazon and Google, access to its "Claude Mythos" AI model after warning the model could enable catastrophic hacks and terror attacks if widely released, a move that has sparked debate over the company's motives.
Anthropic executives warned that "Mythos" is capable of exploiting critical infrastructure such as electric grids and hospitals, and said the model has already identified thousands of high-severity vulnerabilities across major operating systems and web browsers. CEO Dario Amodei said the corporate-only rollout is intended to let major tech players such as Apple, Nvidia, CrowdStrike, and JPMorgan Chase proactively find and fix security flaws.
Critics, however, including AI safety researcher Roman Yampolskiy and policy group chairman Perry Metzger, question Anthropic's intentions, suggesting the public announcement and restricted access amount to "regulatory capture" or, as an anonymous tech insider speculated, a deflection from compute capacity limitations. Anthropic maintains that the initiative strengthens US cyber defenses against adversaries such as Iran, China, and Russia, and notes it has donated $4 million to open-source maintainers, including the Linux Foundation.