
Anthropic filed a federal lawsuit against the Department of Defense after the agency designated it a 'supply chain risk' — a label typically reserved for foreign adversaries. The designation followed Anthropic's refusal to remove safeguards preventing its AI from being used for mass surveillance and autonomous weapons. Defense Secretary Pete Hegseth stated that no contractor doing business with the military may work with Anthropic. Over 30 employees from OpenAI and Google DeepMind filed a statement supporting Anthropic's challenge.
Why it matters
If you use or evaluate Claude as a vendor, the designation creates procurement risk for any organization with government ties. Anthropic argues that most of its customers are unaffected, but the reputational fallout and the legal precedent matter. Microsoft and Google have indicated they will continue offering Claude through their cloud services; still, monitor whether your procurement team flags the designation.
What to do
If your organization holds defense contracts or follows DOD procurement guidance, flag this to your legal and compliance teams immediately. If you access Claude through Azure or GCP, confirm with Microsoft or Google that access remains unaffected. Track the case: the court's ruling on the temporary restraining order will clarify the designation's scope.