Pentagon memo orders military commanders to remove Anthropic from key systems

The Pentagon has ordered military commanders to remove all Anthropic AI products from key systems within 180 days, following a March 4 supply-chain risk designation. The March 6 memo from Defense Department CIO Kirsten Davies cites concerns that adversaries could exploit vulnerabilities, posing "catastrophic risks to the warfighter." Anthropic is suing the Defense Department, claiming the designation violates its First Amendment rights and retaliates against the company's refusal to remove safeguards that prevent domestic surveillance and autonomous weapons use.

Why it matters

The Pentagon's action signals that AI vendors unwilling to grant "any lawful use" access without guardrails face exclusion from defense contracts worth billions. OpenAI announced a Department of Defense partnership days after Anthropic's designation, demonstrating how compliance flexibility creates competitive advantage in the federal AI market. Enterprise CIOs now face a precedent in which government customers may demand removal of vendor-imposed safety restrictions as a condition of doing business.

What to do

Review your AI vendor contracts for clauses limiting government use cases, and assess whether a similar supply-chain designation could disrupt your operations. If your organization works with defense contractors or pursues federal contracts, establish clear policies on AI safety guardrails versus customer-control requirements before making procurement decisions.