
Within hours of the Pentagon's Anthropic ban, OpenAI struck a deal to replace Claude in classified military systems. The speed of the move drew fierce backlash: The Verge reported that OpenAI "caved to the Pentagon on AI surveillance," and CEO Sam Altman publicly admitted the deal "looked opportunistic and sloppy." Altman subsequently announced additional usage limits, but the damage to OpenAI's brand was done. OpenAI's head of robotics, Caitlin Kalinowski, resigned over the deal.
Why it matters
This is a character-revealing moment for the AI industry's two largest players. OpenAI chose revenue over principles; Anthropic chose principles over revenue. For CIOs, the strategic question is whether you want your AI stack built on a vendor that will bend to government pressure or one that won't. Both positions carry real business implications.
What to do
Factor vendor ethics and government positioning into your AI vendor evaluation criteria. If you're an OpenAI customer, monitor whether the "added limits" Altman mentioned are meaningful or cosmetic. If you're evaluating both vendors, treat this episode as a data point on long-term reliability: how each behaves when commercial pressure and stated principles collide.