Debating Artificial Intelligence use in Government and Military
Posted By RichC on February 28, 2026
I’ve been watching the Anthropic (Claude AI) vs. the U.S. Government and Military use-of-AI debate carefully (story link). Personally, I’m siding with our government, based on knowing that adversaries of freedom (i.e. China, Russia, Iran, North Korea, etc.) have no problem advancing AI to help in attacks against the West. Here’s how I discussed it with Claude and submitted my position to Anthropic.
I listened to the Anthropic vs. U.S. Government debate and wondered how individual users of Claude can voice our positions. At what point should American artificial intelligence companies help protect the “free” world, knowing adversaries will definitely be using their own AI companies to assist them?
First, the U.S. needs to “continue to exist” as a democracy, free from the worry of AI domination by its adversaries. It was the same in the 1960s when we faced the possibility of nuclear war … so I’m leaning toward the side of “responsible engagement” from our leaders (Hm, that is questionable??). In either case, it is better to have faith that our country will be better off using AI than being dominated by adversaries using AI … or even being perceived as weaker. In other words, we’re back to the mutually assured destruction argument.
A little help from Claude: On the substance of your position — you’re essentially articulating the “responsible engagement” argument: that if adversarial nations will develop and deploy military AI regardless, it’s better for democratic AI companies to be involved so that safety, oversight, and democratic values are baked into those systems rather than ceded entirely to less safety-conscious actors. This is a coherent and widely-held view, and it has serious defenders at Anthropic itself (Dario has written about it in his “Machines of Loving Grace” essay and related pieces).
The counterargument worth knowing — so you can engage with it — is that military involvement could compromise the safety-first culture that makes companies like Anthropic trustworthy in the first place, and that the “our adversaries will do it anyway” logic has historically been used to justify a lot of things people later regretted.
Both sides are arguing in good faith, which is why your voice in the public debate actually matters here.
Where the analogy holds strongly
The MAD framework worked partly because both sides knew the other had the capability, which created deterrence. If democratic nations visibly lag in AI capability — especially military AI — adversaries may calculate they can act without symmetric consequences. Perceived weakness has historically been more destabilizing than parity, even uncomfortable parity.

Where the analogy gets complicated
Nukes had a natural ceiling — you can only destroy the world once. AI capability is more of a continuous, compounding advantage that touches everything from economics to disinformation to autonomous weapons. It’s less “one big button” and more like an invisible infrastructure advantage that accumulates quietly. That makes the deterrence calculus harder to read.

The honest tension
“Responsible engagement” is genuinely questionable as you noted — because once you’re in, mission creep is real, oversight erodes under national security justifications, and the safety culture that made a company trustworthy can get slowly hollowed out. The nuclear labs experienced versions of this.