In late February 2026, US Defense Secretary Pete Hegseth gave Anthropic an ultimatum: strip the safety guardrails from the Claude AI model to allow "any lawful use," or lose a $200 million contract. Add to that the threat of being branded a national "supply chain risk".

The US government argues this is not about leaving the door open for mass domestic surveillance and fully autonomous weapons, but about who gets to decide: the policymakers, not a tech company like Anthropic.

Amodei and Anthropic, on the other hand, have so far continued to refuse the US Department of War's request, stating they could not in good conscience agree to uses that undermine democratic values.

Machine learning has been used in defence for decades. But LLMs have changed the landscape drastically, especially in terms of outcomes and consequences. Let's dig in.

On February 27, 2026, Pete Hegseth, head of the Department of War, declared Anthropic a supply chain risk.

We very rarely see big tech companies take such a strong stance on humanity and ethics; at least, I have not seen one take such a notable stand on such a key topic. Mass surveillance has been second nature to most big tech companies and governments for decades.

However, here's my concern. The Department of War has seemingly struck a deal with OpenAI, with similar guardrails in place, but not with Anthropic. This may look to you like just a political story. It is not. It is a policy problem and a business one.

Most companies, not just big tech, treat AI ethics like a PR exercise or a nice-to-have compliance checklist. No matter which side you are on, if a leading AI company like Anthropic can look the Pentagon in the eye and say no to $200 million over ethical boundaries, your enterprise has zero excuse for flying blind.

Why AI Ethics Actually Matters

The supply chain risk designation applied to Anthropic is historically singular. This label has never before been applied to an American company. It is a penalty reserved for foreign adversaries such as Huawei. Hegseth applied it to a San Francisco AI company for refusing to remove its ethical guardrails and permit any and all use cases, including mass surveillance and autonomous weapons.

Trump followed with an order for all federal agencies to immediately stop using Anthropic's technology, with a six-month wind-down window for the Pentagon. Anthropic responded by vowing to challenge the designation in court, calling it "legally unsound" and "retaliatory and punitive." Legal experts have since noted that a post on X does not create a lawful supply chain designation.

And then something happened that I did not expect, and it is why AI ethics truly matters: hundreds of tech workers from OpenAI, IBM, Slack and Salesforce Ventures signed an open letter calling the designation a dangerous precedent for the entire industry. Workers from the very company that took Anthropic's contract, i.e. OpenAI, publicly defended Anthropic's right to refuse it.

Hours after Anthropic was blacklisted, OpenAI announced it had secured the Pentagon contract. Their deal carries three stated red lines: no mass domestic surveillance, no autonomous weapons, no high-stakes automated decisions. OpenAI positioned it as more “guardrailed” than any previous classified AI deployment.

But MIT Technology Review cut to the heart of it within 24 hours: "OpenAI's compromise is what Anthropic feared."

This tells you everything about where the industry actually stands and why this matters.

With what is happening right now in the world, a government and a tech company in open conflict, and for the very first time a tech giant arguably standing on the right side of humanity, the stakes could not be higher. So this is not just one country's or one organisation's problem. This is very much your problem.

AI models are probabilistic engines. They entirely lack empathy and human judgment, despite sounding like and mimicking humans.

You are handing these systems the keys to complex decision-making, affecting business outcomes, operational resilience and even human lives. Without a strong ethical foundation, an AI system will amplify historical biases, make incorrect decisions that nonetheless "sound" correct, reinforce your subconscious biases and execute illogical actions at a speed humans cannot intercept. What happens when those actions have dire consequences?

Ethics is not a philosophical debate for academics. It dictates whether your AI deployment solves a massive business problem or creates a catastrophic corporate liability. AI ethics risk is very much your enterprise risk, just like any other AI or cyber risk.

The Financial and Legal Reality

What is happening between Anthropic and the Pentagon is not a one-off political drama. It is the most public preview yet of what AI governance failure costs: contracts (about to be) terminated, an unprecedented federal blacklist, and an industry scrambling to redraw its ethical lines in court. And the financial reality is now catching up with every enterprise that has been watching from the sidelines. The fallout from unmanaged AI is already bleeding balance sheets.

AI is now the no. 2 global business risk in the 2026 Allianz Risk Barometer, after the single biggest jump in the barometer's history: up from #10 in just one year. Cyber holds the no. 1 spot for the fifth consecutive year. This is not a niche concern. It is a damn consensus.

The Anthropic case is heading to court. Whatever the outcome, one thing is already certain: "The algorithm did it" is dead as a legal defense. Amidst all this, most are missing the deeper picture: how deeply rooted this problem is.

The Hidden Iceberg

Companies like Lockheed Martin, Palantir and Shield AI have been putting autonomous AI into missiles, weapons, sensors and intelligence infrastructure for years: systems that decide where things go, how the hardware behaves and how missiles acquire targets.

What's different now is that companies like OpenAI and Anthropic provide the reasoning layer, the actual intelligence that allows these autonomous systems to make decisions that can have dire consequences.

These systems now run on large language models that can mislabel and misidentify targets, hallucinate, produce wrong outcomes, and still be unable to explain how they reached a decision. That is where large language models fall apart. That is what makes this genuinely scary.

ICYMI:

You, Me and AI in Decision Making

The only thing standing between you and your decision-making is your imagination to leverage AI. Read the full story →

How to Get There Ethically

In a noisy world, your leadership will hinge on how well you govern AI through deterministic controls to improve decision-making and solve real-world problems.

It's great that we're using AI more and more not just for taking actions but also for augmenting decision-making in businesses, in personal lives, in organisations, and in society at large. But there's a big caveat.

These decisions require ethics and human oversight as an integrated part of AI-augmented decision-making. While we need laws, regulations and policymakers to define them, we also need ethics embedded into how AI is developed and deployed. Guardrails work when they are deterministic, not when they are prompted. How do you get there, ethically?

  • Test your data. Really. Most data today is biased. Think of all the racial and gender imbalances across your data that predate AI. If you don't fix that, your AI will only amplify those biases.

  • Build an AI 'ethical use' matrix that clearly defines which types of use cases your AI tools will be applied to, and which cases AI will never be used for because they are deemed unethical. It should not stray far from the ethics framework your organisation most likely already has.

  • Amazon's AI recruiting tool hated women. You do not want to repeat that mistake. Test whether your AI outputs differ systematically by gender, age, ethnicity, geography or social background before you expand their use across multiple use cases (a minimal sketch of such a test follows this list).

  • Create a model card for every AI system you deploy: what it was trained on, what decisions it influences, and its known limitations. That is a minimum requirement.

  • The Dutch government's automated childcare benefit system ended up falsely accusing tens of thousands of families of fraud, disproportionately targeting minorities. No human reviewed the decisions. That is not an edge case. It is what happens when AI decisions affecting people are never tested, on their data or their outputs, against ethical requirements, and when there is no human in the loop.

  • For high-risk AI systems, the EU AI Act mandates an appeals process for the decisions they make. When humans override AI decisions, log it. Just because something is legal doesn't make it ethical.

  • Use hooks in your AI agents to enforce all of the above. That is the only deterministic way to add real security and safety to your AI agents.
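To make that last point concrete, here is a minimal sketch of a pre-action hook in Python. The action names, policy sets and logging format are illustrative assumptions, not any specific agent framework's API.

```python
# Minimal pre-action hook sketch: a deterministic gate between an agent's
# proposed action and its execution. Action names, policy sets and the
# logging format are illustrative assumptions, not a framework API.
import json
import time

DENY = {"delete_data", "send_external_email"}        # never allowed
NEEDS_HUMAN = {"approve_payment", "change_access"}   # human sign-off first

def pre_action_hook(action, params, approved_by=None):
    """Return True only if the action may execute; log every verdict."""
    if action in DENY:
        verdict = "blocked"
    elif action in NEEDS_HUMAN and approved_by is None:
        verdict = "escalated"  # held for a human in the loop
    else:
        verdict = "allowed"

    # The audit trail is what makes agent behaviour reproducible,
    # auditable and governable; human overrides are logged here too.
    print(json.dumps({"ts": time.time(), "action": action, "params": params,
                      "verdict": verdict, "approved_by": approved_by}))
    return verdict == "allowed"

# The agent proposes; the hook disposes.
if pre_action_hook("approve_payment", {"amount_eur": 950_000}):
    pass  # the agent's tool call would run here
```

The point is that the gate is ordinary, deterministic code: the same input always produces the same verdict, and every verdict leaves an audit trail, which also covers the earlier point about logging human overrides.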

Without hooks, you are only "vibe coding" at scale. Hooks are how you get deterministic controls in place: interrupting and validating actions and decisions made, or about to be made, by an AI agent, in a way that creates reproducible, auditable and governable agentic behaviour.
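The same deterministic mindset applies to the output-testing bullet above. Here is a minimal sketch of an output audit, assuming your AI system writes a decision log with a demographic attribute per record; the field names, sample data and the four-fifths threshold are illustrative assumptions, not a compliance standard.

```python
# Minimal output-audit sketch: compares positive-outcome rates across
# demographic groups in a decision log. The record format, field names
# and the 80% (four-fifths) threshold are illustrative assumptions.
from collections import defaultdict

def audit_outcomes(decisions, group_field="gender", outcome_field="approved"):
    """decisions: list of dicts such as {"gender": "F", "approved": True}."""
    totals, positives = defaultdict(int), defaultdict(int)
    for record in decisions:
        group = record[group_field]
        totals[group] += 1
        positives[group] += bool(record[outcome_field])

    rates = {group: positives[group] / totals[group] for group in totals}
    # Four-fifths-style rule of thumb: flag for review if the worst-treated
    # group's positive rate is below 80% of the best-treated group's rate.
    flagged = min(rates.values()) < 0.8 * max(rates.values())
    return rates, flagged

decisions = [
    {"gender": "F", "approved": True}, {"gender": "F", "approved": False},
    {"gender": "M", "approved": True}, {"gender": "M", "approved": True},
]
rates, flagged = audit_outcomes(decisions)
print(rates, "needs review:", flagged)
```

Run a check like this before every expansion of an AI use case, not just once at launch.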

The sheer scale and speed of that AI augmentation requires structure. Your no. 1 prerequisite is mapping your business decision workflows, at least the business-critical ones. In security, we often talk about mapping the technical asset landscape, but rarely the decision landscape. How many organisations do you think have ever mapped the decisions that keep their business operating?

In the AI world, having a clear mapping and understanding of your decision workflows will be key.

Once you start understanding that decision landscape, you can start categorising AI decisions, critical vs. non-critical, reversible vs. irreversible, ethical vs. unethical, and putting in human accountability where and when it is needed, as the sketch below illustrates.
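As a starting point, here is a minimal sketch of such a decision registry. The example decisions and classification rules are illustrative assumptions; the point is that the required oversight is derived deterministically from how a decision is categorised, not decided ad hoc per project.

```python
# Minimal decision-registry sketch: classify each business decision an AI
# touches and derive the oversight it needs. The example decisions and
# the classification rules are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Decision:
    name: str
    critical: bool    # affects people, money or safety?
    reversible: bool  # can a wrong outcome be cheaply undone?

def required_oversight(decision: Decision) -> str:
    if decision.critical and not decision.reversible:
        return "human approval before execution"
    if decision.critical:
        return "human review after execution"
    return "automated, with sampled audits"

registry = [
    Decision("product recommendation", critical=False, reversible=True),
    Decision("credit limit change", critical=True, reversible=True),
    Decision("fraud account freeze", critical=True, reversible=False),
]
for decision in registry:
    print(f"{decision.name}: {required_oversight(decision)}")
```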

Where does your accountability buck stop? If it stops with an AI model, it stops nowhere.

If you or your employee built and deployed an AI app in your organisation that is being used and leads to dire consequences or repercussions, then ultimately you are accountable. Not the machine, you. You are the owner of your non-human identities.

The Challenge

You are at one of the most important crossroads in the history of humanity and machines. AI will influence your operations, your profitability, and your business resilience. The speed of this technology means you cannot afford to wait for a disaster to force your hand on AI governance. And it means governance that is not just on paper but integrated across your development lifecycle, your business processes and your decision workflows.

Anthropic drew its line. The US government has tried to erase it. Hundreds of industry workers defended it. OpenAI negotiated around it. And the rest of the enterprise world is still deciding whether they have one at all.

I will leave you with one question to ask your CEO and your board today:

Where do you draw your line, and when your AI makes a disastrous decision, especially an unethical one, who in your boardroom takes the fall?

Until next time, this is Monica, signing off!

— Monica Verma

P.S. Please follow me or subscribe on YouTube, LinkedIn, Spotify and Apple. It truly helps. Or book a 1-1 advisory call if I can help you.

***
