Anthropic’s constantly in the news for all sorts of reasons, whether it’s their latest partnership with Mozilla for testing Firefox’s security or their ongoing feud with the US government. While Anthropic’s actions are building more trust with their users, OpenAI saw a 295% surge in ChatGPT uninstalls over a single weekend, and its robotics lead quit over the Pentagon deal. Add to that, Meta is once again in the news for all the wrong reasons. When your critical systems may be hallucinating, or your most intimate data is being used to train AI, and that too without consent, we need to talk about ethics (independent of legality) and about integrating AI security.
Today’s edition of The Predictability Factor by Monica Talks Cyber covers:
Quick Updates
🫣 I was born in India, at a time, when education for women was still considered ridiculous and laugh-worthy. In some parts of the world and society, it still is.
My grandfather was a bus driver. My grandmother hustled 5 jobs. My parents were middle-class workers. Never in a million years did I imagine that one day this day would come. Never in a million years did I imagine that after dealing with toxic ex-bosses, a toxic ex-partner, and fighting cancer, I would still be here to take this plunge.
After 20+ years in this industry, going from hacker to CISO and AI advisor for large organisations, particularly in the critical infrastructure space, I’ve finally taken the plunge.
I’ve quit the 9 to 5 to go all in and build my legacy.
I am scared. I am excited. I am grateful.
P.S. Last month, I was invited by PwC and Microsoft to their joint workshop to share my best practices on data governance and how to get your organisation AI-ready in terms of agentic AI security, governance, controls, and engineering. I talk about one of the many issues here. I’ll do a deep dive on the rest soon.
What Anthropic’s Feud Means for You
In a recent interview, Dario Amodei said Anthropic has "no choice" but to challenge the US Department of War's supply chain risk designation in court. That three-word phrase, "no choice but", is where your enterprise risk analysis needs to begin.
We do not believe this action is legally sound, and we see no choice but to challenge it in court.
After Anthropic refused to remove the clauses around the use of AI for a) mass domestic surveillance and b) fully autonomous weapons, the US government designated Anthropic a supply chain risk, via a post on X. A supply chain risk designation prohibits defense contractors from using Anthropic's models as part of any contract with the Department of War. But this is not just going to affect the US.
Anthropic has decided to fight this in court. I think this case could take years. And that is exactly the problem.
Your organisation may not have anything to do with the Pentagon, but your vendors may. For example, Microsoft had to put out a statement clarifying that Anthropic products remain available through M365, GitHub, and AI Foundry for non-defence work. Amazon and Google made similar statements.
Either this supply chain risk designation is a whole load of bs that will be difficult to implement, or it will just lead to chaos across infrastructures, even outside the Pentagon, likely affecting enterprise procurement decisions far beyond federal contractors. Worst case, both of these scenarios may come true at the same time.
From a security and third-party risk perspective: even if you have nothing to do with the Pentagon, or are not even in the US, if your organisation has any US government contracts or operates in a regulated sector with US federal exposure, this is a vendor risk you need to monitor actively. If you are based in Europe with no US federal exposure, the direct compliance risk is limited for now. But the precedent this case will set for tech companies and AI vendor risk management is not, and it may impact Europe and the rest of the world.
The supply chain risk designation may be legally unsound. But it is technically being enforced right now.
Not Divided: An Open Letter
On the other hand, hundreds of tech employees from Google and OpenAI signed an open letter calling on the Pentagon to withdraw its supply chain risk designation of Anthropic. Among them are employees of the very company that stepped in to take Anthropic's contract after Anthropic refused it on ethical grounds.
Let that sit for a moment.
The letter makes a precise and important argument. A supply chain risk designation applied to an American company for refusing to remove ethical guardrails has no precedent. This label has historically been reserved for foreign adversaries. When the US government applies it to an American company for holding an ethical line, I believe it reaches far beyond Anthropic. Even the employees of OpenAI stand by Anthropic’s decision.
They're trying to divide each company with fear that the other will give in.
That sentence should sit with every enterprise leader making AI-related decisions right now. More than politics, this is about policy. That’s where we need to look deeper and understand why this matters in the context of AI ethics and security. My take on it is below.
AI Ethics, Autonomous Weapons and The Hidden Iceberg
How did all this start, and how does it pose a risk to you? In late February 2026, US Defense Secretary Pete Hegseth gave Anthropic an ultimatum: strip away the safety guardrails from the Claude AI model to allow "any lawful use," or lose a $200 million contract. Add to that the threat of being branded a national supply chain risk, which has already happened. But that’s not the bigger story. There is a hidden iceberg that no one’s talking about.
The Pentagon vs. Anthropic story is not about AI entering weapons systems. That ship sailed years ago.
Companies like Lockheed Martin, Palantir, and Shield AI have been using AI/ML in missiles, targeting systems, and autonomous weapons infrastructure for years. Not secretly. Not controversially. Although the UN has been debating Lethal Autonomous Weapons Systems for over a decade, nobody seems to have reached any agreeable conclusion. Nearly twelve years of debate, three consecutive UN General Assembly resolutions with near-universal support, and the world's most dangerous autonomous weapons are still being built, deployed, and now powered by LLMs, entirely outside any binding legal framework.
Until LLMs entered the picture, the risks were still somewhat manageable, because the AI/ML in those systems was deterministic: rule-based, bounded, explainable.
That is no longer the case. That changed the moment companies like OpenAI and Anthropic entered the picture.
As I wrote in my full blog, and what I want you to sit with:
What’s different now is that companies like OpenAI and Anthropic provide the reasoning layer, the actual intelligent layer that allows these autonomous weapons to make a decision that can have dire consequences. The challenge however is that this “reasoning” layer is no longer deterministic. It is probabilistic by nature.
The systems now operate on large language models that can mislabel and misidentify targets, hallucinate, produce wrong outcomes, and still be unable to explain how they arrived at the decisions they made.
😬 The AI running your enterprise tools and the AI making targeting decisions in defence systems are using the same reasoning models. Same hallucination risk. Same explainability gap.
That's where large language models fall apart. That's what makes it really scary. The Pentagon's dispute with Anthropic is not about whether AI belongs in weapons systems. It has been there for decades. It is about who now controls the decision layer inside those systems. And that decision layer can hallucinate.
In a customer service application, a hallucination is a problem you fix. In an autonomous targeting system, there is no fixing it after the fact. The supply chain risk designation is just the visible tip. This is the iceberg underneath it.
The AI systems now operate on probabilities. That is not a technical upgrade. It is a different category of risk entirely.
ICYMI:
The $200 Million Ultimatum and AI Ethics
The Pentagon vs. Anthropic standoff exposes something deeper than just ethical loopholes hiding in your AI governance right now. Read the full story →
Prompting “Security” into AI
A few months ago, Moltbook, an AI-agent social network, suffered a massive data breach. A misconfigured Supabase database exposed 1.5 million API keys and 35,000 user email addresses directly to the public internet. The root cause was vibe coding. Developers built fast, the AI assistant filled the gaps, and those gaps included the absence of row level security, a basic first-line-of-defence configuration that no system prompt caught, because system prompts are not capable of catching it.
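For context, row level security is a deterministic, database-enforced control that Postgres (which Supabase builds on) ships out of the box. Here is a minimal sketch of what turning it on can look like, assuming a hypothetical `user_secrets` table and illustrative connection details (this is not Moltbook's actual setup):

```python
# Minimal sketch: enabling Postgres row level security (RLS) on a hypothetical
# table via psycopg2. Connection string, table name, and policy condition are
# illustrative assumptions, not any real application's schema.
import psycopg2

conn = psycopg2.connect("postgresql://app_user:change_me@localhost:5432/app_db")
conn.autocommit = True

with conn.cursor() as cur:
    # Turn RLS on. Once enabled, rows are denied by default until a policy allows them.
    cur.execute("ALTER TABLE user_secrets ENABLE ROW LEVEL SECURITY;")

    # Only let a session read rows it owns. In Supabase you would typically key
    # this on auth.uid(); here a session setting stands in for the current user.
    cur.execute("""
        CREATE POLICY user_secrets_owner_only ON user_secrets
        FOR SELECT
        USING (owner_id = current_setting('app.current_user_id')::uuid);
    """)

conn.close()
```

Notice where the control lives: in the database engine, enforced on every query, regardless of what the AI assistant or its system prompt did or did not generate.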
This is not an isolated story.
Day in and day out, I see massive examples of people “vibe-coding” their entire start-ups, their organisations, their enterprise apps, and then wondering why they got exploited. I talk to a lot of organisations, and within those organisations I see their developers, their employees, their engineers, and even their leadership taking a complete YOLO approach to AI and AI security.

Image 1: One of Many Real-World Examples of People Believing You Can Just “Prompt” Security
It should be pretty evident to every cyber and risk professional and leader, but whether it is or not, it's worth repeating. Repeat after me:
Prompting ≠ Governance
Prompting ≠ Security
Prompting ≠ Ethics
On the other hand, prompting can indeed be used to hack you and bypass the guardrails that someone “prompted” in.
Wiz found that 20% of vibe-coded applications carry serious vulnerabilities or configuration errors. Not 20% of poorly built applications. 20% of applications built with AI assistance specifically.
There is a belief spreading fast in the era of AI-assisted development: that you can prompt security into an AI system. That if you write the right instructions into a system prompt, your AI is secured. It is not.
A prompt is a suggestion to a probabilistic engine. It can be overridden, ignored, and injected around, and it can produce hallucinated outcomes. Security does not work on probability. Security works on determinism. If your control can be bypassed by a sufficiently creative input, it is not a control.
I wrote this in my AI Ethics blog and it bears repeating here:
Guardrails work when they are deterministic and not when they are prompted. Without hooks, you are only vibe coding at scale. Hooks are one of the ways to get deterministic controls in place, interrupting and validating actions and decisions made by AI agents, in a way that creates reproducible, auditable and governable agentic behaviour.
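To make the hooks idea concrete, here is a minimal sketch of one way such a deterministic control can look: a plain function that intercepts every tool call an AI agent proposes, checks it against an explicit allowlist, and logs the decision. The tool names and policy are my own illustrative assumptions, not any specific framework's API:

```python
# Minimal sketch of a deterministic pre-tool-call hook for an AI agent.
# Tool names and policy below are illustrative assumptions, not a real product's API.
import json
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-hook")

ALLOWED_TOOLS = {"search_docs", "create_ticket"}     # explicit allowlist, maintained by humans
BLOCKED_ARG_KEYS = {"password", "api_key", "ssn"}    # argument keys that must never be forwarded

@dataclass
class ToolCall:
    name: str
    args: dict = field(default_factory=dict)

def pre_tool_hook(call: ToolCall) -> ToolCall:
    """Runs before every tool call the agent proposes.

    Deterministic: the same input always yields the same allow/deny decision,
    and every decision is logged, which makes the agent's behaviour auditable.
    """
    if call.name not in ALLOWED_TOOLS:
        log.warning("DENY %s: tool not on allowlist", call.name)
        raise PermissionError(f"Tool '{call.name}' is not permitted")
    leaked = BLOCKED_ARG_KEYS & set(call.args)
    if leaked:
        log.warning("DENY %s: sensitive argument keys %s", call.name, sorted(leaked))
        raise PermissionError("Sensitive data in tool arguments")
    log.info("ALLOW %s %s", call.name, json.dumps(call.args))
    return call

# Whatever the model "decides", this hook is the control, not the system prompt.
pre_tool_hook(ToolCall("search_docs", {"query": "refund policy"}))      # allowed
try:
    pre_tool_hook(ToolCall("delete_records", {"table": "users"}))       # denied deterministically
except PermissionError as err:
    log.error("Blocked agent action: %s", err)
```

The same pattern can sit after a tool call too, validating outputs before they reach users or downstream systems. That is where the reproducible, auditable, governable behaviour described above comes from: the decision is made by code you control, not by whatever the model felt like doing that day.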
Your developers are building faster than they are building securely. The AI helping them is not auditing their security architecture. It is simply completing their code. These are not the same job.
P.S. I’ll do a deep dive into agentic AI security controls in my upcoming editions of my newsletter, The Predictability Factor.
Your Bedroom is AI Training Data
A joint investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten published what Meta's marketing materials will never tell you.
When a user says "Hey Meta" to their Ray-Ban glasses, the footage is routed to human contractors at a data annotation firm called Sama, in Nairobi, Kenya. Those contractors regularly review footage of users going to the toilet, getting undressed, revealing bank card details captured by “mistake”, having private conversations, or sharing intimate moments that were never intended to be seen by anyone, let alone labelled for training AI models.
While Meta markets the Ray-Ban glasses as “built with your privacy in mind”, you know better than anyone else: your data was never yours. The “Terms of Service” tells a completely different story, as it always does. But this goes even deeper.
While voice recordings are stated to be stored only with active consent for product improvements, the AI assistant automatically processes speech, text, images, and sometimes video simply to function. That data can be shared with third parties. Users cannot turn this processing off.
The Nairobi contractors, working for less than $2/hour, are under heavy non-disclosure agreements. Personal phones are not permitted on the floor. While some form of pseudo-anonymisation was “supposed” to blur faces in the footage, workers in Nairobi have confirmed it does not always work as intended. Some faces remain visible.
Your data was never truly yours. That is not new. But the intimacy of what is being captured, the ethical violations, and the opacity of where it goes are mind-blowing. We are not just talking about your browser history. It is your bathroom. Your bedroom. Your undressed moments.
As AI and Augmented Reality get woven into your physical space (personal or professional), this specific story is a model that many other AI hardware companies may follow (if not already): capture data at the point of human experience, route it offshore for labelling, use it to train the next model.
The enterprise risk does not end with Meta's legal exposure. The first question is: what kind of data? The next: was consent ever given? I bet the answer to the latter is no.
Are you or someone you know wearing these glasses to client meetings, sensitive facilities, or corporate conversations? You might say the same happens with your mobile phone. The difference is that everyone in the room knows you have a phone. Most people wouldn’t know you are wearing a camera that’s sending their data to Kenya.
Most organisations have no policy governing what AI hardware captures inside their environment. This is where every AI and information security policy just breaks. You almost certainly do not know what is being routed, by which device, to which contractor, in which country.
When you accepted Meta’s terms of service, did you know that you "consented" to this?
Until next time, this is Monica, signing off!

— Monica Verma

P.S. Please follow me/subscribe on YouTube, LinkedIn, Spotify and Apple. It truly helps. Or book a 1:1 advisory call if I can help you.
***




