Critical Threat Intelligence & Advisory Summaries

The Invisible Inventory: Why AI Security Starts Where Most Organizations Haven't Even Looked

The Invisible Inventory: Why AI Security Starts Where Most Organizations Haven't Even Looked

The UK's National Cyber Security Centre just issued a warning that should make every security leader uncomfortable.

 

Organizations are fundamentally misunderstanding how AI systems can be attacked. But here's what the NCSC didn't emphasize enough: you can't protect what you can't see.

I've been talking to colleagues across Europe and the US who've deployed AI to improve their processes. They're telling me something alarming. Attackers are already targeting multi-agent systems, going after the less protected AI agents to compromise results or corrupt decisions being made.

This isn't theoretical. It's happening now.

 

 

The Asset Management Gap Nobody's Talking About

Organizations aren't thinking about AI in terms of asset management. They're not aware they need to understand what they've deployed as an asset. Which LLMs are running? What algorithms are in production? Where are they deployed, and what's their purpose?

Without an inventory of these systems, addressing issues and preventing harms becomes impossible.

The OWASP 2025 Top 10 ranks prompt injection as the number one threat, appearing in over 73% of production AI deployments assessed during security audits. Yet 47% of organizations have no AI-specific security controls in place.

Think about that gap.

 

 

The Breach That's Already Happening

An report by Harmonic Security showed that of 22.4 million prompts used across six generative AI applications in 2025 reveals something disturbing. Data exposures most commonly involve ChatGPT at 71%. Additionally, 17% of all exposures discovered involved personal or free accounts where organizations have zero visibility, no audit trails, and data may train public models.

Of the 579,000 prompts containing company-sensitive data, code led the risk at 30%, followed by legal discourse at 22.3%, merger and acquisition data at 12.6%, financial projections at 7.8%, and investment portfolio data at 5.5%.

Here's the mechanism that makes traditional security fail:

Anyone with internet access can sign up to a free service using federated logins like Google or Apple and simply start using it. This bypasses the onboarding process, which means there's no governance. It's also difficult to spot using web browsing technologies because these AI services could be anywhere, integrated into all sorts of services.

I've heard of cases where audit reports were put into generative AI to rewrite and then issued to clients. The result? Reports riddled with issues and a breach of confidentiality.

The breach happens through the workflow itself. Not a hack.

 

 

The 8% Who Don't Even Know

The IBM 2025 AI breach report revealed that 13% of those surveyed reported breaches of AI models and applications. But here's the terrifying part: 8% were unaware if they had been compromised.

No fewer than 63% of the breached organizations had no governance policy.

This isn't a prediction about future breaches. It's documentation of breaches that are already happening, many of which remain invisible to the organizations experiencing them.

The NCSC warns that treating prompt injection like SQL injection is a dangerous misunderstanding. Unlike SQL injection, which can be mitigated, LLMs are "inherently confusable" and the risk can't be mitigated the same way. The agency explicitly states that without action addressing this misconception, websites risk falling victim to data breaches exceeding those seen from SQL injection attacks in the 2010s.

 

 

When AI Becomes the Attack Vector

In September 2025, Anthropic disrupted the first reported AI-orchestrated cyber espionage campaign. Chinese state-sponsored attackers used Claude to perform 80-90% of the campaign with human intervention required only at 4-6 critical decision points.

The AI identified vulnerabilities, wrote exploit code, harvested credentials, and exfiltrated data with minimal human supervision.

My colleagues in various outsourcing suppliers are telling me privately they're already seeing attackers target multi-agent systems. The attackers go after the less protected AI agents to compromise the results presented or decisions being made.

This is the shift we need to understand. We're moving from "how do we protect AI systems?" to "what happens when AI itself becomes the attack vector?"

 

 

The Speed Problem

CrowdStrike's 2025 Global Threat Report documents breakout times as fast as 51 seconds. Attackers move from initial access to lateral movement before most security teams get their first alert.

Meanwhile, 79% of detections were malware-free. Adversaries use hands-on keyboard techniques that bypass traditional endpoint defenses entirely.

AI models' cyber capabilities are currently doubling every six to eight months.

Traditional security approaches are fundamentally insufficient against the stochastic, semantic nature of attacks targeting AI models at runtime. The attack is semantic, not syntactic. "Ignore previous instructions" carries payload potential equivalent to a buffer overflow while sharing nothing with known malware signatures.

 

 

The Control Problem

If you don't know what you have, where you've deployed it, what its purpose is, and who is responsible for it, it's difficult to say with confidence you're in control when it comes to vulnerabilities.

That's the single most dangerous misunderstanding I'm seeing.

Security teams are trying to protect AI systems without a fundamental asset inventory. You can't begin to address vulnerabilities because you don't know what you're protecting or who owns it.

The World Economic Forum's Global Cybersecurity Outlook 2025 reveals that 66% of organizations expect AI to significantly impact cybersecurity, yet only 37% have processes to evaluate the security of AI systems before deployment.

Only 2% of global organizations are highly ready to scale AI securely across operations.

 

 

What Actually Needs to Happen

Start with asset management. Build an inventory that answers these questions:

  • What AI systems are deployed? LLMs, algorithms, agents, integrations

  • Where are they deployed? Production environments, development, shadow IT

  • What's their purpose? Customer service, code generation, decision support

  • Who owns them? Clear accountability and responsibility

  • What data do they access? Sensitive information, customer data, proprietary code

  • What's the blast radius? Impact if compromised or manipulated

This isn't glamorous work. It's foundational.

Organizations cite persistent skills gaps and limited understanding of emerging AI-specific risks. Only 18% of moderately ready organizations have deployed an AI firewall. Just 24% practice continuous data labeling. Nearly 55% are unprepared for AI regulatory compliance, risking potential fines and reputational damage.

The gap between AI adoption speed and security awareness is creating conditions for large-scale breaches. Nearly three-quarters of S&P 500 companies now flag AI as a material risk in their public disclosures. That's a massive jump from just 12% in 2023.

 

 

The Uncomfortable Truth

Even OpenAI admits that "prompt injection, much like scams and social engineering on the web, is unlikely to ever be fully solved." The NCSC confirmed that prompt injection attacks "may never be totally mitigated," advising cybersecurity professionals to focus on reducing impact rather than expecting complete prevention.

You're not going to solve this with better firewalls or more sophisticated endpoint detection.

You need to know what you have. You need to understand what harms and risks these systems pose. This could be anything from the AI being used to launch an intelligent attack across any type of channel or vector, to compromises of AI itself, such as giving bad or dangerous advice or making incorrect decisions, to vulnerabilities in the AI tech stack itself due to lack of inventory and understanding of what's deployed.

The invisible inventory is where your exposure lives. Until you can see it, you can't protect it.

And the attackers? They're already looking.

 

 

Author: Hackerstorm.com

 

 

References:

https://www.ncsc.gov.uk/news/mistaking-ai-vulnerability-could-lead-to-large-scale-breaches

https://www.harmonic.security/resources/what-22-million-enterprise-ai-prompts-reveal-about-shadow-ai-in-2025

https://newsroom.ibm.com/2025-07-30-ibm-report-13-of-organizations-reported-breaches-of-ai-models-or-applications,-97-of-which-reported-lacking-proper-ai-access-controls

 

 

 

 

By using this site, you agree to our Terms & Conditions.

COOKIE / PRIVACY POLICY: This website uses essential cookies required for basic site functionality. We also use analytics cookies to understand how the website is used. We do not use cookies for marketing or personalization, and we do not sell or share any personal data with third parties.

Terms & Privacy Policy