Organizations are investing heavily in AI security measures while overlooking a critical foundation: comprehensive visibility into their AI deployments.
Organizations are investing heavily in AI security measures while overlooking a critical foundation: comprehensive visibility into their AI deployments. Analysis of data breach patterns reveals that most organizations cannot accurately identify what AI systems are operating in their environments, where they're deployed, or who's responsible for them—creating a fundamental gap that undermines every other security control.
Not the AI you approved. Not the AI in your procurement pipeline.
The AI your people are using right now.
Because here's what the data shows: Analysis of 22.4 million prompts across six generative AI applications found that 579,000 prompts—2.6%—contained company-sensitive data. Code led exposures at 30%, followed by legal discourse at 22.3%, merger and acquisition data at 12.6%, financial projections at 7.8%, and investment portfolio data at 5.5%.
The kicker? 71% of these exposures happened through ChatGPT. And 17% occurred via personal or free accounts where organizations have zero visibility, no audit trails, and data may train public models.
This isn't a breach you can detect with traditional security tools. This is a breach happening through your workflow.
The same pattern keeps emerging from outsourcing suppliers across Europe and the US. They deployed AI to improve their processes. Then they started seeing something unexpected.
Attackers targeting multi-agent systems. Not going after the most protected agents. Going after the weakest ones to compromise results or corrupt decisions being made.
The problem? Most organizations can't tell you what AI agents they have running, where they're deployed, what their purpose is, or who's responsible for them.
You can't protect what you can't see. You can't patch what you don't know exists. You can't respond to an incident when you don't have an inventory of what's been compromised.
This is the foundational problem driving every other AI security failure organizations are experiencing.
Here's the mechanism that makes conventional cybersecurity fail against these exposures.
Traditionally, you'd need to purchase a service as an organization. That triggers an onboarding process. Procurement reviews it. IT evaluates it. Security assesses it. Governance gets established.
Now? Anyone with internet access signs up to a free service using federated logins like Google or Apple and starts using it immediately.
The entire onboarding process gets bypassed. No governance. No visibility. No control.
It's also difficult to spot using web browsing technologies because these AI services can be anywhere, integrated into all sorts of services. Cases have emerged where audit reports were put into generative AI to rewrite and then issued to clients—riddled with issues and causing breaches of confidentiality.
The breach happens through the workflow itself. Not through a hack. Not through a vulnerability exploit. Through normal use of AI tools that nobody knew existed in the environment.
When experts discuss AI-convinced breaches—where AI is manipulated to compromise its own organization from the inside—many treat it like a future threat.
It's not.
The IBM 2025 AI breach report revealed that 13% of surveyed organizations reported breaches of AI models and applications. Another 8% were unaware if they had been compromised.
Read that again. 8% don't even know if they've been breached.
And 63% of the breached organizations had no governance policy in place.
This isn't about sophisticated attack vectors. This is about organizations deploying technology they don't understand, can't inventory, and haven't secured.
Research shows that 94.4% of AI models tested were vulnerable to direct prompt injection and 83.3% to RAG backdoor attacks. Multi-agent systems showed a 100% compromise rate.
The vulnerability isn't theoretical. The breaches aren't coming. They're here.
Security teams are attempting to bolt AI security onto existing frameworks. They're applying 2015 thinking to 2026 problems.
You can't apply traditional security controls when you don't know what assets you're protecting.
Think about it this way: If you can't list every LLM deployed in your environment, every algorithm running as an asset, every AI agent making decisions or handling data—how can you say with confidence that you're in control when it comes to vulnerabilities?
You can't patch what you don't know exists. You can't monitor what you haven't inventoried. You can't respond to incidents involving systems you didn't know were deployed.
Asset management isn't a preliminary step. It's the foundation that makes every other security control possible.
This isn't about creating a spreadsheet. You need to know:
a. What AI you have deployed (models, agents, applications, integrations)
b. Where it's deployed (cloud services, on-premises, shadow IT, personal accounts)
c. What its purpose is (what decisions it makes, what data it accesses, what workflows it touches)
d. Who's responsible for it (ownership, accountability, incident response)
e. What data it touches (sensitive information, customer data, proprietary code)
f. How it connects to other systems (APIs, integrations, data flows)
Without this inventory, you're flying blind. Every security measure you implement is guesswork.
This isn't theoretical advice. Regulatory frameworks are already mandating AI asset inventories—many organizations just haven't noticed.
The UK ICO requires organizations to maintain an asset register holding details of all information assets including asset owners, asset location, retention periods, and security measures deployed. For AI tools and systems, the ICO specifies that an Information Asset Owner should be assigned in accordance with existing policies and Information Governance Roles and Responsibilities Guidance.
NIST's December 2025 Cybersecurity Framework Profile for Artificial Intelligence explicitly calls for inventorying software and systems to include "AI models, APIs, keys, agents, data . . . and their integrations and permissions." Most organizations now maintain AI Inventories that describe model purpose, data sources, risk exposure, integration points, deployment environments, and human-in-the-loop expectations.
The gap isn't in the guidance. The gap is in execution.
Organizations are waiting for perfect solutions while attackers exploit the AI systems they don't know they have. The frameworks recognize what security teams are still discovering: you can't secure AI without knowing where it is, what it does, and who's responsible for it.
This pattern has become increasingly evident as professionals privately share what they can't say publicly. They deployed AI to improve processes. They thought they had it under control. Then they discovered attackers were already inside, targeting the less protected AI agents to compromise results and corrupt decisions.
The NCSC warning about organizations "fundamentally misunderstanding" emergent vulnerabilities isn't abstract. The most dangerous misunderstanding emerging is this: Security teams think they can protect AI systems without knowing what AI systems they have.
You can't.
The exposure isn't in your perimeter defenses. It's in the AI tools your employees are using right now through personal accounts with federated logins. It's in the multi-agent systems making decisions without human oversight. It's in the audit reports being rewritten by ChatGPT and issued to clients.
Traditional security focuses on protecting networks, endpoints, and data infrastructure. It doesn't address the unique behaviors, failure modes, or governance needs of AI systems that blur the line between data and instructions.
Start with visibility. Before you buy another security tool, before you implement another control, before you write another policy—figure out what AI you actually have deployed.
Immediate actions:
a. Audit federated login usage to identify shadow AI tool adoption
b. Survey teams about what AI tools they're using (official and unofficial)
c. Map AI agents and integrations across your environment
d. Identify who owns each AI deployment and what data it accesses
e. Document the purpose and decision-making authority of each AI system
This isn't glamorous work. It won't make headlines. But it's the difference between having a security posture and having security theater.
Because right now, 8% of organizations don't even know they've been breached. 63% of breached organizations had no governance policy. And 17% of data exposures are happening through personal accounts you can't see.
You can't protect what you can't inventory. And you can't inventory what you don't know exists.
Start there.
Author: Hackerstorm.com
https://www.ncsc.gov.uk/news/mistaking-ai-vulnerability-could-lead-to-large-scale-breaches
https://www.knostic.ai/blog/gen-ai-security-statistics
COOKIE / PRIVACY POLICY: This website uses essential cookies required for basic site functionality. We also use analytics cookies to understand how the website is used. We do not use cookies for marketing or personalization, and we do not sell or share any personal data with third parties.