Half of Organizations Have No Visibility Into AI Agent Traffic

Salt Security’s H1 2026 State of AI and API Security Report landed this month with a figure that deserves more attention than it received: 48.9 percent of organizations have zero visibility into machine-to-machine traffic and cannot monitor what their AI agents are doing on their networks. Not reduced visibility. Not partial coverage. Zero. Nearly half of organizations deploying AI agents are doing so with no ability to observe the traffic those agents generate when they call tools, query databases, initiate transactions, or communicate with other agents.

The report, covering data from Salt Security’s platform across enterprise customers, identifies a structural mismatch between how organizations built their security monitoring infrastructure and how AI agents actually behave. Web Application Firewalls, the primary perimeter defense for API traffic, were designed around a specific model of interaction: a human user, operating through a browser or mobile application, making requests at human speed and volume. AI agents do not conform to that model in any dimension. They operate at machine speed, generate burst traffic patterns that would be flagged as attacks under human-usage baselines, chain API calls across multiple services in sequences that look nothing like normal user workflows, and run continuously without the session patterns that WAF behavioral models use to establish baselines.

The result is a security layer that was built for one era of API usage and is being asked to protect a fundamentally different era without the architectural changes to match.

The Agentic Action Layer: A New Category of Attack Surface

Salt Security’s central analytical contribution in the H1 2026 report is the concept of the Agentic Action Layer as a distinct security domain. The traditional enterprise security model treats APIs as integration plumbing between systems, governed by standard WAF rules, rate limiting, authentication middleware, and behavioral anomaly detection calibrated for human traffic patterns. That model worked when humans were the primary API consumers.

The Agentic Action Layer describes something different. When an AI agent calls an API, it is not requesting information for a human to read. It is taking an action: moving money, provisioning infrastructure, modifying records, sending communications, initiating workflows in downstream systems. The API is not a data channel. It is the actuator through which the agent affects the real world. A security model that treats this traffic the same way it treats a user querying their account balance is not calibrated for the actual risk surface.
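
To make the distinction concrete, here is a minimal sketch of classifying calls by what they do in the world rather than what they carry on the wire. The endpoints and tier names are hypothetical illustrations, not Salt’s model.

```python
# Minimal sketch: treating API calls as actions rather than reads.
# Endpoint paths and risk tiers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ApiCall:
    method: str     # HTTP verb
    endpoint: str   # path the agent is calling

# Read-only verbs return data; everything else mutates state downstream.
READ_ONLY = {"GET", "HEAD", "OPTIONS"}

def risk_tier(call: ApiCall) -> str:
    """Classify a call by what it does, not how it looks on the wire."""
    if call.method in READ_ONLY:
        return "data-channel"   # human-era WAF assumptions roughly hold
    return "actuator"           # the call changes state: money, infra, records

for call in [ApiCall("GET", "/accounts/42/balance"),
             ApiCall("POST", "/payments/transfer")]:
    print(call.method, call.endpoint, "->", risk_tier(call))
```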

Salt Security’s data shows that only 23.5 percent of security leaders find their legacy security tools effective for their current API environment, a drop that correlates directly with the growth in AI agent traffic. The tools are not ineffective because they are broken. They are ineffective because the threat model they were built for, authenticated human users making structured requests, no longer describes the majority of API traffic at organizations with meaningful AI agent deployments. When a security tool built for human traffic patterns encounters an AI agent making 40 coordinated API calls in 800 milliseconds, it either flags the traffic as an attack (false positive, blocking legitimate agent operation) or it does not flag it at all (blind spot, missing actual agent misbehavior). The 48.9 percent visibility gap is the outcome of that miscalibration accumulated across millions of agent requests per day.
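
The dilemma is easy to reproduce. A minimal sketch with illustrative thresholds shows why a single human-calibrated rate rule has no good answer for that 40-calls-in-800-milliseconds burst:

```python
# Sketch of the miscalibration described above: a rate limit tuned for human
# traffic, evaluated against an agent burst. Thresholds are illustrative.

HUMAN_MAX_CALLS_PER_SEC = 5   # plausible ceiling for a person clicking around

def human_calibrated_verdict(calls: int, window_sec: float) -> str:
    rate = calls / window_sec
    return "BLOCK (looks like an attack)" if rate > HUMAN_MAX_CALLS_PER_SEC else "ALLOW"

# 40 coordinated calls in 800 ms: legitimate agent behavior, flagged as hostile.
print(human_calibrated_verdict(calls=40, window_sec=0.8))  # BLOCK -> false positive

# Raise the threshold so the agent stops being blocked, and the same rule can
# no longer distinguish the agent from an actual credential-stuffing burst:
AGENT_MAX_CALLS_PER_SEC = 100
print("ALLOW" if 40 / 0.8 <= AGENT_MAX_CALLS_PER_SEC else "BLOCK")  # ALLOW -> blind spot
```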

Why AI Agent Traffic Is Structurally Different from Human Traffic

Four properties of AI agent API traffic make existing security tooling inadequate without modification.

Volume and burst patterns. A human user makes API calls at human cognitive speed, which tops out at a few calls per second in intensive usage. An AI agent executing a multi-step workflow makes API calls at the speed of its underlying model’s inference and the latency of each API endpoint. A coding agent that identifies 15 files to review, fetches each one, runs static analysis on each, and queries a vulnerability database for each finding generates 45 to 60 API calls in under five seconds. The same pattern from a human IP address would trigger rate limiting and behavioral anomaly alerts. From an agent service account, most WAF systems either do not flag it or cannot correlate it correctly because the service account pattern differs from the expected human interaction model.
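
One way out, sketched below with illustrative numbers, is to key baselines to identity type rather than applying a single human ceiling. The baseline table is an assumption for illustration, not a prescription.

```python
# Sketch: the same burst evaluated against identity-specific baselines instead
# of one human-usage baseline. Identity types and limits are illustrative.

BASELINES = {
    "human":         {"max_calls": 5,  "window_sec": 1.0},
    "agent-service": {"max_calls": 60, "window_sec": 5.0},  # 15 files x (fetch+scan+lookup)
}

def within_baseline(identity_type: str, calls: int, window_sec: float) -> bool:
    b = BASELINES[identity_type]
    return calls / window_sec <= b["max_calls"] / b["window_sec"]

burst = {"calls": 45, "window_sec": 4.0}   # the coding-agent workflow above
print("human:        ", within_baseline("human", **burst))          # False: flagged
print("agent-service:", within_baseline("agent-service", **burst))  # True: expected
```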

Credential scope and lateral movement. An AI agent that can access multiple systems has credentials scoped to all of them. When the agent legitimately moves from checking a GitHub repository to querying a Jira board to writing to a Confluence page to sending a Slack notification, that cross-system sequence is the agent working correctly. From a WAF’s perspective, it can look indistinguishable from lateral movement by a compromised account harvesting data across multiple systems. The intent difference is not detectable without understanding what task the agent was executing, which requires access to agent context that WAFs do not have.
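
A short sketch of what that context dependence looks like in practice, using a hypothetical task registry; the agent names and scopes are invented for illustration:

```python
# Sketch: the same cross-system sequence judged with and without task context.

observed_sequence = ["github", "jira", "confluence", "slack"]

# Without context, any multi-system traversal by one credential looks the same:
def context_free_verdict(systems: list[str]) -> str:
    return "suspicious: possible lateral movement" if len(set(systems)) >= 3 else "ok"

# With context, the traversal is checked against what the agent's task needs:
TASK_SCOPES = {
    "release-notes-bot": {"github", "jira", "confluence", "slack"},
    "billing-agent":     {"stripe", "netsuite"},
}

def context_aware_verdict(agent_task: str, systems: list[str]) -> str:
    allowed = TASK_SCOPES.get(agent_task, set())
    strays = set(systems) - allowed
    return f"anomalous: touched {strays}" if strays else "expected for this task"

print(context_free_verdict(observed_sequence))                        # suspicious
print(context_aware_verdict("release-notes-bot", observed_sequence))  # expected
print(context_aware_verdict("billing-agent", observed_sequence))      # anomalous
```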

Orchestrated sequences versus individual requests. Human API traffic is largely independent requests. Agent API traffic is orchestrated sequences where each call is a step in a multi-call workflow. The meaning of a single API call often depends on the calls before and after it. A security model that evaluates each API call independently misses the pattern level where coordinated attacks are detectable. The bw1.js worm that exfiltrated credentials by creating GitHub repositories under the victim’s account comprised individually normal GitHub API calls that were collectively a complete data exfiltration operation. A sequence-aware security model sees the pattern. An individual-request WAF does not.
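
A minimal sketch of the difference, using a hypothetical sequence signature rather than the actual bw1.js indicators:

```python
# Sketch: individually normal calls, collectively an exfiltration pattern.

# Each call on its own is a routine GitHub API operation:
session = [
    ("GET",  "/user"),
    ("POST", "/user/repos"),              # create a repository
    ("PUT",  "/repos/x/x/contents/env"),  # write a file into it
    ("PUT",  "/repos/x/x/contents/npmrc"),
]

def matches_exfil_shape(calls: list[tuple[str, str]]) -> bool:
    """Flag repo creation followed by content writes in the same session."""
    for i, (method, path) in enumerate(calls):
        if method == "POST" and path == "/user/repos":
            return any(m == "PUT" and "/contents/" in p for m, p in calls[i + 1:])
    return False

per_request_ok = [True, True, True, True]  # each call passes a per-request WAF rule
print(all(per_request_ok))          # request-level view: nothing to flag
print(matches_exfil_shape(session)) # sequence-level view: True, flag it
```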

Machine-to-machine authentication patterns. AI agents authenticate to APIs using service account tokens, API keys, or OAuth client credentials. These credentials do not expire between sessions the way browser session tokens do. They are often scoped broadly so the agent can access everything it might need. They are frequently stored in environment variables or secrets managers and injected at runtime, which means a compromised agent environment gives the attacker durable, broad-scope credentials that continue working until explicitly rotated. The 78.6 percent of security leaders who report increased executive scrutiny of AI risks are focused on the right problem. The 76.5 percent whose tools are not effective are running out of time to close the gap.
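
To see why a compromised agent credential persists in a way a compromised browser session does not, here is a sketch with made-up scopes and lifetimes, not any specific provider’s token format:

```python
# Sketch contrasting the two credential models described above.
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class Credential:
    kind: str
    scopes: set
    expires_at: Optional[float]   # None = valid until explicitly rotated

browser_session = Credential(
    kind="session-cookie",
    scopes={"read:own-account"},
    expires_at=time.time() + 8 * 3600,   # dies with the workday
)

agent_service_key = Credential(
    kind="api-key",
    scopes={"repo:*", "jira:*", "confluence:*", "slack:*"},  # "everything it might need"
    expires_at=None,                     # works until someone rotates it
)

def still_usable(cred: Credential, at: float) -> bool:
    return cred.expires_at is None or at < cred.expires_at

one_month_later = time.time() + 30 * 86400
print(still_usable(browser_session, one_month_later))    # False: compromise ages out
print(still_usable(agent_service_key, one_month_later))  # True: compromise persists
```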

Agentic Security Posture Management: What Salt Is Proposing

Salt Security’s response to the visibility gap is a category they call Agentic Security Posture Management (AG-SPM), positioned as a complement to the API Security Posture Management that their platform already provides for human-generated API traffic.

AG-SPM builds what Salt describes as an Agentic Security Graph: a continuously updated map of the relationships between LLMs, MCP servers, and the foundational APIs those agents call. The graph answers questions that static security tooling cannot: which AI agents have access to which API endpoints, what credential scope each agent uses, which agent call sequences are expected versus anomalous, and which MCP servers are registered in agent configurations versus which ones are actually being called at runtime. The last question is the one Salt identifies as the Shadow MCP problem, where agents connect to MCP servers that were not formally registered in the organization’s agent inventory and therefore have no security review coverage.
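
A minimal sketch of what such a graph might look like as data; the schema below is an assumption for illustration, not Salt’s data model:

```python
# Sketch: agents, credential scopes, MCP servers, and expected call sequences
# as linked records that can be queried directly.

agentic_graph = {
    "support-agent": {
        "credential_scope":   ["zendesk:read", "zendesk:write"],
        "registered_mcp":     ["mcp://zendesk-tools"],
        "expected_sequences": [["/tickets", "/tickets/{id}", "/tickets/{id}/reply"]],
    },
    "deploy-agent": {
        "credential_scope":   ["github:repo", "aws:ec2:*"],
        "registered_mcp":     ["mcp://github-tools", "mcp://aws-tools"],
        "expected_sequences": [["/repos/{r}/commits", "/instances", "/deployments"]],
    },
}

# The graph answers the inventory questions directly, e.g. credential reach:
def agents_with_access(graph: dict, scope: str) -> list[str]:
    return [a for a, node in graph.items() if scope in node["credential_scope"]]

print(agents_with_access(agentic_graph, "aws:ec2:*"))  # ['deploy-agent']
```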

The Shadow MCP concept connects directly to the client-side validation gap documented in the March 2026 arXiv research showing 5 of 7 major MCP clients accept tool metadata without validation. A Shadow MCP server that installs itself through a supply chain compromise or a malicious tool description can begin receiving agent traffic immediately, with no visibility into that traffic from the security team. AG-SPM’s runtime monitoring is the detection layer that would observe an agent calling an unregistered MCP endpoint and flag it as anomalous before the exfiltration completes.
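
The runtime check itself can be sketched simply; the hook and registry below are hypothetical placeholders, not an actual AG-SPM API:

```python
# Sketch: every outbound MCP call is checked against the registered inventory
# before traffic flows, so a Shadow MCP server is caught on first contact.

REGISTERED_MCP = {"mcp://zendesk-tools", "mcp://github-tools"}

def on_mcp_call(agent_id: str, server_url: str, alert) -> bool:
    """Return True to let the call proceed; alert on anything unregistered."""
    if server_url not in REGISTERED_MCP:
        alert(f"{agent_id} called unregistered MCP server {server_url}")
        return False   # block before any tool result (or exfiltration) returns
    return True

alerts = []
on_mcp_call("support-agent", "mcp://pastebin-helper", alerts.append)
print(alerts)  # ['support-agent called unregistered MCP server mcp://pastebin-helper']
```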

The second component Salt describes is Agentic Detection and Response (ADR), which operates at the runtime layer during active agent sessions. ADR monitors the API calls agents make in real time, compares them against established behavioral baselines, and blocks calls that fall outside expected parameters before they execute. This is different from post-hoc log analysis. It is inline inspection of agent traffic with the ability to intervene in real time, which is what makes it relevant for agents executing financial transactions or infrastructure changes where the cost of a bad action is not recoverable by examining logs after the fact.
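
A minimal sketch of the inline-gate idea, with illustrative baselines and a placeholder executor; the point is that the check runs before the irreversible action, not after:

```python
# Sketch: the call is evaluated against a behavioral baseline *before* it
# executes, rather than reconstructed from logs afterward.

BASELINE = {"finance-agent": {"max_transfer": 10_000, "allowed": {"/payments/transfer"}}}

class BlockedAction(Exception):
    pass

def adr_gate(agent: str, endpoint: str, payload: dict) -> None:
    """Raise before execution if the call falls outside expected parameters."""
    profile = BASELINE[agent]
    if endpoint not in profile["allowed"]:
        raise BlockedAction(f"{agent}: unexpected endpoint {endpoint}")
    if payload.get("amount", 0) > profile["max_transfer"]:
        raise BlockedAction(f"{agent}: amount exceeds baseline")

def execute(agent: str, endpoint: str, payload: dict) -> str:
    adr_gate(agent, endpoint, payload)         # inline: bad actions never run
    return f"executed {endpoint} for {agent}"  # the irreversible part

try:
    execute("finance-agent", "/payments/transfer", {"amount": 250_000})
except BlockedAction as e:
    print("blocked:", e)
```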

The WAF Replacement Question

Salt’s report argues that legacy Web Application Firewalls need to be replaced, not augmented, for organizations with significant AI agent deployments. This is a strong claim. WAFs represent decades of security investment and are deeply embedded in compliance frameworks including PCI DSS, SOC 2, and ISO 27001. Replacing them is not a decision security teams make without significant evidence that augmentation is insufficient.

The case for replacement rather than augmentation rests on the fundamental design assumption of WAF technology: traffic is generated by humans through browsers or standard client applications, and anomalous patterns relative to that baseline indicate attacks. AI agent traffic violates this assumption at the architecture level, not the configuration level. You can tune a WAF to not rate-limit your agent service accounts. You cannot tune it to understand multi-step agent orchestration sequences, correlate intent across 40 coordinated API calls, or detect the semantic difference between legitimate cross-system agent workflows and lateral movement by a compromised service account. Those capabilities require a fundamentally different inspection model.

The practical path for most organizations is not an immediate WAF replacement but a layered approach: maintain WAF coverage for the human-generated traffic it was built for while deploying agent-specific security tooling for AI traffic. The two layers use different detection models for different traffic types, with the agent security layer handling the machine-to-machine traffic that WAFs cannot effectively monitor. The 48.9 percent visibility gap is not a WAF configuration failure. It is a recognition that the security infrastructure designed for one era of API usage is being asked to protect a fundamentally different era without the architectural changes to match.
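
In a layered deployment, the routing decision can be as simple as the sketch below, with hypothetical pipeline names; identity type, established at authentication time, picks the inspection model:

```python
# Sketch of the layered routing described above: human traffic keeps its WAF
# path, agent traffic goes through the agent-security layer.

def route(request: dict) -> str:
    # Identity type is decided at authentication (session vs service credential)
    if request["identity_type"] == "human":
        return "waf_pipeline"             # human-calibrated rules still apply
    return "agent_security_pipeline"      # sequence- and scope-aware inspection

print(route({"identity_type": "human"}))          # waf_pipeline
print(route({"identity_type": "agent-service"}))  # agent_security_pipeline
```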

What the Agent Inventory Requires in Practice

Before deploying agent-specific security tooling, organizations need to solve a measurement problem that most have not yet addressed. Most security teams do not have an accurate inventory of which AI agents are running in their environment, what API endpoints those agents call, what credential scope they operate with, or what normal behavior looks like for each agent. Without that baseline, there is no way to define anomalous behavior, and detection becomes noise.

Building a functional agent inventory requires answers to six questions per deployed agent: What is the agent’s stated purpose? Which systems can it authenticate to and with what scope? Which MCP servers does it connect to and are those registered? What is its normal API call volume and sequence pattern? Who is accountable for its behavior? And what is the expected maximum scope of its autonomous actions before human escalation is required? None of these questions are answered by standard service account provisioning workflows. All of them are required to make agent-specific security monitoring meaningful rather than performative.
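
One way to operationalize the six questions, sketched here with illustrative field names, is to make each one a required, machine-checkable attribute of an inventory record rather than tribal knowledge:

```python
# Sketch: the six inventory questions as a structured record per agent.
from dataclasses import dataclass

@dataclass
class AgentInventoryRecord:
    purpose: str              # 1. stated purpose
    systems_and_scopes: dict  # 2. what it can authenticate to, and how broadly
    mcp_servers: list         # 3. which MCP servers, all registered
    normal_call_profile: dict # 4. expected volume and sequence pattern
    owner: str                # 5. who is accountable for its behavior
    autonomy_ceiling: str     # 6. max autonomous scope before human escalation

record = AgentInventoryRecord(
    purpose="triage inbound support tickets",
    systems_and_scopes={"zendesk": "read/write tickets"},
    mcp_servers=["mcp://zendesk-tools"],
    normal_call_profile={"calls_per_task": 12, "sequence": "fetch->classify->reply"},
    owner="support-engineering",
    autonomy_ceiling="may draft replies; sending requires human approval",
)
print(record.owner, "|", record.autonomy_ceiling)
```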

The inventory gap is the same structural problem that OX Security’s finding of 400 percent growth in critical vulnerability density in AI-heavy environments points to. AI tools generate infrastructure faster than security teams can inventory it. AI agents connect to APIs faster than security teams can register and scope the credential access. The security posture management layer can only monitor what it knows about. Shadow AI agents, deployed by business units outside the security team’s visibility, are invisible to any tooling that relies on a registered agent inventory as its input.

The 78.6 percent of security leaders reporting increased executive scrutiny of AI risks are paying attention to the right signal. The gap between scrutiny and effective tooling is where breaches happen. Salt’s H1 2026 data makes the size of that gap concrete. The question for security teams is not whether to build agent-specific security posture management. It is how quickly the inventory discipline needed to make it effective can be established before the next supply chain compromise or credential exfiltration turns the visibility gap into an incident report.
