On March 19, 2026, the security world was briefly shaken: Trivy, one of the most widely used open-source security scanning tools in the container ecosystem, was temporarily compromised through a supply-chain attack. Had the community not intervened quickly, automated systems would have continued running the compromised tool. The incident illustrates how quickly trust in technical systems can become a real risk.
In parallel, another development is gaining momentum in DACH: AI agents are being integrated into enterprise processes at increasing scale — via MuleSoft, via Salesforce Agentforce, or via custom APIs. These agents don't behave like classic users. They call APIs autonomously, make decisions based on their context, and generate call patterns at a frequency no human user could ever produce.
This combination is where the real challenge lies. Classic API security thinking was designed for human users and known applications, not for autonomous systems with dynamic behavior. The agentic era requires a security model that actually reflects this new reality.
The New Attack Surface
Traditional API security relied on a relatively clear context: a human user or known application sends structured requests, authenticates via OAuth, is limited by rate limits, and protected against typical attacks by input validation. AI agents shift several of these baseline assumptions simultaneously.
Who's actually calling? An AI agent often acts on behalf of a user — but it isn't that user. If a classic OAuth token grants the agent the same rights as the human, responsibility becomes blurred. Suddenly it's unclear who authorized what with which scope.
What's actually being requested? AI agents generate API calls dynamically from their reasoning, not rigidly. This can produce unexpected parameter combinations, hit edge cases in API logic, or trigger entire sequences of calls that were never envisioned in the original API specification.
At what frequency? An agent triggering 50 API calls per second isn't automatically malicious — that may simply be normal operation. Classic rate limits designed for human usage patterns often fall short in such scenarios or mistakenly block legitimate traffic.
Five AI-Specific Attack Patterns
1. Prompt Injection via API Responses The most dangerous new attack vector. An AI agent calls an API and processes its response. If that response contains manipulated instructions, the agent can be induced to take wrong or harmful actions. If an attacker controls a data source — a JIRA ticket, a CRM note, a web result — they can insert new commands to the agent via the API response. Without proper output validation, the agent blindly follows these instructions.
2. Token Stuffing and Scope Creep In many projects, OAuth tokens for AI agents are defined too broadly, because granting global read access is simpler than defining cleanly granular scopes. If such an agent is compromised, it can access data it should never have been able to see. The OWASP API Security Top 10 describes this principle as Broken Object Level Authorization. In the agent context, this risk is multiplied.
3. Authorization Bypass Through Chaining Individual API calls may be legitimate and authorized on their own. Combined, they create a problem. If a user ID is fetched first, then roles are read, and then permissions are modified in the next step, each individual call is technically permissible. The sequence as a whole would not be. Classic API security mechanisms rarely check sequence logic.
4. Data Exfiltration via Chained Calls If an attacker gains control over an agent prompt, they can have the agent systematically query information from internal systems and transmit it outward via external APIs — seemingly harmless webhooks, for example. This pattern, frequently described as Tool Poisoning in the MCP ecosystem, becomes particularly relevant here.
5. Denial-of-Service Through Agentic Loops AI agents can get caught in loops — for example, if they try to fix an error they themselves caused in the previous step. Without circuit breakers and agent-specific rate limiting, theoretically infinite API call cascades can develop that massively strain gateways and backend systems.
MuleSoft Security Capabilities for Agentic Traffic
The MuleSoft Anypoint Platform provides a strong foundation for API security. But for agentic scenarios, it isn't enough to simply keep standard policies unchanged. The key is deliberate configuration for AI-specific behavior.
JWT Validation with Agent Claims. Standard OAuth alone isn't sufficient. Tokens for AI agents should carry explicit claims such as agent_id, agent_type, or session_id. MuleSoft's JWT Validation Policy can be configured to check and log exactly these claims. This creates identity, traceability, and a cleaner separation between human user and agent.
Granular Rate Limiting per Agent Type. Not every agent needs the same limits. A document analysis agent typically produces different load profiles than a real-time support agent. MuleSoft enables header-based rate limiting. If this is aligned with a value like agent_type, different thresholds can be defined per agent class without treating all traffic uniformly.
Threat Protection Policies. The Anypoint Platform supports, among others, IP blocklisting, JSON Threat Protection, and XML Threat Protection. For AI agents, JSON Threat Protection is particularly relevant because it limits the depth and size of payloads. This becomes important when an agent potentially processes manipulated responses from external sources.
MuleSoft API Analytics. AI agents often leave characteristic patterns: unusual call sequences, atypically high frequencies, or unusual parameter combinations. Such anomalies can be detected, provided corresponding alerts and analyses are properly configured.
The Einstein Trust Layer: What It Protects — and What It Doesn't
For Salesforce Agentforce deployments, the Einstein Trust Layer is an important security component. Among other things, it ensures that prompts and responses are not used for model training (zero data retention), checks inputs and outputs for harmful content, masks personally identifiable data before LLM processing, and logs all AI interactions.
That said, its role should not be overstated. The Trust Layer does not protect against prompt injection in external data sources that the agent itself retrieves. It does not replace granular API authorization on the APIs used by Agentforce, and it does not prevent sequence-based authorization bypasses. It is therefore an important layer — but not a complete security solution. In practice, it must be supplemented by API gateway security at the MuleSoft layer.
Zero Trust for AI Agents
The Zero Trust principle applies even more strictly to AI agents than to human users. Every agent should receive its own service identity, ideally via a separate service account. Shared tokens between multiple agents or between human and agent unnecessarily increase risk.
Least Privilege. Every agent should receive only the API scopes required by its specific use case. An agent with read-only purpose doesn't need admin access. This rule sounds obvious, but it is violated particularly frequently in early agentic AI projects.
Short-Lived Tokens. A compromised token with a 24-hour TTL opens far too large a window for an attacker. A token with a 15-minute TTL drastically reduces that window and improves incident response capability.
Session Isolation. What an agent is allowed to see or do in Session A must not automatically affect Session B. Especially in multi-step processes, this separation is essential to prevent side effects and privilege escalation.
Auditability. Every API call by an agent should be logged with agent ID, session ID, timestamp, payload hash, and response code. These logs are important not only for security monitoring, but also for incident response, compliance evidence, and later forensics.
GDPR and NIS2: Regulatory Obligations
Organizations that let AI agents work with personal data operate fully within the scope of the GDPR. The use of such systems must be recorded in the record of processing activities. For high-risk processing, a Data Protection Impact Assessment (DPIA) may be required. Additionally, technical and organizational measures (TOMs) must also cover the agentic layer. This applies equally to data subject rights such as access or deletion, which must remain enforceable even when data has been processed by agents.
NIS2 is also relevant, but the legal situation in DACH should be described precisely. In Germany, the NIS2 implementation law has been in force since December 6, 2025. In Austria, the NISG 2026 was published at the end of 2025, with key provisions entering into force on October 1, 2026. Switzerland is not directly subject to the NIS2 Directive, but since April 1, 2025 it has had its own reporting obligation for cyberattacks on critical infrastructure. For API security in agentic scenarios, this means stronger requirements around supply chain security, robust access control and authentication, and clearly defined incident reporting processes. Under NIS2, this generally means an early warning within 24 hours and an incident notification within 72 hours.
Checklist: API Security Audit with AI Agents
• Establish separate service accounts for each AI agent type, document minimal OAuth scopes, and set token TTL to no more than 1 hour.
• Configure MuleSoft rate limiting per agent type, activate JSON/XML Threat Protection, and ensure API responses are validated rather than passed through blindly.
• Enable full audit logging for all agent API calls and integrate with a SIEM, including alerts for unusual frequencies.
• Define an incident response playbook for compromised agent tokens and verify whether a DPIA is required or has already been completed for agent-based data processing.
Conclusion
Integrating AI agents into production enterprise processes requires rethinking API security. Classic protection mechanisms remain important but are no longer sufficient for agentic traffic. Because autonomous systems call APIs faster, more dynamically, and often in more complex sequences than human users or classic applications.
Especially in environments with MuleSoft, Salesforce Agentforce, and custom APIs, what is needed is a security model that cleanly separates identities, grants permissions granularly, makes anomalies visible, and factors in regulatory requirements such as GDPR and NIS2.
The good news: many of the necessary foundations are already in place. What matters is aligning them consistently with agentic scenarios. For DACH companies, this will become a key differentiator in 2026 as first AI pilot projects evolve into reliable production systems.