/compare
Where Clampd fits in the AI security stack.
An honest comparison with other approaches to AI agent security, sourced from primary documentation. We use approach categories, not vendor names, because the categories matter more than any one product.
Methodology
Capability claims below were verified against primary documentation of representative products in each category as of May 2026. Where we could not retrieve a primary source, the cell is marked with ? rather than guessed. This page is updated when categories ship new capabilities; we'd rather be wrong-and-correctable than confidently wrong.
Honest framing: AI agent security is a well-funded, fast-moving category. Multiple vendors raised significant rounds around RSAC 2026; one prominent name was acquired by a Tier 1 security platform earlier in the year. We're not the only people who think this matters. Our wedge is the specific architectural choices below — not a claim of having the field to ourselves.
Approach 1
Prompt-only safety filters and content moderation APIs
What they do well
- Classify text into harm categories (hate, violence, self-harm, sexual, jailbreak)
- Cloud-hosted, single API call, fast to integrate
- Mature taxonomies (some products use the MLCommons hazard taxonomy)
- One product category we sampled has a tool-call alignment signal in preview
What they don't do
- Don't enforce, only classify (the calling app must act on the score)
- Stateless: no rolling per-agent baseline
- No tool descriptor integrity check
- No cryptographic per-call scope tokens
- No cross-agent correlation
- Cloud-only (data leaves your network)
Representative products: vendor moderation APIs, cloud content-safety services
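To make the classify-vs-enforce gap concrete, here's a minimal sketch of the integration pattern these products require. The keyword scorer is a toy stand-in for a real moderation endpoint; the point is that the enforcement branch always lives in your code, not theirs.

```python
# Classify-then-enforce: a moderation API only returns scores. The caller
# must write (and maintain) the blocking logic itself. The scorer below is
# a toy keyword stand-in for a real cloud moderation call.

HARM_CATEGORIES = {
    "jailbreak": ("ignore previous instructions",),
    "violence": ("build a weapon",),
}

def moderation_score(text: str) -> dict[str, float]:
    """Toy classifier: 1.0 if a trigger phrase appears, else 0.0."""
    lowered = text.lower()
    return {cat: float(any(p in lowered for p in phrases))
            for cat, phrases in HARM_CATEGORIES.items()}

def guarded_call(prompt: str, llm) -> str:
    # Enforcement is the caller's job: block on any high score.
    scores = moderation_score(prompt)
    if max(scores.values()) >= 0.5:
        raise PermissionError(f"blocked by moderation: {scores}")
    return llm(prompt)

reply = guarded_call("What is the capital of France?", lambda p: "Paris")
# A prompt containing "ignore previous instructions" raises PermissionError.
```

Note what's missing from this pattern by construction: no state between calls, no view of tool parameters, no enforcement unless you write it.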
Approach 2
Cloud AI gateways with built-in guardrails
What they do well
- Strong PII entity coverage (one major product detects 30+ entity types)
- Block + anonymise actions, parallel policy evaluation
- Native LLM gateway features: rate limiting, semantic caching, load balancing
- Two major product families now ship MCP-aware features: server portals, prompt-injection inspection on MCP traffic, shadow MCP discovery, DLP scanning, agent-API passthrough
What they don't do
- One major product's PII filter explicitly does NOT detect PII in tool-use output parameters (per their own docs)
- No tool descriptor integrity hashing
- No cryptographic per-call scope tokens (auth is IAM- or OAuth-based at the API boundary, not per-call params)
- No behavioural anomaly scoring per agent
- No cross-agent / delegation correlation engine
- No AP2 / x402 mandate validation
- Cloud-only and tied to that vendor's stack
Representative products: hyperscaler AI gateways and managed guardrail services
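The "block + anonymise" policy actions these gateways offer reduce to the pattern below. This is an illustrative two-entity regex sketch, not any vendor's implementation; real products catalogue 30+ entity types with far better detectors.

```python
import re

# Sketch of guardrail "block + anonymise" actions, with two illustrative
# regex entity types standing in for a gateway's full PII catalogue.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def anonymise(text: str) -> str:
    """Replace each detected entity with a typed placeholder."""
    for name, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{name}>", text)
    return text

def apply_policy(text: str, action: str = "anonymise") -> str:
    if action == "block" and any(p.search(text) for p in PII_PATTERNS.values()):
        raise ValueError("blocked: PII detected")
    return anonymise(text)

print(apply_policy("contact alice@example.com, SSN 123-45-6789"))
# contact <EMAIL>, SSN <US_SSN>
```

The catch from the list above: if the gateway only applies this to prompt and completion text, PII inside tool-use output parameters sails through untouched.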
Approach 3
Open-source LLM safety classifiers
What they do well
- Self-hosted, runs on your hardware, no data egress
- Strong content classification (one model uses the 14-category MLCommons hazard taxonomy, including a Code Interpreter Abuse class)
- Extensible via DSLs and Python actions (one toolkit ships a dialogue-flow language)
- Apache-style or permissive licenses
What they don't do
- Latency cost: one toolkit's own paper reports ~3x the latency of an unguarded LLM call due to sequential rail evaluation
- Stateless models: no rolling per-agent baseline
- No tool descriptor integrity hashing in any toolkit we sampled
- No cryptographic per-call scope tokens
- Tool-call security is left to the developer to wire up via custom actions
- No structured audit-trail schema by default
- No AP2 / x402 mandate validation
Representative products: open-source LLM safety classifiers and dialogue-rail toolkits
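The ~3x latency figure follows directly from the architecture: an input rail, the main generation, and an output rail are each a full model round-trip, evaluated one after another. A stub sketch (simulated delays, not a real toolkit) makes the arithmetic visible:

```python
import time

# Why sequential rails cost ~3x an unguarded call: three model round-trips
# run back to back. model_call is a stub; sleep stands in for inference.

CALLS = {"n": 0}

def model_call(prompt: str, delay: float = 0.01) -> str:
    CALLS["n"] += 1
    time.sleep(delay)              # stand-in for one LLM round-trip
    return f"ok:{prompt[:16]}"

def guarded(prompt: str) -> str:
    model_call(f"input rail: {prompt}")     # rail 1: classify the input
    answer = model_call(prompt)             # main generation
    model_call(f"output rail: {answer}")    # rail 2: classify the output
    return answer

guarded("hello")
# three sequential round-trips where an unguarded call makes one
```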
Approach 4
DIY hooks in agent frameworks
What they do well
- Maximum flexibility: you implement exactly what you need
- Frameworks expose callback / interrupt hooks at tool-call boundaries
- Self-hosted by default; everything runs in your process
- Free
What they don't do (by default)
- Everything, until you build it. No security capability ships by default.
- No detection rules ship with the framework
- No descriptor hashing, no scope tokens, no behavioural baseline, no cross-agent correlation, no payment validation
- No structured audit schema
- Production requires ongoing maintenance: rules, dashboards, kill switches, compliance reports, tests
Representative implementations: framework callback APIs, interrupt patterns, custom Python middleware
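The DIY pattern almost always starts the same way: a wrapper at the tool-call boundary, which is exactly what framework callback and interrupt hooks expose. A minimal sketch (the blocklist rule is illustrative; every rule, log line, and dashboard after this is yours to write and maintain):

```python
from functools import wraps

# Minimal DIY tool-call hook: a decorator at the tool boundary. This is the
# shape framework callbacks/interrupts give you; the rules are up to you.

BLOCKED_ARGS = ("DROP TABLE", "rm -rf")

def tool_guard(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        flat = " ".join(map(str, args)) + " " + " ".join(map(str, kwargs.values()))
        if any(bad.lower() in flat.lower() for bad in BLOCKED_ARGS):
            raise PermissionError(f"tool call to {fn.__name__} blocked")
        return fn(*args, **kwargs)
    return wrapper

@tool_guard
def run_sql(query: str) -> str:
    return f"executed: {query}"

print(run_sql("SELECT 1"))   # executed: SELECT 1
# run_sql("DROP TABLE users") raises PermissionError
```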
Approach 5
Tool-call firewall (Clampd)
What we do
- Inline gateway between agent and tool with a 9-stage pipeline
- 263 detection rules across 12 categories; 44µs in-process evaluation
- Tool descriptor integrity hash (SHA-256) checked on every call
- Cryptographic per-call scope tokens (Ed25519-signed) bound to (category, subcategory, operation)
- Behavioural anomaly scoring per agent (EMA-based) with auto-suspend
- Cross-agent / delegation chain correlation
- AP2 mandate validation and x402 HTTP 402 interception built in
- Self-hosted, source-available, no telemetry by default
- Python and TypeScript SDKs; MCP proxy mode
- Compliance report templates (HIPAA, GDPR, SOC2, PCI-DSS) plus CCPA tags on rules
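To illustrate one item from the list, here is the general shape of tool descriptor integrity hashing: pin a SHA-256 digest of each tool's descriptor at registration, then re-check it on every call so a silently mutated descriptor (a rug-pull) is rejected. This is a conceptual sketch, not Clampd's actual implementation.

```python
import hashlib
import json

# Tool descriptor integrity hashing, sketched: hash a canonical JSON form
# of the descriptor at registration, verify the digest on every call.

def descriptor_hash(descriptor: dict) -> str:
    canonical = json.dumps(descriptor, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

PINNED: dict[str, str] = {}

def register(name: str, descriptor: dict) -> None:
    PINNED[name] = descriptor_hash(descriptor)

def check_call(name: str, descriptor: dict) -> None:
    if descriptor_hash(descriptor) != PINNED.get(name):
        raise PermissionError(f"descriptor for {name} changed since registration")

desc = {"name": "send_email", "params": {"to": "string", "body": "string"}}
register("send_email", desc)
check_call("send_email", desc)                               # passes
tampered = {**desc, "params": {**desc["params"], "bcc": "string"}}
# check_call("send_email", tampered) raises PermissionError
```

Canonicalising the JSON (sorted keys, fixed separators) matters: two semantically identical descriptors must hash identically, or every serialisation quirk becomes a false positive.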
Where we don't lead
- One cloud gateway category catalogues 30+ PII entity types out of the box; we cover the standard set, not as wide a taxonomy
- Open-source LLM safety classifiers run as a single model call; we run a multi-stage pipeline (microseconds in the engine, single-digit ms end to end)
- If you only need text content moderation and never call tools, a moderation API is simpler
- If you need vendor-managed cloud and never want to operate infra, a hosted AI gateway has lower ops
Representative product: Clampd (this site)
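Behavioural anomaly scoring with an EMA baseline, as listed above, is simple to sketch: fold each call's risk into an exponentially weighted average per agent, and auto-suspend when it crosses a threshold. The smoothing factor and threshold below are illustrative, not Clampd's tuned values.

```python
# EMA-based per-agent anomaly scoring with auto-suspend (conceptual sketch;
# ALPHA and SUSPEND_AT are illustrative constants).

ALPHA = 0.3          # EMA smoothing factor: weight of the newest observation
SUSPEND_AT = 0.8     # rolling score at which the agent is auto-suspended

class AgentBaseline:
    def __init__(self) -> None:
        self.score = 0.0
        self.suspended = False

    def observe(self, call_risk: float) -> float:
        """Fold one tool call's risk (0..1) into the rolling baseline."""
        self.score = ALPHA * call_risk + (1 - ALPHA) * self.score
        if self.score >= SUSPEND_AT:
            self.suspended = True
        return self.score

agent = AgentBaseline()
for risk in (0.1, 0.1, 0.9, 1.0, 1.0, 1.0, 1.0):
    agent.observe(risk)
# a sustained burst of high-risk calls drives the EMA over the threshold
```

The EMA is what makes this per-agent and rolling: one risky call barely moves a quiet agent's score, while a sustained pattern does, which is the property the stateless approaches above can't offer.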
Capability matrix
| Capability | Prompt-only filters | Cloud AI gateways | OSS LLM classifiers | DIY framework hooks | Clampd |
| --- | --- | --- | --- | --- | --- |
| Prompt injection / jailbreak detection | ~ varies | ✓ | ✓ | ~ DIY | ✓ |
| PII detection in tool params + responses | ✗ | ~ limited (one major product excludes tool output by docs) | ~ DIY | ~ DIY | ✓ |
| Tool descriptor integrity hash | ✗ | ✗ | ✗ | ~ DIY | ✓ |
| Cryptographic per-call scope tokens (Ed25519-signed) | ✗ | ✗ (IAM/OAuth at API boundary, not per-call params) | ✗ | ~ DIY | ✓ |
| Behavioural anomaly scoring (per-agent rolling baseline) | ✗ stateless | ✗ | ✗ stateless | ~ DIY | ✓ |
| Cross-agent / delegation correlation | ✗ | ✗ | ✗ | ~ DIY | ✓ |
| AP2 + x402 payment mandate validation | ✗ | ✗ | ✗ | ~ DIY | ✓ |
| Self-hosted (no third-party data egress) | ✗ | ✗ | ✓ | ✓ | ✓ |
| Sub-stage latency telemetry | ✗ | ~ aggregate metrics | ✗ | ~ DIY | ✓ |
| Structured audit-event schema | ~ caller-side | ~ via cloud monitor | ✗ | ~ DIY | ✓ |
| Multi-language SDK (Python + TypeScript) | ✓ | ✓ | ~ Python via HF | ✗ | ✓ |
| MCP server proxy mode | ✗ | ~ two product families now (one shipped MCP server portals + AI Security for Apps in WAF) | ✗ | ~ DIY | ✓ |
| Compliance report templates (HIPAA / GDPR / SOC2 / PCI) | ✗ | ✗ (platform certs exist, no templates in product) | ✗ | ~ DIY | ✓ |
Legend: ✓ verified yes · ✗ verified no · ~ partial / depends · ? unverified (excluded above)
Honest tradeoffs
No tool fits every job. Here's where each of the other approaches beats us.
Pick A1
If you only do text moderation, never call tools, and want the lowest possible integration cost. A single moderation API call is simpler than running an inline gateway.
Pick A2
If you're already deep in one hyperscaler's stack and you'd rather have a managed service than operate infrastructure. Cloud AI gateways win on ops simplicity if you accept vendor lock-in.
Pick A3
If your security need is content classification on a single model call, you have GPU, and tool-call awareness is not a requirement. OSS classifiers excel at the LLM I/O boundary.
Pick A4
If your security model is unique enough that no off-the-shelf solution fits and you have eng capacity to build, maintain, and audit security primitives yourself. Maximum flexibility, maximum maintenance.
Pick A5 (Clampd)
If you're shipping AI agents that call real tools (DBs, APIs, MCP servers, payment endpoints), need tool descriptor integrity, scope enforcement, behavioural baselines, and cross-agent correlation, and you'd rather not build all of that from scratch.
Try Clampd in 60 seconds
One line of Python or TypeScript. Works with OpenAI, Anthropic, LangChain, CrewAI, Google ADK, and any MCP server. Self-hosted, source-available, no telemetry by default.
pip install clampd
npm install @clampd/sdk
Get Started →
Why Clampd