Wrap your OpenAI, Anthropic, LangChain, CrewAI, or Google ADK client with one function call. Every tool invocation goes through the Clampd gateway: 263 detection rules, scope tokens, behavioural baseline, kill switch. Drop in, ship.
pip install clampd
npm install @clampd/sdk
The whole pitch in five lines of Python.
import clampd
from openai import OpenAI
client = clampd.openai(OpenAI(), agent_id="my-agent", secret="ags_...")
# All tool calls now go through the Clampd gateway. No other code change needed.
response = client.chat.completions.create(model="gpt-4o", messages=[...], tools=[...])
The wrapper intercepts chat.completions.create, identifies tool-call blocks in the response, sends each call to the Clampd gateway, and either lets it through (with a scope-bound token) or raises ClampdBlockedError. No middleware to wire up, no proxy to configure beyond the gateway URL.
One line of code. Every call inspected on the way in and on the way out.
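The interception pattern the wrapper uses can be sketched in plain Python. This is an illustration, not the SDK's internals: `BlockedError` stands in for `ClampdBlockedError`, and the denylist is a hypothetical policy where the real gateway consults its rule engine.

```python
# Illustrative sketch of the interception pattern: inspect each tool call
# in a response and raise on denial, as the wrapper does for you.
class BlockedError(Exception):  # stands in for clampd.ClampdBlockedError
    pass

DENYLIST = {"shell.exec"}  # hypothetical policy for this sketch

def guard_tool_calls(response: dict) -> dict:
    for call in response.get("tool_calls", []):
        if call["name"] in DENYLIST:
            raise BlockedError(f"tool call blocked: {call['name']}")
    return response

# An allowed call passes through unchanged:
safe = guard_tool_calls({"tool_calls": [{"name": "search", "arguments": "{}"}]})
assert safe["tool_calls"][0]["name"] == "search"
```

In application code you catch the typed error around the wrapped client call and decide whether to retry, re-prompt, or surface the denial.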
Tool descriptor hashing uses byte-identical canonicalisation across all four languages. A tool registered from Python and invoked from TypeScript produces the same SHA-256 hash, so rug-pull detection works in mixed-language deployments.
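Why canonicalisation matters can be shown with a small sketch. The SDK's exact canonicalisation rules may differ; the point is that sorted keys and fixed separators make the serialised bytes, and therefore the hash, independent of the source language's object ordering.

```python
import hashlib
import json

def descriptor_hash(name: str, description: str, parameters: dict) -> str:
    # Canonical JSON: sorted keys, no whitespace, ASCII-only -- the same
    # bytes regardless of how the dict was built. (Illustrative; the SDK's
    # canonicalisation rules may differ in detail.)
    canonical = json.dumps(
        {"name": name, "description": description, "parameters": parameters},
        sort_keys=True, separators=(",", ":"), ensure_ascii=True,
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Key order doesn't change the hash:
a = descriptor_hash("search", "Web search",
                    {"type": "object", "properties": {"q": {"type": "string"}}})
b = descriptor_hash("search", "Web search",
                    {"properties": {"q": {"type": "string"}}, "type": "object"})
assert a == b
```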
# OpenAI
client = clampd.openai(OpenAI(), agent_id="my-agent")
# Anthropic / Claude
client = clampd.anthropic(Anthropic(), agent_id="my-agent")
# LangChain (callback handler: guards every tool)
agent.invoke(input, config={"callbacks": [clampd.langchain(agent_id="my-agent")]})
# Google ADK (before_tool_callback)
agent = Agent(tools=[search], before_tool_callback=clampd.adk(agent_id="my-agent"))
# CrewAI
guard = clampd.crewai(agent_id="my-agent")
agent = Agent(role="researcher", step_callback=guard.step_callback, tools=[search_tool])
# Generic decorator (any Python function)
@clampd.guard("database.query", agent_id="my-agent")
def run_query(sql): ...
Every tool call evaluated against built-in rules: SQL injection, command injection, SSRF, prompt injection, PII leakage, path traversal, reverse shells, schema injection. Sub-50µs in-process eval.
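To make the rule evaluation concrete, here is a toy version of one SQL-injection pattern. The real gateway's 263 rules are far broader and faster; this only illustrates the shape of a check applied to tool-call parameters.

```python
import re

# Toy example of the kind of pattern a detection rule evaluates against
# a tool-call parameter. Not the gateway's actual rule set.
SQLI = re.compile(
    r"('\s*(OR|AND)\s+\d+\s*=\s*\d+)|(;\s*DROP\s+TABLE)",
    re.IGNORECASE,
)

def check_sql_param(value: str) -> bool:
    """Return True if the parameter passes this (single, toy) rule."""
    return SQLI.search(value) is None

assert check_sql_param("SELECT name FROM users WHERE id = 42")
assert not check_sql_param("1' OR 1=1 --")
```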
Approved tool calls get a short-lived Ed25519-signed token bound to (tool, params). The agent never holds raw downstream credentials. Tools verify via JWKS.
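The token is JWT-shaped, so a tool can inspect its claims before (and in addition to) verifying the Ed25519 signature against the gateway's JWKS. The sketch below builds an unsigned stand-in; the claim names (`tool`, `params_sha256`) and the `kid` are hypothetical, chosen only to illustrate the structure.

```python
import base64
import json
import time

def b64url(obj: dict) -> str:
    """Base64url-encode a JSON object, JWT-style (no padding)."""
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()

# Unsigned sketch; real scope tokens are Ed25519-signed by the gateway.
header = {"alg": "EdDSA", "typ": "JWT", "kid": "clampd-key"}  # hypothetical kid
claims = {                      # hypothetical claim names for illustration
    "tool": "database.query",   # the token is bound to one tool...
    "params_sha256": "ab12",    # ...and to these exact parameters
    "exp": int(time.time()) + 30,  # short-lived
}
token = f"{b64url(header)}.{b64url(claims)}.<signature>"

# A tool decodes the claims segment to see what the token authorises:
payload = token.split(".")[1]
payload += "=" * (-len(payload) % 4)   # restore base64 padding
decoded = json.loads(base64.urlsafe_b64decode(payload))
assert decoded["tool"] == "database.query"
```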
SHA-256 over (name, description, parameters) sent with every call. If a tool's schema mutates between approval and call (MCP rug pull), the gateway returns a typed descriptor_hash_mismatch denial.
Set check_response=True and the SDK scans tool responses for PII, secrets, and data anomalies before they enter the LLM context.
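A toy illustration of what such a response scan looks for, with two simplified detectors. The SDK's actual detectors cover far more PII and secret formats than these two regexes.

```python
import re

# Simplified detectors, for illustration only.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
AWS_KEY = re.compile(r"AKIA[0-9A-Z]{16}")

def scan_response(text: str) -> list[str]:
    """Return the categories found in a tool response, before it
    is allowed into the LLM context."""
    findings = []
    if EMAIL.search(text):
        findings.append("email")
    if AWS_KEY.search(text):
        findings.append("aws_access_key")
    return findings

assert scan_response("Contact alice@example.com for access") == ["email"]
assert scan_response("temperature: 21C") == []
```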
Use clampd.agent("orchestrator") as a decorator or context manager; every @clampd.guard call inside it automatically inherits the delegation chain. Cycle detection and a maximum delegation depth are enforced.
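The delegation bookkeeping amounts to tracking the chain of agent IDs and refusing cycles and excessive depth. A minimal sketch, with illustrative names (the SDK's internal representation is its own):

```python
MAX_DEPTH = 5  # illustrative cap; the SDK enforces its own configured limit

class DelegationError(Exception):
    pass

def extend_chain(chain: list[str], agent_id: str) -> list[str]:
    """Append agent_id to a delegation chain, enforcing the two invariants."""
    if agent_id in chain:
        raise DelegationError(
            f"delegation cycle: {' -> '.join(chain + [agent_id])}"
        )
    if len(chain) >= MAX_DEPTH:
        raise DelegationError("max delegation depth exceeded")
    return chain + [agent_id]

chain = extend_chain(["orchestrator"], "research-agent")
assert chain == ["orchestrator", "research-agent"]
```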
Streaming responses (OpenAI, Anthropic) wrapped so tool-call blocks are intercepted as they emerge from the stream. No need to disable streaming for security.
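The stream-guarding idea can be sketched as a generator wrapper: text deltas pass through untouched, tool-call argument deltas are buffered, and the assembled call is checked before anything downstream sees it. The chunk shape here is deliberately simplified and does not match any provider's wire format.

```python
def guard_stream(chunks, is_allowed):
    """Yield text chunks as they arrive; hold back tool-call argument
    deltas until the call is complete, then check it before yielding."""
    buffered_args: list[str] = []
    for chunk in chunks:
        if chunk["type"] == "tool_args_delta":
            buffered_args.append(chunk["delta"])   # buffer until complete
        elif chunk["type"] == "tool_call_end":
            args = "".join(buffered_args)
            if not is_allowed(chunk["name"], args):
                raise RuntimeError(f"blocked mid-stream: {chunk['name']}")
            yield {"type": "tool_call", "name": chunk["name"], "arguments": args}
            buffered_args = []
        else:
            yield chunk  # text deltas flow through unchanged

out = list(guard_stream(
    [{"type": "text", "delta": "Hi"},
     {"type": "tool_args_delta", "delta": '{"q":'},
     {"type": "tool_args_delta", "delta": '"rust"}'},
     {"type": "tool_call_end", "name": "search"}],
    is_allowed=lambda name, args: True,
))
assert out[-1]["arguments"] == '{"q":"rust"}'
```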
# Environment variables (all SDKs honour these)
CLAMPD_AGENT_ID=my-agent
CLAMPD_GATEWAY_URL=http://localhost:8080 # or https://gateway.clampd.dev
CLAMPD_API_KEY=clmpd_demo_key
CLAMPD_AGENT_SECRET=ags_... # or per-agent: CLAMPD_SECRET_my_agent=...
# Or in code:
clampd.init(
agent_id="orchestrator",
gateway_url="https://gateway.clampd.dev",
api_key="clmpd_...",
agents={ # multi-agent setup
"orchestrator": os.environ["ORCH_SECRET"],
"research-agent": os.environ["RESEARCHER_SECRET"],
},
)
Self-hosted gateway free for under 25 agents. Hosted gateway available. Source-available, no telemetry by default.