OpenClaw is an open-source AI assistant with persistent memory and multi-platform access. Routing it through Portkey gives you request logs, cost tracking, automatic failovers, and team controls.

Get Connected

Already have Portkey configured? Skip to step 3 — use your existing provider slug and API key.
1. Add your provider to Portkey
   Open Model Catalog, click Add Provider, enter your API key, and create a slug (e.g., anthropic-prod).
2. Create a Portkey API key
   Go to API Keys, click Create, and copy the key.
3. Configure OpenClaw
openclaw config edit
Add the portkey provider:
{
  agents: {
    defaults: { model: { primary: "portkey/@anthropic/claude-sonnet-4-5" } }
  },
  models: {
    mode: "merge",
    providers: {
      portkey: {
        baseUrl: "https://api.portkey.ai/v1",
        apiKey: "${PORTKEY_API_KEY}",
        api: "openai-completions",
        models: [
          { id: "@anthropic/claude-sonnet-4-5", name: "Claude Sonnet 4.5" }
        ]
      }
    }
  }
}
4. Set your API key
openclaw config set env.PORTKEY_API_KEY "pk-xxx"

See Your Requests

Run openclaw and make a request. Open the Portkey dashboard — you should see your request logged with cost, latency, and the full payload.
You can also verify your setup by listing available models:
curl -s https://api.portkey.ai/v1/models \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" | jq '.data[].id'

Add More Models

Configure Models

Add models from any provider you’ve configured in Portkey:
models: [
  { id: "@anthropic/claude-opus-4-5", name: "Claude Opus 4.5" },
  { id: "@anthropic/claude-sonnet-4-5", name: "Claude Sonnet 4.5" },
  { id: "@openai/gpt-5.2", name: "GPT-5.2" }
]
Switch your default model by updating:
agents: {
  defaults: { model: { primary: "portkey/@openai/gpt-5.2" } }
}

Track with Metadata

Add headers to group requests by session, or to tag them by team and project:
// Add to your portkey provider config
headers: {
  "x-portkey-trace-id": "session-auth-refactor",
  "x-portkey-metadata": "{\"team\":\"backend\",\"project\":\"api-v2\"}"
}
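If you want to see how these headers would be attached to a direct request against Portkey's OpenAI-compatible endpoint, here is a minimal sketch. The key detail is that x-portkey-metadata must be a JSON-encoded string, not a nested object. The helper function name is ours, not part of any SDK.

```python
import json
import os

def portkey_headers(api_key: str, trace_id: str, metadata: dict) -> dict:
    """Build Portkey request headers (illustrative helper, not an SDK function).

    x-portkey-metadata must be a JSON-encoded string, not a nested object.
    """
    return {
        "x-portkey-api-key": api_key,
        "x-portkey-trace-id": trace_id,
        "x-portkey-metadata": json.dumps(metadata),
    }

headers = portkey_headers(
    os.environ.get("PORTKEY_API_KEY", "pk-xxx"),
    trace_id="session-auth-refactor",
    metadata={"team": "backend", "project": "api-v2"},
)
print(headers["x-portkey-metadata"])
```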
These appear in the dashboard for filtering and analytics.

Make It Reliable

Create configs in Portkey Configs and attach them to your API key. When the config is attached, these features apply automatically to all requests.
Automatically switch providers when one is down:
{
  "strategy": { "mode": "fallback" },
  "targets": [
    { "provider": "@anthropic-prod" },
    { "provider": "@openai-backup" }
  ]
}
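The gateway runs this logic server-side, so OpenClaw never sees it. As a mental model, fallback mode is roughly the following: try each target in order and return the first success. The stub provider functions below are hypothetical stand-ins for @anthropic-prod and @openai-backup.

```python
# Illustrative only: the Portkey gateway runs this logic server-side.
# Rough sketch of "fallback" mode: try each target in order until one succeeds.

def call_with_fallback(targets, request):
    last_error = None
    for target in targets:
        try:
            return target(request)   # first successful target wins
        except Exception as exc:     # a real gateway matches specific status codes
            last_error = exc
    raise last_error                 # every target failed

# Hypothetical stubs standing in for @anthropic-prod and @openai-backup.
def anthropic_prod(request):
    raise RuntimeError("provider down")

def openai_backup(request):
    return f"handled by backup: {request}"

print(call_with_fallback([anthropic_prod, openai_backup], "hello"))
```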

Control Costs

Budget Limits

Set spending limits in Model Catalog → select your provider → Budget & Limits:
  • Cost limit: Maximum spend per period (e.g., $500/month)
  • Token limit: Maximum tokens (e.g., 10M/week)
  • Rate limit: Maximum requests (e.g., 100/minute)
Requests that exceed limits return an error rather than proceeding.

Guardrails

Add input/output checks to filter requests:
{
  "input_guardrails": ["pii-check"],
  "output_guardrails": ["content-moderation"]
}
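To make the idea concrete, here is a toy stand-in for an input guardrail: a regex screen for email-like strings in the prompt. This is purely illustrative; Portkey's actual pii-check runs server-side and is more sophisticated than this.

```python
import re

# Toy stand-in for an input guardrail like "pii-check" (illustrative only;
# Portkey's real checks are configured and run server-side).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def input_guardrail(prompt: str) -> bool:
    """Return True if the prompt passes (contains no email-like PII)."""
    return EMAIL_RE.search(prompt) is None

print(input_guardrail("Summarize this design doc"))           # → True
print(input_guardrail("Email alice@example.com the report"))  # → False
```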
See Guardrails for available checks.

Roll Out to Teams

Attach Configs to Keys

When deploying to a team, attach configs to API keys so developers get reliability and cost controls automatically.
  1. Create a config with fallbacks, caching, retries, and guardrails
  2. Create an API key and attach the config
  3. Distribute the key to developers
Developers use a simple config — all routing and reliability logic is handled by the attached config. When you update the config, changes apply immediately.

Enterprise Options

  • SaaS: Everything on Portkey cloud
  • Hybrid: Gateway on your infra, control plane on Portkey
  • Air-gapped: Everything on your infra
In hybrid mode, the gateway has no runtime dependency on the control plane — routing continues even if the connection drops.
Portkey supports several API key types:
  • JWT: Bring your own JWKS URL for validation
  • Service keys: For production applications
  • User keys: For individual developers with personal budget limits
Create keys via UI, API, or Terraform.
Override default pricing for negotiated rates or custom models in Model Catalog → Edit model.

Troubleshooting

  • Requests not in dashboard: verify the base URL is https://api.portkey.ai/v1 and the API key is correct
  • 401 errors: regenerate your Portkey key and check provider credentials in Model Catalog
  • Model not found: use the @provider/model format (e.g., @anthropic/claude-sonnet-4-5)
  • Rate limited: adjust limits in Model Catalog → Budget & Limits