API Reference & Setup
One API key for Claude Code, OpenCode, OpenClaw, Codex CLI, Gemini CLI, Cursor, and direct API access. Claude & GPT models at 50% off official pricing -- pay as you go, no subscription required.
1. Getting Started
Zenn.Engineering is a drop-in API proxy for Anthropic, OpenAI, and Google AI models. You use a single ck_-prefixed API key across all supported tools and SDKs. No code changes required — just set the base URL and API key.
Top up credits, then create a key from your dashboard.
Point your tool to https://zenn.engineering/api/v1
Works instantly with Claude Code, OpenCode, OpenClaw, and more.
Base URLs
| Provider | Base URL |
|---|---|
| Claude (Anthropic) | https://zenn.engineering/api/v1 |
| Codex (OpenAI) | https://zenn.engineering/api/v1/codex |
| Gemini (Google) | https://zenn.engineering/api/v1/gemini |
2. Claude Code
Anthropic's official CLI for Claude. Set two environment variables and it works as a drop-in replacement — same CLI, same models, 50% off official pricing.
Step 1: Set environment variables
Add to your shell profile (~/.zshrc or ~/.bashrc):
export ANTHROPIC_BASE_URL=https://zenn.engineering/api/v1
export ANTHROPIC_API_KEY=ck_YOUR_API_KEY
Step 2: Restart terminal & run
# Default model (Sonnet 4.6)
claude

# Use Opus 4.6
claude --model claude-opus-4-6
How it works
Claude Code sends the API key via the x-api-key header (Anthropic SDK native behavior) and appends /messages to the base URL automatically. The anthropic-version and anthropic-beta headers are forwarded to the upstream API.
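The request Claude Code effectively makes can be sketched as a plain HTTP call. A minimal illustration using only the standard library; the key is a placeholder and the request is deliberately not sent, so the sketch runs without network access or a funded account:

```python
import json
import urllib.request

BASE_URL = "https://zenn.engineering/api/v1"
API_KEY = "ck_YOUR_API_KEY"  # placeholder -- create a real key from your dashboard

# Claude Code appends /messages to the base URL and authenticates
# via the x-api-key header, as the Anthropic SDK does natively.
payload = {
    "model": "claude-sonnet-4-6",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Hello"}],
}
req = urllib.request.Request(
    BASE_URL + "/messages",
    data=json.dumps(payload).encode(),
    headers={
        "x-api-key": API_KEY,
        "anthropic-version": "2023-06-01",  # forwarded to the upstream API
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would actually send it.
```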
3. OpenCode
Multi-provider AI coding agent. Configure once — access Claude, Codex, and Gemini models through one JSON config.
Step 1: Install
npm i -g opencode-ai
Step 2: Create config
Create or edit ~/.config/opencode/opencode.json:
{
"$schema": "https://opencode.ai/config.json",
"provider": {
"anthropic": {
"options": {
"baseURL": "https://zenn.engineering/api/v1",
"apiKey": "ck_YOUR_API_KEY"
},
"models": {
"claude-sonnet-4-6": { "name": "Claude Sonnet 4.6" },
"claude-opus-4-6": { "name": "Claude Opus 4.6" }
}
},
"zenn-codex": {
"npm": "@ai-sdk/openai-compatible",
"name": "Zenn Codex",
"options": {
"baseURL": "https://zenn.engineering/api/v1/codex",
"apiKey": "ck_YOUR_API_KEY"
},
"models": {
"gpt-5.3-codex": { "name": "GPT-5.3 Codex" }
}
},
"zenn-gemini": {
"npm": "@ai-sdk/openai-compatible",
"name": "Zenn Gemini",
"options": {
"baseURL": "https://zenn.engineering/api/v1/gemini",
"apiKey": "ck_YOUR_API_KEY"
},
"models": {
"gemini-3-pro-preview": { "name": "Gemini 3.0 Pro" },
"gemini-3-flash-preview": { "name": "Gemini 3.0 Flash" }
}
}
}
}
Step 3: Start coding
cd your-project
opencode

# Switch models with /model
How it works
OpenCode sends the API key via the Authorization: Bearer header. The anthropic provider uses the native Anthropic SDK format, while zenn-codex and zenn-gemini use the OpenAI-compatible provider adapter.
4. OpenClaw
Open-source autonomous AI coding agent. Configure Zenn as a custom provider for Claude, Codex, and Gemini access through one config file.
Step 1: Install
curl -fsSL https://openclaw.ai/install.sh | bash
Step 2: Configure providers
Create or edit ~/.openclaw/openclaw.json:
{
"models": {
"providers": {
"zenn-claude": {
"baseUrl": "https://zenn.engineering/api/v1",
"apiKey": "ck_YOUR_API_KEY",
"api": "anthropic-messages",
"models": [
{ "id": "claude-sonnet-4-6", "name": "Claude Sonnet 4.6" },
{ "id": "claude-opus-4-6", "name": "Claude Opus 4.6" }
]
},
"zenn-codex": {
"baseUrl": "https://zenn.engineering/api/v1/codex",
"apiKey": "ck_YOUR_API_KEY",
"api": "openai-responses",
"models": [
{ "id": "gpt-5.3-codex", "name": "GPT-5.3 Codex" }
]
},
"zenn-gemini": {
"baseUrl": "https://zenn.engineering/api/v1/gemini",
"apiKey": "ck_YOUR_API_KEY",
"api": "openai-completions",
"models": [
{ "id": "gemini-3-pro-preview", "name": "Gemini 3.0 Pro" },
{ "id": "gemini-3-flash-preview", "name": "Gemini 3.0 Flash" }
]
}
}
}
}
Step 3: Start coding
cd your-project
openclaw
API format reference
| Provider | api value | Description |
|---|---|---|
| Claude | anthropic-messages | Anthropic Messages API format |
| Codex | openai-responses | OpenAI Responses API format |
| Gemini | openai-completions | OpenAI Chat Completions format |
5. Codex CLI
OpenAI's Codex CLI for GPT-5.x code generation, routed through Zenn.Engineering's /api/v1/responses endpoint at 50% off official pricing. Codex CLI speaks the OpenAI Responses API by default, so configure Zenn as a custom provider.
Step 1 — Install Codex CLI
npm install -g @openai/codex
# or: brew install codex
Step 2 — Generate an API key
At zenn.engineering/manage-api-keys, create a key. It must start with ck_. OpenAI's sk-… keys will not work.
Step 3 — Write ~/.codex/config.toml
model = "gpt-5-codex"
model_provider = "zenn"

[model_providers.zenn]
name = "Zenn"
base_url = "https://zenn.engineering/api/v1"
env_key = "ZENN_API_KEY"
wire_api = "responses"
Important: base_url stops at /api/v1 (no trailing slash, no /responses). Codex CLI appends the path itself. wire_api = "responses" is required.
Step 4 — Export the API key
export ZENN_API_KEY="ck_your_key_here"

# To persist across shells, append to your rc file:
echo 'export ZENN_API_KEY="ck_your_key_here"' >> ~/.zshrc   # or ~/.bashrc
The env var name (ZENN_API_KEY) must match env_key in the config.
Step 5 — Run
codex

# or for one-shot:
codex exec "refactor this function"
Verify the endpoint is reachable
curl https://zenn.engineering/api/v1/responses
Returns JSON with the supported OpenAI-format models. If this fails, Codex CLI cannot connect either.
Troubleshooting
- `401 unauthorized`: key doesn't start with `ck_`, or `$ZENN_API_KEY` is empty (run `echo $ZENN_API_KEY` to verify).
- `missing env ZENN_API_KEY`: export it in the shell where you launch `codex`, or append it to `~/.zshrc` and `source` it.
- Stuck on "connecting": `wire_api` is missing or set to anything other than `"responses"`.
- `MCP client failed to start`: a pre-existing MCP server config in `~/.codex/config.toml`. Usually a warning and safe to ignore; to silence it, add `mcp_servers = {}` to the config.
- `402 insufficient credits`: top up at zenn.engineering/settings. The minimum balance to call the API is 200 credits ($2.00).
6. Gemini CLI
Google's Gemini CLI with full streaming support, routed through Zenn.Engineering at discounted pricing.
Set environment variables
export GEMINI_API_BASE_URL=https://zenn.engineering/api/v1/gemini
export GEMINI_API_KEY=ck_YOUR_API_KEY
Run
gemini --model gemini-3-pro-preview "explain this code"
7. Cursor IDE
Use Claude models in Cursor IDE via Zenn.Engineering.
Configure in Cursor Settings
Go to Cursor Settings → Models → Anthropic
Anthropic API Key
ck_YOUR_API_KEY
Override Anthropic Base URL
https://zenn.engineering/api/v1
Available models
Select claude-sonnet-4-6, claude-haiku-4-5, or claude-opus-4-6 in the model picker.
8. Direct API Usage
The Zenn.Engineering API is fully compatible with the Anthropic Messages API. Use it directly with cURL, the Anthropic SDK (Node.js/Python), or any HTTP client.
cURL
curl -X POST https://zenn.engineering/api/v1/messages \
-H "Authorization: Bearer ck_YOUR_API_KEY" \
-H "Content-Type: application/json" \
-H "anthropic-version: 2023-06-01" \
-d '{
"model": "claude-sonnet-4-6",
"max_tokens": 1024,
"messages": [
{"role": "user", "content": "Hello, Claude!"}
]
}'
Node.js / TypeScript
import Anthropic from '@anthropic-ai/sdk';
const client = new Anthropic({
apiKey: 'ck_YOUR_API_KEY',
baseURL: 'https://zenn.engineering/api/v1',
});
const message = await client.messages.create({
model: 'claude-sonnet-4-6',
max_tokens: 1024,
messages: [
{ role: 'user', content: 'Hello, Claude!' }
],
});
Python
import anthropic
client = anthropic.Anthropic(
api_key="ck_YOUR_API_KEY",
base_url="https://zenn.engineering/api/v1",
)
message = client.messages.create(
model="claude-sonnet-4-6",
max_tokens=1024,
messages=[
{"role": "user", "content": "Hello, Claude!"}
],
)
9. Models & Pricing
All pricing is credit-based (100 credits = $1.00 USD). Prices shown are per million tokens (MTok). Claude & GPT models are priced at 50% off official rates, matching the effective per-token rate of the Claude Max ($200/mo) and Codex Pro plans -- pay as you go, no subscription required.
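Since 100 credits = $1.00 and prices are per million tokens, the credit cost of a call is easy to estimate. A small helper, using rates from the tables below; treat it as an illustration, not a billing guarantee:

```python
def credits_used(input_tokens: int, output_tokens: int,
                 input_price_per_mtok: float,
                 output_price_per_mtok: float) -> float:
    """Estimate credit cost of one call: 100 credits = $1.00 USD."""
    usd = ((input_tokens / 1_000_000) * input_price_per_mtok
           + (output_tokens / 1_000_000) * output_price_per_mtok)
    return usd * 100  # convert dollars to credits

# claude-sonnet-4-6: $1.50 in / $7.50 out per MTok
# 10k input + 2k output tokens -> roughly 3 credits ($0.03)
cost = credits_used(10_000, 2_000, 1.50, 7.50)
```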
Claude (Anthropic) — 50% off official
| Model ID | Input / MTok | Output / MTok | Savings | Tier |
|---|---|---|---|---|
| claude-sonnet-4-6 | $1.50 | $7.50 | 50% off | Starter |
| claude-haiku-4-5 | $0.40 | $2.00 | 50% off | Starter |
| claude-opus-4-6 | $7.50 | $37.50 | 50% off | Opus |
OpenAI / GPT — 50% off official
| Model ID | Input / MTok | Output / MTok | Savings | Tier |
|---|---|---|---|---|
| gpt-5 | $1.25 | $5.00 | 50% off | Starter |
| gpt-5.1 | $1.25 | $5.00 | 50% off | Starter |
| gpt-5.2 | $2.50 | $7.50 | 50% off | Starter |
| gpt-5-codex | $1.25 | $5.00 | 50% off | Starter |
| gpt-5-codex-mini | $0.15 | $0.63 | 50% off | Starter |
| gpt-5.1-codex | $1.25 | $5.00 | 50% off | Starter |
| gpt-5.1-codex-mini | $0.15 | $0.63 | 50% off | Starter |
| gpt-5.1-codex-max | $1.25 | $5.00 | 50% off | Starter |
| gpt-5.2-codex | $2.50 | $7.50 | 50% off | Starter |
| gpt-5.3-codex | $2.50 | $7.50 | 50% off | Starter |
| gpt-5.3-codex-spark | $2.50 | $7.50 | 50% off | Starter |
Gemini (Google) — 10% off official
| Model ID | Input / MTok | Output / MTok | Savings | Tier |
|---|---|---|---|---|
| gemini-3-pro-official | $1.76 | $10.56 | 10% off | Starter |
| gemini-3-pro-preview-official | $1.76 | $10.56 | 10% off | Starter |
| gemini-3-flash-official | $0.44 | $2.64 | 10% off | Starter |
| gemini-3-flash-preview-official | $0.44 | $2.64 | 10% off | Starter |
| gemini-3.1-pro | $0.05 | $0.30 | Nominal | Starter |
| gemini-3.1-pro-preview-official | $1.76 | $10.56 | 10% off | Starter |
| gemini-3.1-fast | $0.55 | $3.30 | Low cost | Starter |
| gemini-3.1-thinking | $0.55 | $3.30 | Low cost | Starter |
| gemini-3.1-flash-lite-preview-official | $0.22 | $1.32 | Low cost | Starter |
| gemini-2.5-pro-official | $1.10 | $8.80 | 10% off | Starter |
| gemini-2.5-flash-official | $0.26 | $2.20 | Low cost | Starter |
| gemini-2.5-flash-lite-official | $0.09 | $0.35 | Ultra low | Starter |
| gemini-2.0-flash-official | $0.13 | $0.53 | Ultra low | Starter |
| gemini-2.0-flash-lite-official | $0.07 | $0.26 | Ultra low | Starter |
Other Models (DeepSeek, Qwen, GLM, Kimi, MiniMax)
| Model ID | Provider | Input / MTok | Output / MTok | Tier |
|---|---|---|---|---|
| deepseek-v3.2 | DeepSeek | $0.44 | $1.76 | Starter |
| glm-5 | Zhipu | $0.44 | $1.98 | Starter |
| qwen3.5-plus | Alibaba | $0.44 | $2.64 | Starter |
| qwen3.5-flash | Alibaba | $0.13 | $1.32 | Starter |
| qwen3-max | Alibaba | $0.77 | $3.08 | Starter |
| kimi-k2.5 | Moonshot | $0.44 | $2.31 | Starter |
| MiniMax-M2.5 | MiniMax | $0.23 | $0.92 | Starter |
Free / Promotional
| Model ID | Input / MTok | Output / MTok | Tier |
|---|---|---|---|
| nano-banana-2 | $0.05 | $0.30 | Starter |
10. Authentication
All API keys use the ck_ prefix. The proxy accepts multiple authentication header formats:
| Header | Format | Used By |
|---|---|---|
| x-api-key | ck_... | Claude Code, Anthropic SDK |
| Authorization | Bearer ck_... | OpenCode, OpenAI SDK, cURL |
| anthropic-api-key | ck_... | Alternative Anthropic header |
| x-goog-api-key | ck_... | Gemini CLI compatibility |
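All four formats carry the same ck_ key; a quick sketch of building each (the function name is just for illustration):

```python
def auth_headers(key: str, style: str) -> dict:
    """Build request headers for each accepted auth format."""
    if style == "anthropic":      # Claude Code, Anthropic SDK
        return {"x-api-key": key}
    if style == "bearer":         # OpenCode, OpenAI SDK, cURL
        return {"Authorization": f"Bearer {key}"}
    if style == "anthropic-alt":  # alternative Anthropic header
        return {"anthropic-api-key": key}
    if style == "google":         # Gemini CLI compatibility
        return {"x-goog-api-key": key}
    raise ValueError(f"unknown auth style: {style}")
```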
Forwarded headers
The proxy forwards anthropic-version (defaults to 2023-06-01) and anthropic-beta headers to the upstream Anthropic API. Streaming is fully supported via Server-Sent Events (SSE).
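Streamed responses arrive as standard SSE frames (`event:` / `data:` lines separated by blank lines). A minimal parser over already-buffered text, for illustration only; the sample frame shapes follow the Anthropic streaming format:

```python
import json

def parse_sse(raw: str):
    """Yield (event, parsed_json) pairs from a buffered SSE stream."""
    for frame in raw.strip().split("\n\n"):
        event, data = None, []
        for line in frame.splitlines():
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data.append(line[len("data:"):].strip())
        if data:
            # Per the SSE spec, multiple data: lines join with newlines
            yield event, json.loads("\n".join(data))

sample = (
    "event: content_block_delta\n"
    'data: {"delta": {"text": "Hello"}}\n'
    "\n"
    "event: message_stop\n"
    "data: {}\n"
)
events = list(parse_sse(sample))
```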
11. Rate Limits & Errors
Rate limits
Requests per hour
1,000
Minimum credit balance
200 credits ($2.00)
Streaming timeout
300s (5 min)
Rate limit info is returned in response headers: x-ratelimit-limit, x-ratelimit-remaining, x-ratelimit-reset.
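The x-ratelimit-* headers make client-side backoff simple: when remaining hits 0, wait until the reset time. A sketch with invented header values; it assumes x-ratelimit-reset is a Unix timestamp, which is a common convention but should be checked against your actual responses:

```python
import time

def seconds_until_reset(headers, now=None):
    """How long to wait before retrying, based on rate-limit headers.

    Assumes x-ratelimit-reset is a Unix timestamp (unverified assumption;
    inspect the real header values your responses return).
    """
    if int(headers.get("x-ratelimit-remaining", 1)) > 0:
        return 0.0  # budget left, no need to wait
    now = time.time() if now is None else now
    return max(0.0, float(headers["x-ratelimit-reset"]) - now)

# Invented example values:
hdrs = {"x-ratelimit-limit": "1000",
        "x-ratelimit-remaining": "0",
        "x-ratelimit-reset": "1700000060"}
```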
Error responses
All errors return JSON:
{
"error": {
"type": "error_type",
"message": "Human-readable description"
}
}| Status | Type | Cause |
|---|---|---|
| 400 | invalid_request_error | Missing required parameters |
| 401 | authentication_error | Invalid or missing API key |
| 402 | insufficient_credits | Credit balance below $2.00 |
| 403 | access_denied | Model not available at your tier |
| 429 | rate_limit_error | Rate limit exceeded (1,000/hr) |
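Because every error body shares the same { "error": { "type", "message" } } shape, a client can branch on status code generically. One reasonable retry policy as a sketch; the classification is a suggestion, not part of the API contract:

```python
import json

RETRYABLE = {429}             # back off and retry later
ACTIONABLE = {401, 402, 403}  # fix the key, credits, or tier first

def classify(status: int, body: str) -> str:
    """Return 'retry', 'fix', or 'bug' for an error response."""
    error = json.loads(body)["error"]  # all errors share this shape
    if status in RETRYABLE:
        return "retry"
    if status in ACTIONABLE:
        return "fix"
    return "bug"  # e.g. 400 invalid_request_error: correct the request

resp = '{"error": {"type": "rate_limit_error", "message": "Rate limit exceeded"}}'
```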
12. Tiers
Tiers are determined by cumulative top-up amount. Higher tiers unlock additional models.
Starter ($19.99+ cumulative top-up) unlocks:
- Claude Sonnet 4.6
- GPT-5.2 / GPT-5.3 Codex
- Gemini 3.0 Pro & Flash
- API key creation
Opus tier includes everything in Starter, plus:
- Claude Opus 4.6
- Priority queue
13. Audio Generation (Fish Audio)
Text-to-speech, voice cloning, and speech recognition powered by Fish Audio. Requires Starter tier ($19.99+).
Endpoint
POST https://zenn.engineering/api/v1/audio/generations GET https://zenn.engineering/api/v1/audio/generations (list models)
Available Models
| Model ID | Type | Credits | Price | Required Input |
|---|---|---|---|---|
| audio-tts | Text to Speech | 2 | $0.020 | text |
| audio-clone | Voice Clone | 2 | $0.020 | text + audio |
| audio-asr | Speech Recognition | 1 | $0.010 | audio |
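Each audio model has different required inputs (see the table above). A small builder that enforces them before sending; the function and dict names are illustrative, not part of the API:

```python
# Required request fields per audio model, from the table above.
REQUIRED = {
    "audio-tts": {"text"},
    "audio-clone": {"text", "audio"},
    "audio-asr": {"audio"},
}

def audio_payload(model: str, **fields) -> dict:
    """Assemble a request body, checking the model's required inputs."""
    missing = REQUIRED[model] - fields.keys()
    if missing:
        raise ValueError(f"{model} requires: {', '.join(sorted(missing))}")
    return {"model": model, **fields}

body = audio_payload("audio-tts", text="Hello, welcome to Zenn!", format="mp3")
```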
Text to Speech
curl -X POST https://zenn.engineering/api/v1/audio/generations \
-H "Authorization: Bearer ck_YOUR_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "audio-tts",
"text": "Hello, welcome to Zenn!",
"format": "mp3"
}'
Optional parameters: `format` (mp3, wav, opus), `temperature`, `reference_id` (existing voice model ID).
Voice Clone
curl -X POST https://zenn.engineering/api/v1/audio/generations \
-H "Authorization: Bearer ck_YOUR_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "audio-clone",
"text": "Say this in the cloned voice",
"audio": "https://example.com/reference-voice.mp3"
}'
The `audio` field is a URL to the reference voice sample for zero-shot cloning.
Speech Recognition (ASR)
curl -X POST https://zenn.engineering/api/v1/audio/generations \
-H "Authorization: Bearer ck_YOUR_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "audio-asr",
"audio": "https://example.com/speech.mp3",
"language": "en"
}'
Response Format
TTS / Voice Clone returns an audio URL:
{
"model": "audio-tts",
"created": 1709654321,
"data": [{ "url": "https://storage.zenn.run/generated/audio/..." }]
}
ASR returns transcribed text:
{
"model": "audio-asr",
"created": 1709654321,
"text": "Hello, welcome to Zenn!",
"duration": 2.5
}
Ready to start?
One key works across Claude Code, OpenCode, OpenClaw, Codex CLI, Gemini CLI, and Cursor. Top up credits and create your API key.
