Use l3k with Claude Code
Anthropic's Claude Code CLI works against any backend that speaks the Anthropic Messages API. l3k speaks OpenAI Chat Completions, so we point Claude Code at a tiny local router that translates between the two — no code changes, no patched binary.
Claude Code sends `POST /v1/messages` (Anthropic shape). l3k accepts `POST /v1/chat/completions` (OpenAI shape). The router accepts the first and forwards the second, including streaming, tool calls and image parts.
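To make the translation concrete, here is a minimal sketch of the request-body mapping the router performs. This is a simplified, hypothetical helper (the function name is ours, not from any router); a real router must also map streaming chunks, tool calls, and image parts.

```python
def anthropic_to_openai(body: dict) -> dict:
    """Translate a minimal Anthropic Messages request body into
    OpenAI Chat Completions shape. Simplified: ignores streaming,
    tools, and image parts."""
    messages = []
    if "system" in body:
        # Anthropic carries the system prompt as a top-level field;
        # OpenAI expects it as the first message in the list.
        messages.append({"role": "system", "content": body["system"]})
    messages.extend(body.get("messages", []))
    return {
        "model": body["model"],
        "messages": messages,
        "max_tokens": body.get("max_tokens", 1024),
    }
```

The key structural difference is exactly this: Anthropic's `system` field becomes an OpenAI `system` message, while user/assistant turns pass through largely unchanged.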
Option A — claude-code-router (recommended)
Single npm install, ~10 lines of config, ships a `ccr` command that boots the router and Claude Code together.
1. Install
npm install -g @musistudio/claude-code-router
2. Configure
Create `~/.claude-code-router/config.json`:

{
  "Providers": [
    {
      "name": "l3k",
      "api_base_url": "https://alloneia.com/v1/chat/completions",
      "api_key": "sk-l3k-YOUR_KEY",
      "models": ["anthropic/claude-3.5-haiku"]
    }
  ],
  "Router": {
    "default": "l3k,anthropic/claude-3.5-haiku",
    "background": "l3k,anthropic/claude-3.5-haiku",
    "think": "l3k,anthropic/claude-3.5-haiku",
    "longContext": "l3k,anthropic/claude-3.5-haiku"
  }
}

The model slug above is pulled live from your active catalog; replace it with any model id from /docs.
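A broken `Router` entry fails at request time rather than at startup, so it can be worth sanity-checking the config before launching. A minimal sketch (the helper name and checks are ours, not part of claude-code-router; it only verifies that each `Router` target references a declared provider and model):

```python
def check_router_config(cfg: dict) -> list[str]:
    """Verify every Router entry 'provider,model' against the
    Providers list; return a list of human-readable problems."""
    providers = {p["name"]: set(p.get("models", []))
                 for p in cfg.get("Providers", [])}
    problems = []
    for route, target in cfg.get("Router", {}).items():
        name, _, model = target.partition(",")
        if name not in providers:
            problems.append(f"{route}: unknown provider {name!r}")
        elif model not in providers[name]:
            problems.append(f"{route}: model {model!r} not declared for {name!r}")
    return problems
```

Run it over the parsed `config.json`; an empty list means every route resolves.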
3. Run

# Inside any project
ccr code

# ccr boots the local router and launches Claude Code pointed at it.
# All requests now go: Claude Code → ccr → l3k → upstream provider.
Option B — LiteLLM proxy
If you already run LiteLLM for other tools, register l3k as an OpenAI-compatible upstream and point Claude Code at LiteLLM's Anthropic endpoint:
# litellm config.yaml
model_list:
  # model_name is the slug Claude Code requests;
  # litellm_params.model is what LiteLLM forwards to l3k.
  - model_name: anthropic/claude-3-5-sonnet
    litellm_params:
      model: openai/anthropic/claude-3.5-haiku
      api_base: https://alloneia.com/v1
      api_key: sk-l3k-YOUR_KEY
# Run the proxy
litellm --config config.yaml --port 4000
# Point Claude Code at it
export ANTHROPIC_BASE_URL=http://localhost:4000
export ANTHROPIC_AUTH_TOKEN=any-non-empty-string
claude

LiteLLM exposes `/v1/messages` automatically when the registered model name starts with `anthropic/…`.
Verify it's working
After your first prompt:
- Tokens land on /usage in the dashboard within a few seconds. If they don't, the router isn't reaching l3k — check the error shape against /docs/errors.
- Each request shows the API key prefix on /keys (last used timestamp updates).
- `curl https://alloneia.com/v1/models -H 'Authorization: Bearer sk-l3k-…'` returns the catalog this key is allowed to use.
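If you want to script that last check, here is a minimal sketch of parsing the `/v1/models` response, assuming the standard OpenAI list shape (`{"data": [{"id": …}, …]}`); the helper name is ours:

```python
import json

def model_available(models_response: str, slug: str) -> bool:
    """Return True if a model id appears in a /v1/models response
    body (OpenAI list shape: {"data": [{"id": ...}, ...]})."""
    data = json.loads(models_response)
    return any(m.get("id") == slug for m in data.get("data", []))
```

Feed it the raw body from the curl above to confirm the slug in your router config is actually in this key's catalog.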
Troubleshooting
- 401 invalid_api_key — Wrong key, key revoked, or key expired. Issue a fresh one from the dashboard.
- 404 model_not_found — The slug in your config isn't in the live catalog. Hit `/v1/models` or check /docs.
- 402 insufficient_balance — Top up at https://www.alloneia.com/billing. Claude Code makes large parallel requests — keep at least a few dollars of headroom.
- 429 rate_limit_exceeded — Per-key RPM limit hit. Raise it from the dashboard or split work across two keys.
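For the 429 case specifically, a client-side retry with exponential backoff often absorbs bursts without raising the limit. A standard-library sketch (both helper names are ours; the delay values are illustrative, not an l3k recommendation):

```python
import json
import time
import urllib.error
import urllib.request

def backoff_schedule(retries: int, base: float = 1.0) -> list[float]:
    """Exponential delays for successive 429s: 1s, 2s, 4s, ..."""
    return [base * (2 ** i) for i in range(retries)]

def post_with_backoff(url: str, payload: dict, api_key: str,
                      retries: int = 3) -> dict:
    """POST an OpenAI-shaped request, retrying only on HTTP 429;
    other errors (401, 402, 404) are surfaced immediately since
    retrying cannot fix them."""
    delays = backoff_schedule(retries)
    for attempt in range(retries + 1):
        req = urllib.request.Request(
            url,
            data=json.dumps(payload).encode(),
            headers={"Authorization": f"Bearer {api_key}",
                     "Content-Type": "application/json"},
        )
        try:
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)
        except urllib.error.HTTPError as err:
            if err.code == 429 and attempt < retries:
                time.sleep(delays[attempt])
                continue
            raise
```

Note that only 429 is retried: a 401 or 402 will fail the same way on every attempt, so backing off just delays the real fix.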