One API Key for Claude, GPT, Gemini, MiniMax and all major AI models
Replace `YOUR_API_KEY` with your real key; you should receive a list of available models:
```bash
curl -s "https://api.aibottoken.com/v1/models" \
  -H "Authorization: Bearer YOUR_API_KEY"
```
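The same check can be scripted in Python with only the standard library. This sketch assumes the endpoint returns the OpenAI-style model list shape (`{"data": [{"id": ...}, ...]}`); `list_models` performs the network call, while `extract_model_ids` is a pure helper:

```python
import json
import urllib.request

BASE_URL = "https://api.aibottoken.com/v1"

def extract_model_ids(payload: dict) -> list[str]:
    """Pull model IDs out of an OpenAI-style model-list response."""
    return [m["id"] for m in payload.get("data", [])]

def list_models(api_key: str) -> list[str]:
    """GET /v1/models and return the model IDs available to this key."""
    req = urllib.request.Request(
        f"{BASE_URL}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return extract_model_ids(json.load(resp))

# Usage: ids = list_models("YOUR_API_KEY")
```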
Send a test chat request:
```bash
curl -s "https://api.aibottoken.com/v1/chat/completions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "anthropic/claude-3-5-sonnet",
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 64
  }'
```
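For reference, the same chat request from Python's standard library. This is a minimal sketch assuming the standard OpenAI-compatible request and response shapes (`choices[0].message.content` holds the reply):

```python
import json
import urllib.request

BASE_URL = "https://api.aibottoken.com/v1"

def build_chat_body(model: str, user_text: str, max_tokens: int = 64) -> bytes:
    """Assemble the JSON body for POST /v1/chat/completions."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
        "max_tokens": max_tokens,
    }).encode("utf-8")

def chat(api_key: str, model: str, user_text: str) -> str:
    """Send one user message and return the assistant's reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=build_chat_body(model, user_text),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

# Usage: chat("YOUR_API_KEY", "anthropic/claude-3-5-sonnet", "Hello!")
```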
These are our featured models. Run GET /v1/models to see all models available to your Key.
| Provider | Family | Model ID | Tags | Note |
|---|---|---|---|---|
| Anthropic | Claude 3.7 | anthropic/claude-3-7-sonnet | reasoning, coding, writing | Flagship reasoning model |
| Anthropic | Claude 3.5 | anthropic/claude-3-5-sonnet | coding, writing, analysis | Best value for everyday dev work |
| Anthropic | Claude 3.5 | anthropic/claude-3-5-haiku | fast, cheap | Ultra-fast & cheap for batch tasks |
| Anthropic | Claude 3 | anthropic/claude-3-opus | writing, analysis | Excellent for long docs & quality writing |
| OpenAI | GPT-4o | openai/gpt-4o | vision, multimodal, general | Multimodal flagship with vision support |
| OpenAI | GPT-4o | openai/gpt-4o-mini | fast, cheap, general | Low-cost, high performance – great starting point |
| OpenAI | o4 | openai/o4-mini | reasoning, math, coding | OpenAI's reasoning series, great for math & code |
| OpenAI | GPT-4.1 | openai/gpt-4.1 | long-context, general | Long-context flagship |
| OpenAI | GPT-4.1 | openai/gpt-4.1-mini | fast, cheap | Long-context mini, cost-efficient |
| Google | Gemini 2.5 | google/gemini-2.5-pro | multimodal, long-context, general | Google's top multimodal model, 1M ctx |
| Google | Gemini 2.5 | google/gemini-2.5-flash | fast, cheap, multimodal | Ultra-fast inference for everyday tasks |
| Google | Gemini 2.0 | google/gemini-2.0-flash | fast, cheap | Stable & fast for high-concurrency workloads |
| MiniMax | MiniMax-01 | minimax/minimax-01 | long-context, general, chinese | 4M context length, optimized for Chinese |
| MiniMax | MiniMax-Text | minimax/minimax-text-01 | text, cheap | Text-focused, cost-efficient |
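To choose a model from a catalog like the one above programmatically, you can filter by tag. A small illustrative helper; the `CATALOG` literal is a hand-copied excerpt of the table, not live API output:

```python
# Excerpt of the featured-models table: model ID -> tag set.
CATALOG = {
    "anthropic/claude-3-5-sonnet": {"coding", "writing", "analysis"},
    "anthropic/claude-3-5-haiku": {"fast", "cheap"},
    "openai/gpt-4o-mini": {"fast", "cheap", "general"},
    "google/gemini-2.5-flash": {"fast", "cheap", "multimodal"},
}

def models_with_tags(catalog: dict, *tags: str) -> list[str]:
    """Return the model IDs carrying every requested tag, sorted."""
    wanted = set(tags)
    return sorted(mid for mid, t in catalog.items() if wanted <= t)

# Cheap and fast candidates for batch work:
print(models_with_tags(CATALOG, "fast", "cheap"))
```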
⚠️ Important: Model selection works differently in each client. Most agent-style tools (Claude Code, OpenClaw) do not have a UI model picker — you specify the model via environment variables or config. Only GUI clients (Cherry Studio, Continue) let you pick models from a dropdown.
Claude Code uses the Anthropic subscription by default. To route requests through your own API Key, set these environment variables before launching:
```bash
export ANTHROPIC_API_KEY="YOUR_API_KEY"
export ANTHROPIC_BASE_URL="https://api.aibottoken.com"

# Then launch Claude Code
claude
```
Claude Code has no graphical model selector. To switch models, use one of these methods:
- Type `/model` in the Claude Code chat, then enter the Model ID, e.g. `anthropic/claude-3-5-sonnet`.
- Set `export ANTHROPIC_MODEL="anthropic/claude-3-5-sonnet"` before launching (supported in some versions).
- Set the `model` field directly in your code or SDK calls.

After launching, send a message in Claude Code, then check the "Usage Logs" in your dashboard to confirm requests are arriving. If they appear, the integration is working.
Add the following to your OpenClaw Gateway config (~/.openclaw/.env):
```bash
ANTHROPIC_API_KEY=YOUR_API_KEY
ANTHROPIC_BASE_URL=https://api.aibottoken.com
```
OpenClaw selects the model via config — no UI picker needed. Set in ~/.openclaw/.env:
```bash
ANTHROPIC_MODEL=anthropic/claude-3-5-sonnet
```
To use GPT or other models via OpenAI-compatible config:
```bash
OPENAI_MODEL=openai/gpt-4o
```
In the client's "Provider" settings, select "OpenAI Compatible" and fill in:
```
Base URL: https://api.aibottoken.com/v1
API Key: YOUR_API_KEY
```
Cherry Studio and Continue support in-app model selection. Steps:
Cherry Studio:

1. In the "Provider" settings, add an "OpenAI Compatible" provider with the Base URL and API Key above.
2. Select or enter a Model ID, e.g. `anthropic/claude-3-5-sonnet`.

Continue (`~/.continue/config.json`): add entries to the `models` array with the `model` field set to the Model ID:

```json
{
  "models": [
    {
      "title": "Claude 3.5 Sonnet",
      "provider": "openai",
      "model": "anthropic/claude-3-5-sonnet",
      "apiBase": "https://api.aibottoken.com/v1",
      "apiKey": "YOUR_API_KEY"
    },
    {
      "title": "GPT-4o",
      "provider": "openai",
      "model": "openai/gpt-4o",
      "apiBase": "https://api.aibottoken.com/v1",
      "apiKey": "YOUR_API_KEY"
    },
    {
      "title": "Gemini 2.5 Pro",
      "provider": "openai",
      "model": "google/gemini-2.5-pro",
      "apiBase": "https://api.aibottoken.com/v1",
      "apiKey": "YOUR_API_KEY"
    }
  ]
}
```
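Before restarting Continue, it can help to sanity-check the config file. A quick illustrative validator; the required field names match the JSON entries above, and the default file path is assumed to be `~/.continue/config.json`:

```python
import json

# Fields each entry in the `models` array needs for an OpenAI-compatible provider.
REQUIRED = ("title", "provider", "model", "apiBase", "apiKey")

def validate_models(config: dict) -> list[str]:
    """Return a list of problems found in the `models` array (empty list = OK)."""
    problems = []
    for i, entry in enumerate(config.get("models", [])):
        for field in REQUIRED:
            if not entry.get(field):
                problems.append(f"models[{i}] is missing '{field}'")
    if not config.get("models"):
        problems.append("no models configured")
    return problems

config = json.loads("""
{"models": [{"title": "GPT-4o", "provider": "openai",
  "model": "openai/gpt-4o",
  "apiBase": "https://api.aibottoken.com/v1", "apiKey": "YOUR_API_KEY"}]}
""")
print(validate_models(config))  # [] means the entry is complete
```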
If you call the API directly or use the OpenAI SDK, specify the model field in the request body:
```typescript
import OpenAI from 'openai'

const client = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.aibottoken.com/v1',
})

// Use any supported model ID
const response = await client.chat.completions.create({
  model: 'anthropic/claude-3-5-sonnet', // or 'openai/gpt-4o', 'google/gemini-2.5-pro', etc.
  messages: [{ role: 'user', content: 'Hello!' }],
})
```
This is the most flexible approach — you can use a different model on every request with no UI required.
Call GET /v1/models to see the exact model IDs available to your key. Make sure the model parameter matches exactly.
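Since the model is chosen per request, one common pattern is to fall back to a second model when the first choice fails. A sketch of that idea; `call` stands in for any client function (e.g. a thin wrapper around your SDK's chat-completion method), and the stub below exists only to demonstrate the flow:

```python
def complete_with_fallback(call, models, messages):
    """Try each model ID in order; return (model, result) from the first success."""
    last_error = None
    for model in models:
        try:
            return model, call(model=model, messages=messages)
        except Exception as err:  # e.g. rate limit or model unavailable
            last_error = err
    raise RuntimeError("all models failed") from last_error

# Demo with a stubbed client: the first model "fails", the second answers.
def fake_call(model, messages):
    if model == "anthropic/claude-3-5-sonnet":
        raise RuntimeError("rate limited")
    return f"{model} says hi"

used, reply = complete_with_fallback(
    fake_call,
    ["anthropic/claude-3-5-sonnet", "openai/gpt-4o-mini"],
    [{"role": "user", "content": "Hello!"}],
)
print(used, reply)
```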
- Save your API Key immediately after purchase and use the "Download JSON Config" button as a backup; lost keys require a new purchase.
- To retrieve your key later, use the "View Usage" link on the purchase success page – it contains your unique token.
- If your key doesn't appear right after payment, wait 5–10 seconds and refresh. If the issue persists, contact support with your Stripe transaction ID.