Configure LLM Keys
Add an AI provider API key so your agent can think and respond.
Your agent needs an API key from an AI provider to generate responses. Without one, your agent is deployed but can't do anything — it has no "brain" yet.
What is an LLM key?
AI agents don't have built-in intelligence. They call external AI models (like Claude, GPT-4, or Gemini) to think and respond. To access these models, you need an API key from the provider — a password-like string that lets your agent make requests.
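Under the hood, every "thought" is an HTTP request authenticated with that key. A minimal sketch of what such a request looks like, using OpenRouter's OpenAI-compatible chat format (the key value is a placeholder, and the helper function is illustrative, not part of RunClaw):

```python
# Sketch: what "calling an external model with an API key" means in practice.
# The payload shape follows OpenRouter's OpenAI-compatible chat API.
import json

API_KEY = "sk-or-your-key-here"  # placeholder; never hardcode a real key


def build_chat_request(model: str, user_message: str) -> tuple[dict, dict]:
    """Build the headers and JSON body for one chat-completion request."""
    headers = {
        # The API key authenticates every single call your agent makes
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return headers, body


headers, body = build_chat_request("anthropic/claude-3.5-sonnet", "Hello!")
print(json.dumps(body, indent=2))
```

Every message your agent sends and receives goes through a request like this, which is why usage is billed per token by the provider.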
You choose and pay for your own AI provider. This means you control your costs, your model choice, and your data.
Which provider should I use?
Recommended: OpenRouter — one API key gives you access to 200+ models from OpenAI, Anthropic, Google, Meta, and more. Pay-as-you-go, no minimums, easy to start.
| Provider | Best for | Pricing | Get API Key |
|---|---|---|---|
| OpenRouter | Trying multiple models with one key | Pay per token, from $0 | openrouter.ai/keys |
| OpenAI | GPT-4o, o3, DALL-E | Pay per token, $5 min | platform.openai.com/api-keys |
| Anthropic | Claude 4, Claude 3.5 Sonnet | Pay per token, $5 min | console.anthropic.com |
| Google AI | Gemini 2.5 Pro, free tier | Free tier available | aistudio.google.com/apikey |
| DeepSeek | DeepSeek-V3/R1 (very cheap) | Pay per token | platform.deepseek.com |
| xAI | Grok | Pay per token | console.x.ai |
Get an OpenRouter API key
OpenRouter is the easiest way to get started because you get access to hundreds of models with a single key, and you can start with as little as $1.
Create an account
Go to openrouter.ai and click Sign In. You can sign up with Google, GitHub, or email.
Add credits
After signing in, click your profile icon (top-right) and go to Credits. Or go directly to openrouter.ai/credits.
Click Add Credits and add at least $5 to start. You can use a credit card or cryptocurrency. Credits are pay-as-you-go — you only get charged for what your agent actually uses.
How far does $5 go? With a mid-range model like Claude 3.5 Sonnet or GPT-4o-mini, $5 covers thousands of messages. Cheaper models like Llama or Mistral stretch even further.
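You can run this estimate yourself. The sketch below uses illustrative per-million-token rates and assumed message sizes (check your provider's current pricing; real messages vary):

```python
# Back-of-envelope cost estimate. Rates are illustrative $/1M-token prices,
# and the per-message token counts are assumptions, not measurements.
def messages_per_budget(budget_usd: float, in_price: float, out_price: float,
                        in_tokens: int = 500, out_tokens: int = 300) -> int:
    """How many messages a budget covers at given $/1M-token rates."""
    cost_per_msg = in_tokens * in_price / 1e6 + out_tokens * out_price / 1e6
    return int(budget_usd / cost_per_msg)


# A budget model at $0.15/M input and $0.60/M output tokens:
print(messages_per_budget(5, 0.15, 0.60))   # 19607
# A mid-range model at $3/M input and $15/M output tokens:
print(messages_per_budget(5, 3.00, 15.00))  # 833
```

The spread between models is large, so the cheapest way to experiment is to start on a budget model and move up only when quality demands it.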
Generate an API key
Go to openrouter.ai/keys and click Create Key.
Give it a name like "RunClaw" and click Create. Copy the key — it starts with `sk-or-`.
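If you want to sanity-check a pasted key before saving it, a format check like the following catches the most common mistake (pasting a key from the wrong provider). This only checks the key's shape, not whether it is active:

```python
# Quick shape check for an OpenRouter key (the "sk-or-" prefix is
# OpenRouter's documented format). This does NOT verify the key works.
def looks_like_openrouter_key(key: str) -> bool:
    key = key.strip()
    return key.startswith("sk-or-") and len(key) > len("sk-or-")


print(looks_like_openrouter_key("sk-or-abc123"))    # True
print(looks_like_openrouter_key("sk-proj-abc123"))  # False (OpenAI-style key)
```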
Add your key to RunClaw
For OpenClaw:
- Go to your RunClaw dashboard
- Click on your instance
- Go to Settings > Configuration
- Find the LLM provider section
- Select your provider (e.g., "OpenRouter") and paste your API key
- Choose a model (e.g., `anthropic/claude-3.5-sonnet` for a good all-rounder)
- Click Save
The key is written directly to your VPS. It never passes through RunClaw's servers.
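The result on your VPS looks roughly like the fragment below. The field names here are illustrative, not the exact schema — check the actual `openclaw.json` on your server for the authoritative layout:

```json
{
  "llm": {
    "provider": "openrouter",
    "apiKey": "sk-or-...",
    "model": "anthropic/claude-3.5-sonnet"
  }
}
```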
For Agent Zero:
- Go to your RunClaw dashboard
- Click on your instance
- Go to Settings > Agent Zero Config
- Set your Chat model provider (e.g., `openrouter`) and model name (e.g., `anthropic/claude-3.5-sonnet`)
- Paste your API key
- Click Save
Agent Zero has four model slots: Chat, Utility, Browser, and Embedding. The utility and browser models default to your chat model if not set separately.
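A sketch of what those slots look like in Agent Zero's `settings.json`. Field names and the embedding model shown are illustrative assumptions — inspect the file on your server for the exact keys:

```json
{
  "chat_model_provider": "openrouter",
  "chat_model_name": "anthropic/claude-3.5-sonnet",
  "util_model_provider": "openrouter",
  "util_model_name": "anthropic/claude-3.5-sonnet",
  "browser_model_name": "anthropic/claude-3.5-sonnet",
  "embed_model_name": "text-embedding-3-small"
}
```

Setting a cheaper model in the Utility slot is a common way to cut costs, since utility calls (summarization, memory management) don't need your strongest model.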
Where your keys are stored
Your LLM API keys live on your VPS:
- OpenClaw: `openclaw.json` on the host filesystem
- Agent Zero: `settings.json` and `.env` inside the Docker container
After setup, when you update keys through the dashboard, the new values are relayed through RunClaw's sidecar connection to your VPS and written directly to the config files. They pass through RunClaw in transit but are not stored in the database.
During initial setup, if you provide LLM keys in the provisioning wizard, they are temporarily encrypted in RunClaw's database so they can be injected into your server via cloud-init. They are deleted from the database after provisioning completes.
You can verify where your keys live by SSHing into your VPS and checking the config files directly. You have full root access.
Model recommendations
Not sure which model to pick? Here are some starting points:
| Use case | Recommended model | Provider |
|---|---|---|
| General assistant | anthropic/claude-3.5-sonnet | OpenRouter |
| Coding tasks | anthropic/claude-sonnet-4-6 or deepseek/deepseek-chat | OpenRouter |
| Budget-friendly | meta-llama/llama-3.1-8b-instruct | OpenRouter |
| Maximum capability | anthropic/claude-opus-4-6 or openai/gpt-4o | OpenRouter |
You're all set. Your agent is deployed, connected, and ready to work.
Back to Getting Started