OpenClaw Engineering, Chapter 4: Managing the Gateway and Models
Your gateway is running. Now comes configuration. Deployment is about getting the software running; management is about configuring it correctly and keeping it healthy. This chapter walks you through the tools that let you control OpenClaw: the interactive setup wizard, diagnostics to find problems, and the configuration file that holds all your settings.
The onboard wizard: interactive setup
Start every fresh OpenClaw installation with openclaw onboard. This interactive command guides you through setup, asking logical questions and storing your answers in a configuration file. It's designed to be friendly and walk you through each step without requiring you to understand JSON or environment variables.
The wizard asks for model provider credentials first. A model provider is a company like Anthropic (Claude), OpenAI (GPT), or Google (Gemini) that offers AI models through an API. You need at least one. If you're using Claude, go to console.anthropic.com, create an API key, and paste it into the wizard. The wizard validates the key by making a test API call. If validation fails, double-check that you copied the entire key and that your account has billing enabled.
Next, the wizard asks about messaging platforms. Do you want Telegram? You'll need a bot token from BotFather. WhatsApp? You'll need a WhatsApp Business Account and access token. Discord, Slack, same idea. The wizard lets you skip platforms you're not using now and add them later. For each platform you enable, you provide credentials, and the wizard validates them.
Then comes agent behavior. What's your agent's name? What's its system prompt? Should it remember conversations across sessions? How many concurrent conversations should it handle? These settings become your baseline configuration, though you can change them later by editing the configuration file directly.
Finally, the wizard asks where to store your configuration. By default, it's ~/.openclaw (your home directory) on Unix-like systems. This folder contains all your settings, logs, and data. The wizard shows you the location and asks if you want to change it. Most people don't.
When you complete the wizard, it creates your initial openclaw.json configuration file. At this point, you haven't started the gateway yet. You've just set up the configuration. The next step is to run the gateway and test it.
If you want to re-run the wizard (maybe you added a new API key or changed your system prompt), run openclaw onboard again. It detects you're already onboarded and asks if you want to reconfigure. Your new answers override the old ones. This is useful as your setup evolves over time.
One critical security note: during onboarding, you're entering sensitive information. Do this on a trusted machine where no one is looking. Your API keys are stored locally in openclaw.json, which should be readable only by you. Run chmod 600 ~/.openclaw/openclaw.json to lock it down. Never commit this file to version control. Always add it to .gitignore.
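The two protections above can be applied in one short shell session. This is a sketch that assumes the default ~/.openclaw location from the wizard; adjust the path if you chose a different one:

```shell
# Restrict the config file to your user only.
CONFIG="$HOME/.openclaw/openclaw.json"
mkdir -p "$(dirname "$CONFIG")"   # the wizard normally creates this directory
touch "$CONFIG"                   # no-op if the wizard already wrote the file
chmod 600 "$CONFIG"               # owner read/write; no access for group or others

# Keep the file out of version control in any repo that sits near it.
grep -qx "openclaw.json" .gitignore 2>/dev/null || echo "openclaw.json" >> .gitignore
```

The grep guard makes the .gitignore line idempotent, so re-running the snippet never adds duplicates.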
The doctor command: diagnostics and troubleshooting
Before you start the gateway, run openclaw doctor. This health-check tool performs diagnostics: Are all your API keys valid? Can we reach the model providers? Are the messaging platform credentials correct? Is the configuration file properly formatted? The output is a detailed report of what's working and what isn't.
If everything is good, you'll see green checkmarks and a summary. If there are issues, they'll be highlighted in red with suggestions for fixing them. Common issues include invalid API keys (mistyped or expired), invalid tokens from messaging platforms, no internet connectivity, or syntax errors in the configuration file.
The doctor also supports specific checks. Run openclaw doctor --provider anthropic to check only the Anthropic provider, or openclaw doctor --channel telegram to check only Telegram. This is useful if you want to focus on a specific problem. The --verbose flag shows more detail, including actual API requests and responses, which helps when debugging.
In production, run the doctor periodically (e.g., every 30 minutes via cron) and feed the results to a monitoring service. If the doctor detects an issue, you get notified automatically. This catches problems before they affect your users.
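A scheduled check like that might look like the following crontab entry. This is a sketch: it assumes openclaw doctor exits non-zero when a check fails, and notify-team is a placeholder for your own alerting script or monitoring-service webhook:

```
# m   h   dom mon dow  command
*/30  *   *   *   *    openclaw doctor || notify-team "openclaw doctor reported a failure"
```

If your monitoring service ingests logs rather than alerts, redirect the doctor's output to a file it watches instead of calling a script.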
openclaw.json: the source of truth
The openclaw.json file is where all your settings live. It's a JSON file (human-readable structured data) that contains everything your agent needs: model credentials, channel credentials, agent behavior, gateway settings. You can edit this file directly with any text editor, though most people use the onboard wizard for initial setup and then edit the file for fine-tuning.
A basic openclaw.json looks like this:
```json
{
  "agent": {
    "name": "MyAgent",
    "systemPrompt": "You are a helpful assistant...",
    "maxConcurrentConversations": 10
  },
  "models": {
    "default": "claude-3-5-sonnet",
    "providers": {
      "anthropic": {
        "apiKey": "sk-ant-v3-xxx",
        "models": ["claude-3-5-sonnet", "claude-3-opus"]
      }
    }
  },
  "channels": {
    "telegram": {
      "enabled": true,
      "botToken": "123456789:ABCdef..."
    }
  }
}
```
The agent section controls agent behavior: name, system prompt, max concurrent conversations, whether to remember past conversations, and for how long. The models section describes your AI models: default names the model used when nothing else selects one, and providers holds the credentials for each model provider. The channels section enables or disables messaging platforms.
Here's a more complete production example with advanced features like model routing, database configuration, and rate limiting:
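A sketch of what such a file might look like, built on the basic example above. The routingRules, gateway, and database field names match what this chapter discusses, but the exact nesting and value shapes here are assumptions; check the schema your version ships with:

```json
{
  "agent": {
    "name": "SupportAgent",
    "systemPrompt": "You are a concise, helpful support assistant.",
    "maxConcurrentConversations": 50
  },
  "models": {
    "default": "claude-3-5-sonnet",
    "temperatureDefault": 0.3,
    "maxTokensDefault": 1024,
    "providers": {
      "anthropic": {
        "apiKey": "${ANTHROPIC_API_KEY}",
        "models": ["claude-3-5-sonnet", "claude-3-opus"]
      }
    },
    "routingRules": [
      { "pattern": "urgent|emergency", "model": "claude-3-opus" },
      { "pattern": ".*", "model": "claude-3-5-sonnet" }
    ]
  },
  "channels": {
    "telegram": { "enabled": true, "botToken": "${TELEGRAM_BOT_TOKEN}" },
    "discord": { "enabled": false }
  },
  "gateway": {
    "port": 8080,
    "rateLimit": { "requestsPerMinute": 60 }
  },
  "database": {
    "url": "${DATABASE_URL}"
  }
}
```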
The routingRules section is powerful. It lets you route different types of queries to different models. For example, queries matching "urgent" or "emergency" go to Claude Opus (the most intelligent but slower model), while everything else goes to Sonnet. Pattern matching uses regex. The pattern .* matches anything and acts as a catch-all.
Notice the ${VAR_NAME} syntax. This is variable substitution. Instead of hardcoding API keys in the file, you reference environment variables. When the gateway starts, it reads this and substitutes the actual value. This is more secure because your configuration file doesn't contain actual secrets.
Other useful settings: temperatureDefault controls randomness. Lower (close to 0) makes responses deterministic. Higher (close to 1) makes them creative. For customer support, use lower. For creative tasks, higher. maxTokensDefault limits how much the model can generate, preventing runaway responses and saving costs.
Editing the file is straightforward. Open it in a text editor, make your changes, save. If the gateway is running, you usually need to restart it for changes to take effect. Some settings are hot-reloadable, but it's safer to assume you need a restart.
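Before restarting, it's worth confirming that your edit left valid JSON behind; a missing comma or trailing comma is the most common mistake. Any JSON-aware tool works. This sketch uses Python's standard-library parser on a throwaway example file; point it at your real ~/.openclaw/openclaw.json in practice:

```shell
# Create a tiny example config to check (stand-in for the real file).
printf '{ "agent": { "name": "MyAgent" } }\n' > /tmp/openclaw-check.json

# json.tool exits non-zero on a parse error, so this branches cleanly.
if python3 -m json.tool /tmp/openclaw-check.json > /dev/null 2>&1; then
  echo "config OK"
else
  echo "config has a JSON syntax error" >&2
fi
```

Wiring this check into your deploy script means a bad edit fails fast instead of taking the gateway down on restart.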
Common editing tasks: Change the agent's name by editing agent.name. Update the system prompt by editing agent.systemPrompt. Add a new model provider by adding an entry to models.providers. Enable a new channel by setting its enabled flag to true. Change the gateway port by editing gateway.port. Always restart the gateway after editing.
For production setups, don't hardcode sensitive values. Use a .env file (a text file with lines like ANTHROPIC_API_KEY=sk-ant-xxx) and reference it in your docker-compose.yml. The .env file is never committed to version control because it contains secrets. You provide different values for different environments (development, staging, production) without changing the code.
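Concretely, the docker-compose side might look like the sketch below. The service name, image, and volume path are placeholders; the key part is env_file, which injects every line of .env as an environment variable so the ${VAR_NAME} references in openclaw.json resolve at startup:

```yaml
# docker-compose.yml (sketch)
# An uncommitted .env file alongside it supplies the secrets, e.g.:
#   ANTHROPIC_API_KEY=sk-ant-xxx
#   TELEGRAM_BOT_TOKEN=123456789:ABCdef...
services:
  openclaw:
    image: openclaw:latest
    env_file: .env
    volumes:
      - ~/.openclaw:/root/.openclaw
    ports:
      - "8080:8080"
```

Swapping environments then means swapping .env files, with docker-compose.yml and openclaw.json unchanged.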
What's next
Now your gateway is configured and running. Chapter 5 covers connecting multiple channels: how to set up Telegram, WhatsApp, Discord, and Slack so your agent can receive messages from users across all these platforms and respond intelligently. We'll walk through each platform's setup process and explain how the gateway routes messages to the right place.
📖 Get the complete book
All thirteen chapters and four appendices: the full Gateway and PiEmbeddedRunner walk-through, the Markdown brain spec, channel adapters for Telegram / WhatsApp / Discord / Slack, the SKILL.md authoring guide, the Lobster workflow language, multi-agent orchestration patterns, OpenClaw-RL training signals, the agentic zero-trust architecture, and the post-ClawHavoc supply-chain hardening playbook.
Sho Shimoda
I share and organize what I've learned and experienced.