Switching Model Providers — Nemotron, Claude, OpenAI & OpenRouter
NemoClaw installs with NVIDIA's Nemotron as the default model, accessed via your free NVIDIA API key. But you can switch to Claude, OpenAI, a local Ollama model, or OpenRouter — at any time, without reinstalling. The key is understanding how OpenShell's provider registry keeps API keys out of the sandbox, and how inference routing works.
How Provider Switching Works
In a plain OpenClaw install, API keys live in the config file. In NemoClaw, they're kept outside the sandbox in OpenShell's provider registry. The sandbox never sees your actual API key — it calls a virtual endpoint called inference.local, and OpenShell proxies that call to whichever provider you've routed it to, injecting the real key at the boundary.
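To make the boundary concrete, here is a toy simulation of the injection step. This is illustrative only: `inject_key` is a hypothetical stand-in for what OpenShell does at egress, and the key value is a placeholder.

```shell
# Toy simulation of the key-injection boundary (not real OpenShell code).
# The sandbox-side payload never contains credentials; the proxy adds them.
NVIDIA_API_KEY="nvapi-example"   # the real key lives only on the host
request='{"model":"nvidia/nemotron-4-mini-instruct","messages":[]}'

inject_key() {
  # Prepend an Authorization header to the outbound request body
  printf 'Authorization: Bearer %s\n%s\n' "$NVIDIA_API_KEY" "$1"
}

proxied=$(inject_key "$request")
echo "$proxied"
```

The point of the sketch is the direction of data flow: the request originates inside the sandbox without a key, and the credential is only attached on the host side of the boundary.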
| Component | Where it lives | What it does |
|---|---|---|
| Provider registry | Host (outside sandbox) | Stores API keys, provider type, base URL |
| Inference routing | OpenShell layer | Routes inference.local calls to a specific provider |
| openclaw.json model config | Inside sandbox | Tells OpenClaw which model ID to request (e.g. anthropic/claude-sonnet-4-6) |
A switch requires changes on both sides: update the routing (OpenShell, on host) and update the model ID (openclaw.json, inside sandbox). You don't need to reinstall anything.
Switching to Claude (Anthropic)
1. Register the Claude provider in OpenShell (on host)
# Export your API key first (or add it to ~/.bashrc)
export ANTHROPIC_API_KEY="sk-ant-..."
# Register Claude as a provider
openShell provider add \
  --name claude \
  --type anthropic \
  --key "$ANTHROPIC_API_KEY"
# Verify it was added
openShell provider list
# claude anthropic api.anthropic.com ✓ active
2. Route inference to Claude
openShell inference route set --provider claude
# Confirm
openShell inference route show
# current: claude (anthropic)
3. Add the Anthropic API domain to your policy (if not already present)
# Check existing policy
openShell policy show --active | grep anthropic
# If missing, add it:
cat >> ~/.openShell/policies/includes/model-providers.yaml << 'EOF'
allow:
  - host: "api.anthropic.com"
    ports: [443]
    comment: "Anthropic Claude API"
EOF
openShell policy reload
4. Update the model ID inside the sandbox
# Connect to the sandbox
claw connect nemoclaw
# Set Claude Sonnet as the primary model
openclaw config set agents.defaults.model.primary "anthropic/claude-sonnet-4-6"
# Optionally add fallbacks and the model allowlist
openclaw config set agents.defaults.model.fallbacks '["anthropic/claude-haiku-4-5"]'
openclaw config set agents.defaults.models '{
  "anthropic/claude-sonnet-4-6": {"alias": "Sonnet"},
  "anthropic/claude-haiku-4-5": {"alias": "Haiku"}
}'
# Restart the gateway
openclaw gateway restart
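A malformed JSON argument is an easy mistake to make in the step above, so before restarting the gateway it can be worth confirming the value you pass to `openclaw config set` actually parses. A sketch using python3's stdlib `json` (the model map matches the allowlist shown above):

```shell
# Sanity-check a JSON argument before handing it to openclaw config set
models='{
  "anthropic/claude-sonnet-4-6": {"alias": "Sonnet"},
  "anthropic/claude-haiku-4-5": {"alias": "Haiku"}
}'
count=$(printf '%s' "$models" | python3 -c 'import json, sys; print(len(json.load(sys.stdin)))')
echo "$count models in allowlist"   # 2 models in allowlist
```

If the JSON is broken, python3 exits non-zero with a parse error instead of a count.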
5. Verify
openclaw run "What model are you running on?"
# Response should mention Claude or Anthropic
Switching to OpenAI
# On host — register provider
export OPENAI_API_KEY="sk-..."
openShell provider add \
  --name openai \
  --type openai \
  --key "$OPENAI_API_KEY"
# Route inference
openShell inference route set --provider openai
# Add policy rule if needed (this appends new entries under the existing
# allow: list, so the indentation must match the entries already in the file)
cat >> ~/.openShell/policies/includes/model-providers.yaml << 'EOF'
  - host: "api.openai.com"
    ports: [443]
    comment: "OpenAI API"
EOF
openShell policy reload
# Inside sandbox — update model ID
claw connect nemoclaw
openclaw config set agents.defaults.model.primary "openai/gpt-4.1"
openclaw gateway restart
Switching to Local Ollama
Local Ollama doesn't need an API key — just a policy rule allowing the sandbox to call localhost:
# On host — register Ollama provider (no key needed)
openShell provider add \
  --name ollama-local \
  --type ollama \
  --base-url http://localhost:11434
# Route inference
openShell inference route set --provider ollama-local
# Policy rule (if not already present)
cat >> ~/.openShell/policies/includes/local-inference.yaml << 'EOF'
allow:
  - host: "localhost"
    ports: [11434]
    comment: "Local Ollama"
  - host: "127.0.0.1"
    ports: [11434]
    comment: "Local Ollama (IP)"
EOF
openShell policy reload
# Inside sandbox — update model ID
claw connect nemoclaw
openclaw config set agents.defaults.model.primary "ollama/qwen2.5:14b"
openclaw gateway restart
See the Local GPU Inference Setup guide for how to install Ollama and pull models before you switch.
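Before routing inference to Ollama, a quick reachability probe saves a confusing failure later. A sketch using bash's `/dev/tcp` redirection (so no curl is needed; 11434 is Ollama's default port):

```shell
# Probe the Ollama daemon's default port before switching the route
host=localhost
port=11434
if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
  status="reachable"
else
  status="not running"
fi
echo "ollama on $host:$port is $status"
```

If this reports "not running", start Ollama first; otherwise the switch will succeed at the config level but every inference call will fail.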
Using OpenRouter (Access Any Model)
OpenRouter is a proxy that gives you access to Claude, OpenAI, Mistral, Gemini, and 200+ other models through a single API key. It's useful if you want to switch models frequently without managing a separate provider registration for each:
# Register OpenRouter
export OPENROUTER_API_KEY="sk-or-..."
openShell provider add \
  --name openrouter \
  --type openai-compatible \
  --key "$OPENROUTER_API_KEY" \
  --base-url https://openrouter.ai/api/v1
openShell inference route set --provider openrouter
# Policy rule
# (add openrouter.ai, the host in the base URL above, to your policy if not already present)
# Inside sandbox — use any OpenRouter model ID
claw connect nemoclaw
openclaw config set agents.defaults.model.primary "anthropic/claude-sonnet-4-6"
# OpenRouter accepts the same model IDs as native providers
openclaw gateway restart
Setting Up Fallback Chains
Register multiple providers and configure OpenShell to fall back automatically if the primary is unreachable:
# Register both providers
openShell provider add --name claude --type anthropic --key "$ANTHROPIC_API_KEY"
openShell provider add --name openai --type openai --key "$OPENAI_API_KEY"
# Set fallback chain in OpenShell routing
openShell inference route set \
  --provider claude \
  --fallback openai \
  --fallback-on "rate-limit,timeout,error-5xx"
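As a sketch of how the `--fallback-on` conditions partition outcomes (illustrative only; the real matching happens inside OpenShell, and the outcome names here mirror the flag's values):

```shell
# Which outcomes divert traffic to the fallback provider?
should_fallback() {
  case "$1" in
    rate-limit|timeout|error-5xx) return 0 ;;  # listed in --fallback-on
    *) return 1 ;;
  esac
}

for outcome in ok rate-limit error-4xx error-5xx; do
  if should_fallback "$outcome"; then
    echo "$outcome -> fall back to openai"
  else
    echo "$outcome -> stay on claude"
  fi
done
```

Note that a 4xx (e.g. an invalid request) is not in the list: retrying the same bad request against a different provider would fail the same way.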
The model IDs inside openclaw.json handle the application-level fallback (which model to try if the primary model fails):
# Inside sandbox (claw connect nemoclaw)
openclaw config set agents.defaults.model '{
  "primary": "anthropic/claude-sonnet-4-6",
  "fallbacks": ["openai/gpt-4.1", "anthropic/claude-haiku-4-5"]
}'
The two fallback layers are independent — OpenShell handles provider-level routing, openclaw.json handles model-level escalation. Combined, this means: if Claude Sonnet fails, try GPT-4.1; if the Anthropic provider is down entirely, route through the OpenAI provider automatically.
Model ID Reference
| Provider | openclaw.json model ID | Notes |
|---|---|---|
| Anthropic | anthropic/claude-sonnet-4-6 | Best all-round model for 2026 |
| Anthropic | anthropic/claude-haiku-4-5 | Cheap and fast — use for heartbeats |
| Anthropic | anthropic/claude-opus-4-6 | Most capable — use sparingly |
| OpenAI | openai/gpt-4.1 | Strong reasoning at mid price |
| OpenAI | openai/gpt-4.1-mini | Budget option — comparable to Haiku |
| NVIDIA | nvidia/nemotron-4-mini-instruct | Default NemoClaw model (free NVIDIA key) |
| NVIDIA | nvidia/llama-3.3-nemotron-super-70b-instruct | High quality — uses NVIDIA API credits |
| Ollama (local) | ollama/qwen2.5:14b | Good local model; adjust tag as needed |
| Ollama (local) | ollama/llama3.2:3b | Lightest — heartbeats and simple tasks |
Switching Back to Nemotron
The NVIDIA provider is registered automatically during install. To switch back:
# On host
openShell inference route set --provider nvidia
# Inside sandbox
claw connect nemoclaw
openclaw config set agents.defaults.model.primary "nvidia/nemotron-4-mini-instruct"
openclaw gateway restart
Your NVIDIA API key remains registered — you don't need to re-enter it unless you revoked it on build.nvidia.com.
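Since a switch always has two halves, a quick consistency check is to compare the provider route on the host against the model ID prefix inside the sandbox. A sketch, assuming model IDs are prefixed with the provider name as in the reference table above (substitute the real outputs of `openShell inference route show` and your openclaw config):

```shell
# Do the two sides of the switch agree?
route="nvidia"                                # from: openShell inference route show
model="nvidia/nemotron-4-mini-instruct"       # from: openclaw config
prefix=${model%%/*}                           # strip everything after the first /
if [ "$prefix" = "$route" ]; then
  echo "route and model prefix agree: $prefix"
else
  echo "mismatch: route=$route, model prefix=$prefix"
fi
```

A mismatch is the classic symptom of updating only one side: the route points at one provider while openclaw.json still requests a model the other provider serves.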
← Back to NemoClaw hub · See also: Local GPU Inference Setup · Cost Optimisation Guide · OpenClaw Configuration Reference