Secrets & Credentials — never in prompts, never in memory
API keys, OAuth tokens, passwords. Where they live, how they leak, and how to rotate them when (not if) they do.
The threat
Agents that read widely will eventually encounter a secret — your .env file, a config JSON, a past conversation. If that content ends up in the model's context, it can be exfiltrated via prompt injection, logged by the provider, or stored in persistent memory.
What to do about it
1. Secrets live in environment variables, never in SOUL.md or system prompts
SOUL.md, CLAUDE.md, and system prompts get sent to the model on every turn. Put secrets in .env and reference them by name only.
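A minimal sketch of the pattern in Python; the STRIPE_API_KEY variable and the build_prompt and call_stripe helpers are illustrative names, not part of any framework listed here:

```python
import os

def build_prompt(task: str) -> str:
    # The prompt tells the model which variable holds the key,
    # but the value itself never enters the context window.
    return (
        f"{task}\n"
        "The Stripe key is available to tools as the environment "
        "variable STRIPE_API_KEY. Never echo its value."
    )

def call_stripe() -> str:
    # Only tool-side code dereferences the name into a value.
    key = os.environ["STRIPE_API_KEY"]  # raises KeyError if unset
    return f"Authorization: Bearer {key}"
```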
2. chmod 600 your .env files
A compromised skill running as your user can read anything you can, so filesystem permissions won't stop a targeted attack — but they do stop casual leaks.
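The same check, expressed as Python you could run at agent startup (a sketch; the lock_down helper is hypothetical):

```python
import os
import stat

def lock_down(path: str = ".env") -> None:
    # 0o600: read/write for the owner, nothing for group or others.
    current = stat.S_IMODE(os.stat(path).st_mode)
    if current & (stat.S_IRWXG | stat.S_IRWXO):
        print(f"{path} is readable by others (mode {oct(current)}); fixing")
        os.chmod(path, 0o600)
```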
3. Use short-lived tokens where possible
OAuth refresh tokens are better than long-lived API keys. Platform-specific scoped tokens (GitHub fine-grained PATs, AWS STS) are better still.
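For AWS specifically, STS can mint credentials that expire in minutes. A sketch using boto3, assuming long-lived credentials are already configured locally (e.g. in ~/.aws/credentials):

```python
import boto3

sts = boto3.client("sts")
resp = sts.get_session_token(DurationSeconds=900)  # 15-minute credentials
creds = resp["Credentials"]

# Hand only the short-lived triple to the agent's tool environment;
# the long-lived key never leaves your local config.
short_lived = {
    "AWS_ACCESS_KEY_ID": creds["AccessKeyId"],
    "AWS_SECRET_ACCESS_KEY": creds["SecretAccessKey"],
    "AWS_SESSION_TOKEN": creds["SessionToken"],
}
print("expires at", creds["Expiration"])
```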
4. Rotate after any suspected exposure
If a secret ever appears in an agent transcript, treat it as leaked. Rotate immediately; don't wait for evidence of misuse.
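One way to catch exposure early is to scan stored transcripts for secret-shaped strings. A sketch; the patterns below are illustrative, and dedicated scanners such as gitleaks or trufflehog maintain far more complete rule sets:

```python
import re
from pathlib import Path

PATTERNS = {
    "stripe_live_key": re.compile(r"sk_live_[0-9a-zA-Z]{24,}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_pat": re.compile(r"ghp_[0-9a-zA-Z]{36}"),
}

def scan_transcript(path: Path) -> list[str]:
    text = path.read_text(errors="ignore")
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

for transcript in Path("transcripts").glob("*.txt"):
    hits = scan_transcript(transcript)
    if hits:
        # Any hit means: rotate now, then investigate.
        print(f"{transcript}: possible {', '.join(hits)}; rotate")
```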
5. Never paste secrets into cloud agents without checking retention policy
Claude Cowork, Hermes, ChatGPT — each has different retention defaults. Assume anything you paste is stored unless the docs explicitly say otherwise.
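If you must paste agent-bound text, redacting secret-shaped strings first limits the blast radius. A sketch reusing the same illustrative patterns as the scanner above:

```python
import re

SECRET_SHAPES = [
    re.compile(r"sk_live_[0-9a-zA-Z]{24,}"),  # Stripe live keys
    re.compile(r"AKIA[0-9A-Z]{16}"),          # AWS access key IDs
    re.compile(r"ghp_[0-9a-zA-Z]{36}"),       # GitHub classic PATs
]

def redact(text: str) -> str:
    # Run everything through this before it leaves your machine.
    for pat in SECRET_SHAPES:
        text = pat.sub("[REDACTED]", text)
    return text
```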
Real-world examples
- A user pasted a production Stripe key into a Claude conversation to debug. The conversation was stored. The key had to be rotated across three services.
- A SOUL.md file containing an OpenAI API key was committed to a public GitHub repo. Scraped and abused within four hours.
Examples are illustrative, composited from public incident reports and community posts.
Applies to
OpenClaw · NemoClaw · IronClaw · Hermes · Claude Cowork · ChatGPT
See also the hardening checklist.