If you’ve deployed a container in the last five years, you’ve almost certainly done this:
env:
  - name: DB_PASSWORD
    value: "s3cret-hunter2"
Or maybe the slightly more respectable version with a secretKeyRef. Either way, the secret ends up in the same place: the process environment. And that’s the problem.
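For reference, that secretKeyRef variant looks like this (the secret and key names are illustrative):

```yaml
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: app-secrets
        key: db-password
```

The manifest looks cleaner, but the value still lands in the process environment at container start.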
I’m not here to scare you. Environment variables for secrets are the default pattern in nearly every tutorial, quickstart, and getting-started guide. The Twelve-Factor App told us to put config in env vars, and we collectively interpreted “config” to include database passwords, API tokens, and private keys. It works, it’s easy, and it’s everywhere. But it has real, specific downsides that are worth understanding — because the fix is not much harder.
Why environment variables leak
The risk isn’t theoretical. There are at least six distinct ways environment-variable secrets escape their intended scope, and most of them are operating-as-designed — not bugs.
Everything in the container can read them
On Linux, every process stores its environment in /proc/<pid>/environ. Any process running as the same UID — or any process at all if you’re running as root — can read it. If an attacker gets code execution inside your container (a vulnerable dependency, a server-side request forgery, a debug endpoint you forgot to remove), the first thing they’ll do is cat /proc/1/environ. That’s your entire secret set, in one command.
This isn’t a container escape or a privilege escalation. It’s just how Linux works.
Inspection APIs show them in plaintext
Run docker inspect on a container and scroll to the Env block. Every environment variable is right there, in plain text. In Kubernetes, kubectl describe pod shows any env var set with a literal value: the same way. A secretKeyRef shows up as a reference rather than the value, but the value still lands in the process environment, where anyone with kubectl exec access can print it with a single env command. Either audience is much larger than “people who should see production credentials.”
This is the one that catches people off guard. You carefully put the secret into a Kubernetes Secret object (which is base64-encoded, not encrypted), then injected it as an env var, and now every process in the container can read it and anyone who can exec into the pod can print it. You did extra work to end up in roughly the same place.
Anything with Docker socket access can read them too
This one is easy to miss. Any container that mounts /var/run/docker.sock — and a surprising number do — can call the Docker API and inspect every other container on the host, including their full environment blocks. That means your monitoring agent (Datadog, cAdvisor, Prometheus exporters), your CI runner, your log shipper, and your management UI all potentially have access to every env-var secret on the machine.
These tools mount the socket for legitimate reasons: they need container metadata for autodiscovery, metrics labeling, and health monitoring. But the Docker API has no fine-grained access control. If you can read the socket, you can read everything — there’s no “just metrics, not env vars” permission level. A vulnerability or misconfiguration in any one of those agents becomes a credential exfiltration vector for every secret on the host.
Logging frameworks love to dump them
This is the one that keeps me up at night. Spring Boot logs active profiles and config properties at startup. Django’s debug mode shows the full environment. .NET’s IConfiguration can be logged with a single call. Error reporting tools like Sentry and Application Insights routinely capture environment variables in crash reports.
Even if your framework is well-behaved, all it takes is one log.Printf("env: %v", os.Environ()) in a debugging session that makes it to production. The secret is now in your log aggregator, which has a different retention policy and a different access control model than your secret store. Congratulations, your blast radius just tripled.
Child processes inherit everything
Environment variables are inherited by every child process. Your health check? It has the database password. Your init container? It has the API token. That sidecar you added for log shipping? It has everything. This isn’t a bug — it’s how process inheritance works. But it violates least-privilege in a way that’s easy to overlook.
Rotation means restarting
If your secret is an environment variable, updating it means restarting the container. In Kubernetes, that means rolling the deployment. That’s fine for planned rotations, but it’s painful during an incident when you need to revoke a compromised credential right now and don’t want to take a service interruption to do it.
The fix: mount secrets as files
The alternative is straightforward: instead of injecting secrets into the process environment, mount them as files. The secret lands at a known path (conventionally /run/secrets/), and the application reads it from disk.
This addresses every issue above. The secret doesn’t appear in /proc/<pid>/environ, docker inspect, or kubectl describe. Logging frameworks don’t dump file contents by default. Child processes don’t inherit the value; they’d have to open the file themselves. And in Kubernetes, the kubelet syncs secret updates to mounted volumes automatically (after a short propagation delay, and not for subPath mounts) — no restart required.
You also get filesystem permissions. You can set the file to mode 0400 (owner-read only), which is a control you simply don’t have with environment variables.
Here’s what it looks like across three runtimes.
Kubernetes
spec:
  containers:
    - name: app
      volumeMounts:
        - name: secrets
          mountPath: /run/secrets
          readOnly: true
  volumes:
    - name: secrets
      secret:
        secretName: app-secrets
        defaultMode: 0400  # owner-read only
        items:
          - key: db-password
            path: db-password
The secret appears at /run/secrets/db-password inside the container. The defaultMode: 0400 means only the file owner can read it. No env var, no inspection output, no process inheritance.
Docker Compose
services:
  app:
    image: myapp:latest
    secrets:
      - db_password
    environment:
      - APP_LOG_LEVEL=info  # non-sensitive config stays as env vars

secrets:
  db_password:
    file: ./secrets/db_password.txt
Docker mounts the secret at /run/secrets/db_password. In Swarm mode the secret is delivered on a tmpfs and never touches the container’s writable layer; with plain Compose it’s bind-mounted read-only from the host file. Either way, it doesn’t show up in the Env block of docker inspect.
Podman
# Store the secret
echo -n "hunter2" | podman secret create db_password -

# Run with the secret mounted
podman run --rm \
  --secret db_password,target=/run/secrets/db_password,mode=0400 \
  myapp:latest
Podman’s secret support works identically in rootless mode, which is a nice bonus — the secret lives in the user’s local store, not a system-wide daemon.
What stays as an env var
Not everything needs to move. The rule is simple: secrets go in files, configuration goes in env vars.
| Goes in a file | Stays as an env var |
|---|---|
| API tokens | Base URLs |
| Database passwords | Log levels |
| Private keys | Feature flags |
| Webhook secrets | Port numbers |
| TLS certificates | Cache toggle (on/off) |
If someone seeing the value would be a security incident, it’s a secret. If it’s just configuration that happens to vary between environments, an env var is fine.
The _FILE convention
Here’s a practical pattern for the transition: many official Docker images — Postgres, MySQL, MariaDB, Redis — support a _FILE suffix convention. Instead of:
environment:
  POSTGRES_PASSWORD: "s3cret"
You write:
services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password  # attach the secret to this service

secrets:
  db_password:
    file: ./secrets/db_password.txt
The image’s entrypoint reads the file and sets the value internally, so the application code doesn’t need to change. It’s a clean bridge between “we read env vars” and “we read files.”
This is a pattern I think more applications should adopt. BlogFlow — the engine that powers this site — added _FILE support in PR #135. The implementation is small enough to show here — it’s a helper function that checks for a _FILE variant, reads the file if it exists, and falls back to the plain env var:
func getSecret(envKey string) string {
	// Prefer the _FILE variant — read the secret from disk
	if filePath := os.Getenv(envKey + "_FILE"); filePath != "" {
		data, err := os.ReadFile(filePath)
		if err == nil {
			return strings.TrimSpace(string(data))
		}
	}
	// Fall back to the direct env var for backward compatibility
	return os.Getenv(envKey)
}
That’s the entire change on the application side. The deployment side is the volume mount shown above.
Small change, smaller blast radius
None of this is exotic. File mounts are a well-supported, stable feature in every major container runtime. The patterns are documented upstream by Kubernetes, Docker, and Podman. The effort to switch is measured in minutes, not days.
What you get for those minutes is a meaningfully smaller blast radius. A compromised process no longer finds every credential in one read of /proc/1/environ (it can still open files its UID is permitted to read, but that’s a narrower, more auditable path). An overly broad RBAC policy doesn’t accidentally expose credentials. A crash dump shipped to your log aggregator doesn’t contain your database password. And when you need to rotate a credential at 2 AM, you can do it without restarting the service.
It’s one of those changes where the cost is low and the payoff compounds over time. If you’re starting a new deployment, do it from the beginning. If you’re maintaining an existing one, pick the most sensitive secret and move it first. You don’t have to do everything at once — but you should probably stop putting passwords in environment variables.