October 19, 2025 · 10 min read · Antoine Duno

Docker Security: Hardening Containers for Production

Docker security requires hardening at every layer — from the base image and Dockerfile to runtime configuration, secrets management, and network isolation. This guide covers every critical control with practical examples.



Docker security is a multi-layered discipline that most teams address too late — usually after an incident. A default Docker setup runs containers as root, with full write access to the filesystem, shared networking, and secrets passed as environment variables in plaintext. Each of these defaults represents a significant security risk in production.

This guide covers every layer of Docker security: non-root user configuration, read-only filesystems, secrets management, network isolation, and image hardening. At the end, we cover how to verify the security of whatever public endpoint your containers expose.

Scan your publicly exposed endpoint with ZeriFlow to verify HTTPS configuration, security headers, and TLS strength — the outside-in view that matters to your users and attackers alike.


Non-Root Users: The Most Important Docker Security Change

By default, processes inside Docker containers run as root (UID 0). If an attacker exploits a vulnerability in your application, they get root access inside the container — and potentially root access to the host if the container is misconfigured.

In your Dockerfile:

```dockerfile
FROM node:20-alpine

# Create a non-root user and group
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

# Set working directory
WORKDIR /app

# Copy dependency files first (for layer caching)
COPY package*.json ./

# Install production dependencies as root (before switching user)
RUN npm ci --omit=dev

# Copy application code
COPY --chown=appuser:appgroup . .

# Switch to non-root user
USER appuser

EXPOSE 3000
CMD ["node", "server.js"]
```

For Python applications:

```dockerfile
FROM python:3.12-slim

RUN groupadd -r appgroup && useradd -r -g appgroup appuser

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY --chown=appuser:appgroup . .
USER appuser

CMD ["python", "app.py"]
```

Verify the running user:

```bash
docker exec CONTAINER_ID whoami
# Should output: appuser (not root)
```
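
If you cannot modify the Dockerfile, the user can also be overridden at launch. A minimal sketch — this assumes the image's files are readable by the chosen UID:

```bash
# Run as UID/GID 1000 regardless of the image's USER directive
docker run --user 1000:1000 myimage:latest
```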

Read-Only Filesystem: Preventing Runtime Modification

A read-only filesystem prevents an attacker from writing malware, modifying application code, or creating persistence mechanisms inside your container. Most well-designed applications do not need to write to their own filesystem at runtime.

Run containers with read-only filesystem:

```bash
docker run --read-only --tmpfs /tmp --tmpfs /var/run myimage:latest
```

The --tmpfs flags create in-memory temporary filesystems for directories that legitimately need write access (temp files, PID files).

In docker-compose.yml:

```yaml
services:
  app:
    image: myapp:latest
    read_only: true
    tmpfs:
      - /tmp
      - /var/run
    volumes:
      - uploads:/app/uploads:rw  # Named volume for legitimate write path

volumes:
  uploads:  # Top-level declaration required for the named volume
```

In Kubernetes (for completeness):

```yaml
securityContext:
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false
  runAsNonRoot: true
  runAsUser: 1000
```

Secrets Management: Never Use Environment Variables for Credentials

Environment variables are the most common way to pass secrets to containers — and one of the most insecure. Environment variables are:

- Visible to all processes in the container
- Included in docker inspect output
- Often logged accidentally in debug output
- Accessible to any process that can read /proc/self/environ
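
As a concrete illustration of that last point: on Linux, any process running as the same UID can read another process's startup environment from /proc. A minimal, Linux-only sketch (the variable name and value here are made up):

```python
import os
import subprocess
import sys

def secret_visible_in_proc(name: str, value: str) -> bool:
    """Spawn a child process with a 'secret' in its environment, then
    read it back from /proc/<pid>/environ — no special privileges needed."""
    child = subprocess.Popen(
        [sys.executable, "-c", "import time; time.sleep(30)"],
        env={**os.environ, name: value},
    )
    try:
        with open(f"/proc/{child.pid}/environ", "rb") as f:
            entries = f.read().decode(errors="replace").split("\0")
        return f"{name}={value}" in entries
    finally:
        child.kill()
        child.wait()
```

The same read works from any compromised process inside a container, which is why file-based secrets avoid the docker inspect and accidental-logging exposure paths that environment variables suffer from.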

Docker Secrets (Docker Swarm):

```bash
# Create a secret
echo 'my-database-password' | docker secret create db_password -

# Use the secret in a service
docker service create \
  --secret db_password \
  --env DB_PASSWORD_FILE=/run/secrets/db_password \
  myapp:latest
```

In your application, read the secret from the file:

```python
import os

def get_db_password():
    secret_file = os.environ.get('DB_PASSWORD_FILE', '/run/secrets/db_password')
    try:
        with open(secret_file, 'r') as f:
            return f.read().strip()
    except FileNotFoundError:
        # Fall back to environment variable in development
        return os.environ.get('DB_PASSWORD')
```

Docker Compose with external secret managers: For production outside Swarm mode, use a dedicated secrets manager:

- AWS Secrets Manager: use the AWS SDK to fetch secrets at startup
- HashiCorp Vault: use the Vault agent sidecar pattern
- Doppler / Infisical: mount secrets as files using their Docker integrations
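
Compose can also mount file-based secrets at /run/secrets without Swarm, which keeps credentials out of docker inspect output. A sketch — the file path and secret name are placeholders:

```yaml
# docker-compose.yml — file-backed secrets without Swarm
secrets:
  db_password:
    file: ./secrets/db_password.txt   # keep this path out of version control

services:
  app:
    image: myapp:latest
    secrets:
      - db_password                   # mounted at /run/secrets/db_password
    environment:
      DB_PASSWORD_FILE: /run/secrets/db_password
```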

At minimum — never commit .env files:

```gitignore
# .gitignore
.env
.env.*
!.env.example
```

Network Isolation: Principle of Least Privilege for Networking

Docker's default bridge network allows all containers to communicate with each other. In a multi-service application, your frontend container should not be able to directly connect to your database container — only your backend API container should.

docker-compose.yml with network isolation:

```yaml
version: '3.8'

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true  # No external internet access

services:
  nginx:
    image: nginx:alpine
    networks:
      - frontend
    ports:
      - '443:443'
      - '80:80'

  app:
    image: myapp:latest
    networks:
      - frontend    # Can receive traffic from nginx
      - backend     # Can connect to database
    # No exposed ports — only accessible via nginx

  database:
    image: postgres:16
    networks:
      - backend     # Only accessible from backend network
    # No exposed ports to host
```

The internal: true flag on the backend network prevents containers in that network from making outbound internet connections — your database container cannot exfiltrate data even if compromised.

Disabling inter-container communication on the default bridge: In /etc/docker/daemon.json:

```json
{
    "icc": false,
    "live-restore": true
}
```

icc: false disables inter-container communication on the default bridge network. Containers can only communicate if explicitly connected to the same user-defined network.
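
One way to sanity-check the isolation (service names here match the compose file above; Docker's embedded DNS only resolves containers that share a network):

```bash
# From nginx (frontend only), the database hostname should not resolve
docker compose exec nginx getent hosts database || echo "isolated as expected"

# From app (frontend + backend), it should resolve
docker compose exec app getent hosts database
```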


Image Hardening: Minimal Base Images and Regular Scanning

Your container image is the foundation of your security. A bloated image with unnecessary packages is a larger attack surface.

Use minimal base images:

- alpine variants: typically 5–15 MB, minimal package set
- distroless images (Google): no shell, no package manager, just the runtime
- scratch: empty base image for compiled binaries (Go, Rust)

```dockerfile
# Multi-stage build: build in full image, deploy in minimal image
FROM golang:1.22 AS builder
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o server .

# Distroless final image — no shell, no package manager
FROM gcr.io/distroless/static-debian12
COPY --from=builder /app/server /
USER nonroot:nonroot
CMD ["/server"]
```

Scan images for vulnerabilities:

```bash
# Using Trivy (free, open source)
trivy image myapp:latest

# Using Docker Scout (built into Docker Desktop)
docker scout cves myapp:latest

# Using Snyk
snyk container test myapp:latest
```

Integrate these scans into your CI/CD pipeline. Fail builds when critical or high vulnerabilities are found in your image.
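
In GitHub Actions, for example, the official Trivy action can fail the job on severe findings. A sketch — the step names and image tag are illustrative; check the action's documentation for current inputs:

```yaml
# .github/workflows/scan.yml — fail the build on HIGH/CRITICAL CVEs
- name: Build image
  run: docker build -t myapp:${{ github.sha }} .

- name: Scan image with Trivy
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: myapp:${{ github.sha }}
    severity: HIGH,CRITICAL
    exit-code: '1'    # non-zero exit fails the job
```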


Verifying Your Exposed Endpoint With ZeriFlow

All container security controls are internal — what matters to your users is what the exposed endpoint returns. A perfectly hardened container can still serve weak TLS or missing security headers if the web server or reverse proxy is misconfigured.

Run a ZeriFlow scan on your domain to verify:

- HTTPS is correctly configured
- Security headers (HSTS, CSP, X-Frame-Options, etc.) are present
- TLS version and cipher strength meet modern standards
- No server version information is disclosed
- Certificate chain is complete and not expiring


FAQ

### Q: Should I run Docker as a non-root user on the host?

A: Yes. The Docker daemon runs as root on the host. Rootless Docker mode (dockerd-rootless-setuptool.sh install) runs the daemon and containers as a non-root user, significantly reducing the host-level blast radius of a container escape. It requires Linux kernel 5.11 or newer and is the recommended approach for production deployments on modern kernels.

### Q: Is Docker Compose suitable for production?

A: Docker Compose is suitable for small-to-medium single-host deployments. For multi-host or high-availability requirements, consider Docker Swarm or Kubernetes. The security principles in this guide apply equally to both — Compose uses the same Docker runtime as Swarm and Kubernetes nodes.

### Q: What is a container escape and how do I prevent it?

A: A container escape occurs when an attacker exploits a vulnerability in the Docker runtime or kernel to break out of the container namespace and gain access to the host. Prevention: keep Docker and the kernel updated, run as non-root, use read-only filesystems, disable privileged mode (--privileged), drop unnecessary Linux capabilities (--cap-drop=ALL --cap-add=CHOWN,SETUID,SETGID), and use seccomp profiles.
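
Several of these mitigations fit on one docker run line. A sketch — the capabilities shown are examples; drop everything your application does not actually need:

```bash
docker run \
  --read-only --tmpfs /tmp \
  --user 1000:1000 \
  --cap-drop=ALL --cap-add=NET_BIND_SERVICE \
  --security-opt no-new-privileges \
  myapp:latest
```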

### Q: How do I handle database migrations that need write access?

A: Run migrations as a separate init container or job before your main application starts. The init container can run with write access to the database, while your main application container runs read-only with least-privilege database credentials.
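
In Compose, this pattern can be expressed with a depends_on condition. A sketch — the service and command names are placeholders:

```yaml
services:
  migrate:
    image: myapp:latest
    command: ["python", "manage.py", "migrate"]   # one-shot job, then exits

  app:
    image: myapp:latest
    read_only: true
    depends_on:
      migrate:
        condition: service_completed_successfully  # wait for migrations
```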


Conclusion

Docker security is a layered discipline. Non-root users, read-only filesystems, proper secrets management, and network isolation each address a different failure mode — together they dramatically reduce the risk of a compromised container leading to a larger incident.

Start with non-root users and secrets management — these two changes have the highest impact for the least effort. Then add network isolation for multi-service deployments and read-only filesystems for stateless services.

Verify the security of your exposed endpoint with ZeriFlow after deploying. The free scan checks the HTTP/HTTPS interface your containers expose to the world — the attack surface that matters most from the outside.

Ready to check your site?

Run a free security scan in 30 seconds.
