Docker changed how we build and deploy software. Containers provide consistency from development through production, eliminating the "works on my machine" problem. This guide covers practical Docker usage for web developers.
Why Containers Matter
A Docker container packages your application with its dependencies, runtime, and configuration. The same container runs identically whether it's on your laptop, a colleague's machine, or a production server.
This consistency solves real problems:
- New developers can run the project in minutes, not hours
- Staging environments match production exactly
- Deployments are predictable and repeatable
Writing Efficient Dockerfiles
A Dockerfile describes how to build your container image. Order matters; Docker caches each layer, so put instructions that change less frequently near the top:
FROM node:20-alpine
# Dependencies change less often than code
WORKDIR /app
COPY package*.json ./
RUN npm ci
# Code changes frequently - keep it last
COPY . .
RUN npm run build
CMD ["npm", "start"]
Key practices:
Use specific base image tags: node:20-alpine not node:latest. You want reproducible builds.
Minimize layers: Each RUN command creates a layer. Combine related commands:
# Bad - three layers; the cleanup runs in its own layer, so the deleted
# files still exist in the layer below and the image doesn't shrink
RUN apt-get update
RUN apt-get install -y curl
RUN rm -rf /var/lib/apt/lists/*
# Better - one layer, so the cleanup actually removes the files from the image
RUN apt-get update && \
    apt-get install -y curl && \
    rm -rf /var/lib/apt/lists/*
Use .dockerignore: Exclude files that shouldn't be in the image:
node_modules
.git
*.log
.env
Multi-Stage Builds
Multi-stage builds create smaller production images by separating build dependencies from runtime:
# Build stage
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Production stage
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/index.js"]
The final image only contains what's needed to run, not build tools or development dependencies. This reduces image size and attack surface.
For even smaller images, install only production dependencies in the final stage instead of carrying over the builder's full node_modules:
# Production stage
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/index.js"]
Docker Compose for Local Development
Docker Compose defines multi-container applications. A typical web app needs the application, database, and maybe a cache:
# docker-compose.yml
services:
  app:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      - /app/node_modules
    environment:
      DATABASE_URL: postgres://user:pass@db:5432/myapp
    depends_on:
      - db
  db:
    image: postgres:16-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: myapp

volumes:
  postgres_data:
The bind mount (.:/app) puts your code inside the container so changes reflect immediately, while the anonymous volume for /app/node_modules keeps the host directory from shadowing the dependencies installed in the image. The named volume postgres_data persists database data between container restarts.
Start everything with:
docker compose up
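Two flags worth knowing: -d runs the stack in the background, and --build forces the images to be rebuilt before starting. docker compose down tears everything back down:

docker compose up -d --build
docker compose down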
Environment Configuration
Never bake secrets into images. Use environment variables:
# Declare the variable with an empty default; the real value is supplied at runtime
ENV DATABASE_URL=""
Pass values at runtime:
docker run -e DATABASE_URL=postgres://... myapp
Or use a .env file with Compose:
services:
  app:
    env_file:
      - .env
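For genuinely sensitive values, Compose also supports file-based secrets, which keeps them out of both the image and the environment block. A minimal sketch, where the db_password name and file path are illustrative:

services:
  app:
    secrets:
      - db_password
secrets:
  db_password:
    file: ./secrets/db_password.txt

Inside the container, the value shows up as the file /run/secrets/db_password.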
For production, use your platform's secrets management (AWS Secrets Manager, Kubernetes Secrets, etc.).
Debugging Containers
View logs:
docker logs container_name
docker compose logs app
Get a shell inside a running container:
docker exec -it container_name sh
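With Compose, you can target the service name instead of the container name:

docker compose exec app sh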
Inspect container details:
docker inspect container_name
Check what's using disk space:
docker system df
Clean up unused resources:
docker system prune
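By default, prune keeps volumes and any tagged images. Add -a and --volumes to also remove unused images and volumes, but only once you're sure nothing still needs them:

docker system prune -a --volumes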
Production Considerations
Run as non-root user:
RUN addgroup -g 1001 appgroup && \
    adduser -u 1001 -G appgroup -D appuser
USER appuser
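Anything copied earlier in the Dockerfile is owned by root. If the app needs write access to those files, set ownership at copy time; appuser:appgroup here refers to the user and group created above:

COPY --chown=appuser:appgroup . .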
Health checks let orchestrators know when containers are ready:
HEALTHCHECK --interval=30s --timeout=3s \
    CMD curl -f http://localhost:3000/health || exit 1
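One caveat: slim bases such as node:20-alpine don't ship curl, so either install it (RUN apk add --no-cache curl) or lean on the BusyBox wget that Alpine does include. The /health endpoint is assumed to exist in your app:

HEALTHCHECK --interval=30s --timeout=3s \
    CMD wget -qO- http://localhost:3000/health || exit 1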
Resource limits prevent runaway containers:
services:
  app:
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
Logging: Write logs to stdout/stderr. Let the container platform handle log aggregation.
Security Best Practices
Scan images for vulnerabilities:
docker scout cves myimage:tag
Use minimal base images: Alpine-based images are far smaller than Debian-based ones and ship fewer packages, which means a smaller attack surface and typically fewer vulnerabilities.
Don't run as root: Create a dedicated user for your application.
Keep images updated: Rebuild regularly to pick up security patches in base images.
Use multi-stage builds: The final image shouldn't contain build tools, package managers, or source code.
Common Docker Compose Patterns
Override files for different environments:
docker compose -f docker-compose.yml -f docker-compose.prod.yml up
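The override file only declares what differs; Compose merges it on top of the base file. A hypothetical docker-compose.prod.yml might look like this:

services:
  app:
    restart: always
    environment:
      NODE_ENV: production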
Wait for dependencies:
services:
  app:
    depends_on:
      db:
        condition: service_healthy
  db:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user"]
      interval: 5s
      timeout: 5s
      retries: 5
Shared networks for microservices:
networks:
  backend:
    driver: bridge

services:
  api:
    networks:
      - backend
  worker:
    networks:
      - backend
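Services on the same network reach each other by service name through Docker's built-in DNS. Assuming the api service listens on port 3000, exposes a /health endpoint, and the worker image has curl installed, this works from inside the worker container:

docker compose exec worker curl http://api:3000/health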
Conclusion
Docker provides the consistency that modern development and deployment require. Start with a simple Dockerfile, add Compose for local development, and use multi-stage builds for production.
The investment in containerizing your application pays off in faster onboarding, reproducible environments, and more reliable deployments. Once you've worked with containers, you won't want to go back.