Kubernetes has become the standard for container orchestration, but it can be intimidating for application developers. This guide focuses on the concepts and tools you need as a developer, without diving into cluster administration.
Why Kubernetes?
Kubernetes solves problems that emerge when running containerized applications at scale:
- Service discovery: How do containers find each other?
- Load balancing: How do we distribute traffic?
- Scaling: How do we handle increased load?
- Self-healing: What happens when containers fail?
- Rolling updates: How do we deploy without downtime?
Core Concepts
Pods
A Pod is the smallest deployable unit in Kubernetes: one or more containers that share storage and a network namespace.
Here is a basic Pod definition that runs a single container. It is worth studying to understand how Pods work, though in practice you will rarely create Pods directly.
# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-app
  labels:
    app: web
spec:
  containers:
  - name: app
    image: myapp:1.0.0
    ports:
    - containerPort: 8080
    env:
    - name: DATABASE_URL
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: url
Notice how environment variables can reference Kubernetes Secrets using secretKeyRef. This keeps sensitive data out of your Pod definitions and allows you to manage credentials separately.
In practice, you rarely create Pods directly. Instead, you use Deployments, which manage Pods for you.
Deployments
Deployments manage ReplicaSets and rolling updates. They are the workhorse of Kubernetes and what you will use for most application workloads.
The following Deployment creates three replicas of your application with resource limits and health checks. This is closer to a production-ready configuration.
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: app
        image: myapp:1.0.0
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "500m"
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 20
The resources section is critical for production. Requests tell Kubernetes how much CPU and memory to reserve, while limits cap what the container can use. Without these, a single misbehaving Pod could consume all cluster resources. You will also notice the probes are configured with different timing; readiness starts quickly to accept traffic, while liveness waits longer to avoid killing slow-starting applications.
Key settings:
- replicas: Number of Pod instances
- resources: CPU/memory requests and limits
- readinessProbe: When is the Pod ready for traffic?
- livenessProbe: Is the Pod still healthy?
Services
Services provide stable networking for Pods. Since Pods are ephemeral and their IP addresses change, you need a Service to provide a consistent endpoint.
This Service exposes your Pods on port 80 internally within the cluster. You can think of it as a load balancer that automatically discovers and routes to healthy Pods.
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
  type: ClusterIP
The selector matches Pods with the label app: web, so this Service will automatically route traffic to any Pod from your Deployment. When Pods are added or removed, the Service updates its endpoints automatically.
Service types:
- ClusterIP: Internal only (default)
- NodePort: Expose on each node's IP
- LoadBalancer: Provision cloud load balancer
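As an illustration, exposing the same Pods on a fixed port of every node only requires changing the type. This is a sketch: the nodePort value shown is an assumption, and it must fall within the cluster's allowed range (30000-32767 by default).

```yaml
# service-nodeport.yaml (illustrative variant of the Service above)
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  selector:
    app: web
  ports:
  - port: 80          # cluster-internal port
    targetPort: 8080  # container port on the Pods
    nodePort: 30080   # fixed port on every node (assumed value)
  type: NodePort
```

If you omit nodePort, Kubernetes picks a free port from the range for you, which is usually preferable.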
ConfigMaps and Secrets
Configuration separate from code is essential for running the same container across environments. ConfigMaps store non-sensitive configuration, while Secrets store sensitive data like passwords and API keys. Both can be mounted as files or exposed as environment variables.
# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  CACHE_TTL: "3600"
---
# secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  api-key: YXBpLWtleS12YWx1ZQ==  # base64 encoded
Note that Secrets are only base64 encoded, not encrypted. For production, consider using a secrets management solution like Vault or enabling encryption at rest in your cluster.
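You can produce and verify the encoded values with base64 on the command line. The -n flag matters: without it, echo appends a newline that ends up inside the Secret.

```shell
# Encode a value for the data field of a Secret manifest
echo -n "api-key-value" | base64
# -> YXBpLWtleS12YWx1ZQ==

# Decode to check what a Secret actually contains
echo -n "YXBpLWtleS12YWx1ZQ==" | base64 -d
# -> api-key-value
```

Alternatively, kubectl create secret generic app-secrets --from-literal=api-key=api-key-value handles the encoding for you.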
To use these in your Deployment, reference them with envFrom. This approach injects all keys as environment variables, which is simpler than referencing each key individually.
spec:
  containers:
  - name: app
    envFrom:
    - configMapRef:
        name: app-config
    - secretRef:
        name: app-secrets
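ConfigMaps can also be mounted as files, which suits applications that read configuration from disk. In this sketch, each key in app-config becomes a file under /etc/app-config; the mount path is an arbitrary choice.

```yaml
spec:
  containers:
  - name: app
    volumeMounts:
    - name: config
      mountPath: /etc/app-config
      readOnly: true
  volumes:
  - name: config
    configMap:
      name: app-config  # each key becomes a file named after the key
```

File mounts have one advantage over environment variables: when the ConfigMap changes, the mounted files are eventually updated in place, while environment variables are fixed at container start.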
Ingress
Route external HTTP traffic to Services. While Services handle internal networking, Ingress manages external access with features like SSL termination and path-based routing.
This Ingress routes traffic from app.example.com to your web-app Service with TLS enabled. You will need an Ingress controller installed in your cluster for this to work.
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-app
            port:
              number: 80
  tls:
  - hosts:
    - app.example.com
    secretName: tls-secret
The ingressClassName specifies which Ingress controller handles this resource. Different controllers like nginx, traefik, or cloud-specific controllers may have different annotation options. The TLS section references a Secret containing your certificate and key.
Local Development
Minikube
Single-node Kubernetes for local development. Minikube is a great choice when you want a full Kubernetes experience on your laptop.
These commands get you from zero to a running application in minikube. The process is straightforward once you understand the workflow.
# Start cluster
minikube start
# Point Docker to minikube's daemon
eval $(minikube docker-env)
# Build image in minikube
docker build -t myapp:local .
# Apply manifests
kubectl apply -f k8s/
# Access service
minikube service web-app
The minikube docker-env command is the key trick here. It lets you build images directly into minikube's Docker daemon, avoiding the need to push to a registry. This speeds up your development cycle significantly.
Kind (Kubernetes in Docker)
Runs Kubernetes in Docker containers. Kind is faster to start than minikube and is excellent for CI/CD pipelines where you need a fresh cluster for each test run.
# Create cluster
kind create cluster --name dev
# Load local image
kind load docker-image myapp:local --name dev
# Apply manifests
kubectl apply -f k8s/
Docker Desktop
Enable Kubernetes in Docker Desktop settings for the simplest setup.
Essential kubectl Commands
These are the commands you will use daily when working with Kubernetes. Bookmark this section for quick reference.
# Get resources
kubectl get pods
kubectl get deployments
kubectl get services
kubectl get all
# Describe for details
kubectl describe pod web-app-xxxxx
kubectl describe deployment web-app
# Logs
kubectl logs web-app-xxxxx
kubectl logs -f web-app-xxxxx # Follow
kubectl logs web-app-xxxxx --previous # Previous container
# Execute commands in pod
kubectl exec -it web-app-xxxxx -- /bin/sh
kubectl exec web-app-xxxxx -- php artisan migrate
# Port forwarding for debugging
kubectl port-forward pod/web-app-xxxxx 8080:8080
kubectl port-forward service/web-app 8080:80
# Apply changes
kubectl apply -f deployment.yaml
# Scale
kubectl scale deployment web-app --replicas=5
# Rollback
kubectl rollout undo deployment web-app
kubectl rollout history deployment web-app
When things go wrong, kubectl describe and kubectl logs are your best friends. Start with describe to see events, then check logs for application errors. The events section at the bottom of describe output often reveals scheduling issues or image pull problems.
Organizing Manifests
Directory Structure
As your application grows, you will need a structured approach to managing Kubernetes manifests. This layout uses Kustomize to manage environment-specific configurations.
k8s/
├── base/
│   ├── deployment.yaml
│   ├── service.yaml
│   └── kustomization.yaml
├── overlays/
│   ├── development/
│   │   ├── kustomization.yaml
│   │   └── replica-patch.yaml
│   ├── staging/
│   │   └── kustomization.yaml
│   └── production/
│       ├── kustomization.yaml
│       └── resources-patch.yaml
Kustomize
Built into kubectl for environment-specific configurations. Kustomize lets you define a base configuration and apply patches per environment without duplicating YAML files.
The base kustomization lists your core resources, while overlays customize them for each environment. This approach keeps your manifests DRY while allowing necessary variations.
# k8s/base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml

# k8s/overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patches:
- path: resources-patch.yaml
images:
- name: myapp
  newTag: v1.2.3
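The patch files referenced in the overlays are ordinary partial manifests: only the fields you want to override, plus enough metadata to identify the target resource. A minimal replica patch for the development overlay might look like this (the replica count is an illustrative choice):

```yaml
# replica-patch.yaml (illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app  # must match the name in base/deployment.yaml
spec:
  replicas: 1    # overrides the base value for this environment
```

Kustomize merges the patch onto the base Deployment, so everything not mentioned in the patch is left unchanged.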
Apply your production configuration with a single command. Kustomize processes all the patches and outputs the final YAML.
# Apply with kustomize
kubectl apply -k k8s/overlays/production
Debugging Tips
Pod Won't Start
When a Pod is stuck, the first step is always to describe it and check the events section at the bottom of the output.
# Check events
kubectl describe pod <pod-name>
# Common issues:
# - ImagePullBackOff: Can't pull image (wrong name, no auth)
# - CrashLoopBackOff: Container keeps crashing (check logs)
# - Pending: No node has resources (check resource requests)
Application Errors
Once the Pod is running but misbehaving, you need to get inside and investigate. These commands give you direct access to the running container.
# Get logs
kubectl logs <pod-name>
# Shell into container
kubectl exec -it <pod-name> -- /bin/sh
# Check environment
kubectl exec <pod-name> -- printenv
The printenv command is particularly useful for verifying that ConfigMaps and Secrets are mounted correctly.
Network Issues
Networking problems can be tricky to debug. These commands help you verify that Services are routing traffic correctly.
# Check service endpoints
kubectl get endpoints <service-name>
# Test DNS from a pod
kubectl run test --rm -it --image=busybox -- nslookup web-app
# Test connectivity
kubectl run test --rm -it --image=curlimages/curl -- curl http://web-app:80
If endpoints show no IPs, check that your Pod labels match the Service selector. A common mistake is a typo in the label name.
Health Checks
Liveness vs Readiness
- Liveness: Is the container running? Failure = restart
- Readiness: Is the container ready for traffic? Failure = remove from service
This distinction is crucial. A slow-starting application might fail liveness checks before it is ready, causing restart loops. Use separate endpoints or timing for each probe.
livenessProbe:
  httpGet:
    path: /health/live
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 20
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /health/ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
  failureThreshold: 3
The readiness probe starts earlier with initialDelaySeconds: 5 because you want to start routing traffic as soon as possible. The liveness probe waits longer with initialDelaySeconds: 15 to give the application time to fully initialize. Consider implementing separate health endpoints that check different things for each probe type.
Startup Probes
For slow-starting applications, use a startup probe to prevent premature liveness failures. This is particularly useful for applications that need to load large datasets or establish connections on startup.
startupProbe:
  httpGet:
    path: /health
    port: 8080
  failureThreshold: 30
  periodSeconds: 10
With these settings, Kubernetes will wait up to 5 minutes (30 failures x 10 seconds) for the application to start before liveness checks begin. Once the startup probe succeeds, liveness probes take over.
Deployment Strategies
Rolling Update (Default)
Rolling updates gradually replace old Pods with new ones, ensuring zero downtime. This is the safest strategy for most applications.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
The maxUnavailable setting controls how many Pods can be down during the update, while maxSurge controls how many extra Pods can be created. Adjust these based on your capacity requirements and the minimum number of replicas needed for reliable service.
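Percentages are computed from the replica count and rounded: maxUnavailable rounds down, maxSurge rounds up. With replicas: 3, the 25% defaults give maxUnavailable 0 (0.75 rounded down) and maxSurge 1 (0.75 rounded up), so the update proceeds one new Pod at a time with none taken down early. Absolute values avoid the rounding arithmetic entirely; this sketch pins the conservative equivalent:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0  # never drop below the full replica count
      maxSurge: 1        # create at most one extra Pod during the update
```

The trade-off is speed: with maxUnavailable: 0 each old Pod is removed only after its replacement passes its readiness probe.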
Blue-Green
Deploy new version alongside old, then switch traffic.
Canary
Route percentage of traffic to new version, gradually increase.
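A basic canary can be sketched with plain Deployments: because the Service selector matches only app: web, a second Deployment carrying that same label receives a share of traffic roughly proportional to its replica count (here 1 canary Pod alongside the 3 stable replicas, about 25%). The names, the track label, and the image tag are illustrative; finer-grained traffic splitting needs an ingress controller or service mesh, and in a real setup you would also add a track label to the stable Deployment.

```yaml
# canary deployment (illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
      track: canary
  template:
    metadata:
      labels:
        app: web       # matched by the existing Service selector
        track: canary  # distinguishes canary Pods for monitoring and rollback
    spec:
      containers:
      - name: app
        image: myapp:1.1.0-rc1  # assumed candidate tag
```

To promote, update the stable Deployment's image and delete the canary; to abort, delete the canary and traffic returns entirely to the stable Pods.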
Common Patterns
Sidecar Containers
Additional container in same Pod. Sidecars are useful for cross-cutting concerns like logging, monitoring, or proxying.
This example shows a log shipper sidecar that reads logs from a shared volume. Both containers can write to and read from the shared storage.
spec:
  containers:
  - name: app
    image: myapp:1.0.0
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-shipper
    image: fluent/fluent-bit
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  volumes:
  - name: logs
    emptyDir: {}
Both containers share the same network namespace, so they can communicate via localhost. They also share the Pod lifecycle, starting and stopping together.
Init Containers
Run before main containers. Use init containers to ensure dependencies are available before your application starts.
This init container waits for a database to be reachable before allowing the main application to start. This prevents your application from crashing due to missing dependencies.
spec:
  initContainers:
  - name: wait-for-db
    image: busybox
    command: ['sh', '-c', 'until nc -z db 5432; do sleep 1; done']
  containers:
  - name: app
    image: myapp:1.0.0
Init containers run sequentially and must succeed before the main containers start. This is cleaner than building retry logic into your application and separates infrastructure concerns from application code.
Conclusion
Kubernetes has a learning curve, but understanding Pods, Deployments, Services, and ConfigMaps covers most application developer needs. Start with local development using minikube or Docker Desktop, learn kubectl basics, and gradually explore more advanced features as needed. The investment in learning Kubernetes pays off in standardized, scalable deployments.