Container networking enables communication between containers, between containers and hosts, and between containers and external networks. Understanding container networking is essential for troubleshooting connectivity issues and designing scalable architectures.
Containers provide process isolation but need network access. The networking model determines how containers get IP addresses, how they find each other, and how traffic flows. Different environments (Docker, Kubernetes, cloud providers) implement these concepts differently but share common principles.
Network Namespaces
Linux network namespaces provide network isolation. Each namespace has its own network interfaces, routing tables, and firewall rules. Containers run in their own network namespace, isolated from the host and other containers.
By default, a container's namespace has only a loopback interface. Additional interfaces connect it to other namespaces or the host network.
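You can see this starting point directly by creating a namespace by hand (a sketch assuming root privileges and the iproute2 tools; `demo-ns` is a throwaway name):

```shell
# Create a fresh network namespace (requires root)
sudo ip netns add demo-ns

# Inside it, only the loopback interface exists, and it starts out down
sudo ip netns exec demo-ns ip addr

# Clean up
sudo ip netns del demo-ns
```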
To understand namespace isolation, you can compare the network interfaces visible inside a container versus those on the host. You'll see completely different sets of interfaces, demonstrating the isolation that namespaces provide.
# View container's network namespace
docker exec <container> ip addr
# Shows interfaces visible to the container
# Compare to host
ip addr
# Shows different interfaces
Virtual ethernet pairs (veth) connect namespaces. One end lives in the container's namespace; the other in a bridge or the host namespace. Traffic entering one end exits the other.
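To make this concrete, here is a hand-built version of what container runtimes do for each container (a sketch assuming root and iproute2; namespace, interface names, and addresses are arbitrary):

```shell
# Create a namespace and a veth pair (requires root)
sudo ip netns add demo-ns
sudo ip link add veth-host type veth peer name veth-ctr

# Move one end into the namespace; the other stays on the host
sudo ip link set veth-ctr netns demo-ns

# Assign addresses and bring both ends up
sudo ip addr add 10.200.0.1/24 dev veth-host
sudo ip link set veth-host up
sudo ip netns exec demo-ns ip addr add 10.200.0.2/24 dev veth-ctr
sudo ip netns exec demo-ns ip link set veth-ctr up

# Traffic entering one end exits the other
ping -c 1 10.200.0.2
```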
Docker Networking
Docker provides several network drivers for different use cases.
Bridge networking is the default. Docker creates a virtual bridge (docker0) on the host. Each container gets a veth pair; one end in the container, one attached to the bridge. Containers on the same bridge can communicate. NAT enables outbound internet access.
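You can inspect this default setup on a Docker host (output details vary by host):

```shell
# The default bridge network and its subnet
docker network inspect bridge

# The docker0 bridge interface on the host
ip addr show docker0

# The NAT (masquerade) rule that enables outbound internet access
sudo iptables -t nat -L POSTROUTING -n | grep -i masquerade
```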
Creating a custom bridge network gives you DNS-based container discovery. Containers can reach each other using their names rather than IP addresses, which is essential since container IPs change when containers restart.
# Create a custom bridge network
docker network create --driver bridge my-app-network
# Run containers on the network
docker run -d --name api --network my-app-network myapp/api
docker run -d --name worker --network my-app-network myapp/worker
# Containers can reach each other by name
docker exec api ping worker
Host networking removes isolation. The container shares the host's network namespace, using host interfaces directly. This eliminates NAT overhead but sacrifices isolation.
When you need maximum network performance and don't require container isolation, host networking lets the container bind directly to host ports without any network virtualization overhead.
# Container uses host networking
docker run --network host nginx
# nginx binds directly to host ports
Overlay networking spans multiple hosts. Docker Swarm uses overlay networks to connect containers across hosts. VXLAN encapsulation tunnels traffic between hosts.
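Creating and using an overlay network looks like this (assumes Swarm mode is already initialized; network and service names are illustrative):

```shell
# Create an attachable overlay network spanning the Swarm
docker network create --driver overlay --attachable my-overlay

# Services on the network reach each other by name across hosts
docker service create --name api --network my-overlay myapp/api
docker service create --name worker --network my-overlay myapp/worker
```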
Kubernetes Networking
Kubernetes networking follows a flat model: every pod gets an IP address, and every pod can reach every other pod without NAT. This simplifies application design but requires network implementation that supports it.
The Container Network Interface (CNI) standardizes how Kubernetes configures pod networking. CNI plugins implement the actual networking: Calico, Cilium, Flannel, Weave, and others.
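On a cluster node you can see which plugin is in use by checking the standard CNI paths (requires node access; exact file names depend on the plugin):

```shell
# CNI configuration files consumed by the container runtime
ls /etc/cni/net.d/

# Plugin binaries invoked to set up pod networking
ls /opt/cni/bin/
```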
In Kubernetes, pods are the atomic unit of deployment and networking. Each pod gets its own IP address from the cluster's pod CIDR range, and that IP is routable from any other pod in the cluster.
# Pod gets an IP from the cluster's pod CIDR
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
  - name: api
    image: myapp/api
    ports:
    - containerPort: 8080
# Pod is reachable at its assigned IP from any other pod
Services give pods stable networking. Pods are ephemeral; their IPs change when they restart. A Service provides a stable virtual IP and DNS name that route to healthy pods.
When you need reliable connectivity to a set of pods, a Service provides a stable endpoint that automatically load balances across all matching pods. Other pods connect to the Service name, and Kubernetes handles the routing.
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  selector:
    app: api
  ports:
  - port: 80
    targetPort: 8080
# Other pods reach the API via api-service.namespace.svc.cluster.local
Service Discovery
Containers need to find each other. Hardcoding IP addresses doesn't work when containers are ephemeral. Service discovery provides dynamic lookup.
DNS-based discovery is most common. Docker's embedded DNS resolves container names within user-defined networks. In Kubernetes, CoreDNS resolves service names.
In your application code, you can simply use service names as hostnames. The container runtime or orchestrator handles resolving those names to current IP addresses, abstracting away the dynamic nature of container networking.
// Connect to service by name
$redis = new Redis();
$redis->connect('redis', 6379); // DNS resolves to container IP
// In Kubernetes, use service name
$db = new PDO('pgsql:host=database-service;dbname=myapp', $user, $pass);
Environment variables provide another discovery method. Docker links and Kubernetes service injection set environment variables with connection information.
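You can see the variables Kubernetes injects for each Service (only Services that existed when the pod started are included, which is one reason DNS is usually preferred; the pod and service names here are illustrative):

```shell
# Service connection info injected as environment variables
kubectl exec <pod-name> -- env | grep SERVICE

# For a Service named api-service you would see entries like:
#   API_SERVICE_SERVICE_HOST=10.96.12.34
#   API_SERVICE_SERVICE_PORT=80
```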
Network Policies
Network policies control traffic flow between pods. By default, all pods can communicate. Policies restrict this, implementing microsegmentation within the cluster.
A NetworkPolicy acts as a firewall for pods, specifying which sources can send traffic and which destinations can receive it. This example creates a policy that restricts the API pod to only accept traffic from frontend pods and only send traffic to database pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-policy
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: database
    ports:
    - port: 5432
This policy allows the API pod to receive traffic only from frontend pods on port 8080 and send traffic only to database pods on port 5432. All other traffic is denied. Note that a restrictive egress rule like this also blocks DNS lookups unless you add an egress rule allowing port 53, a common source of surprises.
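Because pods are open by default, targeted policies are often paired with a default-deny policy that blocks all ingress to pods in the namespace unless another policy allows it (a common pattern, sketched here with kubectl apply):

```shell
# Deny all ingress to every pod in the namespace by default
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF
```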
Load Balancing
Load balancing distributes traffic across multiple containers. Several mechanisms exist at different layers.
Kubernetes Services use kube-proxy for basic load balancing. kube-proxy programs iptables or IPVS rules that distribute traffic to pod endpoints.
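You can inspect these rules directly on a node (a sketch; this assumes kube-proxy is running in iptables mode and requires node access):

```shell
# The NAT chain kube-proxy programs for Service virtual IPs
sudo iptables -t nat -L KUBE-SERVICES -n | head

# In IPVS mode, use ipvsadm instead
sudo ipvsadm -L -n
```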
A ClusterIP Service provides internal load balancing within the cluster. Traffic sent to the Service IP is distributed across all healthy pods matching the selector, providing both load distribution and failover.
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  type: ClusterIP
  selector:
    app: api
  ports:
  - port: 80
    targetPort: 8080
# Traffic to api:80 is distributed across all api pods
Ingress controllers provide HTTP-aware load balancing. They terminate HTTP/HTTPS and route based on hostname and path.
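A minimal Ingress routing by hostname might look like this (assumes an ingress controller such as ingress-nginx is installed; the hostname and service names are illustrative):

```shell
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
EOF
```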
External load balancers (cloud provider or hardware) distribute traffic to nodes, which then route to pods.
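On a cloud provider, a Service of type LoadBalancer provisions an external load balancer automatically (a sketch; the external IP is assigned by the cloud provider):

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: api-external
spec:
  type: LoadBalancer
  selector:
    app: api
  ports:
  - port: 80
    targetPort: 8080
EOF

# The external IP appears once cloud provisioning completes
kubectl get service api-external
```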
Debugging Network Issues
Container networking problems are common and frustrating. Systematic debugging identifies issues.
When troubleshooting container connectivity, start from inside the container and work outward. Check that the container has network interfaces, can resolve DNS names, and can establish connections to the target service.
# Check container has network interface
docker exec <container> ip addr
# Verify DNS resolution
docker exec <container> nslookup other-service
# Test connectivity
docker exec <container> curl -v http://other-service:8080/health
# Check if port is listening (many minimal images lack netstat; ss is the modern equivalent)
docker exec <container> ss -tlnp
# Trace network path
docker exec <container> traceroute other-service
In Kubernetes, check pod status, service endpoints, and network policies.
Kubernetes adds additional layers to debug. A Service might exist but have no endpoints if no pods match its selector. Network policies might be blocking traffic that should be allowed. These commands help you check each layer of the networking stack.
# Pod networking details
kubectl describe pod <pod-name>
# Service endpoints
kubectl get endpoints <service-name>
# Network policies affecting pod
kubectl get networkpolicies -o yaml | grep -A 20 "podSelector"
# DNS debugging
kubectl run debug --image=busybox -it --rm --restart=Never -- nslookup <service-name>
Performance Considerations
Container networking adds overhead compared to host networking. Veth pairs, bridges, NAT, and encapsulation all consume CPU cycles.
For latency-sensitive applications, consider:
- Host networking (sacrifices isolation)
- High-performance CNI plugins (Cilium with eBPF)
- Kernel bypass technologies (DPDK, SR-IOV)
Measure network performance in your environment. Synthetic benchmarks don't capture application-specific patterns.
Before making decisions about networking performance, measure actual throughput and latency in your specific environment. The iperf3 tool provides reliable network performance measurements between containers.
# Network throughput test between pods
kubectl run iperf-server --image=networkstatic/iperf3 -- -s
# Expose the server so its name resolves via DNS (iperf3 defaults to port 5201)
kubectl expose pod iperf-server --port=5201
kubectl run iperf-client --image=networkstatic/iperf3 --rm -it --restart=Never -- -c iperf-server
Conclusion
Container networking provides the connectivity distributed applications need. Network namespaces isolate containers. Bridges and overlays connect them. DNS enables discovery. Network policies control traffic flow.
When networking works, it's invisible. When it doesn't, understanding the layers helps diagnose problems. Know your network model (Docker bridge, Kubernetes CNI) and how traffic flows through it. This knowledge is invaluable when connectivity fails at 3 AM.