Kubernetes Network Model: The Building Blocks
Kubernetes networking can seem opaque, but mastering the fundamentals is essential for any DevOps or SRE team deploying real workloads. At its core, Kubernetes makes the following guarantees for pod communication:
- Every pod gets its own IP address, routable within the cluster
- Pods can communicate without NAT
- Services provide stable virtual IPs for internal discovery
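To make the third guarantee concrete, a minimal ClusterIP Service looks like the sketch below (the `my-app` name, label, and ports are hypothetical):

```yaml
# Minimal ClusterIP Service: gives pods labeled app: my-app a stable
# virtual IP and DNS name (my-app.default.svc.cluster.local).
apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: default
spec:
  selector:
    app: my-app        # hypothetical pod label
  ports:
  - port: 80           # the Service's stable port
    targetPort: 8080   # hypothetical container port behind it
```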
However, these basic guarantees are just the beginning. Once you have real users, multiple teams, and compliance requirements, you need to think about: Ingress Controllers (for traffic coming into the cluster), Service Mesh (for east-west traffic, resilience, and security), and Network Policies (for zero-trust enforcement).
Ingress Controllers: Managing North-South Traffic
Ingress controllers are the standard way to manage external (north-south) access to services in a Kubernetes cluster. According to the Kubernetes documentation and analysis from CodiLime, an Ingress is a Kubernetes API object that provides HTTP and HTTPS routing to services based on hostnames, paths, and more.
But the Ingress resource itself is just a set of rules. To make it work in production, you need an actual Ingress Controller implementation running as a deployment in your cluster. Common options include NGINX Ingress Controller, Kong, and cloud-provider-specific controllers.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  ingressClassName: nginx-example
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
```
This minimal example routes HTTP requests whose path begins with `/testpath` to a backend Service named `test` on port 80. For it to take effect, an Ingress Controller (e.g., NGINX) that claims the `nginx-example` IngressClass must be running in the cluster.
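The `nginx-example` class referenced above is bound to a concrete controller through an IngressClass object; a sketch, assuming the community ingress-nginx controller:

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx-example
spec:
  controller: k8s.io/ingress-nginx  # controller identifier used by ingress-nginx
```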
Key production notes:
- Always use SSL/TLS termination at the Ingress (never allow plain HTTP for sensitive workloads)
- Restrict external access using `spec.rules.host` and firewall/load balancer settings outside the cluster
- Monitor the health and logs of your Ingress Controller pod(s)
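Putting the first two notes together, a host-restricted Ingress with TLS termination might look like this sketch (the hostname and Secret name are placeholders; the Secret must contain a valid TLS key pair):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-ingress
spec:
  ingressClassName: nginx-example
  tls:
  - hosts:
    - app.example.com            # placeholder hostname
    secretName: app-example-tls  # Secret of type kubernetes.io/tls
  rules:
  - host: app.example.com        # only requests for this host are served
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
```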
For deeper coverage of reverse proxy concepts and secure SSL termination, see our Nginx Reverse Proxy Configuration: Complete Guide.
Service Mesh: East-West Networking, Security, and Observability
While Ingress Controllers manage north-south (external-to-internal) traffic, a Service Mesh is designed to handle east-west (internal-to-internal) communication between services within the cluster. As described by CodiLime and the DEV Community, a service mesh is:
- A system for securing, observing, and controlling service-to-service traffic at Layer 7 (HTTP, HTTP/2, gRPC)
- Typically implemented via sidecar proxies injected into each pod (e.g., Envoy in Istio, Linkerd-proxy in Linkerd)
- Configured and managed via a dedicated control plane (e.g., Istio control plane)
Main features of a service mesh:
- Mutual TLS (mTLS) encryption for all internal service traffic
- Traffic shaping, retries, timeouts, circuit breaking
- Deep observability (metrics, distributed tracing, logs)
- Fine-grained access policy between services
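Traffic shaping such as retries and timeouts is expressed declaratively. As a sketch using Istio's VirtualService API (the service name is hypothetical):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myservice-retries
spec:
  hosts:
  - myservice.default.svc.cluster.local
  http:
  - timeout: 10s            # overall per-request deadline
    retries:
      attempts: 3           # retry failed requests up to 3 times
      perTryTimeout: 2s
      retryOn: 5xx,connect-failure
    route:
    - destination:
        host: myservice.default.svc.cluster.local
```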

Service mesh is not enabled by default in Kubernetes. You need to install and configure an implementation such as Istio, Linkerd, or Kuma. The control plane manages configuration, while sidecar proxies intercept and manage all traffic to/from the pod.
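With Istio, for example, automatic sidecar injection is opted into per namespace via a label (namespace name hypothetical):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app-ns            # hypothetical namespace
  labels:
    istio-injection: enabled # Istio's injection webhook watches this label
```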
Example of Istio destination rule for traffic policy:
```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: example-destination
spec:
  host: myservice.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
    connectionPool:
      http:
        http1MaxPendingRequests: 100
        maxRequestsPerConnection: 10
    outlierDetection:
      consecutive5xxErrors: 2
      interval: 1m
      baseEjectionTime: 30s
      maxEjectionPercent: 100
```
Production caveats:
- Service mesh introduces overhead (CPU, memory, and latency due to sidecar proxies)
- Upgrading and troubleshooting mesh issues can be complex
- Security is only as strong as your mesh policy and certificate management
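On the policy side of that last caveat, Istio can require mTLS mesh-wide with a PeerAuthentication resource; a sketch (placing it in the root `istio-system` namespace makes it apply mesh-wide):

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system    # root namespace => applies mesh-wide
spec:
  mtls:
    mode: STRICT             # reject plaintext service-to-service traffic
```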
Network Policies: Zero Trust and Fine-Grained Security
Kubernetes Network Policies are the primary way to enforce zero-trust networking and control which pods/services can communicate with each other. As detailed in Spacelift's guide and the official docs, network policies work by defining allow/deny rules for ingress and egress at the pod level.
Important facts:
- Network policies are additive. If any policy allows a connection, it is permitted. There is no policy ordering or precedence.
- By default, all traffic is allowed. Once any policy selects a pod, traffic in the directions that policy covers is denied unless explicitly allowed.
- Network policies require a compatible CNI plugin (e.g., Calico, Cilium) to be enforced.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress: []
```
This policy denies all egress (outbound) traffic from pods in the default namespace. Use this as a baseline, then layer on allow policies for specific destinations.
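A layered allow policy might then permit only DNS lookups and HTTPS to a specific backend; a sketch in which the `frontend`/`api` labels and ports are illustrative (the `k8s-app: kube-dns` label is the conventional label on cluster DNS pods):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-and-api-egress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: frontend          # illustrative label
  policyTypes:
  - Egress
  egress:
  - to:                      # allow cluster DNS lookups
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
  - to:                      # allow HTTPS to the api pods
    - podSelector:
        matchLabels:
          app: api           # illustrative label
    ports:
    - protocol: TCP
      port: 443
```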
Best practices from Spacelift and Snyk:
- Start with a default deny-all policy, then explicitly allow only required traffic
- Use precise `podSelector` and `namespaceSelector` fields to minimize over-permissiveness
- Regularly review and test your network policies as your app architecture evolves
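The recommended deny-all baseline covers both directions, extending the egress-only example earlier; apply it in each namespace you want to lock down:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}            # selects every pod in the namespace
  policyTypes:
  - Ingress
  - Egress                   # no rules listed => nothing is allowed
```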
Comparison: Ingress Controller vs. Service Mesh vs. Network Policy
| Component | Main Purpose | Traffic Direction | Key Features | Security Controls | Notes |
|---|---|---|---|---|---|
| Ingress Controller | Expose services to external clients via HTTP/S | North-South | Path/host-based routing, SSL termination, rate limiting (by implementation) | External access control, TLS termination | Requires a running controller (e.g., NGINX, Kong) |
| Service Mesh | Secure, observe, and manage internal service-to-service traffic | East-West | mTLS, retries, circuit breakers, distributed tracing, policy enforcement | mTLS between services, policy-driven access | Introduces sidecar overhead, complex upgrades |
| Network Policy | Restrict pod-to-pod and pod-to-external communications | East-West, Egress | Allow/deny rules for ingress/egress, label/namespace selectors | Zero-trust segmentation | Requires CNI support, policies are additive |
Production Hardening: Monitoring, Logging, and Security
Deploying any of these network primitives in production isn’t just about writing YAML. You need to monitor, validate, and continuously harden your configuration. Here’s what teams operating real clusters must do:
- Ingress Controllers: Enable full access logging and error logging. Monitor HTTP 4xx/5xx rates, SSL cert expiry, and backend health. Use Prometheus for metrics and alerting.
- Service Mesh: Integrate mesh telemetry with Prometheus (metrics), Jaeger/Zipkin (tracing), and Fluentd (logs). Regularly rotate mTLS certificates and audit mesh policy changes.
- Network Policies: Periodically test policies by attempting unauthorized traffic flows and inspect network logs for denied connections. Use tools like `kubectl exec` with `curl` or `netcat` for verification.
```shell
# Example: Test denied egress from a pod
kubectl exec -it test-pod -- curl -m 2 http://example.com
# Should time out or fail if the egress policy is set correctly
```
Never leave defaults in place—especially for external access. Always limit blast radius by namespace, label, and CIDR as tightly as possible. For more on secure state and configuration, see our Terraform State Management Best Practices.
Troubleshooting: Common Pitfalls and Debugging Steps
Even with robust configuration, network issues are a top cause of outages in Kubernetes. Here’s what trips up most teams:
- Ingress not routing: Check Ingress Controller pod logs for errors, verify that `ingressClassName` matches the controller, and ensure backend Services are healthy
- Service mesh breakage: Sidecar injection failures, mTLS misconfiguration, or control plane downtime can cause cascading service failures
- Network policy blocks: Pods that appear “stuck” may be missing allow rules; test connectivity with `kubectl exec` and review all applicable policies
- CNI plugin limitations: Features like network policy enforcement only work if your CNI supports them (e.g., Calico, Cilium). Run `kubectl get pods -n kube-system` and check CNI pod logs for failures
When debugging, always:
- Check resource status: `kubectl get ingress,svc,pod,networkpolicy -A`
- Describe resources for events: `kubectl describe ingress <name>`
- Inspect pod logs: `kubectl logs <controller-pod> -n kube-system`
For advanced NAT/debugging scenarios, review peer-to-peer and NAT traversal tips in our post on TCP hole punching.
Key Takeaways
- Ingress Controllers route external (north-south) traffic into the cluster, but require an actual controller deployment and careful SSL/TLS configuration
- Service Meshes secure and monitor internal (east-west) communications, enabling advanced policies and observability at the cost of complexity and resource overhead
- Network Policies enforce zero-trust at the network layer, but must be paired with a compatible CNI and reviewed regularly to avoid accidental outages
- Production readiness means full monitoring, logging, and regular policy review—not just writing manifest files
- Real-world Kubernetes networking is about defense in depth—use all three controls in combination to minimize risk and maximize reliability
For deeper dives and ongoing updates, see the following authoritative sources:
- Kubernetes Ingress Documentation
- Spacelift: Kubernetes Network Policy Guide
- CodiLime: Service Mesh vs Kubernetes Ingress
For more on modular architecture and operational separation of concerns (parallels to network layering), see our analysis of separating Wayland components.

