Service Mesh
A service mesh is an infrastructure layer that manages communication between microservices. It handles traffic management, security, and observability transparently for the application.
What Is a Service Mesh?
A service mesh is a dedicated infrastructure layer for service-to-service communication in microservices architectures. It sits between services as a layer of transparent proxies and handles tasks like load balancing, encryption, authentication, retry logic, and observability – without requiring changes to the application itself.
How Does a Service Mesh Work?
Sidecar Proxy Pattern
The classic service mesh pattern is based on sidecar proxies. Alongside each service container, a proxy container (typically Envoy) is automatically injected. This sidecar proxy intercepts all incoming and outgoing network traffic and applies configured policies. The application itself communicates only with the local proxy.
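With Istio, for example, sidecar injection is typically enabled per namespace: a label on the namespace tells Istio's mutating admission webhook to inject the Envoy proxy into every new pod. A minimal sketch (the namespace name `payments` is an illustrative assumption):

```yaml
# Example namespace manifest. With Istio installed, the label below
# instructs the injection webhook to add an Envoy sidecar container
# to every pod created in this namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: payments              # hypothetical namespace
  labels:
    istio-injection: enabled  # opt this namespace into sidecar injection
```

Pods created after the label is set will show two containers: the application and the `istio-proxy` sidecar.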
Control Plane and Data Plane
A service mesh consists of two layers. The data plane encompasses all sidecar proxies and processes actual traffic. The control plane centrally configures and manages the proxies. You define rules in the control plane, and the data plane enforces them.
Popular Service Mesh Implementations
- Istio: The most widespread service mesh with comprehensive features for traffic management, security, and observability. Complex but very powerful.
- Linkerd: A lightweight, Kubernetes-focused service mesh with low overhead and easy operation.
- Cilium Service Mesh: eBPF-based service mesh that operates without sidecar proxies, making it particularly performant.
Core Functions of a Service Mesh
Traffic Management
A service mesh enables fine-grained control over network traffic: canary deployments, A/B testing, traffic splitting, circuit breaking, and intelligent retry handling. For example, you can direct 5% of traffic to a new version and automatically roll back on errors.
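The 5% canary example above could look like the following Istio `VirtualService` (service name `checkout` and subsets `v1`/`v2` are illustrative; the subsets would be defined in a matching `DestinationRule`):

```yaml
# Sketch of a 95/5 traffic split between two versions of a service.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: checkout
spec:
  hosts:
  - checkout              # in-mesh service (example name)
  http:
  - route:
    - destination:
        host: checkout
        subset: v1        # stable version receives 95% of requests
      weight: 95
    - destination:
        host: checkout
        subset: v2        # canary version receives 5%
      weight: 5
```

Adjusting the weights (and, on errors, setting the canary back to 0) is a pure configuration change – no application deployment is required.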
Mutual TLS (mTLS)
Service meshes automatically encrypt communication between services with mTLS. Each service receives its own certificate that is regularly rotated. This ensures that only authenticated services can communicate with each other – zero trust at the network level.
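In Istio, for instance, strict mTLS can be enforced mesh-wide with a single `PeerAuthentication` resource in the root namespace (shown here assuming the default root namespace `istio-system`):

```yaml
# Mesh-wide policy: reject any plaintext traffic between workloads.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system  # root namespace => applies to the whole mesh
spec:
  mtls:
    mode: STRICT           # only mutual-TLS connections are accepted
```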
Observability
The service mesh automatically generates metrics, traces, and logs for every service-to-service communication. You gain a complete overview of latencies, error rates, and dependencies without code changes.
When Do I Need a Service Mesh?
A service mesh is worthwhile with more than 10-15 services, when you need fine-grained traffic control, automatic mTLS encryption, or comprehensive observability without code changes. For smaller setups, the overhead is often too large – Kubernetes-native mechanisms like Network Policies and Ingress Controllers suffice.
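For comparison, the kind of Kubernetes-native control that often suffices in smaller setups is a `NetworkPolicy` like this sketch (app labels `frontend` and `api` are illustrative assumptions):

```yaml
# Allow only pods labeled app=frontend to reach pods labeled app=api.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
spec:
  podSelector:
    matchLabels:
      app: api            # policy applies to the API pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend   # only frontend pods may connect
```

This gives you coarse network segmentation, but none of the mTLS, traffic-splitting, or observability features a mesh adds.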
Service Mesh for Mid-Market Companies
For mid-market companies, we recommend introducing a service mesh only when the microservices landscape has reached a critical size. Start with Linkerd for a lightweight entry point or evaluate Cilium Service Mesh if you already use Cilium as your CNI. The investment pays off through simplified security and better observability.
Frequently asked questions about Service Mesh
What is the difference between a service mesh and an API gateway?
An API gateway is the central entry point for external traffic (north-south). A service mesh manages internal communication between services (east-west). Both complement each other – the API gateway for external clients, the service mesh for internal networking.
How much latency does a service mesh add?
A sidecar-based service mesh typically adds 1-3ms latency per hop. For most applications, this is negligible. eBPF-based solutions like Cilium Service Mesh minimize overhead by working at the kernel level.
Do I need a service mesh for mTLS?
A service mesh significantly simplifies mTLS but is not the only option. You can implement mTLS manually with cert-manager and application-level TLS – but this is considerably more complex to manage and scale.
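The manual route means issuing and rotating certificates yourself, for example with a cert-manager `Certificate` resource like this sketch (issuer name `internal-ca` and DNS name are illustrative assumptions; the application must then be configured to load the resulting secret and verify peers):

```yaml
# Issue a workload certificate from an internal CA via cert-manager.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: api-tls
spec:
  secretName: api-tls-secret      # TLS key pair is written to this Secret
  duration: 2160h                 # 90 days; cert-manager renews before expiry
  issuerRef:
    name: internal-ca             # hypothetical ClusterIssuer
    kind: ClusterIssuer
  dnsNames:
  - api.internal.example.com      # example SAN
```

Unlike a mesh, this covers only certificate issuance – enforcing mutual authentication on every connection remains the application's job.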
Should we start with Istio or Linkerd?
For getting started, we recommend Linkerd due to its lower complexity and resource consumption. Istio offers more features but also requires more operational expertise. The choice depends on your requirements and team size.
Related services
DevSecOps
Hardened security integrated into every layer of the infrastructure stack.
Kubernetes
Container orchestration at scale — we design, operate, and manage production-ready Kubernetes clusters.
Observability
Full-stack monitoring and alerting that predicts outages before users are affected.
Edge Networking
Global CDN optimization and BGP routing for business-critical applications.
Last updated: April 2026