Implementing Network Policies for Multi-Tenant Kubernetes Clusters

In a standard Kubernetes deployment, the network is "flat" by default. Every pod can communicate with every other pod across the entire cluster, regardless of the namespace they inhabit. While this connectivity simplifies service discovery and microservices orchestration, it represents a catastrophic security flaw in multi-tenant environments. In a multi-namespace architecture, where different teams, customers, or workloads share the same control plane, this lack of isolation allows unrestricted lateral movement. If an attacker compromises a single public-facing web server, the flat network provides a direct path to sensitive databases or internal APIs in adjacent namespaces.

To mitigate this, we must implement a "Zero Trust" networking model using Kubernetes `NetworkPolicies`. This post explores the technical implementation of these policies, the necessity of CNI-level enforcement, and the operational complexities of managing a hardened multi-tenant cluster.

The Mechanics of Kubernetes Network Policies

It is a common misconception that Kubernetes natively enforces network isolation. The Kubernetes API provides the `NetworkPolicy` resource, but the enforcement logic resides entirely within the Container Network Interface (CNI) plugin. If you are using a CNI that does not support `NetworkPolicy` (such as standard Flannel), applying these resources will result in no change to your traffic flow; your cluster remains wide open.

To implement effective isolation, you must use a CNI capable of policy enforcement, such as Calico, Cilium, or Azure CNI with Cilium/Azure Policy.

The "Default Deny" Paradigm

The most critical concept in Kubernetes networking is that `NetworkPolicy` is additive and operates on an "allow-list" principle. By default, all ingress and egress traffic is permitted. However, as soon as a `NetworkPolicy` selects a specific pod via a `podSelector`, that pod becomes "isolated." Any traffic not explicitly permitted by a rule is dropped.

The industry standard for multi-tenancy is to implement a Default Deny posture for every namespace. This ensures that no communication can occur unless a developer explicitly defines the required flows.

Implementing a Default Deny All Policy

The following YAML demonstrates a baseline policy that should be applied to every tenant namespace. This policy selects all pods in the namespace and denies all ingress and egress traffic.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: tenant-a
spec:
  podSelector: {} # Selects all pods in the namespace
  policyTypes:
    - Ingress
    - Egress
```

Once this is applied, the namespace is a "black hole." No pods can talk to each other, and no pods can reach the internet or the Kubernetes DNS service.

Building the Allow-List: A Layered Approach

After establishing a `Default Deny` baseline, you must incrementally "punch holes" in the firewall to allow legitimate traffic. A robust implementation follows three distinct layers:

1. Intra-Namespace Communication

The first requirement is allowing pods within the same namespace to communicate (e.g., a frontend pod talking to a backend pod).

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-intra-namespace
  namespace: tenant-a
spec:
  podSelector: {}
  ingress:
    - from:
        - podSelector: {} # Allows traffic from any pod in this namespace
```

2. Controlled Inter-Namespace Communication

In multi-tenant clusters, certain shared services (like a central logging agent or a shared database) might reside in a different namespace. We use `namespaceSelector` to permit this. Warning: Never rely solely on `podSelector` for cross-namespace traffic, as labels can be spoofed if a tenant has permission to modify their own pods.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-shared-db-access
  namespace: tenant-a
spec:
  podSelector:
    matchLabels:
      app: web-server
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: shared-services
          podSelector:
            matchLabels:
              app: postgres-db
```
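A subtle but security-critical detail of the `from` clause: when `namespaceSelector` and `podSelector` appear in a single list entry, they are combined with AND (the pod must match both). If they appear as two separate list entries, they are combined with OR. The following sketch shows the OR variant, which is usually unintended: it admits every pod in the `shared-services` namespace, plus any pod in the policy's own namespace labeled `app: postgres-db`.

```yaml
# CAUTION: the extra dash before podSelector makes these two
# independent peers (OR), not one combined rule (AND).
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: shared-services
      - podSelector:
          matchLabels:
            app: postgres-db
```

A single misplaced dash therefore widens the allow-list dramatically, which is a strong argument for reviewing `NetworkPolicy` changes as carefully as application code.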

3. Egress Control and the DNS Trap

A common mistake when implementing Egress policies is forgetting that pods need to resolve service names via CoreDNS. If you block all Egress, `nslookup` will fail, and your applications will crash. You must explicitly allow Egress to the `kube-system` namespace on port 53 (UDP/TCP).

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: tenant-a
spec:
  podSelector: {}
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```
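With DNS restored, any further egress must still be opened explicitly. As an illustration of punching an additional hole for external traffic (the CIDR below is a documentation-only example range, not a value from this post; substitute your actual destination), a policy permitting HTTPS egress to one external service might look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-external-https
  namespace: tenant-a
spec:
  podSelector: {}
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.0/24 # placeholder (TEST-NET-3); replace with the real destination
      ports:
        - protocol: TCP
          port: 443
```

Keeping each external dependency in its own narrowly scoped policy makes it easier to audit, and to revoke, individual flows later.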

Conclusion

As shown in "The Mechanics of Kubernetes Network Policies" and "Building the Allow-List: A Layered Approach", a secure implementation of network policies for multi-tenant Kubernetes clusters depends on execution discipline as much as design.

The practical hardening path combines admission-policy enforcement, workload isolation and network policy controls, protocol-aware traffic normalization with rate limiting and malformed-traffic handling, and least-privilege cloud control planes with drift detection and guardrails-as-code. This layered combination reduces both exploitability and attacker dwell time by forcing failures across multiple independent control layers.

Operational confidence should be measured, not assumed: track the mean time to detect and remediate configuration drift, along with detection precision under peak traffic and adversarial packet patterns, then use those results to tune preventive policy, detection fidelity, and response runbooks on a fixed review cadence.
