Implement Kubernetes network policies with Calico CNI and OPA Gatekeeper for security enforcement

Advanced 45 min Apr 05, 2026
Ubuntu 24.04 Debian 12 AlmaLinux 9 Rocky Linux 9

Secure your Kubernetes cluster with Calico CNI network policies and OPA Gatekeeper admission control. This tutorial shows you how to implement pod isolation, policy enforcement, and admission validation for production-grade security.

What this solves

Kubernetes network policies and admission controllers provide essential security layers for production clusters. Network policies control traffic flow between pods and namespaces, while admission controllers validate resources before they're created. This tutorial implements Calico CNI for network policy enforcement and OPA Gatekeeper for policy validation, giving you comprehensive security controls over pod communication and resource creation.

Prerequisites

  • Kubernetes cluster with admin access (kubeadm recommended)
  • kubectl configured for cluster access
  • Helm 3 installed on your system
  • Basic understanding of Kubernetes networking concepts
Note: This tutorial assumes you have a working Kubernetes cluster. If you need to set one up, follow our Kubernetes installation guide.

Step-by-step installation

Install Calico CNI with network policy support

Calico provides both networking and network policy capabilities for Kubernetes. Install the Tigera operator, which deploys and manages Calico, to enable network policy enforcement.

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml

Configure Calico installation

Create a custom Calico installation configuration to enable network policy features and optimize for your environment. Save the following as /tmp/calico-custom-resources.yaml:

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
  nodeMetricsPort: 9091
---
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}

Apply Calico configuration

Deploy the Calico configuration and wait for all components to become ready.

kubectl apply -f /tmp/calico-custom-resources.yaml
kubectl wait --for=condition=Ready pods --all -n calico-system --timeout=300s

Verify Calico installation

Check that all Calico pods are running and the API server is accessible.

kubectl get pods -n calico-system
kubectl get nodes -o wide

Install OPA Gatekeeper using Helm

Add the Gatekeeper Helm repository and install it to enable admission control policies.

helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
helm repo update

Deploy Gatekeeper with custom values

Create a values file to configure Gatekeeper with appropriate resource limits and audit settings. Save it as /tmp/gatekeeper-values.yaml:

replicas: 3
revisionHistoryLimit: 10
controllerManager:
  resources:
    limits:
      cpu: 1000m
      memory: 512Mi
    requests:
      cpu: 100m
      memory: 256Mi
audit:
  resources:
    limits:
      cpu: 1000m
      memory: 512Mi
    requests:
      cpu: 100m
      memory: 256Mi
postInstall:
  labelNamespace:
    enabled: true

Install Gatekeeper

Deploy Gatekeeper using Helm with the custom configuration values.

helm install gatekeeper gatekeeper/gatekeeper \
  --namespace gatekeeper-system \
  --create-namespace \
  --values /tmp/gatekeeper-values.yaml

Wait for Gatekeeper deployment

Verify that all Gatekeeper components are running before proceeding with policy configuration.

kubectl wait --for=condition=Ready pods --all -n gatekeeper-system --timeout=300s
kubectl get pods -n gatekeeper-system

Configure network policies for pod isolation

Create test namespaces

Set up separate namespaces to demonstrate network policy isolation between different application tiers.

kubectl create namespace frontend
kubectl create namespace backend
kubectl create namespace database

Deploy test applications

Create sample applications in each namespace to test network policy enforcement. Save the following as /tmp/test-apps.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-app
  namespace: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
        tier: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-app
  namespace: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
        tier: api
    spec:
      containers:
      - name: httpd
        image: httpd:2.4
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: database-app
  namespace: database
spec:
  replicas: 1
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
        tier: data
    spec:
      containers:
      - name: postgres
        image: postgres:16
        env:
        - name: POSTGRES_DB
          value: testdb
        - name: POSTGRES_USER
          value: testuser
        - name: POSTGRES_PASSWORD
          value: testpass123
        ports:
        - containerPort: 5432
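
For brevity, the Postgres credentials above are inlined as plain environment variables. In a real deployment you would store them in a Secret and reference it with secretKeyRef; a minimal sketch (the Secret name db-credentials is illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
  namespace: database
type: Opaque
stringData:
  POSTGRES_PASSWORD: testpass123
---
# In the database Deployment, reference the Secret instead of a literal value:
# env:
# - name: POSTGRES_PASSWORD
#   valueFrom:
#     secretKeyRef:
#       name: db-credentials
#       key: POSTGRES_PASSWORD
```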

Apply test applications

Deploy the test applications and verify they're running in their respective namespaces.

kubectl apply -f /tmp/test-apps.yaml
kubectl get pods -n frontend
kubectl get pods -n backend
kubectl get pods -n database

Create default deny network policy

Implement a default deny-all policy to block traffic between namespaces by default. Save the following as /tmp/default-deny.yaml:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: frontend
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: backend
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: database
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

Apply default deny policies

Deploy the default deny policies to establish a secure baseline for network traffic.

kubectl apply -f /tmp/default-deny.yaml

Create selective allow policies

Define network policies that allow specific traffic flows between application tiers. Because the default-deny policies block egress as well as ingress, each allowed flow needs an egress rule in the source namespace in addition to an ingress rule in the destination namespace. Save the following as /tmp/allow-policies.yaml:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: backend
spec:
  podSelector:
    matchLabels:
      tier: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: frontend
    ports:
    - protocol: TCP
      port: 80
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-egress-to-backend
  namespace: frontend
spec:
  podSelector:
    matchLabels:
      tier: web
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: backend
    ports:
    - protocol: TCP
      port: 80
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-to-database
  namespace: database
spec:
  podSelector:
    matchLabels:
      tier: data
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: backend
    ports:
    - protocol: TCP
      port: 5432
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-egress-to-database
  namespace: backend
spec:
  podSelector:
    matchLabels:
      tier: api
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: database
    ports:
    - protocol: TCP
      port: 5432
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: frontend
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to: []
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: backend
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to: []
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53

Label namespaces for policy targeting

Add labels to namespaces so network policies can reference them in selectors.

kubectl label namespace frontend name=frontend
kubectl label namespace backend name=backend
kubectl label namespace database name=database
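
The manual labels above match what the allow policies select on. On Kubernetes v1.21 and later, every namespace also carries an immutable kubernetes.io/metadata.name label set automatically by the API server, so policies can target namespaces without any manual labeling. For example, the frontend-to-backend rule could select on the built-in label instead:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend-builtin-label
  namespace: backend
spec:
  podSelector:
    matchLabels:
      tier: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    # kubernetes.io/metadata.name is set automatically on every namespace (v1.21+)
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: frontend
    ports:
    - protocol: TCP
      port: 80
```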

Apply allow policies

Deploy the selective allow policies to enable necessary communication between application tiers.

kubectl apply -f /tmp/allow-policies.yaml

Configure OPA Gatekeeper constraints

Create constraint template for required labels

Define a Gatekeeper constraint template to enforce that all pods have required security labels. Save it as /tmp/required-labels-template.yaml:

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels
        
        violation[{"msg": msg}] {
          required := input.parameters.labels
          provided := input.review.object.metadata.labels
          missing := required[_]
          not provided[missing]
          msg := sprintf("Missing required label: %v", [missing])
        }

Apply constraint template

Deploy the constraint template to make it available for creating specific constraints.

kubectl apply -f /tmp/required-labels-template.yaml

Create required labels constraint

Create a constraint that enforces specific labels on all pods for security classification. Save it as /tmp/required-labels-constraint.yaml:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: must-have-security-labels
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    excludedNamespaces: ["kube-system", "gatekeeper-system", "calico-system"]
  parameters:
    labels: ["tier", "app"]
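
Constraints deny requests by default. When rolling a new policy out to an existing cluster, Gatekeeper's enforcementAction field can be set to dryrun, so violations are only recorded in the constraint's status instead of blocking requests. A dry-run variant of the same constraint:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: must-have-security-labels-dryrun
spec:
  enforcementAction: dryrun   # record violations without rejecting requests
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    excludedNamespaces: ["kube-system", "gatekeeper-system", "calico-system"]
  parameters:
    labels: ["tier", "app"]
```

Recorded violations show up under status.violations when you fetch the constraint with kubectl get -o yaml.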

Create constraint template for network policy requirements

Define a template to ensure namespaces declare that network policies are in place. Save it as /tmp/require-networkpolicy-template.yaml:

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequirenetworkpolicy
spec:
  crd:
    spec:
      names:
        kind: K8sRequireNetworkPolicy
      validation:
        openAPIV3Schema:
          type: object
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequirenetworkpolicy
        
        violation[{"msg": msg}] {
          input.review.kind.kind == "Namespace"
          not input.review.object.metadata.labels["network-policy"] == "enabled"
          msg := "Namespace must have network-policy label set to 'enabled'"
        }

Apply constraint and template

Deploy the required-labels constraint and the network policy constraint template.

kubectl apply -f /tmp/required-labels-constraint.yaml
kubectl apply -f /tmp/require-networkpolicy-template.yaml

Create network policy constraint

Enforce that new namespaces must indicate they have network policies configured. Save the constraint as /tmp/require-networkpolicy-constraint.yaml:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequireNetworkPolicy
metadata:
  name: namespace-must-have-network-policy
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
    excludedNamespaces: ["kube-system", "gatekeeper-system", "calico-system", "kube-public", "default"]

Apply network policy constraint

Deploy the constraint to enforce network policy requirements on new namespaces.

kubectl apply -f /tmp/require-networkpolicy-constraint.yaml

Test policy enforcement

Test network policy isolation

Verify that network policies are blocking unauthorized traffic between namespaces.

# Get pod names and IPs for testing
FRONTEND_POD=$(kubectl get pods -n frontend -o jsonpath='{.items[0].metadata.name}')
DATABASE_IP=$(kubectl get pods -n database -o jsonpath='{.items[0].status.podIP}')

Test blocked connection (should fail)

Pod names are not resolvable through cluster DNS, so test against the pod IP directly. No policy allows frontend-to-database traffic, so this request should time out:

kubectl exec -n frontend $FRONTEND_POD -- wget -qO- --timeout=5 http://$DATABASE_IP:5432 || echo "Connection blocked by network policy"

Test allowed DNS resolution

DNS egress is explicitly allowed, so name lookups from the frontend namespace should still succeed (the Debian-based nginx image ships getent rather than nslookup):

kubectl exec -n frontend $FRONTEND_POD -- getent hosts kubernetes.default.svc.cluster.local

Test Gatekeeper constraint validation

Attempt to create resources that violate the configured constraints to verify enforcement.

# Try to create a pod without required labels (should fail)
kubectl run test-pod --image=nginx -n frontend --dry-run=server

Try to create a namespace without network-policy label (should fail)

kubectl create namespace test-namespace --dry-run=server

Create compliant resources

Test creating resources that meet all policy requirements to ensure legitimate workloads can deploy. Save the following as /tmp/compliant-test.yaml:

apiVersion: v1
kind: Namespace
metadata:
  name: test-compliant
  labels:
    network-policy: "enabled"
---
apiVersion: v1
kind: Pod
metadata:
  name: compliant-pod
  namespace: test-compliant
  labels:
    app: test
    tier: web
spec:
  containers:
  - name: nginx
    image: nginx:1.25

Apply compliant resources

Deploy resources that satisfy all constraints to verify they're accepted.

kubectl apply -f /tmp/compliant-test.yaml
kubectl get pods -n test-compliant

Verify your setup

# Check Calico components
kubectl get pods -n calico-system
kubectl get networkpolicies --all-namespaces

Check Gatekeeper components

kubectl get pods -n gatekeeper-system
kubectl get constraints
kubectl get constrainttemplates

Verify network policy enforcement

calicoctl get networkpolicy --all-namespaces

Check constraint violations

kubectl get k8srequiredlabels
kubectl get k8srequirenetworkpolicy
Note: You may need to install calicoctl separately to use the calicoctl commands. Alternatively, use kubectl get networkpolicies to view standard Kubernetes network policies.

Advanced configuration

Configure global network policies

Create cluster-wide policies that apply across all namespaces for baseline security. Caution: the example below selects every workload in the cluster, including system components, so scope it carefully before using it in production. Save it as /tmp/global-policy.yaml:

apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-all-except-dns
spec:
  selector: all()
  types:
  - Egress
  egress:
  - action: Allow
    protocol: UDP
    destination:
      ports:
      - 53
  - action: Allow
    protocol: TCP
    destination:
      ports:
      - 53
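
As written, the policy above applies to every endpoint in the cluster, including system components, which can break the control plane. GlobalNetworkPolicy supports a namespaceSelector, and Calico labels each namespace with projectcalico.org/name, so a safer variant scopes the policy away from system namespaces (selector syntax per Calico's selector language; verify against your Calico version):

```yaml
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-all-except-dns-scoped
spec:
  # Skip system namespaces so cluster components keep working
  namespaceSelector: projectcalico.org/name not in {"kube-system", "calico-system", "tigera-operator", "gatekeeper-system"}
  selector: all()
  types:
  - Egress
  egress:
  - action: Allow
    protocol: UDP
    destination:
      ports:
      - 53
  - action: Allow
    protocol: TCP
    destination:
      ports:
      - 53
```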

Exempt namespaces from admission enforcement

Configure Gatekeeper to bypass admission enforcement for system namespaces while maintaining audit coverage. The processes field controls which Gatekeeper processes an exclusion applies to: listing only "webhook" skips admission checks but preserves audit results, whereas "*" would exclude the namespaces from audit and sync as well. Save it as /tmp/gatekeeper-config.yaml:

apiVersion: config.gatekeeper.sh/v1alpha1
kind: Config
metadata:
  name: config
  namespace: gatekeeper-system
spec:
  match:
    - excludedNamespaces: ["kube-system", "gatekeeper-system"]
      processes: ["webhook"]
  validation:
    traces:
      # Trace admission decisions for a given user and resource kind
      # (replace the user with the account you want to debug).
      - user: "kubernetes-admin"
        kind:
          group: "*"
          version: "*"
          kind: "*"
  readiness:
    statsEnabled: true

Apply advanced configurations

Deploy the advanced policies and configurations for production use.

kubectl apply -f /tmp/global-policy.yaml
kubectl apply -f /tmp/gatekeeper-config.yaml

Common issues

| Symptom | Cause | Fix |
| --- | --- | --- |
| Network policies not enforcing | CNI doesn't support network policies | Verify Calico is installed: kubectl get pods -n calico-system |
| Pods can't resolve DNS | Network policy blocks DNS egress | Add DNS egress rules to network policies |
| Gatekeeper webhook fails | Certificate or connectivity issues | Check webhook status: kubectl get validatingwebhookconfigurations |
| Constraints not enforcing | Template not properly applied | Verify template exists: kubectl get constrainttemplates |
| Legitimate pods rejected | Overly restrictive constraints | Review constraint match criteria and excluded namespaces |
| Network policy connectivity issues | Incorrect label selectors | Verify pod and namespace labels match policy selectors |

Security considerations

Important: Network policies are only effective if your CNI plugin supports them. Verify that Calico is properly installed and configured before relying on network policies for security.

Monitor policy violations

Set up monitoring to track and alert on policy violations for security oversight.

# View recent constraint violations (emitted only if Gatekeeper's emitAdmissionEvents option is enabled)
kubectl get events --field-selector reason=FailedAdmission --all-namespaces

Check Gatekeeper audit results

kubectl logs -n gatekeeper-system -l control-plane=audit-controller

Consider integrating with your monitoring stack to track policy enforcement metrics and create alerts for security violations.
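
If you run Prometheus, both components expose metrics that can feed such alerts: the Calico Installation above set nodeMetricsPort to 9091, and Gatekeeper serves Prometheus metrics on port 8888 by default. A sketch of static scrape jobs (the <node-ip> and <gatekeeper-pod-ip> targets are placeholders; in practice you would use Kubernetes service discovery instead):

```yaml
scrape_configs:
  - job_name: calico-node-metrics
    static_configs:
      - targets: ["<node-ip>:9091"]            # nodeMetricsPort from the Calico Installation
  - job_name: gatekeeper-metrics
    static_configs:
      - targets: ["<gatekeeper-pod-ip>:8888"]  # Gatekeeper's default metrics port
```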
