Kubernetes Security Hardening: CIS Benchmark Implementation Guide
Kubernetes clusters running default configurations are not production-ready from a security standpoint. The default install prioritizes functionality and ease of use over security posture. The CIS Kubernetes Benchmark — the industry’s standard hardening guide — identifies over 100 controls that need to be deliberately configured.
This guide covers the most critical Kubernetes security hardening steps: running kube-bench to establish a baseline, implementing admission controls with OPA Gatekeeper or Kyverno, applying Pod Security Standards, enforcing network policies, and hardening the API server and etcd.
CIS Kubernetes Benchmark Overview
The CIS Kubernetes Benchmark (v1.8 for Kubernetes 1.28+) organizes controls into sections:
- Control Plane Components — API server, controller manager, scheduler configuration
- etcd — data encryption, TLS, access controls
- Control Plane Configuration — RBAC, Pod Security Admission
- Worker Nodes — kubelet configuration, file permissions
- Kubernetes Policies — network policies, Pod Security Standards, RBAC policies
Controls are classified as:
- Level 1 — basic security hygiene, minimal operational impact
- Level 2 — additional hardening, may require configuration changes or have some operational impact
For most production clusters, Level 1 compliance is the minimum bar. Level 2 is appropriate for clusters handling regulated data (HIPAA, PCI DSS, FedRAMP).
Step 1: Run kube-bench to Establish Your Baseline
kube-bench is an open-source tool from Aqua Security that runs the CIS Kubernetes Benchmark tests against your cluster and produces a scored report.
# Run kube-bench as a Job on your cluster
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml
# Wait for completion and view results
kubectl logs -f $(kubectl get pods -l app=kube-bench -o name)
For managed clusters (EKS, GKE, AKS), use the managed version — these have different test profiles since you don’t control the control plane:
# EKS-specific profile
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job-eks.yaml
# GKE-specific profile
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job-gke.yaml
The output categorizes findings as PASS, FAIL, WARN, and INFO. A fresh unmanaged cluster typically scores 40-60% PASS on Level 1 controls. Your target is 90%+ before calling a cluster production-ready for regulated workloads.
Focus on FAILs first. Common first-run failures:
- Anonymous authentication enabled on API server
- Insecure port open on API server
- Profiling enabled on controller manager
- No audit policy configured
- etcd not encrypted at rest
- kubelet read-only port open
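To triage, the PASS/FAIL/WARN counts can be pulled straight from the Job logs. A small helper (the function name `summarize_kube_bench` is illustrative) that works on any kube-bench output piped into it:

```shell
# summarize_kube_bench: count PASS/FAIL/WARN/INFO markers in kube-bench
# output read from stdin, highest count first
summarize_kube_bench() {
  grep -Eo '\[(PASS|FAIL|WARN|INFO)\]' | sort | uniq -c | sort -rn
}

# Against a live cluster, once the Job above has completed:
# kubectl logs job/kube-bench | summarize_kube_bench
```

Re-run after each round of fixes and watch the FAIL count trend toward zero.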
Top 10 CIS Controls to Implement First
Control 1.2.1: Ensure that the --anonymous-auth argument is set to false
With anonymous authentication enabled, requests without client credentials are accepted as the system:anonymous user rather than rejected. This should always be disabled in production.
# kube-apiserver manifest (control plane)
spec:
  containers:
  - command:
    - kube-apiserver
    - --anonymous-auth=false
On managed clusters, this is typically already set. Verify with kube-bench.
Control 1.2.6: Ensure that the --kubelet-certificate-authority argument is set
The API server must verify kubelet certificates to prevent man-in-the-middle attacks against the kubelet API.
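In the kube-apiserver command list this is one flag, typically paired with the client credentials the API server presents to the kubelet (paths shown are kubeadm defaults; adjust to your PKI layout):

```yaml
- kube-apiserver
- --kubelet-certificate-authority=/etc/kubernetes/pki/ca.crt
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
```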
Control 1.2.16: Ensure that the admission control plugin PodSecurity is enabled
Pod Security Admission (PSA) replaced the deprecated PodSecurityPolicy (removed in Kubernetes 1.25) and enforces Pod Security Standards at the admission layer. Note the plugin name passed to --enable-admission-plugins is PodSecurity, not PodSecurityAdmission.
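Beyond per-namespace labels (covered in the Pod Security Standards section below), PSA defaults can be set cluster-wide with an AdmissionConfiguration file passed to the API server via --admission-control-config-file. A sketch, with the exempted namespace as an illustrative choice:

```yaml
# AdmissionConfiguration sketch for cluster-wide Pod Security defaults
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1
    kind: PodSecurityConfiguration
    defaults:
      enforce: "baseline"
      enforce-version: "latest"
      audit: "restricted"
      audit-version: "latest"
      warn: "restricted"
      warn-version: "latest"
    exemptions:
      usernames: []
      runtimeClasses: []
      namespaces: ["kube-system"]
```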
Control 2.1: Ensure etcd is configured with TLS
All etcd communication must be encrypted in transit:
- etcd
- --cert-file=/etc/kubernetes/pki/etcd/server.crt
- --key-file=/etc/kubernetes/pki/etcd/server.key
- --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
- --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
- --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
Control 3.2.1: Ensure that a minimal audit policy is created
Audit logs are your forensic trail for security incidents. Without an audit policy, you’re flying blind:
# /etc/kubernetes/audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Log Secret/ConfigMap access at Metadata level only; Request level
# would record request bodies, writing secret data into the audit log
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets", "configmaps"]
# Log pod exec/attach at RequestResponse level
- level: RequestResponse
  resources:
  - group: ""
    resources: ["pods/exec", "pods/attach", "pods/portforward"]
# Log all other requests at Metadata level
- level: Metadata
  omitStages:
  - RequestReceived
Pod Security Standards
Pod Security Standards (PSS) define three security profiles:
- Privileged — no restrictions (equivalent to no policy)
- Baseline — prevents known privilege escalations (recommended minimum)
- Restricted — heavily restricted, follows current hardening best practices
Apply PSS via namespace labels, enforced by Pod Security Admission (enabled by default since Kubernetes 1.23 and stable since 1.25):
# Set namespace to enforce Baseline, warn and audit on Restricted
kubectl label namespace production \
  pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/warn=restricted \
  pod-security.kubernetes.io/audit=restricted
Restricted profile requirements (these will break many legacy workloads — check before enforcing):
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
  - securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
      runAsNonRoot: true
Migration path: start with warn mode in production to see which workloads would fail, fix them, then move to enforce.
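Kubectl's server-side dry run makes that survey scriptable: it prints a warning listing pods that would violate the new level without changing anything. The helper name below is illustrative:

```shell
# check_pss_violations NAMESPACE: dry-run raising the namespace to
# enforce=restricted and surface kubectl's violation warning, if any
check_pss_violations() {
  kubectl label --dry-run=server --overwrite namespace "$1" \
    pod-security.kubernetes.io/enforce=restricted 2>&1 |
    grep -i 'violate' || echo "no violations in $1"
}

# e.g. check_pss_violations production
```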
Admission Controllers: OPA Gatekeeper and Kyverno
Admission controllers validate or mutate resources before they’re stored in etcd. PSA is one admission controller — but for more complex policies, you need OPA Gatekeeper or Kyverno.
When to use which:
- Kyverno — simpler YAML-native policies, easier to learn, good for 80% of use cases
- OPA Gatekeeper — more powerful, uses Rego (a policy language), better for complex conditional logic
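For a sense of the Rego side, Gatekeeper pairs a ConstraintTemplate (which carries the Rego) with Constraint resources that parameterize it. A sketch modeled on the project's canonical required-labels example:

```yaml
# ConstraintTemplate sketch, modeled on Gatekeeper's required-labels example
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package k8srequiredlabels

      violation[{"msg": msg}] {
        provided := {label | input.review.object.metadata.labels[label]}
        required := {label | label := input.parameters.labels[_]}
        missing := required - provided
        count(missing) > 0
        msg := sprintf("missing required labels: %v", [missing])
      }
```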
Essential Kyverno policies for hardening:
# Block privileged containers
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-containers
spec:
  validationFailureAction: enforce
  rules:
  - name: check-privileged
    match:
      resources:
        kinds: ["Pod"]
    validate:
      message: "Privileged containers are not allowed."
      pattern:
        spec:
          containers:
          - =(securityContext):
              =(privileged): "false"
---
# Require non-root containers
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-non-root
spec:
  validationFailureAction: enforce
  rules:
  - name: check-non-root
    match:
      resources:
        kinds: ["Pod"]
    validate:
      message: "Containers must not run as root."
      pattern:
        spec:
          containers:
          - securityContext:
              runAsNonRoot: true
Network Policies: Default Deny
Kubernetes does not restrict pod-to-pod communication by default. Any pod in any namespace can reach any other pod on any port. This is a flat network model that violates the principle of least privilege.
Implement a default-deny network policy in every namespace, then explicitly allow required communication:
# Default deny all ingress and egress in a namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {} # Applies to all pods
  policyTypes:
  - Ingress
  - Egress
Then explicitly allow what’s needed:
# Allow the frontend to reach the backend on port 8080
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - port: 8080
---
# Allow DNS resolution (required for all pods)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: production
spec:
  podSelector: {}
  policyTypes: # Egress only; if omitted, Ingress is implied and would be denied for all pods
  - Egress
  egress:
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
Note: NetworkPolicy requires a CNI that supports it — Calico, Cilium, or Weave Net. The default kubenet and flannel CNIs don’t enforce NetworkPolicy.
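A quick way to confirm enforcement is a throwaway client pod. The service name, namespace, and pod name below are illustrative, matching the examples above:

```shell
# Smoke-test: once default-deny is applied, this request should fail
kubectl run netpol-test --rm -i --image=busybox:1.36 --restart=Never \
  -n production -- wget -qO- -T 3 http://backend:8080 \
  && echo "reachable" || echo "blocked"
```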
etcd Encryption at Rest
etcd stores all Kubernetes secrets in plaintext by default. Anyone with access to etcd has access to all secrets, tokens, and certificates in your cluster.
Enable encryption at rest with an EncryptionConfiguration:
# /etc/kubernetes/encryption-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  - configmaps # Optional, adds protection for ConfigMaps too
  providers:
  - aescbc: # AES-CBC with PKCS7 padding
      keys:
      - name: key1
        secret: <base64-encoded-32-byte-key>
  - identity: {} # Fallback for reading unencrypted data
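The placeholder key above can be generated with standard tooling; any random 32 bytes, base64-encoded, will do:

```shell
# Produce a base64-encoded random 32-byte AES key for the `secret` field
head -c 32 /dev/urandom | base64
```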
Configure the API server to use this file:
- kube-apiserver
- --encryption-provider-config=/etc/kubernetes/encryption-config.yaml
After enabling, rotate existing secrets to re-encrypt them:
# Force re-encryption of all secrets
kubectl get secrets -A -o json | kubectl replace -f -
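To confirm encryption took effect, read a secret's raw value straight from etcd: an encrypted value begins with the provider prefix k8s:enc:aescbc:v1: instead of readable JSON. The helper name and secret path below are illustrative; certificate paths are kubeadm defaults:

```shell
# check_secret_encrypted ETCD_PATH: print the first bytes of the raw etcd
# value; expect "k8s:enc:aescbc:v1:" once encryption at rest is working
check_secret_encrypted() {
  ETCDCTL_API=3 etcdctl \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    get "$1" --print-value-only | head -c 18
}

# e.g. check_secret_encrypted /registry/secrets/production/my-secret
```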
For managed clusters, use the cloud provider’s managed encryption:
- EKS: Enable envelope encryption with AWS KMS
- GKE: Application-layer secrets encryption with Cloud KMS
- AKS: Customer-managed keys with Azure Key Vault
API Server Hardening Flags
Key API server flags for security:
- kube-apiserver
- --anonymous-auth=false
- --audit-log-path=/var/log/audit.log
- --audit-log-maxage=30
- --audit-log-maxbackup=3
- --audit-log-maxsize=100
- --audit-policy-file=/etc/kubernetes/audit-policy.yaml
- --enable-admission-plugins=NodeRestriction,PodSecurity
- --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- --tls-min-version=VersionTLS12
Security Hardening Checklist
- kube-bench Level 1 compliance at 90%+
- etcd TLS configured
- etcd encryption at rest enabled for secrets
- API server anonymous auth disabled
- Audit logging configured with retention
- Pod Security Standards enforced at Baseline or Restricted
- Default-deny NetworkPolicy in all production namespaces
- OPA Gatekeeper or Kyverno admission policies deployed
- No privileged containers in production
- All containers running as non-root
- RBAC reviewed (no cluster-admin for workloads)
- Container image scanning in CI/CD pipeline
CIS Benchmark Compliance Done Right
Security hardening is not a one-time project — it requires continuous enforcement and regular re-assessment as the cluster changes.
→ K8s Security Hardening service at kubernetes.ae — we run the full CIS benchmark assessment, implement required controls, and configure ongoing policy enforcement with Kyverno or OPA Gatekeeper.