Kubernetes Anti-Patterns – What NOT to Do in Kubernetes
Learn how to avoid the most common Kubernetes mistakes that lead to inefficiencies, security risks, and operational headaches
👋 Hey Tech Enthusiasts,
Kubernetes is an incredibly powerful tool, but it’s also easy to misuse. Many teams unknowingly follow anti-patterns that lead to poor performance, security vulnerabilities, and operational nightmares.
With over 5 years of hands-on experience working with Kubernetes across BFSI, healthcare, and other industries, I have seen teams fall into the same common pitfalls. Misconfigurations, security oversights, and inefficient deployment strategies can turn Kubernetes from a powerful tool into a maintenance nightmare.
In this edition, I will highlight 10 Kubernetes anti-patterns that can hurt your cluster’s performance, security, and scalability. Let’s dive in! 👇
📌 Running Everything as a Single Monolithic Pod
🔴 The Problem:
Deploying an entire monolithic application inside a single pod kills scalability and resilience.
apiVersion: v1
kind: Pod
metadata:
  name: monolithic-app
spec:
  containers:
  - name: app
    image: my-monolithic-app:latest
💡 The Fix:
Break down the application into microservices and use Deployments instead of running everything in a single Pod.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
      - name: user-service
        image: user-service:1.0
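Once the monolith is split into Deployments, each microservice typically gets its own Service so other components can reach it by a stable DNS name. A minimal sketch (the port numbers are assumptions for illustration):

```yaml
# Exposes the user-service Deployment inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  selector:
    app: user-service   # must match the Deployment's pod labels
  ports:
  - port: 80            # port other services call
    targetPort: 8080    # port the container actually listens on (assumed)
```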
📌 Not Using Resource Requests and Limits
🔴 The Problem:
Without CPU and memory requests and limits, a single pod can consume an entire node's resources, starving neighboring workloads and triggering OOM kills.
containers:
- name: my-app
  image: my-app:latest
💡 The Fix:
Always define requests and limits for CPU and memory.
containers:
- name: my-app
  image: my-app:latest
  resources:
    requests:
      memory: "128Mi"
      cpu: "250m"
    limits:
      memory: "512Mi"
      cpu: "500m"
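Per-container settings still rely on every developer remembering them. A LimitRange can apply namespace-wide defaults to any container that omits requests or limits; a sketch with illustrative values:

```yaml
# Applied per namespace; containers without explicit values get these defaults.
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
spec:
  limits:
  - type: Container
    defaultRequest:      # used when a container sets no requests
      memory: "128Mi"
      cpu: "250m"
    default:             # used when a container sets no limits
      memory: "512Mi"
      cpu: "500m"
```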
📌 Pulling Images from an External Repository Without ImagePullSecrets
🔴 The Problem:
If you are using a private container registry, Kubernetes won’t be able to pull images without authentication, leading to image pull errors.
💡 The Fix:
Create an ImagePullSecret and reference it in your deployment.
Step 1: Create an ImagePullSecret
kubectl create secret docker-registry my-registry-secret \
  --docker-server=<REGISTRY_URL> \
  --docker-username=<USERNAME> \
  --docker-password=<PASSWORD> \
  --docker-email=<EMAIL>
Step 2: Use the Secret in Your Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-private-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-private-app
  template:
    metadata:
      labels:
        app: my-private-app
    spec:
      imagePullSecrets:
      - name: my-registry-secret
      containers:
      - name: my-private-app
        image: my-private-repo.com/my-app:v1.2.3
📌 Hardcoding Configurations in Pod Definitions
🔴 The Problem:
Embedding environment variables directly in Pod manifests makes updates cumbersome.
env:
- name: DB_HOST
  value: "mysql.default.svc.cluster.local"
- name: API_KEY
  value: "my-secret-key"
💡 The Fix:
Use ConfigMaps for non-sensitive data and Secrets for sensitive data.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DB_HOST: "mysql.default.svc.cluster.local"
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  API_KEY: bXktc2VjcmV0LWtleQ== # Base64 encoded
---
containers:
- name: my-app
  image: my-app:latest
  envFrom:
  - configMapRef:
      name: app-config
  - secretRef:
      name: app-secrets
We can take this a step further and move all secrets to an external secrets provider such as HashiCorp Vault or AWS Secrets Manager. More on this in future posts.
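As a side note, hand-encoding base64 values is error-prone. Secrets also accept a stringData field, where you write plain text and the API server encodes it for you on write:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
stringData:
  API_KEY: my-secret-key   # stored base64-encoded by the API server
```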
📌 Using the ‘latest’ Tag for Images
🔴 The Problem:
Using the mutable latest tag makes deployments non-reproducible: the same manifest can pull different image versions over time, and rolling back becomes guesswork.
containers:
- name: my-app
  image: my-app:latest
💡 The Fix:
Tag images explicitly (e.g., my-app:v1.2.3) and use proper CI/CD pipelines.
containers:
- name: my-app
  image: my-app:v1.2.3
📌 Deploying Applications with Root Privileges
🔴 The Problem:
Containers run as root by default, and privileged mode goes further, handing the container near-complete access to the host:
securityContext:
  privileged: true
💡 The Fix:
Use a non-root user in the Dockerfile and Kubernetes security policies.
spec:
  securityContext:          # pod-level: run as a non-root user
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  containers:
  - name: my-app
    image: my-app:latest
    securityContext:        # container-level hardening
      readOnlyRootFilesystem: true
      allowPrivilegeEscalation: false
📌 Ignoring Liveness & Readiness Probes
🔴 The Problem:
Without health checks, Kubernetes assumes a container is healthy as long as its process is running: a hung application is never restarted, and broken pods keep receiving traffic.
💡 The Fix:
Use livenessProbe and readinessProbe to ensure app availability.
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 3
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 3
  periodSeconds: 5
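For applications with slow startup, a startupProbe keeps the liveness probe from killing the container while it is still booting; the thresholds below are illustrative:

```yaml
startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  failureThreshold: 30   # allows up to 30 * 10s = 300s to start
  periodSeconds: 10
```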
📌 Poorly Configured RBAC Permissions
🔴 The Problem:
Granting cluster-admin access to all users is a security disaster.
💡 The Fix:
Use Role-Based Access Control (RBAC) with the least privilege principle.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-only
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
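A Role does nothing until it is bound to a subject. A RoleBinding attaching it to a user (my-user is a placeholder):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-only-binding
subjects:
- kind: User
  name: my-user                  # placeholder subject
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: read-only
  apiGroup: rbac.authorization.k8s.io
```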
📌 Not Implementing Proper Logging & Monitoring
🔴 The Problem:
Without centralized logging and monitoring, debugging becomes guesswork: container logs vanish when pods are rescheduled, and there is no visibility into cluster health.
💡 The Fix:
Use Prometheus and Grafana for metrics, and Fluentd with the ELK stack (Elasticsearch, Logstash, Kibana) for log aggregation.
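For example, when Prometheus is deployed with the common kubernetes_sd_configs-based scrape configuration, pods can opt in to metrics scraping via annotations (this is a convention of that setup, not a built-in Kubernetes feature):

```yaml
metadata:
  annotations:
    prometheus.io/scrape: "true"    # opt this pod in to scraping
    prometheus.io/port: "8080"      # port exposing metrics
    prometheus.io/path: "/metrics"  # metrics endpoint path
```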
📌 Manually Scaling Applications Instead of Auto-Scaling
🔴 The Problem:
Manually scaling pods cannot keep up with traffic spikes and wastes resources during quiet periods.
💡 The Fix:
Use Horizontal Pod Autoscaler (HPA) for automatic scaling.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
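Note that the CPU-utilization metric only works if the cluster runs the metrics-server and the target pods declare CPU requests, since utilization is computed against requests. Flapping can also be dampened with the optional behavior field under spec (values illustrative):

```yaml
behavior:
  scaleDown:
    stabilizationWindowSeconds: 300  # wait 5 minutes before scaling down
```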
These are just some of the common Kubernetes anti-patterns that teams fall into. By avoiding these pitfalls and following best practices, you can build scalable, resilient, and secure Kubernetes applications.
This is not an exhaustive list; we will dive deeper into the remaining points in the upcoming newsletter editions.
Complete YAML file incorporating all best practices:
# 1. ConfigMap for Application Configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DB_HOST: "mysql.default.svc.cluster.local"
  APP_MODE: "production"
---
# 2. Secret for Sensitive Data
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  API_KEY: bXktc2VjcmV0LWtleQ== # Base64 encoded
---
# 3. ImagePullSecret for Private Container Registry
apiVersion: v1
kind: Secret
metadata:
  name: my-registry-secret
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <BASE64_ENCODED_DOCKER_CONFIG>
---
# 4. Deployment for a Microservice
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      imagePullSecrets:
      - name: my-registry-secret
      securityContext:          # pod-level: non-root user and group
        runAsUser: 1000
        runAsGroup: 3000
        fsGroup: 2000
      containers:
      - name: my-app
        image: my-private-repo.com/my-app:v1.2.3
        ports:
        - containerPort: 8080
        envFrom:
        - configMapRef:
            name: app-config
        - secretRef:
            name: app-secrets
        resources:
          requests:
            memory: "128Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        securityContext:        # container-level hardening
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 3
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 3
          periodSeconds: 5
---
# 5. Service for Exposing the Application
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: ClusterIP
---
# 6. Horizontal Pod Autoscaler (HPA)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
---
# 7. RBAC: Least Privilege Role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-only
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list"]
---
# 8. RBAC: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-only-binding
subjects:
- kind: User
  name: my-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: read-only
  apiGroup: rbac.authorization.k8s.io
What’s Included in This Master YAML?
- ConfigMap & Secrets for storing configuration securely
- ImagePullSecret for pulling images from private registries
- Service for internal communication (ClusterIP)
- Deployment with best practices:
  - Resource requests & limits to prevent resource exhaustion
  - Security context for non-root execution
  - Liveness & readiness probes for health checks
- HPA (Horizontal Pod Autoscaler) for automatic scaling
- RBAC (Role & RoleBinding) to enforce least privilege
🔄 If you found this helpful, share this newsletter with your colleagues! 🚀