May 08, 2026
Kubernetes is a container orchestration platform originally developed at Google that enables DevOps teams to deploy and manage containerized software components at scale. It delivers a number of key benefits, such as:
- Autoscaling of capacity based on load
- Self-healing, a related capability that automatically replaces failed instances (Pods/containers) when an issue is detected
- Simple version management, deployments, rollbacks, etc.
- Portability of software components through containerization and Docker images
Why Kubernetes?
Kubernetes solves the problem of managing hundreds or thousands of containers in production. It also decouples the software from the underlying hardware, allowing workloads to move easily between nodes.
In this post, we’ll explore the basics of how Kubernetes is built, how to get it up and running on bare-metal Linux servers, and some common software deployment scenarios. The easiest way to deploy a Kubernetes cluster is, of course, to use a public cloud provider such as AWS, GCP, or Azure, but effectively managing those cloud deployments still requires a basic understanding of how Kubernetes works.
Cluster Architecture Overview
Control Plane Components
| Component | Description |
|---|---|
| API Server | Central management point; all kubectl commands go through here. Validates and processes REST requests. |
| etcd | Distributed key-value store; single source of truth for all cluster state. |
| Scheduler | Assigns new Pods to Nodes based on resource availability and constraints. |
| Controller Manager | Runs controllers (Node, Deployment, ReplicaSet, etc.) that reconcile desired vs actual state. |
| Cloud Controller Manager | Integrates with cloud provider APIs for LoadBalancers, storage, etc. |
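You can see most of these components on a running cluster by listing the Pods in the kube-system namespace; a quick sketch (Pod names and layout vary by distribution — kubeadm runs them as static Pods, Minikube bundles them differently):
# Control-plane components typically run in the kube-system namespace
kubectl get pods -n kube-system -o wide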
Installation
Tools Overview
The sections below cover three tools: kubectl (the CLI client for the API server), Minikube (a local single-node cluster for development), and kubeadm (for bootstrapping production clusters).
Installing kubectl
Listing 1: Install kubectl on Linux
# Download latest stable release
curl -LO "https://dl.k8s.io/release/$(curl -L -s \
https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
# Install binary
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
# Verify installation
kubectl version --client
Local Development: Minikube
Listing 2: Install and start Minikube
# Install Minikube
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
# Start cluster (uses Docker driver by default)
minikube start --driver=docker --cpus=2 --memory=4g
# Check cluster status
minikube status
kubectl get nodes
# Enable useful addons
minikube addons enable ingress
minikube addons enable metrics-server
minikube addons enable dashboard
# Open dashboard in browser
minikube dashboard
Production Cluster: kubeadm
Listing 3: Bootstrap a production cluster with kubeadm
# --- On ALL nodes ---
# kubeadm requires swap to be disabled (comment out swap in /etc/fstab to persist)
sudo swapoff -a
# Install and enable the container runtime
sudo apt-get update && sudo apt-get install -y containerd
sudo systemctl enable containerd && sudo systemctl start containerd
# Add the Kubernetes apt repository
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | \
  sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | \
  sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
# --- On CONTROL PLANE node ONLY ---
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f \
https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
# --- On WORKER nodes ---
sudo kubeadm join <control-plane-ip>:6443 \
--token <token> \
--discovery-token-ca-cert-hash sha256:<hash>
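If you no longer have the join command that kubeadm init printed, it can be regenerated on the control plane node:
# Prints a fresh "kubeadm join ..." line with a new token and the CA cert hash
sudo kubeadm token create --print-join-command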
Kubernetes Objects: Visual Map
Workload Objects
Pod — The Atomic Unit
A Pod is the smallest deployable unit in Kubernetes. It encapsulates one or more containers that share the same network namespace and storage volumes.
Listing 4: Minimal Pod YAML
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
  namespace: default
  labels:
    app: my-app
spec:
  containers:
  - name: app
    image: nginx:1.25
    ports:
    - containerPort: 80
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
    readinessProbe:
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
When to Use Pods Directly. Rarely — Pods are ephemeral. Use them directly only for quick debugging or one-off commands. In production, always use a higher-level controller (Deployment, StatefulSet, etc.) that manages Pods automatically.
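For example, a throwaway debugging Pod can be started straight from the command line; the name and image here are only placeholders:
# --rm deletes the Pod when the shell exits; --restart=Never prevents a controller from recreating it
kubectl run debug-shell --rm -it --image=busybox:1.36 --restart=Never -- sh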
Deployment — Stateless Applications
A Deployment manages a ReplicaSet, which manages Pods. It enables declarative updates, rollbacks, and scaling.
Listing 5: Deployment YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web
        image: myapp:2.1
        ports:
        - containerPort: 8080
        env:
        - name: DB_HOST
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: db_host
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: password
Key Deployment Commands
kubectl apply -f deployment.yaml
kubectl rollout status deploy/web-app
kubectl rollout history deploy/web-app
kubectl rollout undo deploy/web-app
kubectl scale deploy/web-app --replicas=5
kubectl set image deploy/web-app web=myapp:2.2
Deployment Use Cases
- Web frontends and REST APIs: stateless services needing easy scaling
- Microservices: each service is an independent Deployment
- Worker processes: background tasks reading from a queue
- Any application that can be replicated without coordination between instances
StatefulSet — Stateful Applications
StatefulSets are designed for applications requiring stable network identities, persistent storage, and ordered deployment.
Listing 6: StatefulSet for a database cluster
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: "mysql"
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8.0
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: root-password
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
Stateful Use Cases
- Databases: MySQL, PostgreSQL, MongoDB, Cassandra
- Message brokers: Kafka, RabbitMQ (need stable IDs for cluster membership)
- Distributed caches: Redis Cluster, Elasticsearch
- Any app requiring: stable hostnames, per-pod storage, ordered startup/shutdown
DaemonSet
A DaemonSet ensures one copy of a Pod runs on every Node in the cluster.
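There is no DaemonSet listing above, so here is a minimal sketch (the name, image, and mounted path are illustrative) that runs a log agent on every node by mounting /var/log from each host:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-log-agent        # hypothetical name
spec:
  selector:
    matchLabels:
      app: node-log-agent
  template:
    metadata:
      labels:
        app: node-log-agent
    spec:
      tolerations:            # also schedule onto control-plane nodes, if desired
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - name: agent
        image: fluent/fluentd:v1.16-1   # illustrative image/tag
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log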
DaemonSet Use Cases
- Log collection: Fluentd, Filebeat on every node
- Monitoring agents: Prometheus Node Exporter, Datadog agent
- Network plugins: CNI plugins (Calico, Flannel)
- Security scanners: Falco intrusion detection
Job and CronJob
Listing 7: CronJob for periodic database backups
apiVersion: batch/v1
kind: CronJob
metadata:
  name: db-backup
spec:
  schedule: "0 2 * * *"          # every day at 02:00
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: postgres:15
            command: ["pg_dump", "-h", "$(DB_HOST)", "-U", "admin", "mydb"]
            env:
            # DB_HOST must be defined as an env var for the $(DB_HOST) substitution above to work
            - name: DB_HOST
              valueFrom:
                configMapKeyRef:
                  name: app-config
                  key: db_host
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: password
Job / CronJob Use Cases
- Jobs: Database migrations, batch data processing, ML model training (a minimal Job sketch follows below)
- CronJobs: Scheduled backups, cache warming, periodic cleanup, email notifications
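Since Listing 7 shows only a CronJob, here is a minimal one-shot Job sketch; the name, image, and command are illustrative:
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  backoffLimit: 3                  # retry a failed Pod up to 3 times
  ttlSecondsAfterFinished: 3600    # clean up the Job an hour after completion
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: migrate
        image: myapp-migrations:2.1           # hypothetical image
        command: ["./migrate", "--target=latest"]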
Networking Objects
Service — Pod Discovery and Load Balancing
| Type | Typical Use | Description |
|---|---|---|
| ClusterIP | Internal API calls | Exposes service on cluster-internal IP. Default type. |
| NodePort | Dev/testing | Exposes service on each Node IP at a static port (30000-32767). |
| LoadBalancer | Cloud production | Provisions an external cloud load balancer. |
| ExternalName | DNS alias | Maps service to external DNS name, no proxying. |
Listing 8: ClusterIP and LoadBalancer Service examples
# ClusterIP (internal service communication)
apiVersion: v1
kind: Service
metadata:
  name: backend-svc
spec:
  type: ClusterIP
  selector:
    app: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
---
# LoadBalancer (internet-facing)
apiVersion: v1
kind: Service
metadata:
  name: frontend-lb
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
Ingress — HTTP Routing
Listing 9: Ingress with TLS and multiple paths
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - app.example.com
    secretName: app-tls-secret
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-svc
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 80
NetworkPolicy
Listing 10: Allow only frontend to reach backend on port 8080
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: database
    ports:
    - protocol: TCP
      port: 5432
Configuration & Secrets
ConfigMap
Listing 11: ConfigMap creation and usage
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DB_HOST: "mysql-service"
  DB_PORT: "3306"
  LOG_LEVEL: "info"
  app.conf: |
    server.port=8080
    cache.ttl=300
    feature.newui=true
---
# Pod spec fragment consuming the ConfigMap as env vars and as a mounted volume
spec:
  containers:
  - name: app
    image: myapp:1.0
    envFrom:
    - configMapRef:
        name: app-config
    volumeMounts:
    - name: config-vol
      mountPath: /etc/config
  volumes:
  - name: config-vol
    configMap:
      name: app-config
Secret
Security Warning. Secrets are base64-encoded by default, not encrypted. Enable EncryptionConfiguration for encryption at rest, and use tools such as Sealed Secrets, Vault, or the External Secrets Operator for production security.
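As a rough sketch of what encryption at rest involves, the API server can be started with --encryption-provider-config pointing at a file like the following (the key material is a placeholder; review the Kubernetes documentation before enabling this in production):
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources: ["secrets"]
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <base64-encoded 32-byte key>
  - identity: {}        # fallback so existing unencrypted Secrets remain readable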
Listing 12: Creating and using Secrets
# Create the Secret imperatively (keeps credentials out of version control)
kubectl create secret generic db-secret \
  --from-literal=username=admin \
  --from-literal=password='Sup3rS3cr3t!'
# Or declaratively with stringData (plain text; Kubernetes base64-encodes it on write)
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
stringData:
  username: admin
  password: "Sup3rS3cr3t!"
---
# Pod spec fragment consuming the Secret as environment variables
spec:
  containers:
  - name: app
    env:
    - name: DB_USER
      valueFrom:
        secretKeyRef:
          name: db-secret
          key: username
    - name: DB_PASS
      valueFrom:
        secretKeyRef:
          name: db-secret
          key: password
Horizontal Pod Autoscaler (HPA)
Listing 13: HPA configuration
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: AverageValue
        averageValue: 500Mi
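The CPU portion of this can also be created imperatively for quick experiments; note this form cannot express the memory target from the manifest above:
kubectl autoscale deployment web-app --min=2 --max=20 --cpu-percent=70
kubectl get hpa -w    # watch current vs. target utilization as load changes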
Storage Objects
Listing 14: StorageClass, PVC, and Pod volume usage
# StorageClass (admin configures once)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp3
reclaimPolicy: Delete
allowVolumeExpansion: true
---
# PVC (developer requests storage)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 20Gi
---
# Pod uses the PVC
spec:
  containers:
  - name: postgres
    image: postgres:15
    volumeMounts:
    - name: db-data
      mountPath: /var/lib/postgresql/data
  volumes:
  - name: db-data
    persistentVolumeClaim:
      claimName: database-pvc
| Access Mode | Short Name | Meaning |
|---|---|---|
| ReadWriteOnce | RWO | Mounted read-write by one Node |
| ReadOnlyMany | ROX | Mounted read-only by many Nodes |
| ReadWriteMany | RWX | Mounted read-write by many Nodes (NFS, etc.) |
| ReadWriteOncePod | RWOP | Mounted read-write by exactly one Pod |
RBAC — Access Control
Listing 15: Complete RBAC setup
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app-sa
  namespace: production
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: production
rules:
- apiGroups: [""]
  resources: ["pods", "services", "configmaps"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-app-binding
  namespace: production
subjects:
- kind: ServiceAccount
  name: my-app-sa
  namespace: production
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
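To sanity-check the binding, kubectl auth can-i can impersonate the ServiceAccount (names taken from Listing 15):
kubectl auth can-i list pods -n production \
  --as=system:serviceaccount:production:my-app-sa       # expected: yes
kubectl auth can-i delete deployments -n production \
  --as=system:serviceaccount:production:my-app-sa       # expected: no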
RBAC Use Cases
- CI/CD pipelines: Grant only deployment-update permissions
- Multi-tenant clusters: Restrict each team to its own namespace
- Operators/controllers: Allow an application to list/watch its own resources
- Auditing: Limit developer access to read-only in the production namespace
Namespaces & Cluster Objects
Listing 16: ResourceQuota and LimitRange
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: development
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
---
apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits
  namespace: development
spec:
  limits:
  - type: Container
    default:
      cpu: 500m
      memory: 256Mi
    defaultRequest:
      cpu: 100m
      memory: 128Mi
    max:
      cpu: "2"
      memory: 2Gi
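Once applied, current consumption against the quota and the default container limits can be inspected with:
kubectl describe resourcequota dev-quota -n development     # used vs. hard limits
kubectl describe limitrange container-limits -n development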
How Objects Work Together: Full Application Stack
Object Interaction Summary
- User requests reach the Ingress Controller, which routes HTTP traffic to the correct Service
- Services discover Pods via label selectors and load-balance requests across replicas
- Deployments manage stateless frontend/backend Pods with rolling updates enabled
- ConfigMap/Secret inject environment-specific configuration and credentials into Pods
- HPA automatically scales Deployments based on CPU/memory usage metrics
- StatefulSet manages the database Pod with stable identity and dedicated storage
- PVC/PV provides durable, cloud-backed storage that outlives individual Pods
- CronJob runs scheduled backup jobs targeting the database StatefulSet
- ServiceAccount + RBAC grants the backend app least-privilege access to the Kubernetes API
Scenario Reference: Which Object to Use
| Scenario | Object(s) | Reason |
|---|---|---|
| Stateless web app / API | Deployment + Service | Easy scaling, rolling updates, no persistent state |
| MySQL / PostgreSQL cluster | StatefulSet + PVC + Headless Service | Stable network IDs, per-pod storage, ordered start |
| Agent on every Node | DaemonSet | Exactly one Pod per node |
| Periodic batch job | CronJob | Scheduled execution |
| One-time migration | Job | Runs to completion, retries on failure |
| External HTTP routing | Ingress | L7 routing, TLS termination, host/path rules |
| Inter-service communication | ClusterIP Service | Stable virtual IP for internal service discovery |
| Expose dev app to LAN | NodePort Service | Access via NodeIP:Port without cloud LB |
| Cloud production traffic | LoadBalancer Service | Provisions cloud LB with external IP |
| App configuration | ConfigMap | Decouple non-sensitive config from image |
| Passwords, API keys | Secret | Store sensitive data, inject via env or volume |
| Auto scale under load | HPA | Scale replicas by CPU/memory/custom metrics |
| Durable storage | PVC + StorageClass | Persistent volumes that outlive Pods |
| Multi-team isolation | Namespace + ResourceQuota | Logical boundaries with enforced resource limits |
| App permissions control | ServiceAccount + RBAC | Least-privilege access to Kubernetes API |
| Micro-segmentation | NetworkPolicy | Whitelist pod-to-pod traffic rules |
Object Quick Reference Cheat Sheet

Essential kubectl Commands
Listing 17: Essential kubectl commands reference
# Cluster Info
kubectl cluster-info
kubectl get nodes -o wide
kubectl top nodes
# Namespaces
kubectl get namespaces
kubectl create namespace staging
kubectl config set-context --current --namespace=production
# Deploying & Managing
kubectl apply -f manifest.yaml
kubectl delete -f manifest.yaml
kubectl get all -n production
kubectl describe pod my-pod
kubectl logs my-pod -c app --follow
kubectl exec -it my-pod -- /bin/bash
kubectl port-forward svc/my-svc 8080:80
# Scaling & Updates
kubectl scale deployment web-app --replicas=5
kubectl set image deployment/web-app app=myapp:2.0
kubectl rollout status deployment/web-app
kubectl rollout undo deployment/web-app
kubectl rollout history deployment/web-app
# Debugging
kubectl get events --sort-by=.metadata.creationTimestamp
kubectl describe node worker-1
kubectl top pods --containers -n production
kubectl get pod my-pod -o yaml
kubectl explain deployment.spec
# Storage
kubectl get pv,pvc -n production
kubectl describe pvc database-pvc
# RBAC
kubectl auth can-i create pods \
--as=system:serviceaccount:prod:my-sa
kubectl get rolebindings -n production
Pro Tips
- Use kubectl apply over kubectl create — it is idempotent and supports updates
- Add -o wide to most get commands for extra detail (node placement, IPs)
- Use kubectl get events -w to watch events in real time during deployments
- Set up shell aliases: alias k=kubectl and enable tab autocompletion
- Use kubectl diff -f manifest.yaml to preview changes before applying
- kubectl explain is your in-terminal API reference — no browser needed
Key Takeaway
Kubernetes objects are declarative: describe desired state, and K8s reconciles it. Start with Namespaces, Deployments, and Services, then add ConfigMaps, Secrets, HPA, Ingress, and RBAC as your app matures.
Hands-On Exercise: hello-world on Kubernetes
About this Exercise: Step-by-step walkthrough of the full Kubernetes application lifecycle using the official docker.io/hello-world image. You will create a Deployment, expose it as a Service, observe self-healing, scale up, and explore useful inspection commands.
Prerequisites: Minikube installed and running.
minikube start # start local single-node cluster
kubectl cluster-info # verify kubectl is connected
kubectl get nodes # should show 1 node Ready
Step 1 — Create a Deployment with 3 Pods
Listing 18: hello-world-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello
        image: hello-world
        imagePullPolicy: IfNotPresent
kubectl apply -f hello-world-deployment.yaml
kubectl get deployments # NAME: hello-world READY: 3/3
kubectl get pods -l app=hello-world # lists the 3 pods
hello-world exits immediately: The hello-world container prints its message and exits with code 0. Because the Deployment's restart policy is Always, Kubernetes restarts it continuously, so the Pods cycle through Running, Completed, and CrashLoopBackOff. This is expected and demonstrates the Kubernetes restart policy in action. For a long-running alternative, replace the image with nginx:alpine.
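If you want to try that long-running alternative without editing the manifest, one ad-hoc way (fine for this exercise, not a production practice) is:
kubectl set image deployment/hello-world hello=nginx:alpine
kubectl rollout status deployment/hello-world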
Step 2 — Verify Pod State and Read Logs
kubectl get pods -l app=hello-world -w # watch status changes live (Ctrl+C to stop)
kubectl describe pod <pod-name> # full event timeline and state
kubectl logs <pod-name> # see the "Hello from Docker!" output
kubectl logs <pod-name> --previous # logs from the previous (crashed) container
Reading Pod Output. kubectl logs shows stdout of the container. For hello-world you will see the familiar Docker welcome message. The --previous flag is essential when debugging pods that crash on startup — it lets you read logs before the container was restarted.
Step 3 — Expose the Deployment as a Service
Listing 19: Expose via NodePort (reachable from your machine)
kubectl expose deployment hello-world \
--type=NodePort \
--port=80 \
--name=hello-world-svc
kubectl get svc hello-world-svc # note the NodePort (30000-32767)
minikube service hello-world-svc --url # get the full URL to open in browser
Listing 20: Inspect the Service descriptor
kubectl describe svc hello-world-svc # Endpoints, Selector, NodePort details
kubectl get endpoints hello-world-svc # IPs of the backing pods
Step 4 — Self-Healing: Delete a Pod
kubectl get pods -l app=hello-world # note any pod name
kubectl delete pod <pod-name> # delete it manually
kubectl get pods -l app=hello-world -w # watch: replacement pod appears in seconds
Why does the Pod come back? The Deployment owns a ReplicaSet that constantly reconciles the desired state (3 replicas) with the actual state. When you delete a pod, the ReplicaSet controller detects the deficit and immediately schedules a new one. This is the core of Kubernetes self-healing.
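You can watch the controller doing this by looking at the ReplicaSet the Deployment owns (the ReplicaSet name will carry a generated suffix in your cluster):
kubectl get replicaset -l app=hello-world     # DESIRED / CURRENT / READY counts converge back to 3
kubectl describe deployment hello-world | grep -i replicas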
Step 5 — Scale Up to 4 Pods
# Imperative --- fast, good for ad-hoc changes
kubectl scale deployment hello-world --replicas=4
# Declarative --- preferred in production (edit YAML, then apply)
# Edit hello-world-deployment.yaml: replicas: 3 -> replicas: 4
kubectl apply -f hello-world-deployment.yaml
kubectl get pods -l app=hello-world # should show 4 pods
kubectl get deployment hello-world # DESIRED: 4 READY: 4
Imperative vs. Declarative. Use imperative commands (scale, expose) for quick experiments. Use declarative YAML files (apply -f) in production—they are version-controlled, reviewable, and idempotent.
Step 6 — Rollout History and Undo
# Check rollout status and history
kubectl rollout status deployment/hello-world
kubectl rollout history deployment/hello-world
# Trigger a new rollout by updating the image
kubectl set image deployment/hello-world hello=hello-world:linux
# Roll back to the previous revision if something goes wrong
kubectl rollout undo deployment/hello-world
kubectl rollout history deployment/hello-world # revision count increases
How Rollouts work. Each kubectl apply or kubectl set image command that changes the Pod template creates a new ReplicaSet revision. Kubernetes performs a rolling update: it spins up new pods before terminating old ones, ensuring zero downtime. kubectl rollout undo atomically reverts to the previous ReplicaSet, making rollbacks instant and safe.
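To see the revision mechanics for yourself (a quick check, assuming the image update above has been applied):
kubectl get rs -l app=hello-world                              # old ReplicaSet scaled to 0, new one owns the Pods
kubectl rollout history deployment/hello-world --revision=2    # details of a specific revision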
Step 7 — Clean Up
kubectl delete deployment hello-world
kubectl delete svc hello-world-svc
kubectl get all # verify: only default kubernetes svc remains
Full Lifecycle Cheatsheet
| Command | What it does |
|---|---|
| kubectl apply -f deploy.yaml | Create / update from YAML |
| kubectl get pods -l app=hello-world -w | Watch pod reconciliation |
| kubectl logs <pod> --previous | Debug crashed containers |
| kubectl expose deployment ... | Create a Service |
| kubectl delete pod <pod> | Trigger self-healing |
| kubectl scale deployment ... --replicas | Adjust capacity |
| kubectl rollout undo deployment/... | Instant rollback |
| kubectl delete deployment ... | Teardown |
Wrapping Up
This Kubernetes primer should help anyone looking to understand how a Kubernetes cluster works. Whether you run your own cluster on bare metal (as we do at Oshyn for our own software) or manage one on a public cloud provider (as we do for our customers), this tutorial should give you the foundation needed to manage a Kubernetes cluster effectively.