Deploying Containers with Kubernetes


✅ Chapter 2: Creating and Managing Kubernetes Deployments

🔍 Introduction

In Kubernetes, a Deployment is one of the most essential workload objects for managing the lifecycle of Pods. Whether you're deploying a single web service or an entire microservices architecture, Deployments give you declarative, automated, and fault-tolerant application management.

In this chapter, you’ll learn:

  • What a Deployment is and how it works
  • How to create and update Deployments
  • Rolling updates and rollback
  • Scaling replicas
  • Common commands and YAML patterns

🚀 What is a Deployment in Kubernetes?

A Deployment defines the desired state for a set of Pods and the ReplicaSet managing them. It ensures:

  • A specified number of Pods are running at any given time
  • New versions are rolled out smoothly
  • Old versions are rolled back if necessary

Think of it as the controller that manages ReplicaSets, which in turn manage Pods.


🧱 Anatomy of a Deployment

Let’s examine a basic Deployment definition.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: web
        image: nginx:1.17
        ports:
        - containerPort: 80
```


🧾 Key Sections Explained

| Field | Purpose |
| --- | --- |
| replicas | Number of desired Pods |
| selector | How the Deployment finds the Pods to manage |
| template | Blueprint for new Pods |
| containers | Container specs like image and exposed ports |

🛠️ Creating a Deployment

Save your configuration to a file, e.g., nginx-deployment.yaml, then run:

```bash
kubectl apply -f nginx-deployment.yaml
```

This will create:

  • A Deployment named my-app
  • A ReplicaSet to manage 3 Pods
  • 3 Pods running the Nginx container

📋 Verify the Deployment

```bash
kubectl get deployments
kubectl get pods
kubectl describe deployment my-app
```


🔄 Updating a Deployment

Let’s say you want to upgrade from nginx:1.17 to nginx:1.21.

Option 1: Modify and apply YAML again
Option 2: Use the CLI:

```bash
kubectl set image deployment/my-app web=nginx:1.21
```

Kubernetes will:

  • Create new Pods with the new image
  • Gradually replace old Pods
  • Monitor health throughout the process

Rolling Update Strategy

By default, Deployments use a rolling update strategy.

You can control rollout behavior:

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1
    maxUnavailable: 1
```

| Term | Description |
| --- | --- |
| maxSurge | Max extra Pods allowed during an update |
| maxUnavailable | Max Pods that can be unavailable at once |
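
For context, here is where the strategy block sits in the my-app Deployment from earlier, with the image already updated to nginx:1.21 as above; a minimal sketch that otherwise leaves the manifest unchanged:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: web
        image: nginx:1.21
```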


Rolling Back a Deployment

If something goes wrong:

```bash
kubectl rollout undo deployment my-app
```

To see revision history:

```bash
kubectl rollout history deployment my-app
```


📈 Scaling a Deployment

Scaling increases or decreases the number of Pods.

```bash
kubectl scale deployment my-app --replicas=5
```

In YAML:

```yaml
spec:
  replicas: 5
```

Apply the updated YAML to scale.


🔍 Inspecting and Monitoring Deployments

Check status:

```bash
kubectl rollout status deployment my-app
```

View logs of Pods:

```bash
kubectl logs <pod-name>
```

Open a shell inside a container:

```bash
kubectl exec -it <pod-name> -- /bin/sh
```


🧰 Useful Deployment Commands Cheat Sheet

| Command | Action |
| --- | --- |
| kubectl apply -f file.yaml | Create or update a Deployment |
| kubectl get deployments | List Deployments |
| kubectl describe deployment <name> | Inspect details |
| kubectl set image | Update the container image |
| kubectl rollout undo | Roll back to the previous version |
| kubectl scale | Adjust the replica count |


🧪 Advanced Deployment Features

🔹 Probes: Readiness & Liveness

Probes let Kubernetes detect unhealthy containers and send traffic only to Pods that are ready:

```yaml
livenessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 15
  periodSeconds: 20
```

  • Liveness probe: checks whether the app is still alive; if it fails, the container is restarted
  • Readiness probe: checks whether the app is ready to receive traffic; until it passes, the Pod is kept out of Service endpoints (see the sketch below)
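
A readiness probe is declared the same way as a liveness probe. A minimal sketch, reusing the same path and port as the liveness example above, with illustrative timing values (a dedicated health endpoint is preferable in practice):

```yaml
readinessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10
```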

🔹 Resource Requests & Limits

```yaml
resources:
  requests:
    memory: "64Mi"
    cpu: "250m"
  limits:
    memory: "128Mi"
    cpu: "500m"
```

Requests tell the scheduler how much CPU and memory to set aside for the container, while limits cap what it can actually consume, preventing one container from starving others on the node.
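
The resources block lives under a container entry in the Pod template. A sketch based on the web container from the earlier my-app example:

```yaml
containers:
- name: web
  image: nginx:1.17
  ports:
  - containerPort: 80
  resources:
    requests:
      memory: "64Mi"
      cpu: "250m"
    limits:
      memory: "128Mi"
      cpu: "500m"
```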


🔹 Environment Variables

```yaml
env:
  - name: ENVIRONMENT
    value: "production"
```


🔹 Labels & Selectors

Labels help organize and query resources:

```yaml
metadata:
  labels:
    tier: backend
```

Query with:

```bash
kubectl get pods -l tier=backend
```


🧩 Comparing Deployments to Other Workloads

| Resource | Use Case |
| --- | --- |
| Deployment | Stateless applications, rolling updates |
| StatefulSet | Stateful apps (databases), stable identities |
| DaemonSet | Run a Pod on every node (e.g., log collectors) |
| Job/CronJob | One-time or scheduled batch tasks |
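
To make the contrast concrete, here is a minimal DaemonSet sketch for a node-level log collector; the name and image are illustrative, and the volume mounts and permissions a real log collector needs are omitted:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:2.2
```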


🧱 YAML Template for Reusability

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.appName }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.appName }}
  template:
    metadata:
      labels:
        app: {{ .Values.appName }}
    spec:
      containers:
        - name: {{ .Values.containerName }}
          image: {{ .Values.image }}
```

Helm renders this template at install time, substituting the {{ .Values.* }} placeholders with values from values.yaml or the command line.
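
A matching values.yaml for this template might look like the following; the values themselves are only illustrative:

```yaml
appName: my-app
replicaCount: 3
containerName: web
image: nginx:1.21
```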


🚀 Summary: What You Learned in Chapter 2


  • Kubernetes Deployments manage Pods through ReplicaSets
  • You can define desired state via YAML or CLI
  • Supports rolling updates, rollback, scaling, and health checks
  • Critical for stateless applications
  • Highly configurable and production-ready


FAQs


✅ 1. What is Kubernetes, and how does it differ from Docker?

Answer: Docker is used to build and run containers, while Kubernetes is a container orchestration platform that manages the deployment, scaling, and operation of multiple containers across a cluster of machines.

✅ 2. Do I need to learn Docker before learning Kubernetes?

Answer: Yes, a basic understanding of Docker is essential since Kubernetes is designed to manage and orchestrate Docker (or OCI-compatible) containers. You'll need to know how to build and run container images before deploying them with Kubernetes.

✅ 3. What is a Pod in Kubernetes?

Answer: A Pod is the smallest deployable unit in Kubernetes. It encapsulates one or more containers that share the same network, storage, and lifecycle. Pods are used to run containerized applications.
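
For reference, a minimal single-container Pod manifest; the name and image are only examples:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: web
    image: nginx:1.21
```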

✅ 4. How do I expose my application to the internet using Kubernetes?

Answer: You can expose your application using a Service of type LoadBalancer or NodePort. For more advanced routing (e.g., domain-based routing), you can use an Ingress Controller.
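
As an illustration, a Service that exposes the my-app Pods from this chapter could look like the sketch below; the type and port numbers are assumptions (NodePort works similarly):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 80
```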

✅ 5. What is a Deployment in Kubernetes?

Answer: A Deployment is a Kubernetes object that ensures a specified number of replicas (Pods) are running at all times. It handles rolling updates, rollback, and maintaining the desired state of the application.

✅ 6. Can Kubernetes run locally for learning and development?

Answer: Yes. Tools like Minikube, Kind, and Docker Desktop (with Kubernetes enabled) allow you to run a local Kubernetes cluster on your machine for development and testing.

✅ 7. What’s the difference between ConfigMap and Secret in Kubernetes?

Answer: Both are used to inject configuration data into Pods. ConfigMaps store non-sensitive data like environment variables and configuration files, while Secrets are intended for sensitive data such as passwords, API tokens, or keys; Secret values are base64-encoded and can additionally be encrypted at rest if the cluster is configured for it.
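
For reference, minimal sketches of both objects; the names app-config (matching the hypothetical ConfigMap used in the environment-variable sketch earlier) and app-secret, as well as the values, are only illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  log_level: info
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
stringData:
  api_token: changeme
```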

✅ 8. How does Kubernetes handle application failure or crashes?

Answer: Kubernetes automatically restarts failed containers, replaces them, reschedules Pods to healthy nodes, and ensures the desired state (like the number of replicas) is always maintained.

✅ 9. How do I monitor applications running in Kubernetes?

Answer: Kubernetes integrates well with monitoring tools like Prometheus, Grafana, Kube-state-metrics, and ELK stack (Elasticsearch, Logstash, Kibana). These tools help you track performance, health, and logs.

✅ 10. Is Kubernetes suitable for small projects or just large enterprises?

Answer: While Kubernetes shines in large, scalable environments, it can also be used for small projects—especially with tools like Minikube or cloud-managed clusters. However, simpler alternatives like Docker Compose may be better suited for truly small-scale applications.