Deploying Containers with Kubernetes


✅ Chapter 3: Services, Networking, and Ingress Controllers

🔍 Introduction

In Kubernetes, Deployments manage application Pods, but how do those Pods communicate with each other inside the cluster, and how do you expose your services to the outside world?

That’s where Services, Networking, and Ingress Controllers come in.

This chapter will guide you through:

  • Internal and external networking
  • Different types of Kubernetes Services
  • Service discovery
  • Configuring Ingress Controllers
  • Practical examples for exposing and routing traffic

🌐 Kubernetes Networking Model Overview

Kubernetes defines a simple but powerful networking model:

  • Every Pod gets its own IP address.
  • Every Pod can communicate with any other Pod (flat network space).
  • Services provide stable IPs and DNS names to access dynamic Pods.

Key expectations:

| Principle | Details |
| --- | --- |
| Pod-to-Pod Communication | Allowed without NAT |
| Node-to-Pod Communication | Allowed |
| Pod-to-Service Communication | Allowed via virtual IPs (VIPs) |

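To see the flat network in practice, you can list Pod IPs and reach one Pod directly from another. This is a minimal sketch: the Pod names pod-a and pod-b and the port 8080 are placeholders, and the tool you call (wget or curl) depends on what the container image ships.

bash

# List Pods with their cluster-internal IP addresses
kubectl get pods -o wide

# Reach pod-b's IP directly from inside pod-a; no NAT is involved
kubectl exec pod-a -- wget -qO- http://<pod-b-ip>:8080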

🧱 Kubernetes Services: Core Concept

A Service in Kubernetes is an abstraction that defines a logical set of Pods and a policy by which to access them — typically via a stable IP address and DNS name.

When Pods die and are recreated (which happens often), a Service ensures your application stays accessible.


🧩 Main Types of Services

| Service Type | Purpose |
| --- | --- |
| ClusterIP | Default; exposes service internally only |
| NodePort | Exposes service on each node’s IP at a static port |
| LoadBalancer | Exposes service externally through a cloud provider's load balancer |
| ExternalName | Maps a service to a DNS name outside the cluster |


🔹 ClusterIP Service

  • Default service type
  • Accessible only within the cluster

yaml

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: ClusterIP

  • Access from another Pod using http://my-service:80
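
A quick way to confirm the Service works is to run a throwaway Pod and call the Service by name from inside the cluster. This is a sketch; the curlimages/curl image is just one convenient choice.

bash

# Inspect the Service and its stable cluster-internal (virtual) IP
kubectl get svc my-service

# Call the Service by name from a temporary Pod, then clean it up automatically
kubectl run tmp-curl --rm -it --image=curlimages/curl --restart=Never -- curl -s http://my-service:80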

🔹 NodePort Service

  • Makes your Service accessible outside the cluster via <NodeIP>:<NodePort>.
  • The NodePort is allocated from the default range 30000–32767.

yaml

apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30036

Access with:

text

http://<NodeIP>:30036
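
To find a usable node IP, ask the cluster directly; on Minikube, minikube ip returns it. A short sketch, assuming the NodePort Service above has been applied:

bash

# Show node addresses (use the INTERNAL-IP or EXTERNAL-IP column, depending on your setup)
kubectl get nodes -o wide

# On Minikube, the single node's IP is printed by:
minikube ip

# Then reach the app through the static node port
curl http://<NodeIP>:30036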


🔹 LoadBalancer Service

  • Works with cloud providers (GKE, EKS, AKS).
  • Allocates an external IP.

yaml

apiVersion: v1
kind: Service
metadata:
  name: my-loadbalancer
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
  type: LoadBalancer

Automatically provisions a cloud load balancer.
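
Provisioning the external IP can take a minute or two. You can watch the Service until the EXTERNAL-IP column is populated; this sketch assumes a cloud provider that supports LoadBalancer Services.

bash

# Watch until EXTERNAL-IP changes from <pending> to a real address
kubectl get svc my-loadbalancer -w

# Once an address appears, the app is reachable on port 80
curl http://<EXTERNAL-IP>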


🔹 ExternalName Service

  • Maps a Kubernetes Service to an external DNS name.

yaml

apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  type: ExternalName
  externalName: example.com

Use case: access external databases or APIs via internal cluster names.
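
Under the hood, an ExternalName Service is just a DNS CNAME record. You can verify the mapping from inside the cluster; this is a sketch using a temporary busybox Pod.

bash

# Resolve the internal name; the answer points at example.com
kubectl run tmp-dns --rm -it --image=busybox --restart=Never -- nslookup external-service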


🔄 How Services Find Pods: Label Selectors

Services bind to Pods using label selectors.

Example:

yaml

selector:
  app: my-app

If your Pods have the label app=my-app, the Service routes traffic to those Pods automatically.
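
You can check which Pods a Service has actually matched by looking at its Endpoints object; if the list is empty, the selector and the Pod labels do not line up. A quick sketch using the my-service example from above:

bash

# Show Pods carrying the label the Service selects
kubectl get pods -l app=my-app

# Show the Pod IPs the Service currently routes traffic to
kubectl get endpoints my-service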


🧪 Example: Create Deployment + Service Together

yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 80
  type: NodePort

Apply:

bash

kubectl apply -f hello-world.yaml
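
After applying, it is worth confirming that the Deployment's Pods are running and noting which node port the Service was given. A short verification sketch (the assigned node port will differ on your cluster):

bash

# Both hello Pods should reach the Running state
kubectl get deployment hello-world
kubectl get pods -l app=hello

# Look for a mapping like 80:3XXXX/TCP in the PORT(S) column
kubectl get svc hello-service

# On Minikube, this prints a ready-to-use URL for the NodePort Service
minikube service hello-service --url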


🛡️ Kubernetes DNS-Based Service Discovery

Kubernetes automatically creates a DNS record for every Service.

Example:

  • Service: hello-service
  • Namespace: default
  • DNS: hello-service.default.svc.cluster.local

Pods can simply refer to Services by name.
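
You can watch this resolution happen from any Pod. The sketch below uses a temporary busybox Pod; cluster.local is the default cluster domain and may differ if your cluster was configured otherwise.

bash

# The short name resolves from the same namespace...
kubectl run tmp-dns --rm -it --image=busybox --restart=Never -- nslookup hello-service

# ...and the fully qualified name resolves from any namespace
kubectl run tmp-dns --rm -it --image=busybox --restart=Never -- nslookup hello-service.default.svc.cluster.local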


🌍 Ingress: Smarter External Access to Services

Ingress is an API object that manages external access to your Services, typically over HTTP.

Instead of exposing each service individually via NodePort or LoadBalancer, you define a single entry point using Ingress.


📋 Ingress Components

| Component | Purpose |
| --- | --- |
| Ingress Resource | Rules defining traffic routing |
| Ingress Controller | Software that implements the rules (e.g., NGINX, Traefik) |


🛠️ Simple Ingress Example

Ingress Resource YAML:

yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-service
            port:
              number: 80
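
Apply the resource and check that an Ingress Controller has picked it up; the ADDRESS column stays empty until a controller is running and assigns one. A sketch, assuming the manifest above is saved as my-ingress.yaml:

bash

kubectl apply -f my-ingress.yaml

# HOSTS should show myapp.example.com; ADDRESS fills in once a controller handles the rule
kubectl get ingress my-ingress

# Detailed view of backends and events, useful when routing does not behave as expected
kubectl describe ingress my-ingress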


📋 How Ingress Works

| Step | Action |
| --- | --- |
| 1 | User hits http://myapp.example.com |
| 2 | Request goes to Ingress Controller |
| 3 | Controller routes to appropriate Service |
| 4 | Service forwards to the Pod |

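Even before DNS for myapp.example.com exists, you can exercise this path by sending the hostname in the HTTP Host header. This is a sketch; <ingress-ip> stands for the address of your Ingress Controller.

bash

# Route based on the Host header without touching DNS
curl -H "Host: myapp.example.com" http://<ingress-ip>/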

🛠️ Setting Up NGINX Ingress Controller (Minikube Example)

  1. Enable ingress addon:

bash

minikube addons enable ingress

  2. Deploy your app and ingress resource.
  3. Map hostnames via /etc/hosts:

text

<minikube_ip> myapp.example.com
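
With the addon enabled and the hosts entry in place, you can confirm the controller is running and test the route end to end. A sketch; the ingress-nginx namespace matches recent Minikube versions (older releases used kube-system).

bash

# The NGINX Ingress Controller Pod should be Running
kubectl get pods -n ingress-nginx

# The hostname now resolves to the Minikube IP, and the controller routes it to hello-service
curl http://myapp.example.com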


📦 Comparing Service vs Ingress

| Feature | Service | Ingress |
| --- | --- | --- |
| Exposure | Exposes a single app | Centralized entry point for multiple apps |
| Routing | Load balancing | Path-based routing |
| SSL termination | Needs additional setup | Built-in support |


🔒 Secure Your Ingress

  • Use TLS certificates (via cert-manager or created manually); see the sketch after this list
  • Redirect HTTP to HTTPS
  • Use authentication and rate limiting (supported by NGINX annotations)
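
As a minimal sketch of the TLS piece: create a TLS Secret (cert-manager can automate this) and reference it from the Ingress. The certificate files and the Secret name myapp-tls are placeholders, and the rules section simply repeats the earlier example.

bash

# Store an existing certificate and key as a TLS Secret
kubectl create secret tls myapp-tls --cert=tls.crt --key=tls.key

yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  tls:
  - hosts:
    - myapp.example.com
    secretName: myapp-tls   # must match the TLS Secret created above
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-service
            port:
              number: 80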

📚 Common Use Cases

| Scenario | Solution |
| --- | --- |
| Internal-only services | ClusterIP |
| Exposing apps in development | NodePort |
| Production apps in cloud | LoadBalancer + Ingress |
| Host multiple services on single IP | Ingress routing |


🚀 Summary: What You Learned in Chapter 3


  • Kubernetes Services enable access to Pods
  • Different Service types (ClusterIP, NodePort, LoadBalancer) cover various use cases
  • Ingress provides smart external routing and centralized management
  • NGINX is a popular Ingress Controller for Kubernetes
  • Combining Services and Ingress gives maximum flexibility and scalability


FAQs


✅ 1. What is Kubernetes, and how does it differ from Docker?

Answer: Docker is used to build and run containers, while Kubernetes is a container orchestration platform that manages the deployment, scaling, and operation of multiple containers across a cluster of machines.

✅ 2. Do I need to learn Docker before learning Kubernetes?

Answer: Yes, a basic understanding of Docker is essential since Kubernetes is designed to manage and orchestrate Docker (or OCI-compatible) containers. You'll need to know how to build and run container images before deploying them with Kubernetes.

✅ 3. What is a Pod in Kubernetes?

Answer: A Pod is the smallest deployable unit in Kubernetes. It encapsulates one or more containers that share the same network, storage, and lifecycle. Pods are used to run containerized applications.

✅ 4. How do I expose my application to the internet using Kubernetes?

Answer: You can expose your application using a Service of type LoadBalancer or NodePort. For more advanced routing (e.g., domain-based routing), you can use an Ingress Controller.

✅ 5. What is a Deployment in Kubernetes?

Answer: A Deployment is a Kubernetes object that ensures a specified number of replicas (Pods) are running at all times. It handles rolling updates, rollback, and maintaining the desired state of the application.

✅ 6. Can Kubernetes run locally for learning and development?

Answer: Yes. Tools like Minikube, Kind, and Docker Desktop (with Kubernetes enabled) allow you to run a local Kubernetes cluster on your machine for development and testing.

✅ 7. What’s the difference between ConfigMap and Secret in Kubernetes?

Answer: Both are used to inject configuration data into Pods. ConfigMaps store non-sensitive data such as environment variables and configuration files, while Secrets are intended for sensitive data such as passwords, API tokens, or keys. Secret values are base64-encoded by default and are only encrypted at rest if the cluster is configured for it.
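
As a minimal sketch of the difference, the two objects below carry the same kind of key-value data; the names and values are placeholders.

yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"            # non-sensitive configuration
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
stringData:
  DB_PASSWORD: "changeme"      # sensitive value; stored base64-encoded in the cluster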

✅ 8. How does Kubernetes handle application failure or crashes?

Answer: Kubernetes automatically restarts failed containers, replaces them, reschedules Pods to healthy nodes, and ensures the desired state (like the number of replicas) is always maintained.

✅ 9. How do I monitor applications running in Kubernetes?

Answer: Kubernetes integrates well with monitoring tools like Prometheus, Grafana, Kube-state-metrics, and ELK stack (Elasticsearch, Logstash, Kibana). These tools help you track performance, health, and logs.

✅ 10. Is Kubernetes suitable for small projects or just large enterprises?

Answer: While Kubernetes shines in large, scalable environments, it can also be used for small projects—especially with tools like Minikube or cloud-managed clusters. However, simpler alternatives like Docker Compose may be better suited for truly small-scale applications.