Echo Kubernetes Deployment

Introduction

Kubernetes (often abbreviated as K8s) is an open-source container orchestration platform designed to automate the deployment, scaling, and management of containerized applications. In this tutorial, we'll explore how to deploy our Echo application on Kubernetes, building on our knowledge of containers and moving toward a more robust, scalable deployment approach.

The Echo application is a simple service that responds by "echoing" back the requests it receives. While simple in functionality, deploying it on Kubernetes introduces us to powerful concepts in cloud-native architecture.

Prerequisites

Before getting started, ensure you have:

  • A basic understanding of containerization concepts
  • Docker installed on your machine
  • Minikube, kind, or access to a Kubernetes cluster
  • kubectl command-line tool installed
  • Our Echo application container image built and available (either locally or in a container registry)
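
If you want a quick sanity check before starting, the following commands confirm the tools are installed and the cluster is reachable (the minikube line assumes Minikube; skip it if you use kind or a remote cluster):

bash
# Verify the client-side tools
docker --version
kubectl version --client

# Verify the cluster is up and kubectl can talk to it
minikube status
kubectl cluster-info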

Understanding Kubernetes Concepts

Before diving into deployment, let's understand some key Kubernetes concepts:

  • Pod: The smallest deployable unit in Kubernetes, typically containing one or more containers
  • Deployment: Manages the desired state for Pods and ReplicaSets
  • Service: Exposes applications running on Pods as a network service
  • ConfigMap/Secret: Store configuration data and sensitive information
  • Namespace: Provides a mechanism for isolating groups of resources within a cluster
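
You don't need to memorize these up front; kubectl can list and describe all of them. A few exploration commands worth keeping at hand (standard kubectl, run against whatever cluster your current context points to):

bash
# List every resource kind the cluster knows about
kubectl api-resources

# List the objects discussed above in the current namespace
kubectl get pods,deployments,services,configmaps

# Namespaces are listed separately
kubectl get namespaces

# Built-in documentation for any kind or field
kubectl explain deployment.spec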

Step 1: Prepare the Kubernetes Deployment Manifest

First, let's create a deployment manifest for our Echo application:

yaml
# echo-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-deployment
  labels:
    app: echo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: echo-app:1.0
        ports:
        - containerPort: 8080
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
          requests:
            memory: "64Mi"
            cpu: "250m"

This YAML file defines:

  • A deployment named "echo-deployment"
  • That creates 3 replicas (pods)
  • Using the echo-app:1.0 container image
  • Exposing port 8080
  • With specific CPU and memory resource limits
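
Before applying anything, it can help to validate the manifest locally. A client-side dry run (available in recent kubectl versions) catches indentation and schema mistakes without touching the cluster:

bash
# Validate the manifest without creating anything
kubectl apply --dry-run=client -f echo-deployment.yaml

# Look up what any field means, e.g. the resources block
kubectl explain deployment.spec.template.spec.containers.resources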

Step 2: Create a Service to Expose the Application

Now, let's create a service to make our Echo application accessible:

yaml
# echo-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: echo-service
spec:
  selector:
    app: echo
  ports:
  - port: 80
    targetPort: 8080
  type: ClusterIP

This service will:

  • Select all pods with the label app: echo
  • Map port 80 on the service to port 8080 on those pods
  • Create a stable internal IP (ClusterIP) for accessing the application
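
Once the service has been applied in the next step, you can confirm that its selector actually matches the Echo pods by checking its endpoints; an empty endpoints list usually means the labels don't line up:

bash
# Show which pod IPs the service routes to
kubectl get endpoints echo-service

# Full details, including selector and port mapping
kubectl describe service echo-service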

Step 3: Deploy to Kubernetes

To deploy our application to Kubernetes, we'll use the kubectl command-line tool:

bash
# Apply the deployment
kubectl apply -f echo-deployment.yaml

# Apply the service
kubectl apply -f echo-service.yaml

Expected output:

deployment.apps/echo-deployment created
service/echo-service created

Step 4: Verify the Deployment

Check if the deployment was successful:

bash
# Check the deployment status
kubectl get deployments

# Check the pods
kubectl get pods

# Check the service
kubectl get services

Example output:

NAME              READY   UP-TO-DATE   AVAILABLE   AGE
echo-deployment   3/3     3            3           45s

NAME                               READY   STATUS    RESTARTS   AGE
echo-deployment-6c9bf6b7d4-8jhtx   1/1     Running   0          46s
echo-deployment-6c9bf6b7d4-n7trs   1/1     Running   0          46s
echo-deployment-6c9bf6b7d4-vd4jp   1/1     Running   0          46s

NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
echo-service   ClusterIP   10.96.142.145   <none>        80/TCP    30s
kubernetes     ClusterIP   10.96.0.1       <none>        443/TCP   24h

Step 5: Test the Deployed Application

Using Port-Forward

For testing purposes, we can forward a local port to the service:

bash
kubectl port-forward service/echo-service 8080:80

Now you can access the application at http://localhost:8080
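
With the port-forward running in one terminal, you can send a request from another. The exact response depends on how the Echo application is implemented; the commands below simply assume it answers plain HTTP, and the /hello path is just an example:

bash
# Send a test request through the forwarded port
curl -s http://localhost:8080/

# Include response headers; any path works for a typical echo server
curl -i http://localhost:8080/hello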

Using a Node Port Service (Alternative)

If you want to expose the application outside the cluster (for example, when running Minikube), you can create a second service of type NodePort:

yaml
# echo-service-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: echo-service-nodeport
spec:
  selector:
    app: echo
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080
  type: NodePort

Apply this configuration:

bash
kubectl apply -f echo-service-nodeport.yaml

Now you can access the application at http://<node-ip>:30080. In Minikube, you can get the IP with minikube ip.
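
On Minikube specifically, something like the following should reach the NodePort service (minikube service is a convenience wrapper that prints the reachable URL for you):

bash
# Call the NodePort directly using the node IP
curl -s "http://$(minikube ip):30080/"

# Or let Minikube resolve the URL
minikube service echo-service-nodeport --url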

Step 6: Scale the Deployment

One of Kubernetes' strengths is its ability to scale applications easily:

bash
kubectl scale deployment echo-deployment --replicas=5

Verify the scaling:

bash
kubectl get pods

Output:

NAME                               READY   STATUS    RESTARTS   AGE
echo-deployment-6c9bf6b7d4-8jhtx   1/1     Running   0          10m
echo-deployment-6c9bf6b7d4-kv7hj   1/1     Running   0          5s
echo-deployment-6c9bf6b7d4-n7trs   1/1     Running   0          10m
echo-deployment-6c9bf6b7d4-vd4jp   1/1     Running   0          10m
echo-deployment-6c9bf6b7d4-zf7kx   1/1     Running   0          5s
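
Keep in mind that kubectl scale changes the live object only; the next kubectl apply -f echo-deployment.yaml will reset the replica count to whatever the manifest says. For anything beyond a quick experiment, update replicas in the YAML, re-apply it, and watch the rollout:

bash
# After editing replicas in echo-deployment.yaml
kubectl apply -f echo-deployment.yaml

# Wait until all replicas are updated and available
kubectl rollout status deployment/echo-deployment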

Step 7: Add Configuration

Let's add configuration to our Echo application using a ConfigMap:

yaml
# echo-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: echo-config
data:
  ECHO_PREFIX: "Echo K8s: "
  LOG_LEVEL: "info"

Update the deployment to use this config:

yaml
# echo-deployment-config.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-deployment
  # ... other metadata
spec:
  # ... other spec fields
  template:
    # ... other template fields
    spec:
      containers:
      - name: echo
        image: echo-app:1.0
        ports:
        - containerPort: 8080
        env:
        - name: ECHO_PREFIX
          valueFrom:
            configMapKeyRef:
              name: echo-config
              key: ECHO_PREFIX
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: echo-config
              key: LOG_LEVEL

Apply these changes:

bash
kubectl apply -f echo-configmap.yaml
kubectl apply -f echo-deployment-config.yaml
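
To check that the values from the ConfigMap actually reached the containers, read the environment of one of the pods (the jsonpath query just grabs the first pod matching the app=echo label):

bash
# Pick one Echo pod and print the injected variables
POD=$(kubectl get pods -l app=echo -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD" -- printenv ECHO_PREFIX LOG_LEVEL

# Note: editing the ConfigMap later does not change env vars in running pods;
# trigger a fresh rollout to pick up new values
kubectl rollout restart deployment/echo-deployment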

Real-World Application: Blue-Green Deployment

In production, you might want to implement advanced deployment strategies like Blue-Green deployments. Here's how you could set it up for our Echo application:

  1. Create two deployments with different versions:
yaml
# echo-blue.yaml (current version)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-blue
  labels:
    app: echo
    version: blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: echo
      version: blue
  template:
    metadata:
      labels:
        app: echo
        version: blue
    spec:
      containers:
      - name: echo
        image: echo-app:1.0
        ports:
        - containerPort: 8080

yaml
# echo-green.yaml (new version)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-green
  labels:
    app: echo
    version: green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: echo
      version: green
  template:
    metadata:
      labels:
        app: echo
        version: green
    spec:
      containers:
      - name: echo
        image: echo-app:1.1
        ports:
        - containerPort: 8080

  2. Create a service that can be switched between blue and green:
yaml
# echo-service-bg.yaml
apiVersion: v1
kind: Service
metadata:
  name: echo-service
spec:
  selector:
    app: echo
    version: blue  # Initial version is blue
  ports:
  - port: 80
    targetPort: 8080
  type: ClusterIP

  3. To switch to the green version, update the service selector:
bash
kubectl patch service echo-service -p '{"spec":{"selector":{"version":"green"}}}'

This allows for seamless version switching with zero downtime.
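
Because the switch is just a change to the service's label selector, rolling back is equally simple, and you can check at any time which pods the service currently targets:

bash
# Confirm which pod IPs are currently behind the service
kubectl get endpoints echo-service -o wide

# Roll back to the blue version if the green release misbehaves
kubectl patch service echo-service -p '{"spec":{"selector":{"version":"blue"}}}'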

Summary

In this tutorial, we've learned how to:

  • Create a Kubernetes deployment for our Echo application
  • Expose the application using a Kubernetes service
  • Verify and test our deployment
  • Scale the application to handle more load
  • Add configuration using ConfigMaps
  • Implement a Blue-Green deployment strategy

Kubernetes provides a powerful platform for deploying and managing containerized applications at scale. While our Echo application is simple, these same principles apply to more complex microservices architectures and cloud-native applications.

Exercises

  1. Modify the Echo application deployment to use environment variables for configuration.
  2. Create a Horizontal Pod Autoscaler to automatically scale the Echo application based on CPU usage.
  3. Implement a rolling update strategy with specific parameters for max unavailable and max surge.
  4. Deploy the Echo application with an Ingress resource to route traffic based on hostnames or paths.
  5. Create a persistent volume for the Echo application to store logs or other data that needs to persist across pod restarts.
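
As a starting point for exercise 2, kubectl can create a CPU-based Horizontal Pod Autoscaler imperatively. This assumes a metrics server is available in the cluster (on Minikube, the metrics-server addon provides it):

bash
# Scale between 3 and 10 replicas, targeting 70% average CPU utilization
kubectl autoscale deployment echo-deployment --cpu-percent=70 --min=3 --max=10

# Watch current vs. target metrics and the replica count
kubectl get hpa echo-deployment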

