
Terraform for Containers

Introduction

Containerization has revolutionized application deployment by providing consistent environments across development, testing, and production. Terraform extends this power by enabling you to define your container infrastructure as code, making it repeatable, versioned, and automated.

In this guide, we'll explore how Terraform can be used to manage container-related infrastructure, including:

  • Docker containers and images
  • Kubernetes clusters and resources
  • Container registries
  • Networking for containers
  • Integration with container orchestration platforms

Prerequisites

Before diving in, you should have:

  • Basic understanding of Terraform concepts (providers, resources, variables)
  • Familiarity with container concepts
  • Terraform CLI installed (version 1.0+)
  • Docker and/or Kubernetes installed locally (for testing)

Terraform and Docker

Setting Up the Docker Provider

Terraform can directly manage Docker containers through the Docker provider. Let's start by configuring it:

```hcl
terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "~> 3.0.0"
    }
  }
}

provider "docker" {}
```

Creating Docker Resources

Docker Images

Let's pull and manage a Docker image:

```hcl
resource "docker_image" "nginx" {
  name         = "nginx:latest"
  keep_locally = false
}
```

This instructs Terraform to pull the latest nginx image from Docker Hub. Setting keep_locally = false tells Terraform to also remove the image from the local machine when you run terraform destroy.
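
Note that pinning to latest makes builds non-reproducible, and Terraform won't re-pull when the tag moves upstream. One option, sketched here with the provider's docker_registry_image data source (the pinned tag is illustrative), is to track the remote digest and re-pull when it changes:

```hcl
# Sketch: look up the remote digest so Terraform re-pulls when it changes.
data "docker_registry_image" "nginx" {
  name = "nginx:1.25" # a pinned tag; adjust to your needs
}

resource "docker_image" "nginx" {
  name          = data.docker_registry_image.nginx.name
  keep_locally  = false
  pull_triggers = [data.docker_registry_image.nginx.sha256_digest]
}
```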

Docker Containers

Next, let's create a container using this image:

```hcl
resource "docker_container" "nginx" {
  image = docker_image.nginx.image_id
  name  = "tutorial"

  ports {
    internal = 80
    external = 8000
  }
}
```

When you run terraform apply, this will:

  1. Pull the nginx image (if not already present)
  2. Create a container named "tutorial"
  3. Map port 80 inside the container to port 8000 on your host
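
Hard-coding the port is fine for a demo, but you can parameterize it with an input variable (a sketch; the variable name is illustrative):

```hcl
variable "external_port" {
  description = "Host port to publish nginx on"
  type        = number
  default     = 8000
}

resource "docker_container" "nginx" {
  image = docker_image.nginx.image_id
  name  = "tutorial"

  ports {
    internal = 80
    external = var.external_port
  }
}
```

You can then override the default at apply time with terraform apply -var="external_port=8080".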

Docker Networks

You can also create custom networks for container communication:

```hcl
resource "docker_network" "private_network" {
  name = "my_network"
}

resource "docker_container" "nginx" {
  image = docker_image.nginx.image_id
  name  = "tutorial"

  networks_advanced {
    name = docker_network.private_network.name
  }
}
```

Complete Docker Example

Here's a more comprehensive example creating a simple web application with a database:

```hcl
terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "~> 3.0.0"
    }
  }
}

provider "docker" {}

resource "docker_network" "app_network" {
  name = "web_app_network"
}

resource "docker_image" "mysql" {
  name         = "mysql:8.0"
  keep_locally = false
}

resource "docker_container" "mysql" {
  image = docker_image.mysql.image_id
  name  = "db"

  networks_advanced {
    name = docker_network.app_network.name
  }

  env = [
    "MYSQL_ROOT_PASSWORD=rootpassword",
    "MYSQL_DATABASE=wordpress",
    "MYSQL_USER=wordpress",
    "MYSQL_PASSWORD=wordpress"
  ]
}

resource "docker_image" "wordpress" {
  name         = "wordpress:latest"
  keep_locally = false
}

resource "docker_container" "wordpress" {
  image = docker_image.wordpress.image_id
  name  = "wordpress"

  networks_advanced {
    name = docker_network.app_network.name
  }

  env = [
    "WORDPRESS_DB_HOST=db",
    "WORDPRESS_DB_USER=wordpress",
    "WORDPRESS_DB_PASSWORD=wordpress",
    "WORDPRESS_DB_NAME=wordpress"
  ]

  ports {
    internal = 80
    external = 8080
  }

  depends_on = [docker_container.mysql]
}

output "wordpress_access_url" {
  value = "http://localhost:8080"
}
```

After running terraform apply, you'll have WordPress running on port 8080, connected to a MySQL database, both in their own containers on a private network.
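
One caveat: the example above hard-codes database credentials in the .tf file, which then end up in version control. A common improvement (a sketch; the variable name is illustrative) is to move them into a variable marked sensitive:

```hcl
variable "mysql_root_password" {
  description = "Root password for the MySQL container"
  type        = string
  sensitive   = true
}

resource "docker_container" "mysql" {
  image = docker_image.mysql.image_id
  name  = "db"

  env = [
    "MYSQL_ROOT_PASSWORD=${var.mysql_root_password}",
    # ...remaining MYSQL_* variables as above...
  ]
}
```

Marking a variable sensitive keeps it out of plan output, but the value still lands in the state file, so protect your state accordingly.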

Terraform and Kubernetes

Setting Up the Kubernetes Provider

To manage Kubernetes resources with Terraform, you'll use the Kubernetes provider:

```hcl
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.22.0"
    }
  }
}

provider "kubernetes" {
  config_path = "~/.kube/config"
}
```

This configuration uses your local kubectl configuration to authenticate with your Kubernetes cluster.
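
If your kubeconfig contains several contexts, you can pin the one Terraform should use with config_context (the context name below is hypothetical; check kubectl config get-contexts for yours):

```hcl
provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "docker-desktop" # hypothetical context name
}
```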

Creating Kubernetes Resources

Namespaces

Let's start by creating a namespace for our application:

```hcl
resource "kubernetes_namespace" "app_namespace" {
  metadata {
    name = "my-application"
  }
}
```

Deployments

Next, let's create a deployment to run our application:

```hcl
resource "kubernetes_deployment" "app" {
  metadata {
    name      = "webapp"
    namespace = kubernetes_namespace.app_namespace.metadata[0].name
  }

  spec {
    replicas = 3

    selector {
      match_labels = {
        app = "webapp"
      }
    }

    template {
      metadata {
        labels = {
          app = "webapp"
        }
      }

      spec {
        container {
          image = "nginx:1.21"
          name  = "webapp"

          port {
            container_port = 80
          }

          resources {
            limits = {
              cpu    = "0.5"
              memory = "512Mi"
            }
            requests = {
              cpu    = "250m"
              memory = "256Mi"
            }
          }
        }
      }
    }
  }
}
```

This deployment creates three replicas of an nginx container.

Services

To expose our application, we need a service:

```hcl
resource "kubernetes_service" "app" {
  metadata {
    name      = "webapp"
    namespace = kubernetes_namespace.app_namespace.metadata[0].name
  }

  spec {
    selector = {
      app = kubernetes_deployment.app.spec[0].template[0].metadata[0].labels.app
    }

    port {
      port        = 80
      target_port = 80
    }

    type = "LoadBalancer"
  }
}

output "load_balancer_ip" {
  value = kubernetes_service.app.status[0].load_balancer[0].ingress[0].ip
}
```

This creates a LoadBalancer service that exposes our application to external traffic.
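
On clusters without a cloud load-balancer integration (minikube, kind, bare metal), a LoadBalancer service will stay pending. For local testing, a NodePort service is one alternative (a sketch; the chosen port is illustrative):

```hcl
resource "kubernetes_service" "app_nodeport" {
  metadata {
    name      = "webapp-nodeport"
    namespace = kubernetes_namespace.app_namespace.metadata[0].name
  }

  spec {
    selector = {
      app = "webapp"
    }

    port {
      port        = 80
      target_port = 80
      node_port   = 30080 # must fall in the cluster's NodePort range (default 30000-32767)
    }

    type = "NodePort"
  }
}
```

The application is then reachable on port 30080 of any node's IP.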

ConfigMaps and Secrets

For application configuration, you'll often use ConfigMaps and Secrets:

```hcl
resource "kubernetes_config_map" "app_config" {
  metadata {
    name      = "app-config"
    namespace = kubernetes_namespace.app_namespace.metadata[0].name
  }

  data = {
    "config.json" = jsonencode({
      database = {
        host = "db.example.com"
        port = 5432
      }
      features = {
        enable_new_ui = true
      }
    })
  }
}

resource "kubernetes_secret" "app_secrets" {
  metadata {
    name      = "app-secrets"
    namespace = kubernetes_namespace.app_namespace.metadata[0].name
  }

  # Values in `data` are plain strings; the provider base64-encodes
  # them automatically. (Use `binary_data` for values that are
  # already base64-encoded.)
  data = {
    db_password = "supersecret"
    api_key     = "api-key-value"
  }

  type = "Opaque"
}
```

Then, use these in your deployment:

```hcl
resource "kubernetes_deployment" "app" {
  # Previous configuration...

  spec {
    # Previous configuration...

    template {
      # Previous configuration...

      spec {
        container {
          # Previous configuration...

          env {
            name = "DB_PASSWORD"
            value_from {
              secret_key_ref {
                name = kubernetes_secret.app_secrets.metadata[0].name
                key  = "db_password"
              }
            }
          }

          volume_mount {
            name       = "config-volume"
            mount_path = "/etc/config"
          }
        }

        volume {
          name = "config-volume"
          config_map {
            name = kubernetes_config_map.app_config.metadata[0].name
          }
        }
      }
    }
  }
}
```
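
Instead of wiring secrets in key by key, you can expose every key of a Secret or ConfigMap as environment variables with env_from (a sketch; the elided parts mirror the deployment above):

```hcl
resource "kubernetes_deployment" "app" {
  # Previous configuration...

  spec {
    # Previous configuration...

    template {
      # Previous configuration...

      spec {
        container {
          # Previous configuration...

          # Every key in the Secret becomes an environment variable.
          env_from {
            secret_ref {
              name = kubernetes_secret.app_secrets.metadata[0].name
            }
          }

          # Likewise for the ConfigMap.
          env_from {
            config_map_ref {
              name = kubernetes_config_map.app_config.metadata[0].name
            }
          }
        }
      }
    }
  }
}
```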

Creating Kubernetes Clusters with Terraform

AWS EKS Cluster

Here's how to create an Amazon EKS (Elastic Kubernetes Service) cluster with Terraform:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-west-2"
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.0.0"

  name = "eks-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-west-2a", "us-west-2b"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"]

  enable_nat_gateway = true
  single_nat_gateway = true

  tags = {
    Environment = "demo"
  }
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.15.3"

  cluster_name    = "my-eks-cluster"
  cluster_version = "1.28"

  subnet_ids = module.vpc.private_subnets
  vpc_id     = module.vpc.vpc_id

  eks_managed_node_groups = {
    main = {
      min_size     = 1
      max_size     = 3
      desired_size = 2

      instance_types = ["t3.medium"]
    }
  }
}

output "cluster_endpoint" {
  value = module.eks.cluster_endpoint
}

output "cluster_certificate_authority_data" {
  value = module.eks.cluster_certificate_authority_data
}
```

This example creates:

  1. A VPC with public and private subnets
  2. An EKS cluster in the private subnets
  3. A managed node group with 2 t3.medium instances
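
Once the cluster is up, you still need kubectl credentials for it. One convenience (a sketch; assumes the AWS CLI is installed locally) is to emit the update-kubeconfig command as an output:

```hcl
output "configure_kubectl" {
  description = "Run this command to point kubectl at the new cluster"
  value       = "aws eks update-kubeconfig --region us-west-2 --name ${module.eks.cluster_name}"
}
```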

Google GKE Cluster

For Google Kubernetes Engine (GKE):

```hcl
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 5.0"
    }
  }
}

provider "google" {
  project = "my-project-id"
  region  = "us-central1"
}

resource "google_container_cluster" "primary" {
  name     = "my-gke-cluster"
  location = "us-central1"

  # We can't create a cluster with no node pool defined, but we want to only use
  # separately managed node pools. So we create the smallest possible default
  # node pool and immediately delete it.
  remove_default_node_pool = true
  initial_node_count       = 1

  network    = "default"
  subnetwork = "default"
}

resource "google_container_node_pool" "primary_preemptible_nodes" {
  name       = "my-node-pool"
  cluster    = google_container_cluster.primary.name
  location   = "us-central1"
  node_count = 3

  node_config {
    preemptible  = true
    machine_type = "e2-medium"

    oauth_scopes = [
      "https://www.googleapis.com/auth/cloud-platform"
    ]
  }
}

output "kubernetes_cluster_name" {
  value = google_container_cluster.primary.name
}

output "kubernetes_cluster_host" {
  value = google_container_cluster.primary.endpoint
}
```

Container Registry Management

AWS ECR Registry

```hcl
provider "aws" {
  region = "us-west-2"
}

resource "aws_ecr_repository" "app_repo" {
  name                 = "my-app"
  image_tag_mutability = "MUTABLE"

  image_scanning_configuration {
    scan_on_push = true
  }
}

output "repository_url" {
  value = aws_ecr_repository.app_repo.repository_url
}
```
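
ECR repositories accumulate untagged images as you push repeatedly. A lifecycle policy can expire them automatically (a sketch; the 14-day window is illustrative):

```hcl
resource "aws_ecr_lifecycle_policy" "app_repo" {
  repository = aws_ecr_repository.app_repo.name

  policy = jsonencode({
    rules = [{
      rulePriority = 1
      description  = "Expire untagged images after 14 days"
      selection = {
        tagStatus   = "untagged"
        countType   = "sinceImagePushed"
        countUnit   = "days"
        countNumber = 14
      }
      action = {
        type = "expire"
      }
    }]
  })
}
```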

Azure Container Registry

```hcl
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "example" {
  name     = "example-resources"
  location = "East US"
}

resource "azurerm_container_registry" "acr" {
  name                = "myacrregistry"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  sku                 = "Standard"
  admin_enabled       = true
}

output "login_server" {
  value = azurerm_container_registry.acr.login_server
}
```

Advanced Container Orchestration

Setting up a Complete EKS Infrastructure

Let's create a more comprehensive example with an EKS cluster and deployments:

```hcl
module "eks" {
  # EKS module configuration from previous example
}

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}

resource "kubernetes_namespace" "example" {
  metadata {
    name = "example"
  }
}

resource "kubernetes_deployment" "example" {
  metadata {
    name      = "terraform-example"
    namespace = kubernetes_namespace.example.metadata[0].name
    labels = {
      app = "example"
    }
  }

  spec {
    replicas = 2

    selector {
      match_labels = {
        app = "example"
      }
    }

    template {
      metadata {
        labels = {
          app = "example"
        }
      }

      spec {
        container {
          image = "nginx:1.21"
          name  = "example"

          resources {
            limits = {
              cpu    = "0.5"
              memory = "512Mi"
            }
            requests = {
              cpu    = "250m"
              memory = "256Mi"
            }
          }

          liveness_probe {
            http_get {
              path = "/"
              port = 80
            }

            initial_delay_seconds = 3
            period_seconds        = 3
          }
        }
      }
    }
  }
}

resource "kubernetes_service" "example" {
  metadata {
    name      = "terraform-example"
    namespace = kubernetes_namespace.example.metadata[0].name
  }

  spec {
    selector = {
      app = kubernetes_deployment.example.spec[0].template[0].metadata[0].labels.app
    }

    port {
      port        = 80
      target_port = 80
    }

    type = "LoadBalancer"
  }
}
```

If you spot any mistakes on this website, please let me know at [email protected]. I’d greatly appreciate your feedback! :)