Terraform Security Best Practices

Introduction

Terraform is a powerful Infrastructure as Code (IaC) tool that enables developers and operations teams to define and provision infrastructure resources in a declarative way. However, with great power comes great responsibility, especially when it comes to security. Poor security practices in Terraform can lead to exposed credentials, misconfigured resources, and potential vulnerabilities in your infrastructure.

This guide will walk you through essential security best practices for Terraform, helping you build more secure infrastructure while maintaining the flexibility and efficiency that Terraform offers.

Why Security Matters in Terraform

Terraform code defines your entire infrastructure, making it a critical security component:

  • It often contains or accesses sensitive information like API keys and passwords
  • Infrastructure misconfigurations can expose your systems to attacks
  • Version control of infrastructure code can inadvertently leak secrets
  • Automated deployments can propagate security issues quickly

1. State File Security

The Terraform state file contains sensitive information about your infrastructure, including resource IDs and sometimes even passwords or other secrets.

Best Practices for State Files

Use Remote State Storage

Always store your Terraform state in a secure, remote backend rather than keeping it locally or in version control.

```hcl
terraform {
  backend "s3" {
    bucket  = "my-terraform-state"
    key     = "prod/terraform.tfstate"
    region  = "us-west-2"
    encrypt = true
  }
}
```

Enable Encryption

Enable encryption for your state storage:

  • For AWS S3: Set encrypt = true
  • For Azure Storage: Enable Azure Storage encryption
  • For Google Cloud Storage: Use default encryption settings
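
For example, an Azure backend block looks like this (the resource names are illustrative; Azure Storage applies server-side encryption at rest by default, so no extra flag is needed):

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"
    storage_account_name = "tfstatestorage"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
  }
}
```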

Control Access

Implement strict access controls to state files:

  • Use IAM policies to restrict who can access the state
  • Consider using state locking to prevent concurrent modifications
```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "prod/terraform.tfstate"
    region         = "us-west-2"
    encrypt        = true
    dynamodb_table = "terraform-state-lock"
  }
}
```
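
The access-control side can be sketched as an IAM policy that grants only the permissions Terraform needs on the state bucket and lock table (the account ID is a placeholder; bucket and table names follow the backend example above):

```hcl
resource "aws_iam_policy" "state_access" {
  name        = "terraform-state-access"
  description = "Least-privilege access to the Terraform state bucket and lock table"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["s3:GetObject", "s3:PutObject"]
        Resource = "arn:aws:s3:::my-terraform-state/prod/terraform.tfstate"
      },
      {
        Effect   = "Allow"
        Action   = ["s3:ListBucket"]
        Resource = "arn:aws:s3:::my-terraform-state"
      },
      {
        Effect   = "Allow"
        Action   = ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:DeleteItem"]
        Resource = "arn:aws:dynamodb:us-west-2:123456789012:table/terraform-state-lock"
      }
    ]
  })
}
```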

2. Secret Management

Avoid Hardcoding Secrets

Never hardcode sensitive information directly in your Terraform files:

Bad Practice:

```hcl
provider "aws" {
  access_key = "AKIAIOSFODNN7EXAMPLE"
  secret_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
  region     = "us-west-2"
}
```

Better Practice:

```hcl
provider "aws" {
  region = "us-west-2"
  # AWS credentials are loaded from environment variables or the credentials file
}
```

Use Environment Variables

Set sensitive information as environment variables:

```bash
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
terraform apply
```
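
The same approach works for Terraform input variables: any variable can be supplied through a `TF_VAR_`-prefixed environment variable, keeping the value out of files entirely (the variable name here is illustrative):

```hcl
variable "db_password" {
  type      = string
  sensitive = true # redacts the value in plan/apply output
}

# Supplied at runtime instead of being written to disk:
#   export TF_VAR_db_password="s3cure-example-value"
#   terraform apply
```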

Use Secret Management Tools

Integrate with dedicated secret management solutions:

HashiCorp Vault

```hcl
data "vault_generic_secret" "aws_creds" {
  path = "secret/aws"
}

provider "aws" {
  access_key = data.vault_generic_secret.aws_creds.data["access_key"]
  secret_key = data.vault_generic_secret.aws_creds.data["secret_key"]
  region     = "us-west-2"
}
```

AWS Secrets Manager

```hcl
data "aws_secretsmanager_secret" "db_creds" {
  name = "prod/db/credentials"
}

data "aws_secretsmanager_secret_version" "db_creds" {
  secret_id = data.aws_secretsmanager_secret.db_creds.id
}

locals {
  db_creds = jsondecode(data.aws_secretsmanager_secret_version.db_creds.secret_string)
}

resource "aws_db_instance" "default" {
  # ...
  username = local.db_creds.username
  password = local.db_creds.password
  # ...
}
```

3. Module Security

Use Trusted Modules

When using third-party modules:

  • Prefer official modules from HashiCorp or major cloud providers
  • Review the code of community modules before using them
  • Pin module versions to prevent unexpected updates
```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.14.0" # Pin to a specific version

  # Module configuration
}
```

Validate Module Inputs

Validate inputs to ensure they meet security requirements:

```hcl
variable "db_password" {
  type        = string
  description = "Database password"
  sensitive   = true

  validation {
    condition     = length(var.db_password) >= 16
    error_message = "The database password must be at least 16 characters long."
  }

  validation {
    condition = (
      can(regex("[A-Z]", var.db_password)) &&
      can(regex("[a-z]", var.db_password)) &&
      can(regex("[0-9]", var.db_password)) &&
      can(regex("[!@#$%^&*]", var.db_password))
    )
    error_message = "The database password must contain at least one uppercase letter, one lowercase letter, one number, and one special character."
  }
}
```

4. Identity and Access Management

Use Least Privilege Principle

Assign minimal permissions to resources and service accounts:

```hcl
resource "aws_iam_policy" "s3_read_only" {
  name        = "s3-read-only"
  description = "Allow read-only access to specific S3 bucket"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "s3:GetObject",
          "s3:ListBucket",
        ]
        Effect = "Allow"
        Resource = [
          "arn:aws:s3:::${var.bucket_name}",
          "arn:aws:s3:::${var.bucket_name}/*"
        ]
      }
    ]
  })
}
```

Time-Limited Credentials

When possible, use time-limited credentials:

```hcl
resource "time_rotating" "key_rotation" {
  rotation_days = 30
}

resource "aws_iam_access_key" "deploy_key" {
  user    = aws_iam_user.deploy_user.name
  pgp_key = var.pgp_key

  # Create a new key whenever the time_rotating resource changes
  # (replace_triggered_by requires Terraform 1.2 or later)
  lifecycle {
    create_before_destroy = true
    replace_triggered_by  = [time_rotating.key_rotation]
  }
}
```
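
An alternative that avoids static access keys altogether is to have Terraform assume an IAM role, so every run uses short-lived STS credentials (the role ARN and account ID are placeholders; `duration_seconds` is the AWS provider v4 argument, renamed to `duration` in newer versions):

```hcl
provider "aws" {
  region = "us-west-2"

  assume_role {
    role_arn         = "arn:aws:iam::123456789012:role/terraform-deploy"
    session_name     = "terraform"
    duration_seconds = 3600 # credentials expire after one hour
  }
}
```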

5. Network Security

Use Private Networks

Prefer private networks and VPCs over public ones:

```hcl
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
}

resource "aws_subnet" "private" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-west-2a"
}

resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.2.0/24"
  availability_zone       = "us-west-2b"
  map_public_ip_on_launch = true
}

# Only resources that need to be public should be in the public subnet
```

Restrict Security Groups

Limit inbound and outbound traffic with security groups:

```hcl
resource "aws_security_group" "web" {
  name        = "web-sg"
  description = "Security group for web servers"
  vpc_id      = aws_vpc.main.id

  # Allow incoming HTTP/HTTPS traffic only
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow outbound traffic to specific destinations only
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["10.0.0.0/16"] # Only allow traffic within the VPC
  }
}
```

6. Encryption

Enable Encryption at Rest

Enable encryption for all resources that store data:

```hcl
resource "aws_s3_bucket" "data" {
  bucket = "my-secure-data-bucket"
}

resource "aws_s3_bucket_server_side_encryption_configuration" "data" {
  bucket = aws_s3_bucket.data.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
```
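
If you need control over the key itself, a variant of that configuration can use a customer-managed KMS key instead of the S3-managed AES256 default (a sketch; the key resource name is an assumption):

```hcl
resource "aws_kms_key" "s3" {
  description         = "Customer-managed key for S3 encryption"
  enable_key_rotation = true
}

resource "aws_s3_bucket_server_side_encryption_configuration" "data" {
  bucket = aws_s3_bucket.data.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.s3.arn
    }
    bucket_key_enabled = true # reduces per-request KMS costs
  }
}
```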

Enable Encryption in Transit

Ensure data is encrypted in transit:

```hcl
resource "aws_lb_listener" "front_end" {
  load_balancer_arn = aws_lb.front_end.arn
  port              = 443
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-TLS-1-2-2017-01"
  certificate_arn   = aws_acm_certificate.cert.arn

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.front_end.arn
  }
}
```

7. Versioning and CI/CD Security

Version Control Best Practices

  • Use .gitignore to prevent committing sensitive files
  • Use pre-commit hooks to catch secrets before they're committed
  • Consider using git-crypt for encrypting sensitive files in repos

Example .gitignore for Terraform:

```
# .gitignore
*.tfstate
*.tfstate.backup
*.tfvars
.terraform/
```

Implement CI/CD Security Checks

Add security checks to your CI/CD pipeline:

```yaml
# Example GitHub Actions workflow
name: Terraform Security Scan

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  security-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1

      - name: Terraform Format Check
        run: terraform fmt -check

      - name: Terraform Security Scan
        uses: triat/terraform-security-scan@v3

      - name: Terraform Validate
        run: |
          terraform init -backend=false
          terraform validate
```

8. Resource Tagging and Monitoring

Implement Resource Tagging

Use tags to track ownership and purpose of resources:

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  tags = {
    Name        = "WebServer"
    Environment = "Production"
    Owner       = "DevOps"
    Project     = "Website"
    ManagedBy   = "Terraform"
  }
}
```
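
To avoid repeating common tags on every resource, the AWS provider (v3.38+) can also apply them globally via `default_tags`; per-resource tags are merged on top:

```hcl
provider "aws" {
  region = "us-west-2"

  default_tags {
    tags = {
      Environment = "Production"
      ManagedBy   = "Terraform"
    }
  }
}
```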

Enable Logging and Monitoring

Configure logging and monitoring for all resources:

```hcl
resource "aws_cloudwatch_log_group" "app_logs" {
  name              = "/app/production"
  retention_in_days = 30

  tags = {
    Environment = "Production"
    Application = "MyApp"
  }
}

resource "aws_cloudtrail" "main" {
  name                          = "tf-trail"
  s3_bucket_name                = aws_s3_bucket.trail.id
  include_global_service_events = true
  is_multi_region_trail         = true
  enable_log_file_validation    = true
}
```

9. Automated Security Scanning

Use Terraform Validators

Integrate tools like tfsec, checkov, and terrascan into your workflow:

```bash
# Install tfsec
brew install tfsec

# Scan your Terraform code
tfsec .

# Output example:
# HIGH: Resource 'aws_s3_bucket.data' has no encryption configured
# File: main.tf:15
```

Implement Policy as Code

Use tools like Sentinel or OPA to enforce security policies:

```hcl
# Example Sentinel policy (policy.sentinel)
import "tfplan/v2" as tfplan

# All S3 buckets must have encryption enabled
s3_buckets = filter tfplan.resource_changes as _, rc {
  rc.type is "aws_s3_bucket" and
  (rc.change.actions contains "create" or rc.change.actions contains "update")
}

encryption_enabled = rule {
  all s3_buckets as _, bucket {
    bucket.change.after.server_side_encryption_configuration is not null
  }
}

main = rule {
  encryption_enabled
}
```

Practical Exercise: Securing a Terraform AWS Infrastructure

Let's implement a secure infrastructure for a simple web application on AWS:

```hcl
# main.tf

terraform {
  required_version = ">= 1.0.0"

  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "secure-infra/terraform.tfstate"
    region         = "us-west-2"
    encrypt        = true
    dynamodb_table = "terraform-state-lock"
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = var.region
}

# Create a VPC with public and private subnets
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.14.0"

  name = "secure-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["${var.region}a", "${var.region}b"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"]

  enable_nat_gateway = true
  single_nat_gateway = true

  # Enable VPC flow logs
  enable_flow_log                          = true
  flow_log_destination_type                = "cloud-watch-logs"
  flow_log_destination_arn                 = aws_cloudwatch_log_group.vpc_flow_logs.arn
  flow_log_traffic_type                    = "ALL"
  flow_log_cloudwatch_log_group_kms_key_id = aws_kms_key.logs.arn

  tags = var.tags
}

# Create KMS key for log encryption
resource "aws_kms_key" "logs" {
  description         = "KMS key for log encryption"
  enable_key_rotation = true
}

# CloudWatch log group that receives the VPC flow logs
resource "aws_cloudwatch_log_group" "vpc_flow_logs" {
  name              = "/vpc/flow-logs"
  retention_in_days = 30
  kms_key_id        = aws_kms_key.logs.arn
}
```