Terraform for Compliance
Introduction
Infrastructure compliance is a critical concern in modern cloud environments. Organizations must ensure their infrastructure adheres to security standards, industry regulations, and internal policies. Terraform, as an Infrastructure as Code (IaC) tool, offers powerful capabilities to implement, validate, and maintain compliance requirements across your infrastructure.
This guide explores how Terraform can be leveraged for compliance purposes, helping you automate compliance checks, enforce security policies, and maintain an audit trail of infrastructure changes.
Understanding Compliance in Infrastructure
Before diving into Terraform specifics, let's understand what compliance means in the context of infrastructure:
- Security Standards: Such as CIS Benchmarks, NIST frameworks, and cloud provider-specific security best practices
- Industry Regulations: Such as HIPAA (healthcare), PCI DSS (payment card industry), GDPR (data protection), SOC 2 (service organizations)
- Internal Policies: Company-specific requirements for resource configuration, access controls, and security measures
Implementing compliance manually is error-prone and time-consuming. Terraform helps automate this process by:
- Defining compliant infrastructure as code
- Validating configurations before deployment (see the sketch after this list)
- Enforcing policies during the planning and application phases
- Creating audit trails through state tracking and version control
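Some of these guardrails can be expressed directly in Terraform itself, before any external policy tooling is involved. A minimal sketch using a variable validation block (the allowed values here are illustrative):
variable "environment" {
  description = "Deployment environment"
  type        = string

  validation {
    condition     = contains(["dev", "staging", "production"], var.environment)
    error_message = "environment must be one of: dev, staging, production."
  }
}
With this in place, terraform plan fails fast with a clear message whenever a caller supplies a non-approved environment value.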
Key Terraform Compliance Tools
Terraform Sentinel
HashiCorp Sentinel is a policy-as-code framework integrated with Terraform Cloud and Terraform Enterprise. It allows you to define and enforce policies on your Terraform configurations.
// Example Sentinel policy to enforce mandatory tags
import "tfplan/v2" as tfplan

required_tags = ["environment", "owner", "project"]

validate_tags = func(resource_tags) {
    if resource_tags is null {
        return false
    }
    for required_tags as rt {
        if not (rt in keys(resource_tags)) {
            return false
        }
    }
    return true
}

// Check EC2 instances that are being created or updated
ec2_instances = filter tfplan.resource_changes as _, rc {
    rc.type is "aws_instance" and
    (rc.change.actions contains "create" or rc.change.actions contains "update")
}

violations = filter ec2_instances as _, instance {
    not validate_tags(instance.change.after.tags)
}

main = rule {
    length(violations) is 0
}
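In Terraform Cloud and Terraform Enterprise, Sentinel policies are grouped into policy sets and assigned an enforcement level (advisory, soft-mandatory, or hard-mandatory). A minimal sketch of the accompanying sentinel.hcl, assuming the policy above is saved as enforce-mandatory-tags.sentinel:
policy "enforce-mandatory-tags" {
  source            = "./enforce-mandatory-tags.sentinel"
  enforcement_level = "hard-mandatory"
}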
Terraform Compliance Testing
You can use tools like terraform-compliance or OPA (Open Policy Agent) to validate your Terraform configurations against policies.
Example using terraform-compliance:
# Install terraform-compliance
$ pip install terraform-compliance
# Run compliance checks
$ terraform-compliance -p terraform.tfplan -f compliance-policies/
Example compliance policy file (using BDD syntax):
Feature: Security Groups should be used to protect services
  In order to improve security
  As engineers
  We'll use AWS Security Groups as a Firewall

  Scenario: Ensure all security groups have a description
    Given I have AWS Security Group defined
    Then it must contain description
    And its value must match the "^.+$" regex

  Scenario: Ensure no security group allows SSH ingress from 0.0.0.0/0
    Given I have AWS Security Group defined
    When it contains ingress
    Then it must not have tcp protocol and port 22 for 0.0.0.0/0
Checkov
Checkov is a static code analysis tool for infrastructure-as-code that can scan Terraform files for misconfigurations and security issues.
# Install Checkov
$ pip install checkov
# Scan your Terraform directory
$ checkov -d ./terraform_code
Sample output:
Check: CKV_AWS_41: "Ensure no hard coded AWS access key and secret key in provider"
        PASSED for resource: aws.provider
        File: /main.tf
        Guide: https://docs.bridgecrew.io/docs/bc_aws_secrets_1

Check: CKV_AWS_23: "Ensure all data stored in the S3 bucket is securely encrypted at rest"
        FAILED for resource: aws_s3_bucket.data
        File: /s3.tf
        Guide: https://docs.bridgecrew.io/docs/s3_14-data-encrypted-at-rest
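When a finding is a reviewed, documented exception, Checkov supports inline suppression comments so the scan stays clean without disabling the check globally. A sketch (the bucket and justification are illustrative):
resource "aws_s3_bucket" "data" {
  #checkov:skip=CKV_AWS_23:Encryption is enforced by a separate SSE configuration resource
  bucket = "my-data-bucket"
}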
Implementing Compliance with Terraform: Step-by-Step Approach
1. Define Compliance Requirements as Code
Start by translating your compliance requirements into Terraform configurations. Here's an example of an AWS S3 bucket that meets compliance standards, using the standalone bucket configuration resources required by version 4 and later of the AWS provider:
resource "aws_s3_bucket" "compliant_bucket" {
bucket = "my-compliant-bucket"
# Ensure versioning is enabled
versioning {
enabled = true
}
# Enforce encryption at rest
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
# Block public access
block_public_acls = true
block_public_policy = true
ignore_public_acls = true
restrict_public_buckets = true
# Enable logging
logging {
target_bucket = aws_s3_bucket.log_bucket.id
target_prefix = "log/"
}
# Add compliance tags
tags = {
Environment = "Production"
DataClassification = "Confidential"
ComplianceScope = "PCI-DSS"
Owner = "Security Team"
}
}
2. Implement Compliance Modules
Create reusable Terraform modules that encapsulate compliance requirements. This ensures consistency across your infrastructure.
Example of a compliance module for AWS VPC:
# modules/compliant-vpc/main.tf
variable "vpc_cidr" {
  description = "CIDR block for the VPC"
  type        = string
}

variable "environment" {
  description = "Environment tag value"
  type        = string
}

variable "flow_logs_role_arn" {
  description = "ARN of an IAM role that allows VPC Flow Logs to publish to CloudWatch Logs"
  type        = string
}

resource "aws_vpc" "compliant_vpc" {
  cidr_block           = var.vpc_cidr
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name        = "compliant-vpc-${var.environment}"
    Environment = var.environment
    ManagedBy   = "Terraform"
  }
}

resource "aws_flow_log" "vpc_flow_logs" {
  iam_role_arn         = var.flow_logs_role_arn # required when publishing to CloudWatch Logs
  log_destination      = aws_cloudwatch_log_group.flow_logs.arn
  log_destination_type = "cloud-watch-logs"
  traffic_type         = "ALL"
  vpc_id               = aws_vpc.compliant_vpc.id

  tags = {
    Name        = "vpc-flow-logs-${var.environment}"
    Environment = var.environment
  }
}

resource "aws_cloudwatch_log_group" "flow_logs" {
  name              = "vpc-flow-logs-${var.environment}"
  retention_in_days = 90

  tags = {
    Environment = var.environment
    Purpose     = "SecurityCompliance"
  }
}
Using the module:
module "production_vpc" {
source = "./modules/compliant-vpc"
vpc_cidr = "10.0.0.0/16"
environment = "production"
}
3. Validate Compliance Before Deployment
Always validate your Terraform configurations before applying them:
# Initialize your Terraform directory
$ terraform init
# Format your code for consistency
$ terraform fmt
# Validate the syntax
$ terraform validate
# Run compliance checks with Checkov
$ checkov -d .
# Plan the changes
$ terraform plan -out=tfplan
# Run terraform-compliance against the plan
$ terraform-compliance -p tfplan -f compliance-policies/
4. Enforce Compliance Through CI/CD Pipelines
Integrate compliance checks into your CI/CD pipeline. Here's an example GitHub Actions workflow:
name: Terraform Compliance Checks

on:
  pull_request:
    paths:
      - 'terraform/**'

jobs:
  compliance:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1
        with:
          terraform_version: 1.0.0

      - name: Terraform Init
        run: terraform init
        working-directory: ./terraform

      - name: Terraform Validate
        run: terraform validate
        working-directory: ./terraform

      - name: Install Checkov
        run: pip install checkov

      - name: Run Checkov
        run: checkov -d ./terraform

      - name: Terraform Plan
        run: terraform plan -out=tfplan
        working-directory: ./terraform

      - name: Install terraform-compliance
        run: pip install terraform-compliance

      - name: Run terraform-compliance
        run: terraform-compliance -p tfplan -f compliance-policies/
        working-directory: ./terraform
Real-World Applications
Scenario 1: Healthcare Provider Implementing HIPAA Compliance
A healthcare provider needs to ensure their infrastructure complies with HIPAA regulations:
# Define an AWS KMS key for encryption
resource "aws_kms_key" "hipaa_key" {
  description             = "KMS key for HIPAA-compliant encryption"
  deletion_window_in_days = 30
  enable_key_rotation     = true

  tags = {
    Environment = "Production"
    Compliance  = "HIPAA"
  }
}

# Create an encrypted RDS instance
resource "aws_db_instance" "hipaa_database" {
  identifier            = "hipaa-database"
  engine                = "postgres"
  engine_version        = "13.4"
  instance_class        = "db.t3.micro"
  allocated_storage     = 100
  max_allocated_storage = 1000

  # Credentials (supplied via variables; never hard-code secrets)
  username = var.db_username
  password = var.db_password

  # Security configurations
  storage_encrypted         = true
  kms_key_id                = aws_kms_key.hipaa_key.arn
  multi_az                  = true
  deletion_protection       = true
  skip_final_snapshot       = false
  final_snapshot_identifier = "hipaa-db-final-snapshot"

  # Enhanced monitoring (role assumed to be defined elsewhere)
  monitoring_interval = 60
  monitoring_role_arn = aws_iam_role.rds_monitoring.arn

  # HIPAA compliance tags
  tags = {
    Environment = "Production"
    Compliance  = "HIPAA"
    DataType    = "PHI"
    Owner       = "Healthcare Team"
  }
}
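Rather than passing credentials through plain variables like db_username and db_password above, they can be read from a secrets store at plan time. A minimal sketch using AWS Secrets Manager (the secret name and JSON keys are assumptions for illustration):
# Hypothetical: source database credentials from AWS Secrets Manager
data "aws_secretsmanager_secret_version" "db_creds" {
  secret_id = "hipaa/database/credentials" # assumed secret name
}

locals {
  # The secret is assumed to be a JSON object with "username" and "password" keys
  db_creds = jsondecode(data.aws_secretsmanager_secret_version.db_creds.secret_string)
}

# Then, in aws_db_instance:
#   username = local.db_creds.username
#   password = local.db_creds.password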
Scenario 2: Financial Service Implementing PCI DSS Requirements
A financial services company implementing PCI DSS compliance:
# Network configuration with proper segmentation
# (assumes an extended version of the compliant-vpc module that accepts
# subnet definitions in addition to the variables shown earlier)
module "pci_vpc" {
  source      = "./modules/compliant-vpc"
  vpc_cidr    = "10.0.0.0/16"
  environment = "production"

  # Create subnets with proper isolation
  subnets = {
    public = {
      cidr_blocks = ["10.0.1.0/24", "10.0.2.0/24"]
      az_suffixes = ["a", "b"]
      name_prefix = "public"
    }
    pci_dmz = {
      cidr_blocks = ["10.0.3.0/24", "10.0.4.0/24"]
      az_suffixes = ["a", "b"]
      name_prefix = "pci-dmz"
    }
    pci_app = {
      cidr_blocks = ["10.0.5.0/24", "10.0.6.0/24"]
      az_suffixes = ["a", "b"]
      name_prefix = "pci-app"
    }
    pci_data = {
      cidr_blocks = ["10.0.7.0/24", "10.0.8.0/24"]
      az_suffixes = ["a", "b"]
      name_prefix = "pci-data"
    }
  }
}
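# A sketch of enforcing tier isolation with security groups: only the app tier
# may reach the data tier on the database port. The app-tier security group
# and the module's vpc_id output are assumptions for illustration.
resource "aws_security_group" "pci_data_tier" {
  name_prefix = "pci-data-"
  description = "Allow database traffic from the PCI app tier only"
  vpc_id      = module.pci_vpc.vpc_id # assumes the module exports vpc_id

  ingress {
    description     = "PostgreSQL from app tier"
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = [aws_security_group.pci_app_tier.id] # assumed app-tier SG
  }
}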
# WAF for protecting web applications
resource "aws_wafv2_web_acl" "pci_waf" {
  name        = "pci-protected-api"
  description = "WAF for PCI DSS compliance"
  scope       = "REGIONAL"

  default_action {
    allow {}
  }

  # OWASP Top 10 protections
  rule {
    name     = "SQLInjectionRule"
    priority = 1

    statement {
      sqli_match_statement {
        field_to_match {
          all_query_arguments {}
        }
        text_transformation {
          priority = 1
          type     = "URL_DECODE"
        }
      }
    }

    action {
      block {}
    }

    visibility_config {
      cloudwatch_metrics_enabled = true
      metric_name                = "SQLInjectionRule"
      sampled_requests_enabled   = true
    }
  }

  # Additional rules would be added here...
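  # For example, an AWS managed rule group could be attached at this point.
  # This is a sketch: AWSManagedRulesCommonRuleSet is a real managed rule
  # group, but the rule name and priority below are illustrative.
  rule {
    name     = "AWSManagedCommonRules"
    priority = 2

    override_action {
      none {}
    }

    statement {
      managed_rule_group_statement {
        name        = "AWSManagedRulesCommonRuleSet"
        vendor_name = "AWS"
      }
    }

    visibility_config {
      cloudwatch_metrics_enabled = true
      metric_name                = "AWSManagedCommonRules"
      sampled_requests_enabled   = true
    }
  }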
  visibility_config {
    cloudwatch_metrics_enabled = true
    metric_name                = "pci-waf"
    sampled_requests_enabled   = true
  }

  tags = {
    Environment = "Production"
    Compliance  = "PCI-DSS"
  }
}
Scenario 3: Cloud Security Posture Management
Implementing a monitoring solution to continuously track compliance:
# CloudTrail for audit logging (the audit_logs bucket and audit_key KMS key
# are assumed to be defined elsewhere, with a bucket policy that allows
# CloudTrail to write to the bucket)
resource "aws_cloudtrail" "compliance_trail" {
  name                          = "compliance-audit-trail"
  s3_bucket_name                = aws_s3_bucket.audit_logs.id
  include_global_service_events = true
  is_multi_region_trail         = true
  enable_log_file_validation    = true
  kms_key_id                    = aws_kms_key.audit_key.arn

  event_selector {
    read_write_type           = "All"
    include_management_events = true

    data_resource {
      type   = "AWS::S3::Object"
      values = ["arn:aws:s3:::"]
    }
  }

  tags = {
    Environment = "Production"
    Purpose     = "ComplianceAudit"
  }
}
# AWS Config for continuous compliance monitoring (the config_role IAM role
# is assumed to be defined elsewhere)
resource "aws_config_configuration_recorder" "config_recorder" {
  name     = "compliance-recorder"
  role_arn = aws_iam_role.config_role.arn

  recording_group {
    all_supported                 = true
    include_global_resource_types = true
  }
}

resource "aws_config_configuration_recorder_status" "config_recorder_status" {
  name       = aws_config_configuration_recorder.config_recorder.name
  is_enabled = true
  depends_on = [aws_config_delivery_channel.config_channel]
}
# AWS Config Rules for automated compliance checking
resource "aws_config_config_rule" "s3_encryption_enabled" {
  name        = "s3-encryption-enabled"
  description = "Checks whether S3 buckets have encryption enabled"

  source {
    owner             = "AWS"
    source_identifier = "S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED"
  }

  depends_on = [aws_config_configuration_recorder.config_recorder]
}

resource "aws_config_config_rule" "root_mfa" {
  name        = "root-account-mfa-enabled"
  description = "Checks whether the root user of the AWS account requires MFA"

  source {
    owner             = "AWS"
    source_identifier = "ROOT_ACCOUNT_MFA_ENABLED"
  }

  depends_on = [aws_config_configuration_recorder.config_recorder]
}
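Managed Config rules can also enforce the tagging policy from earlier sections. A sketch using the AWS managed REQUIRED_TAGS rule (the tag keys chosen here are illustrative):
resource "aws_config_config_rule" "required_tags" {
  name        = "required-tags"
  description = "Checks whether resources carry the mandatory compliance tags"

  source {
    owner             = "AWS"
    source_identifier = "REQUIRED_TAGS"
  }

  # Tag keys are illustrative; REQUIRED_TAGS supports up to six key/value pairs
  input_parameters = jsonencode({
    tag1Key = "Environment"
    tag2Key = "Compliance"
  })

  depends_on = [aws_config_configuration_recorder.config_recorder]
}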