Terraform modules are how you turn raw Terraform into a reusable, versioned “library” of infrastructure components. In this article we’ll go through what modules are, the types you’ll see in practice, how to create them, when to factor code into a module, how to update them safely, how to publish them, and finally how to consume them from your stacks.


What is a Terraform module?

At its core, a module is just a directory containing Terraform configuration that can be called from other Terraform code.

  • Any directory with .tf files is a module.
  • The directory where you run terraform init/plan/apply is your root module.
  • A root module can call child modules via module blocks, which is how you achieve reuse and composition.

Conceptually, a module is like a function in code:

  • Inputs → variables
  • Logic → resources, locals, data sources
  • Outputs → values other code can depend on

Good modules hide internal complexity behind a clear, minimal interface, exactly as you’d expect from a well‑designed API.


Types of modules you’ll deal with

In practice you’ll encounter several “types” or roles of modules:

  1. Root module
    • The entrypoint of a stack (e.g. envs/prod), where you configure providers, backends, and call other modules.
    • Represents one deployable unit: a whole environment, a service, or a single app stack.
  2. Child / reusable modules
    • Reusable building blocks: VPCs, EKS clusters, RDS databases, S3 buckets, etc.
    • Usually live under modules/ in a repo, or in a separate repo entirely.
    • Called from root or other modules with module "name" { ... }.
  3. Public registry modules
    • Published to the public Terraform Registry, versioned and documented.
    • Example: terraform-aws-modules/vpc/aws
    • Great for standard primitives (VPCs, security groups, S3, etc.), less so for business‑specific patterns.
  4. Private/organizational modules
    • Hosted in private registries or Git repos.
    • Usually represent your organization’s conventions and guardrails (“a compliant VPC”, “a hardened EKS cluster”).

Architecturally, many teams settle on layers:

  • Layer 0: cloud and providers (root module).
  • Layer 1: platform modules (VPC, KMS, logging, IAM baselines).
  • Layer 2: product/service modules (service X, API Y) that compose platform modules.
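
For example, a layer‑2 service module can compose layer‑1 platform modules instead of declaring low‑level resources itself. A minimal sketch (the module paths, inputs, and outputs here are illustrative, not a prescribed API):

module "network" {
  # Layer 1: platform module owned by the platform team
  source   = "./modules/network"
  vpc_cidr = var.vpc_cidr
}

module "api" {
  # Layer 2: service module that consumes platform outputs
  source     = "./modules/service"
  vpc_id     = module.network.vpc_id
  subnet_ids = module.network.private_subnet_ids
}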

Creating a Terraform module

Standard structure

A well‑structured module typically has:

  • main.tf – core resources and module logic
  • variables.tf – input interface
  • outputs.tf – exported values
  • versions.tf (optional but recommended) – provider and Terraform version constraints
  • README.md – usage, inputs, outputs, examples

This structure is not required by Terraform, but it is widely used because it keeps interfaces clear and tooling-friendly.

Simple working example

Let’s build a small AWS S3 bucket module and then consume it from a root module.

Module: modules/aws_s3_bucket

modules/aws_s3_bucket/versions.tf:

terraform {
  required_version = ">= 1.6.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0"
    }
  }
}

modules/aws_s3_bucket/variables.tf:

variable "bucket_name" {
  type        = string
  description = "Name of the S3 bucket."
}

variable "environment" {
  type        = string
  description = "Environment name (e.g., dev, prod)."
  default     = "dev"
}

variable "extra_tags" {
  type        = map(string)
  description = "Additional tags to apply to the bucket."
  default     = {}
}

modules/aws_s3_bucket/main.tf:

resource "aws_s3_bucket" "this" {
  bucket = var.bucket_name

  tags = merge(
    {
      Name        = var.bucket_name
      Environment = var.environment
    },
    var.extra_tags
  )
}

modules/aws_s3_bucket/outputs.tf:

output "bucket_id" {
  description = "The ID (name) of the bucket."
  value       = aws_s3_bucket.this.id
}

output "bucket_arn" {
  description = "The ARN of the bucket."
  value       = aws_s3_bucket.this.arn
}

Rationale:

  • variables.tf defines the module’s public input contract.
  • outputs.tf defines the public output contract.
  • versions.tf protects you from incompatible provider/Terraform versions.
  • main.tf stays focused on resources and any derived locals.

Root module consuming it

In your root directory (e.g. project root):

versions.tf:

terraform {
  required_version = ">= 1.6.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0"
    }
  }
}

providers.tf:

provider "aws" {
  region                      = var.aws_region

  # Fake credentials for LocalStack
  access_key                  = "test"
  secret_key                  = "test"

  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true
  s3_use_path_style           = true

  # Point AWS services at LocalStack
  endpoints {
    s3 = "http://localhost:4566"
    # add more if needed, e.g. dynamodb = "http://localhost:4566"
  }
}

variables.tf:

variable "aws_region" {
  type        = string
  description = "AWS region to deploy into."
default     = "us-east-1"
}

variable "environment" {
  type        = string
  description = "Environment name."
  default     = "dev"
}

main.tf:

module "logs_bucket" {
  source      = "./modules/aws_s3_bucket"
  bucket_name = "my-org-logs-${var.environment}"
  environment = var.environment
  extra_tags = {
    owner = "platform-team"
  }
}

output "logs_bucket_arn" {
  value       = module.logs_bucket.bucket_arn
  description = "Logs bucket ARN."
}

How to validate this example

From the root directory:

  1. Start LocalStack (for example, via Docker):

    docker run --rm -it -p 4566:4566 -p 4510-4559:4510-4559 localstack/localstack

    This exposes the LocalStack APIs on http://localhost:4566 as expected by the provider config.

  2. terraform init

  • Ensures Terraform and the AWS provider are set up; discovers the local module.
  3. terraform validate
  • Confirms the configuration is syntactically valid and internally consistent.
  4. terraform plan
  • You should see one S3 bucket to be created, named my-org-logs-dev by default.
  • Confirm that the tags include Environment = dev and owner = platform-team.
  5. terraform apply
  • After apply, run terraform output logs_bucket_arn and check that:
    • The ARN has the expected form (arn:aws:s3:::my-org-logs-dev).
    • The bucket exists in LocalStack with the expected tags.
If these checks pass, your module and consumption pattern are wired correctly.
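
To inspect the bucket directly, you can point the AWS CLI at LocalStack (this assumes the AWS CLI is installed and the LocalStack container is still running):

aws --endpoint-url=http://localhost:4566 s3api get-bucket-tagging \
  --bucket my-org-logs-dev

The returned TagSet should contain the Name, Environment, and owner tags set by the module.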


When to create a module

You should not modularise everything; the trick is to modularise at the right abstraction boundaries.

Good reasons to create a module

  • You’re copy‑pasting the same pattern across stacks or repos
    • Example: the same cluster pattern for dev, stage, prod.
    • A module eliminates duplication and concentrates fixes in one place.
  • You have a logical component with a clear responsibility
    • Examples: “networking”, “observability stack”, “generic service with ALB + ECS + RDS”.
    • Each becomes a module with focused inputs and outputs.
  • You want to hide complexity and provide sane defaults
    • Consumers shouldn’t need to know every IAM policy detail.
    • Provide a small set of inputs; encode your standards inside the module.
  • You want a contract between teams
    • Platform team maintains modules; product teams just configure inputs.
    • This aligns nicely with how you manage APIs or libraries internally.
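
As an illustration of encoded standards, a bucket module might always enable versioning and encryption internally, without exposing either as an input. A sketch using the AWS provider's separate S3 configuration resources (whether to hard‑code these is a policy choice, not a requirement):

resource "aws_s3_bucket_versioning" "this" {
  bucket = aws_s3_bucket.this.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "this" {
  bucket = aws_s3_bucket.this.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}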

When not to create a module (yet)

  • One‑off experiments or throwaway code.
  • A single, simple resource that is unlikely to be reused.
  • When you don’t yet understand the pattern — premature modularisation leads to awkward, unstable interfaces.

A good heuristic: if you’d be comfortable writing a README with “what this does, inputs, outputs” and you expect re‑use, it’s a good module candidate.


Updating a module safely

Updating modules has two dimensions: changing the module itself, and rolling out the updated version to consumers.

Evolving the module interface

Prefer backwards‑compatible changes when possible:

  • Add new variables with sensible defaults instead of changing existing ones.
  • Add new outputs without altering the meaning of existing outputs.
  • If you must break behaviour, bump a major version and document the migration path.

Internally you might refactor resources, adopt new provider versions, or change naming conventions, but keep the external contract as stable as you can.
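
For example, rather than changing the behaviour of an existing input, add a new optional one whose default preserves the old behaviour (the variable name here is illustrative):

variable "force_destroy" {
  type        = bool
  description = "Allow the bucket to be destroyed even if it contains objects."
  default     = false # default keeps existing consumers' behaviour unchanged
}

Existing callers keep working without touching their module blocks; only consumers who opt in see new behaviour.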

Versioning strategy

For modules in a separate repo or registry:

  • Use semantic versioning: MAJOR.MINOR.PATCH.
    • PATCH: bugfixes, no breaking changes.
    • MINOR: new optional features, backwards compatible.
    • MAJOR: breaking changes.

Tag releases (v1.2.3) and use those tags in consumers (Git or registry).
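
Cutting a release is then just tagging and pushing (annotated tags shown; adapt to your release tooling):

git tag -a v1.3.0 -m "Release v1.3.0"
git push origin v1.3.0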

Rolling out updates to consumers

For a Git‑sourced module:

module "logs_bucket" {
  source  = "git::https://github.com/my-org/terraform-aws-s3-bucket.git?ref=v1.3.0"
  # ...
}

To upgrade:

  1. Change ref from v1.2.0 to v1.3.0.
  2. Run terraform init -upgrade.
  3. Run terraform plan and review changes carefully.
  4. Apply in lower environments first, then promote the same version to higher environments (via branch promotion, pipelines, or workspace variables).

For a registry module, the pattern is the same but with a version argument:

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.3.0"
}

Pinning versions gives you reproducibility and avoids surprise changes across environments. Here, ~> 5.3.0 permits patch releases (5.3.x) but excludes 5.4.0 and above.


Publishing a module

Publishing is about making your module discoverable and consumable by others, with strong versioning and documentation.

Public registry (high‑level)

To publish a module publicly (e.g. to the Terraform Registry):

  • Place the module in a public VCS repo (commonly GitHub).
  • Name the repo using the convention: terraform-<PROVIDER>-<NAME>
    • Example: terraform-aws-s3-bucket.
  • Ensure the repo root contains your module (main.tf, variables.tf, outputs.tf, etc.).
  • Tag a version (e.g. v1.0.0).
  • Register the module on the registry UI (linking your VCS account).

Once indexed, users can consume it as:

module "logs_bucket" {
  source  = "my-org/s3-bucket/aws"
  version = "1.0.0"

  bucket_name = "my-org-logs-prod"
  environment = "prod"
}

Private registries and Git

For internal usage, many organizations prefer:

  • Private registry (Terraform Cloud/Enterprise, vendor platform, or self‑hosted).
    • Similar flow to the public registry, but scoped to your org.
  • Direct Git usage
    • Modules are consumed from Git with ?ref= pointing to tags or commits.
    • Simpler setup, but you lose some of the browsing and discoverability that registries provide.

The key idea is the same: modules are versioned artefacts, and consumers should pin versions and upgrade intentionally.


Consuming modules (putting it all together)

To consume any module, you:

  1. Add a module block.
  2. Set source to a local path, Git URL, or registry identifier.
  3. Pass the required inputs as arguments.
  4. Use the module’s outputs via module.<name>.<output_name>.

Example: consuming a local network module and a registry VPC module side by side.

# Local module (your own)
module "network" {
  source = "./modules/network"

  vpc_cidr        = "10.0.0.0/16"
  public_subnets  = ["10.0.1.0/24", "10.0.2.0/24"]
  private_subnets = ["10.0.11.0/24", "10.0.12.0/24"]
}

# Registry module (third-party)
module "logs_bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "~> 4.0"

  bucket = "my-org-logs-prod"

  tags = {
    Environment = "prod"
  }
}

output "network_vpc_id" {
  value = module.network.vpc_id
}

output "logs_bucket_arn" {
  value = module.logs_bucket.s3_bucket_arn
}

The root module becomes a composition layer, wiring together multiple modules rather than directly declaring many low‑level resources.


Summary of key practices

  • Treat modules as APIs: clear inputs, clear outputs, stable contracts.
  • Use a predictable structure: main.tf, variables.tf, outputs.tf, versions.tf, README.md.
  • Only create modules where there is clear reuse or a meaningful abstraction.
  • Version modules and pin those versions when consuming them.
  • Use lower environments and terraform plan to validate updates before promoting.