Terraform configurations are built out of blocks. Understanding block types is critical because they define how you declare infrastructure, wire modules together, and control Terraform’s behavior.
1. The Anatomy of a Block
Every Terraform block has the same basic shape:
```hcl
TYPE "label1" "label2" {
  argument_name = expression

  nested_block_type {
    # ...
  }
}
```
Key parts:
- Type: The keyword at the start (`resource`, `provider`, `variable`, etc.). This tells Terraform what kind of thing you are defining.
- Labels: Extra identifiers whose meaning depends on the block type.
  - Example: `resource "aws_instance" "web"`
    - Type: `resource`
    - Labels: `"aws_instance"` (resource type), `"web"` (local name)
- Body: The `{ ... }` section, which can contain:
  - Arguments: `name = expression`
  - Nested blocks: `block_type { ... }`
Rationale: The consistent shape makes the language predictable. Block type + labels define what the block is; the body defines how it behaves or is configured.
2. Core Top-Level Block Types
These blocks usually appear at the top level of your .tf files and together they define a module: its inputs, logic, and outputs.
2.1 terraform block
Configures Terraform itself:
- Required providers and their versions.
- Required Terraform version.
- Backend configuration (usually via a nested `backend` block inside `terraform`).
Example:
```hcl
terraform {
  required_version = ">= 1.6.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}
```
Rationale: Keeps tooling constraints explicit and version-pinned, so behavior is deterministic across environments and team members.
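The `terraform` block is also where remote state lives. As a minimal sketch, a nested `backend` block might look like this (the bucket, key, and region values are placeholders, not from this article's example):

```hcl
terraform {
  # Hypothetical S3 backend for remote state; all values are placeholders.
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "demo/terraform.tfstate"
    region = "us-east-1"
  }
}
```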
2.2 provider block
Configures how Terraform talks to an external API (AWS, Azure, GCP, Kubernetes, etc.):
```hcl
provider "aws" {
  region = var.aws_region
}
```
Typical aspects:
- Credentials and regions.
- Aliases for multiple configurations (e.g., `provider "aws" { alias = "eu" ... }`).
Rationale: Providers are the “drivers” Terraform uses to translate configuration into real infrastructure; separating them lets you re-use the same module with different provider settings.
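As a sketch of the alias mechanism, here are two configurations of the same provider and a resource pinned to the aliased one (the regions and bucket name are illustrative):

```hcl
# Default AWS configuration.
provider "aws" {
  region = "us-west-2"
}

# A second, aliased configuration for EU resources.
provider "aws" {
  alias  = "eu"
  region = "eu-west-1"
}

resource "aws_s3_bucket" "eu_logs" {
  provider = aws.eu # use the aliased configuration
  bucket   = "demo-eu-logs"
}
```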
2.3 resource block
Declares infrastructure objects Terraform will create and manage.
```hcl
resource "aws_s3_bucket" "this" {
  bucket = "${local.name}-bucket"
}
```
Structure:
- Type label: the provider-specific resource type (`"aws_s3_bucket"`).
- Name label: a local identifier (`"this"`, `"web"`, `"db"`, etc.).
- Body: arguments and nested blocks that define the resource's configuration.
Rationale: The resource block is the heart of Terraform; it expresses desired state. Every apply tries to reconcile actual infrastructure with what these blocks declare.
2.4 data block
Reads information about existing objects without creating anything.
```hcl
data "aws_ami" "latest_amazon_linux" {
  most_recent = true

  filter {
    name   = "name"
    values = ["amazon-linux-2-*"]
  }

  owners = ["amazon"]
}
```
You reference it as `data.aws_ami.latest_amazon_linux.id`.
Rationale: Data sources decouple “lookup” from “creation”. You avoid hardcoding IDs/ARNs and can dynamically discover things like AMIs, VPC IDs, or roles.
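For instance, the AMI looked up above can feed directly into a resource, so the ID is discovered at plan time rather than hardcoded (the instance type here is illustrative):

```hcl
resource "aws_instance" "web" {
  # The AMI ID comes from the data source, not a hardcoded string.
  ami           = data.aws_ami.latest_amazon_linux.id
  instance_type = "t3.micro"
}
```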
2.5 variable block
Defines inputs to a module:
```hcl
variable "aws_region" {
  type        = string
  description = "AWS region to deploy into"
  default     = "us-west-2"
}
```
Key fields:
- `type`: basic or complex types (`string`, `number`, `list`, `map`, `object`, etc.).
- `default`: provides a fallback value, making the variable optional.
- `description`: documentation for humans.
Rationale: Explicit inputs make modules reusable, testable, and self-documenting. They are your module’s API.
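Variables can also carry complex types and input checks. A sketch of an object-typed variable with a `validation` block (the variable name and field values are hypothetical):

```hcl
variable "instance_config" {
  type = object({
    instance_type = string
    disk_size_gb  = number
  })
  description = "Hypothetical instance settings."
  default = {
    instance_type = "t3.micro"
    disk_size_gb  = 20
  }

  # Reject obviously invalid input at plan time.
  validation {
    condition     = var.instance_config.disk_size_gb >= 8
    error_message = "disk_size_gb must be at least 8."
  }
}
```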
2.6 output block
Exposes values from a module:
```hcl
output "bucket_name" {
  value       = aws_s3_bucket.this.bucket
  description = "Name of the S3 bucket created by this module."
}
```
Rationale: Outputs are your module’s return values, allowing composition: root modules can print values, and child modules can feed outputs into other modules or systems (e.g., CI/CD).
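A sketch of that composition: a root module calling a child module and re-exposing one of its outputs (the `./modules/storage` path and output names are hypothetical):

```hcl
module "storage" {
  # Hypothetical child module that declares an output named "bucket_name".
  source = "./modules/storage"
}

# The child's output becomes available as module.storage.bucket_name.
output "storage_bucket" {
  value = module.storage.bucket_name
}
```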
2.7 locals block
Defines computed values for use within a module:
```hcl
locals {
  name_prefix = "demo"
  bucket_name = "${local.name_prefix}-bucket"
}
```
Notes:
- You can have multiple `locals` blocks; Terraform merges them.
- Access them via `local.<name>`.
Rationale: Locals centralize derived values and remove duplication. That keeps your configuration DRY and easier to refactor.
3. Nested Blocks vs Arguments
Within a block body you use two constructs:

- Arguments: `key = expression`. Example: `bucket = "demo-bucket"`.
- Nested blocks: `block_type { ... }`. Example:

```hcl
resource "aws_instance" "web" {
  ami           = data.aws_ami.latest_amazon_linux.id
  instance_type = "t3.micro"

  network_interface {
    device_index         = 0
    network_interface_id = aws_network_interface.web.id
  }
}
```
Why have both?
- Arguments are single values; they are the usual “settings”.
- Nested blocks model structured, often repeatable configuration sections (e.g., `ingress` rules in security groups, `network_interface`, `lifecycle`, or `tag` blocks in some providers).
Rationale: Using nested blocks for structured/repeated sections keeps complex resources readable and makes it clear which values logically belong together.
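As an example of a repeatable nested block, a security group can declare one `ingress` block per rule (the ports and CIDR ranges here are illustrative):

```hcl
resource "aws_security_group" "web" {
  name = "web-sg"

  # One nested block per ingress rule; the block type repeats.
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```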
4. Meta-Arguments and Lifecycle Blocks
Some names inside a resource are meta-arguments understood by Terraform itself rather than by the provider:
Common meta-arguments:
- `depends_on`: add explicit dependencies when Terraform's graph inference isn't enough.
- `count`: create multiple instances of a resource using integer indexing.
- `for_each`: create multiple instances keyed by a map or a set of strings.
- `provider`: pin a resource to a specific provider configuration (e.g., `aws.eu`).
- `lifecycle`: a special nested block that controls create/update/destroy behavior.
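A minimal sketch of `for_each`, creating one bucket per entry in a set (the team names and bucket naming scheme are illustrative):

```hcl
resource "aws_s3_bucket" "per_team" {
  # One instance per set element, addressable as
  # aws_s3_bucket.per_team["alpha"], ["beta"], etc.
  for_each = toset(["alpha", "beta"])

  bucket = "demo-${each.key}-bucket"
}
```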
Example `lifecycle` block:

```hcl
resource "aws_s3_bucket" "this" {
  bucket = "${local.name}-bucket"

  lifecycle {
    prevent_destroy       = true
    ignore_changes        = [tags]
    create_before_destroy = true
  }
}
```
Rationale: Meta-arguments give you control over resource orchestration rather than definition. They let you express cardinality, ordering, and safety rules without resorting to hacks or external tooling.
5. Putting It All Together
Below is a small but coherent configuration that demonstrates the main block types and how they interact. You can drop this into an empty directory as `main.tf`.
```hcl
terraform {
  required_version = ">= 1.6.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

variable "aws_region" {
  type        = string
  description = "AWS region to deploy into (used by LocalStack as well)"
  default     = "us-east-1"
}

locals {
  project = "block-types-localstack-demo"
  bucket  = "${local.project}-bucket"
}

provider "aws" {
  region = var.aws_region

  # Dummy credentials – LocalStack doesn't actually validate them.
  access_key = "test"
  secret_key = "test"

  # Talk to LocalStack instead of AWS.
  s3_use_path_style           = true
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true

  endpoints {
    s3  = "http://localhost:4566"
    sts = "http://localhost:4566"
  }
}

# This data source will return the LocalStack "test" account (000000000000).
data "aws_caller_identity" "current" {}

resource "aws_s3_bucket" "this" {
  bucket = local.bucket

  tags = {
    Project = local.project
    Owner   = data.aws_caller_identity.current.account_id
  }

  lifecycle {
    prevent_destroy = true
  }
}

output "bucket_name" {
  value       = aws_s3_bucket.this.bucket
  description = "The name of the created S3 bucket."
}

output "account_id" {
  value       = data.aws_caller_identity.current.account_id
  description = "AWS (LocalStack) account ID used for this deployment."
}
```
What this example shows
- `terraform` block: pins Terraform and the AWS provider versions.
- `variable`: input for the region.
- `locals`: internal naming logic.
- `provider`: AWS configuration pointed at LocalStack.
- `data`: a data source reading your current AWS identity.
- `resource`: an S3 bucket, including a nested `lifecycle` block and tags.
- `output`: exposes the bucket name and account ID.
How to Run and Validate with LocalStack
1. Start LocalStack (for example, via Docker):

   ```shell
   docker run --rm -it -p 4566:4566 -p 4510-4559:4510-4559 localstack/localstack
   ```

   This exposes the LocalStack APIs on `http://localhost:4566`, as expected by the provider configuration.

2. Initialize Terraform:

   ```shell
   terraform init
   ```

3. Format and validate:

   ```shell
   terraform fmt -check
   terraform validate
   ```

4. Plan and apply against LocalStack:

   ```shell
   terraform plan
   terraform apply
   ```

   Confirm with `yes` when prompted. Terraform will create the S3 bucket in LocalStack rather than in AWS; the dummy credentials and endpoint mapping make this safe for local experimentation.

5. Check the outputs:

   ```shell
   terraform output
   terraform output bucket_name
   terraform output account_id
   ```

6. Configure an AWS CLI profile for LocalStack:

   ```shell
   aws configure --profile localstack
   ```

7. Verify in LocalStack (using the AWS CLI pointed at LocalStack):

   ```shell
   aws --endpoint-url http://localhost:4566 s3 ls --profile localstack
   ```

   You should see the bucket named in `bucket_name`. LocalStack typically uses `test` credentials and a default account ID of `000000000000`.

8. Destroy (noting `prevent_destroy`). Because of `prevent_destroy = true`, `terraform destroy` will refuse to delete the bucket. That's intentional, to illustrate the `lifecycle` block. Remove `prevent_destroy`, run `terraform apply` again, then:

   ```shell
   terraform destroy
   ```
6. A Quick Comparison Table
To solidify the concepts, here is a concise comparison of key block types:
| Block type | Purpose | Typical labels | Commonly uses nested blocks |
|---|---|---|---|
| `terraform` | Configure Terraform itself | None | `required_providers`, `backend` |
| `provider` | Configure connection to an API | Provider name (e.g., `"aws"`) | Occasionally provider-specific blocks |
| `resource` | Declare managed infrastructure | Resource type, local name | `lifecycle`, `provisioner`, provider-specific |
| `data` | Read existing infrastructure | Data source type, local name | Provider-specific nested blocks |
| `variable` | Define module inputs | Variable name | None (just arguments) |
| `output` | Expose module outputs | Output name | None (just arguments) |
| `locals` | Define internal computed values | None | None (just arguments) |