This article shows how to run a local AWS‑like S3 environment with LocalStack in Docker, manage buckets with Terraform, and inspect everything visually using an S3 GUI client such as S3 Browser (or any S3‑compatible desktop app).
1. Overview of the setup
You will end up with:
- LocalStack running via docker-compose.yml, exposing S3 on http://localhost:4566.
- Terraform creating an S3 bucket, enabling versioning, and adding a lifecycle rule.
- S3 Browser (or a similar S3 GUI) connected to LocalStack so you can see buckets and object versions visually.
Rationale: this mirrors a real AWS workflow (Infra as Code + GUI) while remaining entirely local and safe to experiment with.
2. LocalStack with docker-compose.yml
Create a working directory, e.g. localstack-s3-terraform, and add docker-compose.yml:
version: "3.8"

services:
  localstack:
    image: localstack/localstack:latest
    container_name: localstack
    ports:
      - "4566:4566"            # Edge port: all services, including S3
      - "4510-4559:4510-4559"  # External service port range
    environment:
      - SERVICES=s3            # Only start S3 for this demo
      - DEBUG=1
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "./localstack-data:/var/lib/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
Key aspects:
- Port 4566 is the single “edge” endpoint for S3 and other services in current LocalStack.
- SERVICES=s3 keeps the environment focused and startup fast.
- ./localstack-data persists LocalStack state (buckets and objects) between restarts.
Start LocalStack:
docker compose up -d
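Before moving on, you can confirm that the container is up and that S3 is actually available. Current LocalStack releases expose a health endpoint on the edge port (the exact path below assumes a reasonably recent LocalStack version):

```shell
# Query LocalStack's health endpoint and check that the s3 service is listed.
# Expect a JSON document whose "services" map includes "s3": "available"
# (or "running" once the service has handled a request).
curl -s http://localhost:4566/_localstack/health | grep '"s3"'
```

If the command prints nothing, give the container a few seconds to finish booting and retry, or check `docker compose logs localstack`.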
3. Terraform config with versioning and lifecycle
In the same directory, create main.tf containing the AWS provider configured for LocalStack and S3 with versioning + lifecycle policy:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region     = "ap-southeast-2"
  access_key = "test"
  secret_key = "test"

  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true
  s3_use_path_style           = true

  endpoints {
    s3 = "http://localhost:4566"
  }
}

resource "aws_s3_bucket" "demo" {
  bucket = "demo-bucket-localstack"
}

resource "aws_s3_bucket_versioning" "demo_versioning" {
  bucket = aws_s3_bucket.demo.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_lifecycle_configuration" "demo_lifecycle" {
  bucket = aws_s3_bucket.demo.id

  rule {
    id     = "expire-noncurrent-30-days"
    status = "Enabled"

    filter {
      prefix = "" # apply to all objects
    }

    noncurrent_version_expiration {
      noncurrent_days = 30
    }
  }
}
Important Terraform points:
- The provider endpoint points to http://localhost:4566, so all S3 calls go to LocalStack, not AWS.
- Dummy credentials (test/test) are sufficient; LocalStack doesn't validate real AWS keys.
- s3_use_path_style = true forces path-style URLs (http://localhost:4566/&lt;bucket&gt;/&lt;key&gt;), which LocalStack expects, instead of AWS's virtual-hosted-style URLs.
- Versioning is modeled as a separate aws_s3_bucket_versioning resource to clearly express bucket behavior.
- Lifecycle configuration is likewise modeled explicitly, aligning with AWS best practices and the provider's lifecycle examples.
Initialize and apply:
terraform init
terraform apply
Confirm with `yes` when prompted; Terraform will create the bucket, enable versioning, and attach the lifecycle rule.
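If you have the AWS CLI installed, you can verify the result from the command line before touching any GUI; the only change from real AWS usage is the `--endpoint-url` flag pointing at LocalStack:

```shell
# Dummy credentials matching the Terraform provider config
export AWS_ACCESS_KEY_ID=test
export AWS_SECRET_ACCESS_KEY=test
export AWS_DEFAULT_REGION=ap-southeast-2

# The bucket created by Terraform should appear here
aws --endpoint-url=http://localhost:4566 s3 ls

# Confirm versioning is enabled: expect "Status": "Enabled"
aws --endpoint-url=http://localhost:4566 s3api get-bucket-versioning \
  --bucket demo-bucket-localstack
```

LocalStack also ships an `awslocal` wrapper that bakes in the endpoint, but the plain AWS CLI works fine with the flag.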
4. Configuring S3 Browser (or similar GUI) for LocalStack
Now that LocalStack is running and Terraform has created your bucket, you can connect S3 Browser (or any S3 GUI) to LocalStack instead of AWS.
In S3 Browser, create a new account/profile with something like:
- Account name: LocalStack (any label you like)
- S3 endpoint / server: http://localhost:4566
- Access key: test
- Secret key: test
- Region: ap-southeast-2
Make sure your client is configured to use the custom endpoint instead of the standard AWS endpoints (in S3 Browser, this usually means choosing "S3 Compatible Storage" as the Account Type).
Once saved and connected:
- You should see the bucket demo-bucket-localstack in the bucket list.
- Opening the bucket lets you upload, delete, and browse objects, just as if you were talking to real S3.
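To see versioning in action before (or instead of) opening the GUI, upload the same key twice and list its versions. This assumes the AWS CLI environment variables from the earlier verification step and the bucket created by Terraform:

```shell
# Write two versions of the same object key
echo "v1" > hello.txt
aws --endpoint-url=http://localhost:4566 s3 cp hello.txt s3://demo-bucket-localstack/hello.txt
echo "v2" > hello.txt
aws --endpoint-url=http://localhost:4566 s3 cp hello.txt s3://demo-bucket-localstack/hello.txt

# Two versions should now exist for hello.txt; the entry with
# "IsLatest": true holds "v2", the older one holds "v1".
aws --endpoint-url=http://localhost:4566 s3api list-object-versions \
  --bucket demo-bucket-localstack --prefix hello.txt
```

In S3 Browser, the same versions appear once you enable "Show object versions" for the bucket; the lifecycle rule from Terraform will mark noncurrent versions for expiry after 30 days.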