Step 4: Breaking the Terralith
In the previous step, you successfully duplicated your prod environment as a new dev environment by replicating your prod module declaration as a new dev module instance in your live/main.tf file. While this demonstrated the power of reusable modules, it also introduced significant risk: the Terralith. You’ve tightly coupled management of both your dev and prod environments in a single state file, so you introduce risk to one whenever you attempt to make changes to the other.
In this step, you’ll solve this problem by breaking your Terralith apart. You’ll refactor your live root module into two distinct dev and prod root modules. Each will have its own state file, completely eliminating the risk of accidental cross-environment changes.
Tutorial
Breaking down your Terralith so that you have multiple root modules is fairly simple now that you understand state manipulation a bit better.
First, let’s create a top-level directory for prod in live.
# live
mkdir prod
Next, let’s move everything into the prod directory (if you’re not comfortable with using the find command here, you can just drag the content into the prod directory).
find . -mindepth 1 -maxdepth 1 -not -name 'prod' -exec mv {} prod/ \;
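Note that this find invocation also moves hidden files and directories (like the .terraform directory that holds your local copy of state), which is easy to miss if you drag files around manually. You can double-check that everything landed:

# Confirm that everything, including hidden entries, moved into prod/
ls -la prod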
To complete our new multi-environment setup, let’s duplicate that prod directory to a new dev directory.
cp -R prod dev
We need to edit the contents of the dev and prod directories to make some key adjustments. First, we’ll want to make sure that the backend.tf files are updated to use new keys so that the two root modules don’t conflict with each other.
# live/dev/backend.tf
terraform {
  backend "s3" {
    bucket       = "terragrunt-to-terralith-blog-2025-07-31-01"
    key          = "dev/tofu.tfstate"
    region       = "us-east-1"
    encrypt      = true
    use_lockfile = true
  }
}
# live/prod/backend.tf
terraform {
  backend "s3" {
    bucket       = "terragrunt-to-terralith-blog-2025-07-31-01"
    key          = "prod/tofu.tfstate"
    region       = "us-east-1"
    encrypt      = true
    use_lockfile = true
  }
}
We’ll also want to update the references to the shared module in each main.tf (both module instances are now named main, and the relative source path gains an extra directory level now that the root modules sit one level deeper), update the .auto.tfvars files, and edit the outputs to handle all the changes necessary for this project.
module "main" { source = "../../catalog/modules/best_cat"
name = var.name
aws_region = var.aws_region
lambda_zip_file = var.lambda_zip_file force_destroy = var.force_destroy}
module "main" { source = "../../catalog/modules/best_cat"
name = var.name
aws_region = var.aws_region
lambda_zip_file = var.lambda_zip_file force_destroy = var.force_destroy}
Given that we’ve renamed the module, we’ll also need to add moved blocks to handle the state moves that need to take place here. If you’re not sure what we’re doing here, consider reviewing earlier steps.
# live/dev/moved.tf
moved {
  from = module.dev.module.ddb.aws_dynamodb_table.asset_metadata
  to   = module.main.module.ddb.aws_dynamodb_table.asset_metadata
}

moved {
  from = module.dev.module.iam.aws_iam_policy.lambda_basic_execution
  to   = module.main.module.iam.aws_iam_policy.lambda_basic_execution
}

moved {
  from = module.dev.module.iam.aws_iam_policy.lambda_dynamodb
  to   = module.main.module.iam.aws_iam_policy.lambda_dynamodb
}

moved {
  from = module.dev.module.iam.aws_iam_policy.lambda_s3_read
  to   = module.main.module.iam.aws_iam_policy.lambda_s3_read
}

moved {
  from = module.dev.module.iam.aws_iam_role.lambda_role
  to   = module.main.module.iam.aws_iam_role.lambda_role
}

moved {
  from = module.dev.module.iam.aws_iam_role_policy_attachment.lambda_basic_execution
  to   = module.main.module.iam.aws_iam_role_policy_attachment.lambda_basic_execution
}

moved {
  from = module.dev.module.iam.aws_iam_role_policy_attachment.lambda_dynamodb
  to   = module.main.module.iam.aws_iam_role_policy_attachment.lambda_dynamodb
}

moved {
  from = module.dev.module.iam.aws_iam_role_policy_attachment.lambda_s3_read
  to   = module.main.module.iam.aws_iam_role_policy_attachment.lambda_s3_read
}

moved {
  from = module.dev.module.lambda.aws_lambda_function.main
  to   = module.main.module.lambda.aws_lambda_function.main
}

moved {
  from = module.dev.module.lambda.aws_lambda_function_url.main
  to   = module.main.module.lambda.aws_lambda_function_url.main
}

moved {
  from = module.dev.module.s3.aws_s3_bucket.static_assets
  to   = module.main.module.s3.aws_s3_bucket.static_assets
}
# live/prod/moved.tf
moved {
  from = module.prod.module.ddb.aws_dynamodb_table.asset_metadata
  to   = module.main.module.ddb.aws_dynamodb_table.asset_metadata
}

moved {
  from = module.prod.module.iam.aws_iam_policy.lambda_basic_execution
  to   = module.main.module.iam.aws_iam_policy.lambda_basic_execution
}

moved {
  from = module.prod.module.iam.aws_iam_policy.lambda_dynamodb
  to   = module.main.module.iam.aws_iam_policy.lambda_dynamodb
}

moved {
  from = module.prod.module.iam.aws_iam_policy.lambda_s3_read
  to   = module.main.module.iam.aws_iam_policy.lambda_s3_read
}

moved {
  from = module.prod.module.iam.aws_iam_role.lambda_role
  to   = module.main.module.iam.aws_iam_role.lambda_role
}

moved {
  from = module.prod.module.iam.aws_iam_role_policy_attachment.lambda_basic_execution
  to   = module.main.module.iam.aws_iam_role_policy_attachment.lambda_basic_execution
}

moved {
  from = module.prod.module.iam.aws_iam_role_policy_attachment.lambda_dynamodb
  to   = module.main.module.iam.aws_iam_role_policy_attachment.lambda_dynamodb
}

moved {
  from = module.prod.module.iam.aws_iam_role_policy_attachment.lambda_s3_read
  to   = module.main.module.iam.aws_iam_role_policy_attachment.lambda_s3_read
}

moved {
  from = module.prod.module.lambda.aws_lambda_function.main
  to   = module.main.module.lambda.aws_lambda_function.main
}

moved {
  from = module.prod.module.lambda.aws_lambda_function_url.main
  to   = module.main.module.lambda.aws_lambda_function_url.main
}

moved {
  from = module.prod.module.s3.aws_s3_bucket.static_assets
  to   = module.main.module.s3.aws_s3_bucket.static_assets
}
Next, we’ll update the outputs, just like we did for the main.tf files.
output "lambda_function_url" { description = "URL of the Lambda function" value = module.main.lambda_function_url}
output "s3_bucket_name" { description = "Name of the S3 bucket for static assets" value = module.main.s3_bucket_name}
output "lambda_function_url" { description = "URL of the Lambda function" value = module.main.lambda_function_url}
output "s3_bucket_name" { description = "Name of the S3 bucket for static assets" value = module.main.s3_bucket_name}
Finally, we need to update the .auto.tfvars files to reflect the difference in inputs passed to variables in these two root modules.
# live/prod/.auto.tfvars

# Required: Name used for all resources (must be unique)
name = "best-cat-2025-07-31-01"

# Required: Path to your Lambda function zip file
lambda_zip_file = "../../dist/best-cat.zip"

# live/dev/.auto.tfvars

# Required: Name used for all resources (must be unique)
name = "best-cat-2025-07-31-01-dev"

# Required: Path to your Lambda function zip file
lambda_zip_file = "../../dist/best-cat.zip"
It’s time for some more state manipulation! We currently have a single state file in S3 at s3://[your-state-bucket]/tofu.tfstate. Our plan for splitting the state is to duplicate it for both the dev and prod root modules, then remove the resources we don’t need from each root module’s state.
In addition to having the state in S3, we also have a local copy of state in each root module. Running the tofu init -migrate-state command while the .terraform directory is still populated by a copy of state from the previous configuration of the project will copy that state to the new location for each new root module.
# In live/dev:
$ tofu init -migrate-state

Initializing the backend...

Backend configuration changed!

OpenTofu has detected that the configuration specified for the backend
has changed. OpenTofu will now check for existing state in the backends.

Do you want to copy existing state to the new backend?
  Pre-existing state was found while migrating the previous "s3" backend
  to the newly configured "s3" backend. No existing state was found in the
  newly configured "s3" backend. Do you want to copy this state to the new
  "s3" backend? Enter "yes" to copy and "no" to start with an empty state.

  Enter a value: yes

Successfully configured the backend "s3"! OpenTofu will automatically
use this backend unless the backend configuration changes.
# In live/prod:
$ tofu init -migrate-state

Initializing the backend...

Backend configuration changed!

OpenTofu has detected that the configuration specified for the backend
has changed. OpenTofu will now check for existing state in the backends.

Do you want to copy existing state to the new backend?
  Pre-existing state was found while migrating the previous "s3" backend
  to the newly configured "s3" backend. No existing state was found in the
  newly configured "s3" backend. Do you want to copy this state to the new
  "s3" backend? Enter "yes" to copy and "no" to start with an empty state.

  Enter a value: yes

Successfully configured the backend "s3"! OpenTofu will automatically
use this backend unless the backend configuration changes.
We now have the state in s3://[your-state-bucket]/tofu.tfstate copied to both:

- s3://[your-state-bucket]/dev/tofu.tfstate
- s3://[your-state-bucket]/prod/tofu.tfstate
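If you want to confirm that both copies landed where we expect, you can list the objects in the bucket (this assumes you have the AWS CLI configured; the bucket name is the same placeholder used above):

# List every object in the state bucket, including the per-environment copies
aws s3 ls s3://[your-state-bucket]/ --recursive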
Now we need to remove the resources from state that aren’t relevant in each new root module, so that we don’t deploy prod resources in the dev root module and vice versa.
# live/dev/removed.tf
removed {
  from = module.prod.module.s3.aws_s3_bucket.static_assets

  lifecycle {
    destroy = false
  }
}

removed {
  from = module.prod.module.ddb.aws_dynamodb_table.asset_metadata

  lifecycle {
    destroy = false
  }
}

removed {
  from = module.prod.module.iam.aws_iam_role.lambda_role

  lifecycle {
    destroy = false
  }
}

removed {
  from = module.prod.module.iam.aws_iam_policy.lambda_s3_read

  lifecycle {
    destroy = false
  }
}

removed {
  from = module.prod.module.iam.aws_iam_policy.lambda_dynamodb

  lifecycle {
    destroy = false
  }
}

removed {
  from = module.prod.module.iam.aws_iam_policy.lambda_basic_execution

  lifecycle {
    destroy = false
  }
}

removed {
  from = module.prod.module.iam.aws_iam_role_policy_attachment.lambda_s3_read

  lifecycle {
    destroy = false
  }
}

removed {
  from = module.prod.module.iam.aws_iam_role_policy_attachment.lambda_dynamodb

  lifecycle {
    destroy = false
  }
}

removed {
  from = module.prod.module.iam.aws_iam_role_policy_attachment.lambda_basic_execution

  lifecycle {
    destroy = false
  }
}

removed {
  from = module.prod.module.lambda.aws_lambda_function.main

  lifecycle {
    destroy = false
  }
}

removed {
  from = module.prod.module.lambda.aws_lambda_function_url.main

  lifecycle {
    destroy = false
  }
}
# live/prod/removed.tf
removed {
  from = module.dev.module.s3.aws_s3_bucket.static_assets

  lifecycle {
    destroy = false
  }
}

removed {
  from = module.dev.module.ddb.aws_dynamodb_table.asset_metadata

  lifecycle {
    destroy = false
  }
}

removed {
  from = module.dev.module.iam.aws_iam_role.lambda_role

  lifecycle {
    destroy = false
  }
}

removed {
  from = module.dev.module.iam.aws_iam_policy.lambda_s3_read

  lifecycle {
    destroy = false
  }
}

removed {
  from = module.dev.module.iam.aws_iam_policy.lambda_dynamodb

  lifecycle {
    destroy = false
  }
}

removed {
  from = module.dev.module.iam.aws_iam_policy.lambda_basic_execution

  lifecycle {
    destroy = false
  }
}

removed {
  from = module.dev.module.iam.aws_iam_role_policy_attachment.lambda_s3_read

  lifecycle {
    destroy = false
  }
}

removed {
  from = module.dev.module.iam.aws_iam_role_policy_attachment.lambda_dynamodb

  lifecycle {
    destroy = false
  }
}

removed {
  from = module.dev.module.iam.aws_iam_role_policy_attachment.lambda_basic_execution

  lifecycle {
    destroy = false
  }
}

removed {
  from = module.dev.module.lambda.aws_lambda_function.main

  lifecycle {
    destroy = false
  }
}

removed {
  from = module.dev.module.lambda.aws_lambda_function_url.main

  lifecycle {
    destroy = false
  }
}
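Before applying anything, it can be reassuring to confirm that each copied state file really does still contain both environments’ resources, and therefore that the removed blocks reference addresses that actually exist. Running tofu state list in each root module prints every resource address currently tracked in that module’s state:

# Run in live/dev, then again in live/prod; at this point each copied state
# still tracks both the module.dev.* and module.prod.* addresses
tofu state list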
Project Layout Check-in
At this stage, we should have a live directory that looks like the following (the catalog directory shouldn’t have changed at all):
- live
  - dev
    - backend.tf
    - main.tf
    - moved.tf
    - outputs.tf
    - providers.tf
    - removed.tf
    - vars-optional.tf
    - vars-required.tf
    - versions.tf
  - prod
    - backend.tf
    - main.tf
    - moved.tf
    - outputs.tf
    - providers.tf
    - removed.tf
    - vars-optional.tf
    - vars-required.tf
    - versions.tf
Applying Updates
Running tofu plan in each root module, we should now see that we’re simply going to forget the removed resources instead of destroying them.
$ tofu plan

...

Plan: 0 to add, 1 to change, 0 to destroy, 11 to forget.

...
Let’s apply both dev and prod to finalize the moves and removals.
# In live/dev:
$ tofu apply

...

Apply complete! Resources: 0 added, 1 changed, 0 destroyed, 11 forgotten.

...

# In live/prod:
$ tofu apply

...

Apply complete! Resources: 0 added, 1 changed, 0 destroyed, 11 forgotten.

...
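Each root module’s state should now only track its own resources, which you can confirm with another round of tofu state list (expect only module.main.* addresses in each). The original state object at s3://[your-state-bucket]/tofu.tfstate is no longer referenced by either backend, so once you’re confident everything is healthy, you can optionally clean it up (again assuming the AWS CLI is configured):

# Optional: remove the now-unreferenced original state object
aws s3 rm s3://[your-state-bucket]/tofu.tfstate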
Trade-offs
We did it! We successfully broke apart our Terralith using OpenTofu alone. Some organizations get to this stage in their IaC journey and are perfectly happy with managing their infrastructure like this.
You can limit the blast radius of your dev and prod environments this way, and it’s fairly straightforward to switch your working directory to the dev root module when modifying the dev environment, and to the prod root module when modifying the prod environment. This is actually the pattern that Gruntwork initially helped customers achieve early on to make their infrastructure safer and more manageable by teams.
There are, however, some downsides to how we’re managing infrastructure here.
- There’s some annoying boilerplate that’s inconvenient to create and maintain. The following files are identical in each environment, but need to be present just to get OpenTofu to provision the same module:
  - main.tf
  - outputs.tf
  - providers.tf
  - vars-optional.tf
  - vars-required.tf
- The following files are also nearly identical across environments, differing only in a couple of values that aren’t really that interesting:
  - backend.tf
  - .auto.tfvars
- We also don’t have a convenient way to run multiple root modules at once. What if we want to update both dev and prod at once? What if we want to break down the environments further? (A rough workaround is sketched after this list.)
- As you might have guessed, the next step is to introduce Terragrunt to address some of these downsides, and unlock even more capabilities for managing infrastructure at scale.
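To make that multi-root-module pain point concrete: without additional tooling, updating both environments means something like a shell loop over the root modules, run serially from the live directory. A rough sketch, purely as an illustration:

# Apply each root module in turn; each run prompts for approval separately
for env in dev prod; do
  (cd "$env" && tofu apply)
done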
Wrap Up
This is a pivotal moment in this guide. You have successfully started to break down the Terralith!
By migrating your state and refactoring your configuration, you have split your single, high-risk state file into two separate ones: one for dev and one for prod. The primary benefit is safety. You’ve drastically reduced the blast radius: running tofu apply in the dev directory can now only affect development resources, and running tofu apply in the prod directory can only affect production resources. However, this safety has come at the cost of duplication. Your dev and prod directories contain a lot of identical, boilerplate .tf files, and that approach isn’t very scalable. What if you have twice as many environments? What if you have ten times as many? How are you going to handle making all those updates?
Helping customers solve these problems and more at scale is what Terragrunt was designed for; we’ll introduce it next to streamline your workflow.