Exploring Terraform on AWS: Launching EC2 and Setting Up a Load Balancer
Over the past few days, I explored Terraform on AWS to understand how to automate cloud resources.
In the previous part, I built a basic setup with a VPC and security groups.
This time, I continued by launching EC2 instances and creating a working Application Load Balancer (ALB).
This part helped me understand how different AWS services connect together using Terraform.
Here’s what I worked on step by step:
- EC2 Instances (Public Bastion Host + Private Instances)
- Elastic IP for static public IP
- SSH provisioning with null_resource
- Application Load Balancer setup
- Target Groups and EC2 Attachments
What Are Terraform Modules?
Before going deeper, I learned that modules in Terraform are like reusable blocks of code.
They make it easier to organize and manage large projects.
Instead of writing the same resource many times, you can call a module, give it some input variables, and Terraform handles the rest.
For example:
- The VPC module automatically creates subnets, route tables, and internet gateways.
- The EC2 module creates instances with tags, keys, and networking already configured.
- The ALB module creates listeners, target groups, and load balancers without writing every single resource manually.
Modules help make the code clean, organized, and easy to reuse in future projects.
As a junior learner, this made it much easier for me to understand complex setups step by step.
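To make that concrete, here is a minimal sketch of what a module call looks like, based on the VPC module (the version and input values are examples, so check the module documentation for your own setup):

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.8.1"   # example version, pin whichever you tested with

  # Inputs the module turns into subnets, route tables, and an internet gateway
  name            = "demo-vpc"
  cidr            = "10.0.0.0/16"
  azs             = ["us-east-1a", "us-east-1b"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
}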
Defining EC2 Variables
Instance Type
variable "instance_type" {
description = "EC2 Instance Type"
type = string
default = "t2.micro"
}
This variable sets the size of the EC2 instance. t2.micro is small and part of the AWS Free Tier, which makes it good for testing and practice.
Instance Key Pair
variable "instance_keypair" {
description = "AWS EC2 Key pair"
type = string
default = "terraform-key"
}
This is the SSH key that lets you connect to your EC2 instance.
You need to create this key in your AWS account before using it in Terraform.
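You can also let Terraform register the key pair for you. Here is a rough sketch using the standard aws_key_pair resource (the public key path is just an example and assumes you generated the key locally):

resource "aws_key_pair" "terraform_key" {
  key_name   = "terraform-key"                             # must match var.instance_keypair
  public_key = file("private-key/terraform-key.pem.pub")   # example path to the local public key
}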
Private Instance Count
variable "private_instance_count" {
description = "Private EC2 Instances Count"
type = number
default = 1
}
This defines how many private EC2 instances will be created.
You can increase the number if you want more servers running in the backend.
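To show where this variable is used, here is a simplified sketch of my private EC2 module block (the module name ec2_private matches the target group attachment later; module.private_sg is a placeholder name for the private instances' security group from the earlier part):

module "ec2_private" {
  source  = "terraform-aws-modules/ec2-instance/aws"
  version = "5.7.0"

  count = var.private_instance_count   # one module instance per private server

  name                   = "${var.environment}-privatevm-${count.index}"
  ami                    = data.aws_ami.amzlinux2.id
  instance_type          = var.instance_type
  key_name               = var.instance_keypair
  subnet_id              = element(module.vpc.private_subnets, count.index)
  vpc_security_group_ids = [module.private_sg.security_group_id]   # placeholder SG module name
  tags                   = local.common_tags
}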
Launching the Public Bastion Host
module "ec2_public" {
source = "terraform-aws-modules/ec2-instance/aws"
version = "5.7.0"
name = "${var.environment}-BastionHost"
ami = data.aws_ami.amzlinux2.id
instance_type = var.instance_type
key_name = var.instance_keypair
subnet_id = module.vpc.public_subnets[0]
vpc_security_group_ids = [module.public_bastion_sg.security_group_id]
tags = local.common_tags
}
This block uses the popular terraform-aws-modules EC2 instance module from the Terraform Registry.
It makes launching an EC2 instance much easier because you don’t have to define each resource by hand.
The module already knows what an EC2 instance needs (like AMI, subnet, and security group).
You just pass the correct values, and it builds everything for you.
This EC2 acts as a Bastion Host, which is a secure gateway for connecting to private instances.
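The ami value comes from a data source I defined separately. For completeness, it looks roughly like this (the filters assume the Amazon Linux 2 AMI naming scheme):

data "aws_ami" "amzlinux2" {
  most_recent = true
  owners      = ["amazon"]

  # Match the Amazon Linux 2 HVM images with gp2 root volumes
  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-gp2"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}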
Assigning Elastic IP
resource "aws_eip" "bastion_eip" {
instance = module.ec2_public.id
domain = "vpc"
tags = local.common_tags
}
The Elastic IP (EIP) gives a static public IP to the bastion host.
This means the IP will not change after a restart, which is very helpful for SSH access or firewall rules.
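To see that address right after terraform apply, a small output block helps (the output name is my own choice):

output "bastion_eip" {
  description = "Elastic IP attached to the bastion host"
  value       = aws_eip.bastion_eip.public_ip
}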
Remote Provisioning with null_resource
resource "null_resource" "name" {
connection {
type = "ssh"
host = aws_eip.bastion_eip.public_ip
user = "ec2-user"
private_key = file("private-key/terraform-key.pem")
}
provisioner "file" {
source = "private-key/terraform-key.pem"
destination = "/home/ec2-user/terraform-key.pem"
}
provisioner "remote-exec" {
inline = [
"sudo chmod 400 /home/ec2-user/terraform-key.pem",
"echo VPC created with ID: ${module.vpc.vpc_id} >> creation-log.txt"
]
}
}
The null_resource, together with its connection block and provisioners, runs commands inside the EC2 instance after it is created.
It connects with SSH, copies a file, and runs basic shell commands.
I used it to log when the VPC was created.
Setting Up Application Load Balancer (ALB)
module "alb" {
source = "terraform-aws-modules/alb/aws"
version = "9.11.0"
name = "${local.name}-alb"
load_balancer_type = "application"
vpc_id = module.vpc.vpc_id
subnets = module.vpc.public_subnets
security_groups = [module.loadbalancer_sg.security_group_id]
}
This block uses another Terraform Registry module, this time for the Application Load Balancer.
It automatically creates the ALB, listeners, and target groups.
You don’t need to write each resource — the module handles it based on the input values.
It runs on public subnets and forwards traffic to backend EC2 instances.
Listener and Target Group
The listener waits for incoming web requests and forwards them to the target group.
The target group checks if EC2 instances are healthy before sending traffic to them.
For example, I set health checks on /app1/index.html, so the ALB can know if an instance is running fine.
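The alb module block shown earlier is trimmed; the listener and target group are passed as extra inputs on that same block. Here is a fuller sketch, assuming the 9.x map-based syntax of the module (mytg1 matches the attachment below, and the health check values are my examples):

module "alb" {
  source  = "terraform-aws-modules/alb/aws"
  version = "9.11.0"

  name               = "${local.name}-alb"
  load_balancer_type = "application"
  vpc_id             = module.vpc.vpc_id
  subnets            = module.vpc.public_subnets
  security_groups    = [module.loadbalancer_sg.security_group_id]

  # Listener: accept HTTP on port 80 and forward to the target group below
  listeners = {
    http = {
      port     = 80
      protocol = "HTTP"
      forward = {
        target_group_key = "mytg1"
      }
    }
  }

  # Target group: health-check instances on /app1/index.html
  target_groups = {
    mytg1 = {
      name_prefix       = "mytg1-"
      protocol          = "HTTP"
      port              = 80
      target_type       = "instance"
      create_attachment = false   # EC2 instances are attached separately below
      health_check = {
        enabled = true
        path    = "/app1/index.html"
        matcher = "200-399"
      }
    }
  }
}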
Attaching EC2 Instances to the ALB
resource "aws_lb_target_group_attachment" "mytg1" {
for_each = { for k, v in module.ec2_private: k => v }
target_group_arn = module.alb.target_groups["mytg1"].arn
target_id = each.value.id
port = 80
}
This part connects each private EC2 instance to the ALB target group.
It helps the load balancer send traffic to backend servers automatically.
What I Learned as a Junior Engineer
While exploring Terraform on AWS, I learned:
- How Terraform modules make the code cleaner and easier to reuse
- How to launch EC2 instances automatically
- How to attach an Elastic IP for stable SSH access
- How to connect to EC2 remotely using Terraform
- How to create and use an Application Load Balancer
- How to connect backend EC2s to the ALB using a target group
This project helped me understand how Infrastructure as Code (IaC) works and how different AWS services interact.
Next, I plan to try adding HTTPS redirection, path-based routing, and Route 53 DNS integration to make the setup more complete.