The next step is to create the VPC and ALB.
VPC, short for virtual private cloud, allows you to launch AWS services in a logically isolated virtual network.
We start by creating a file called variables.tf that will hold the variable values for the application. Fill the file with the following content:
variable "app_name" {
  type    = string
  default = "litebreeze-ecs"
}

variable "app_env" {
  type    = string
  default = "staging"
}

variable "aws_region" {
  type        = string
  description = "AWS region where the infrastructure will be created"
  default     = "eu-west-1"
}

variable "vpc_cidr" {
  description = "IP address range to use in VPC"
  default     = "172.16.0.0/16"
}

variable "az_count" {
  description = "Number of Availability zones"
  default     = "3"
}

variable "subnet_count" {
  description = "Number of subnets"
  default     = "3"
}
The variables are explained in their description.
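The defaults above can also be overridden per environment. Terraform automatically loads a file named terraform.tfvars if one is present; a hypothetical override file (the values below are illustrative, not part of this tutorial's setup) could look like:

```hcl
# terraform.tfvars — optional, hypothetical per-environment overrides
app_env    = "production"
aws_region = "eu-north-1"
```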
Next, we create a file called vpc.tf; we will be launching the Fargate cluster inside this VPC. Start with a data source that fetches only the available Availability Zones within the region:
data "aws_availability_zones" "ecs_az" {
  state = "available"

  filter {
    name   = "opt-in-status"
    values = ["opt-in-not-required"]
  }
}
Create a VPC in the AWS cloud to host our services:
resource "aws_vpc" "ecs_vpc" {
  cidr_block           = var.vpc_cidr
  instance_tenancy     = "default"
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name        = "${var.app_name}-vpc"
    Environment = var.app_env
  }
}
Create an internet gateway allowing resources in public subnets to access the outside world.
resource "aws_internet_gateway" "ecs_internet_gw" {
  vpc_id = aws_vpc.ecs_vpc.id

  tags = {
    Name        = "${var.app_name}-internet-gw"
    Environment = var.app_env
  }
}
Create public subnets in different availability zones.
resource "aws_subnet" "ecs_public" {
  count                   = var.subnet_count
  vpc_id                  = aws_vpc.ecs_vpc.id
  cidr_block              = cidrsubnet(var.vpc_cidr, 2, count.index)
  availability_zone       = data.aws_availability_zones.ecs_az.names[count.index % var.az_count]
  map_public_ip_on_launch = true

  tags = {
    Name        = "${var.app_name}-public-sn-${count.index}"
    Environment = var.app_env
  }
}
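The cidrsubnet function derives each subnet's range by adding newbits (2 here) to the VPC prefix length, so the /16 VPC range is carved into /18 blocks. With the default vpc_cidr of 172.16.0.0/16, the three subnets work out as shown in this hypothetical locals block (not part of the tutorial's files; you can verify the values yourself in terraform console):

```hcl
# Illustration only: the literal CIDR blocks each subnet receives.
locals {
  subnet_cidrs = [
    cidrsubnet("172.16.0.0/16", 2, 0), # "172.16.0.0/18"
    cidrsubnet("172.16.0.0/16", 2, 1), # "172.16.64.0/18"
    cidrsubnet("172.16.0.0/16", 2, 2), # "172.16.128.0/18"
  ]
}
```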
Create a VPC routing table.
resource "aws_route_table" "ecs_rtb_public" {
  vpc_id = aws_vpc.ecs_vpc.id

  tags = {
    Name        = "${var.app_name}-public-rtb"
    Environment = var.app_env
  }
}
Add a default route through the internet gateway to the public route table:
resource "aws_route" "ecs_route_public" {
  route_table_id         = aws_route_table.ecs_rtb_public.id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.ecs_internet_gw.id
}
Associate each public subnet with the public route table:
resource "aws_route_table_association" "ecs_rtba_public" {
  count          = var.subnet_count
  subnet_id      = aws_subnet.ecs_public[count.index].id
  route_table_id = aws_route_table.ecs_rtb_public.id
}
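If you want the generated IDs printed after terraform apply, you could additionally declare outputs — an optional sketch (the file name and output names below are my own, not part of the tutorial):

```hcl
# outputs.tf — optional; output names are illustrative
output "vpc_id" {
  value = aws_vpc.ecs_vpc.id
}

output "public_subnet_ids" {
  value = aws_subnet.ecs_public[*].id
}
```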
Create a directory called plans inside the Terraform folder, then execute the following command to generate a plan for the above infrastructure:
terraform plan -out plans/vpc
The command will output:
Terraform will perform the following actions:

  # aws_internet_gateway.ecs_internet_gw will be created
  # aws_route.ecs_route_public will be created
  # aws_route_table.ecs_rtb_public will be created
  # aws_subnet.ecs_public[0] will be created
  # aws_subnet.ecs_public[1] will be created
  # aws_subnet.ecs_public[2] will be created
  # aws_vpc.ecs_vpc will be created

Plan: 7 to add, 0 to change, 0 to destroy.
Review the plan and apply the changes.
terraform apply plans/vpc

Apply complete! Resources: 7 added, 0 changed, 0 destroyed.
Commit and push the changes to the repository. All the code related to the above section is available in this GitHub link.
We will be using an Application Load Balancer (ALB) to direct traffic to the web services.
Create a file called security_groups.tf and add the following ALB security group to allow only select traffic to reach the Fargate services:
resource "aws_security_group" "ecs_elb" {
  name        = "${var.app_name}-elb-sg"
  description = "SG for ELB to access the ECS"
  vpc_id      = aws_vpc.ecs_vpc.id

  ingress {
    description      = "Allow HTTP access from anywhere"
    from_port        = 80
    to_port          = 80
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  tags = {
    Name        = "${var.app_name}-elb-sg"
    Environment = var.app_env
  }
}
Create a file called alb.tf and add the following content to create the new ALB, utilizing the security group created in the previous file:
resource "aws_alb" "ecs_alb" {
  name               = "${var.app_name}-application-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.ecs_elb.id]
  subnets            = aws_subnet.ecs_public[*].id

  tags = {
    Name        = "${var.app_name}-application-elb"
    Environment = var.app_env
  }
}
Create a new target group for the ALB HTTP traffic along with a health check to determine if the container is working as expected.
resource "aws_lb_target_group" "ecs_alb_tg" {
  name        = "${var.app_name}-alb-tg"
  port        = 80
  protocol    = "HTTP"
  vpc_id      = aws_vpc.ecs_vpc.id
  target_type = "ip"

  health_check {
    healthy_threshold   = 3
    interval            = 30
    protocol            = "HTTP"
    matcher             = "200"
    timeout             = 3
    path                = "/health_check"
    unhealthy_threshold = 2
  }

  tags = {
    Name        = "${var.app_name}-alb-tg"
    Environment = var.app_env
  }
}
Register the ALB HTTP listener.
resource "aws_lb_listener" "ecs_alb_http_listener" {
  load_balancer_arn = aws_alb.ecs_alb.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.ecs_alb_tg.arn
  }

  tags = {
    Name        = "${var.app_name}-http-listener"
    Environment = var.app_env
  }
}
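To reach the load balancer after it is created, you may optionally expose its DNS name as an output — a short sketch (the output name is illustrative, not part of the tutorial's files):

```hcl
# Optional: print the ALB endpoint after `terraform apply`
output "alb_dns_name" {
  value = aws_alb.ecs_alb.dns_name
}
```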
Execute the following command to generate a plan for the above infrastructure.
terraform plan -out plans/alb

The command will output:

Terraform will perform the following actions:

  # aws_alb.ecs_alb will be created
  # aws_lb_listener.ecs_alb_http_listener will be created
  # aws_lb_target_group.ecs_alb_tg will be created
  # aws_security_group.ecs_elb will be created

Plan: 4 to add, 0 to change, 0 to destroy.
Review the plan and apply the changes.
terraform apply plans/alb

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
Commit and push the changes to the repository. All the code related to the above section is available in this GitHub link.
Part 1 – Getting started with Terraform
Part 2 – Creating the VPC and ALB
Part 3 – Backing services
Part 4 – Configuring the LEMP stack in Docker
Part 5 – ECS