Any service the application consumes over the network is termed a backing service. Common examples are a MySQL database, S3 object storage, the Redis in-memory datastore, SES for sending emails and so on. The defining property of backing services is that they can be swapped to suit our needs; for example, SES can be replaced with SendGrid without touching the application code. Our Laravel application will use CloudWatch, RDS, Redis and S3 as backing services.
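To illustrate the point before we provision anything, swapping a backing service in Laravel usually amounts to changing a few environment variables. The values below are placeholders of my own, not real endpoints from this series:

```ini
# Hypothetical .env fragment — all hostnames and names are placeholders.
# Replacing RDS with another MySQL host, or SES with SendGrid, only
# changes these values; no application code changes are required.
DB_CONNECTION=mysql
DB_HOST=myapp.xxxxxx.eu-west-1.rds.amazonaws.com
REDIS_HOST=myapp.xxxxxx.cache.amazonaws.com
CACHE_DRIVER=redis
QUEUE_CONNECTION=redis
MAIL_MAILER=ses
FILESYSTEM_DISK=s3
AWS_BUCKET=myapp-staging-assets
```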
Update the file security_groups.tf, allowing HTTP traffic (port 80) from the load balancer to the Fargate tasks, Redis traffic (port 6379) from the Fargate tasks to the Elasticache cluster, and MySQL traffic (port 3306) from the Fargate tasks to RDS.
resource "aws_security_group" "ecs_tasks" {
  name        = "${var.app_name}-ecs-tasks-sg"
  description = "SG for ECS tasks to allow access only from the ELB"
  vpc_id      = aws_vpc.ecs_vpc.id

  ingress {
    description     = "Allow application access from ${var.app_name}-elb-sg"
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.ecs_elb.id]
  }

  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  tags = {
    Name        = "${var.app_name}-ecs-tasks-sg"
    Environment = var.app_env
  }
}

resource "aws_security_group" "ecs_cache" {
  name        = "${var.app_name}-ecs-cache-sg"
  description = "SG for Elasticache to allow access only from ECS"
  vpc_id      = aws_vpc.ecs_vpc.id

  ingress {
    description     = "Allow ECS access from ${var.app_name}-ecs-tasks-sg"
    from_port       = 6379
    to_port         = 6379
    protocol        = "tcp"
    security_groups = [aws_security_group.ecs_tasks.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name        = "${var.app_name}-ecs-cache-sg"
    Environment = var.app_env
  }
}

resource "aws_security_group" "ecs_rds" {
  name        = "${var.app_name}-rds-sg"
  description = "SG for RDS to allow access only from ECS"
  vpc_id      = aws_vpc.ecs_vpc.id

  ingress {
    description     = "Allow RDS access from ${var.app_name}-ecs-tasks-sg"
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    security_groups = [aws_security_group.ecs_tasks.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name        = "${var.app_name}-rds-sg"
    Environment = var.app_env
  }
}
Create the file cloudwatch.tf to define the CloudWatch log groups. The three groups will store the logs of the web server, the queue worker and the scheduler respectively.
resource "aws_cloudwatch_log_group" "ecs_webserver_logs" {
  name              = "${var.app_name}-${var.app_env}-webserver-logs"
  retention_in_days = 3

  tags = {
    Name        = "${var.app_name}-webserver-logs"
    Environment = var.app_env
  }
}

resource "aws_cloudwatch_log_group" "ecs_worker_logs" {
  name              = "${var.app_name}-${var.app_env}-worker-logs"
  retention_in_days = 3

  tags = {
    Name        = "${var.app_name}-worker-logs"
    Environment = var.app_env
  }
}

resource "aws_cloudwatch_log_group" "ecs_scheduler_logs" {
  name              = "${var.app_name}-${var.app_env}-scheduler-logs"
  retention_in_days = 3

  tags = {
    Name        = "${var.app_name}-scheduler-logs"
    Environment = var.app_env
  }
}
Create the file elasticache.tf and fill in the details for provisioning the Redis cluster.
Create an Elasticache subnet group consisting of all the Fargate public subnets. This group will be applied to our Redis cluster.
resource "aws_elasticache_subnet_group" "ecs_cache_subnet_group" {
  name       = "${var.app_name}-subnet-group"
  subnet_ids = aws_subnet.ecs_public.*.id

  tags = {
    Name        = "${var.app_name}-redis"
    Environment = var.app_env
  }
}
Create the Redis cluster specifying the Elasticache subnet group and the security group.
resource "aws_elasticache_replication_group" "ecs_cache_replication_group" {
  replication_group_id       = "${var.app_name}-replication-group"
  description                = "Elasticache for ${var.app_name}"
  engine                     = "redis"
  engine_version             = "6.2"
  node_type                  = "cache.t4g.micro"
  port                       = 6379
  automatic_failover_enabled = true
  subnet_group_name          = aws_elasticache_subnet_group.ecs_cache_subnet_group.name
  security_group_ids         = [aws_security_group.ecs_cache.id]
  replicas_per_node_group    = 3
  num_node_groups            = 1

  tags = {
    Name        = "${var.app_name}-redis"
    Environment = var.app_env
  }
}
Create the file iam_roles.tf to set up the IAM roles that Fargate will use.
First, create a policy document granting the ECS tasks service permission to assume the role using temporary security credentials.
data "aws_iam_policy_document" "ecs_tasks_execution_role_policy" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ecs-tasks.amazonaws.com"]
    }
  }
}
Create the IAM role.
resource "aws_iam_role" "ecs_tasks_execution_role" {
  name               = "${var.app_name}-ecs-task-execution-role"
  description        = "${var.app_name} ECS tasks execution role"
  assume_role_policy = data.aws_iam_policy_document.ecs_tasks_execution_role_policy.json

  tags = {
    Name        = var.app_name
    Environment = var.app_env
  }
}
Look up the managed policy AmazonEC2ContainerServiceforEC2Role so that its ARN can be referenced. This AWS-managed IAM policy contains the permissions needed to use the full Amazon ECS feature set.
data "aws_iam_policy" "AmazonEC2ContainerServiceforEC2Role" {
  arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}
Attach the above policy to the previously created IAM role.
resource "aws_iam_role_policy_attachment" "ecs_access_policy_attachment" {
  role       = aws_iam_role.ecs_tasks_execution_role.name
  policy_arn = data.aws_iam_policy.AmazonEC2ContainerServiceforEC2Role.arn
}
Create a custom policy document allowing S3 access from the Laravel application running in Fargate.
data "aws_iam_policy_document" "ecs_php_s3_policy_document" {
  statement {
    actions = [
      "s3:ListBucket",
      "s3:PutObject",
      "s3:PutObjectAcl",
      "s3:GetObject",
      "s3:GetObjectAcl",
      "s3:DeleteObject",
    ]
    resources = [
      aws_s3_bucket.ecs_s3.arn,
      "${aws_s3_bucket.ecs_s3.arn}/*",
    ]
  }
}
Create the S3 access policy.
resource "aws_iam_policy" "ecs_php_s3_policy" {
  name        = "${var.app_name}-ecs-s3-policy"
  description = "PHP access to S3"
  policy      = data.aws_iam_policy_document.ecs_php_s3_policy_document.json
}
Attach the policy for S3 access to the previously created IAM role.
resource "aws_iam_role_policy_attachment" "web_ec2_s3_policy_attachment" {
  role       = aws_iam_role.ecs_tasks_execution_role.name
  policy_arn = aws_iam_policy.ecs_php_s3_policy.arn
}
Create the file ssm.tf to generate the RDS master password and store it in SSM Parameter Store.
resource "random_password" "db_generated_password" {
  length           = 16
  special          = true
  override_special = "*-_"
}

resource "aws_ssm_parameter" "db_password" {
  name        = "/${var.app_name}/${var.app_env}/database/password/master"
  description = "MySQL master password"
  type        = "SecureString"
  value       = random_password.db_generated_password.result

  tags = {
    Name        = "${var.app_name}-rds-master-password"
    Environment = var.app_env
  }
}
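Storing the password as a SecureString parameter means other Terraform configurations (or the ECS task definitions in later parts) can read it back instead of regenerating it. As a sketch, assuming a separate configuration where the resource above is not directly available:

```hcl
# Hypothetical lookup of the stored master password from another
# Terraform configuration. SecureString parameters are decrypted
# transparently, and Terraform marks the value as sensitive.
data "aws_ssm_parameter" "db_master_password" {
  name = "/${var.app_name}/${var.app_env}/database/password/master"
}

# data.aws_ssm_parameter.db_master_password.value now holds the
# plain-text password for use in other resources.
```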
Create the file rds.tf and fill in the details for provisioning the RDS instance. Create an RDS subnet group consisting of all the Fargate public subnets. This group will be applied to our DB instance.
resource "aws_db_subnet_group" "ecs_rds_subnet_group" {
  name       = "${var.app_name}-rds-subnet-group"
  subnet_ids = aws_subnet.ecs_public.*.id

  tags = {
    Name        = "${var.app_name}-rds"
    Environment = var.app_env
  }
}

resource "aws_db_instance" "ecs_rds" {
  allocated_storage           = 10
  engine                      = "mysql"
  engine_version              = "8.0.23"
  instance_class              = "db.t2.micro"
  identifier                  = "${var.app_name}-mysql"
  db_name                     = "litebreezeStaging"
  username                    = "root"
  password                    = random_password.db_generated_password.result
  parameter_group_name        = "default.mysql8.0"
  multi_az                    = false
  publicly_accessible         = true
  skip_final_snapshot         = true
  storage_type                = "gp2"
  backup_window               = "00:00-00:30"
  backup_retention_period     = 7
  maintenance_window          = "Mon:00:30-Mon:01:00"
  allow_major_version_upgrade = false
  vpc_security_group_ids      = [aws_security_group.ecs_rds.id]
  db_subnet_group_name        = aws_db_subnet_group.ecs_rds_subnet_group.name

  tags = {
    Name        = "${var.app_name}-rds"
    Environment = var.app_env
  }
}
Create the file s3.tf
and add the following content for creating the S3 bucket.
resource "aws_s3_bucket" "ecs_s3" {
  bucket = "${var.app_name}-${var.app_env}-assets"
}

resource "aws_s3_bucket_acl" "ecs_s3_acl" {
  bucket = aws_s3_bucket.ecs_s3.id
  acl    = "private"
}

resource "aws_s3_bucket_public_access_block" "ecs_s3_permissions" {
  bucket = aws_s3_bucket.ecs_s3.id

  block_public_acls       = false
  block_public_policy     = true
  ignore_public_acls      = false
  restrict_public_buckets = true
}
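The application containers will eventually need the endpoints of these backing services. One option, sketched here with a file name and output names of my own (not from this series), is to expose them as Terraform outputs so they can be inspected with `terraform output` after applying:

```hcl
# Hypothetical outputs.tf — exposes backing-service endpoints for
# use in the task definitions of later parts.
output "rds_endpoint" {
  value = aws_db_instance.ecs_rds.address
}

output "redis_primary_endpoint" {
  value = aws_elasticache_replication_group.ecs_cache_replication_group.primary_endpoint_address
}

output "s3_bucket_name" {
  value = aws_s3_bucket.ecs_s3.bucket
}
```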
Execute the following command to generate a plan for the above infrastructure.
terraform plan -out plans/backing-services
The command will output:
Terraform will perform the following actions:

  # data.aws_iam_policy_document.ecs_php_s3_policy_document will be read during apply
  # aws_cloudwatch_log_group.ecs_scheduler_logs will be created
  # aws_cloudwatch_log_group.ecs_webserver_logs will be created
  # aws_cloudwatch_log_group.ecs_worker_logs will be created
  # aws_db_instance.ecs_rds will be created
  # aws_db_subnet_group.ecs_rds_subnet_group will be created
  # aws_ecr_repository.ecr_nginx will be created
  # aws_ecr_repository.ecr_php will be created
  # aws_elasticache_replication_group.ecs_cache_replication_group will be created
  # aws_elasticache_subnet_group.ecs_cache_subnet_group will be created
  # aws_iam_policy.ecs_php_s3_policy will be created
  # aws_iam_role.ecs_tasks_execution_role will be created
  # aws_iam_role_policy_attachment.ecs_access_policy_attachment will be created
  # aws_iam_role_policy_attachment.web_ec2_s3_policy_attachment will be created
  # aws_s3_bucket.ecs_s3 will be created
  # aws_s3_bucket_acl.ecs_s3_acl will be created
  # aws_s3_bucket_public_access_block.ecs_s3_permissions will be created
  # aws_security_group.ecs_cache will be created
  # aws_security_group.ecs_rds will be created
  # aws_security_group.ecs_tasks will be created
  # aws_ssm_parameter.db_password will be created
  # random_password.db_generated_password will be created

Plan: 21 to add, 0 to change, 0 to destroy.
Review the plan and apply the changes.
terraform apply plans/backing-services

Apply complete! Resources: 21 added, 0 changed, 0 destroyed.
Commit and push the changes to the repository. All the code for this section is available in this GitHub link.
Part 1 – Getting started with Terraform
Part 2 – Creating the VPC and ALB
Part 3 – Backing services
Part 4 – Configuring the LEMP stack in Docker
Part 5 – ECS