{"id":26366,"date":"2019-03-26T16:32:40","date_gmt":"2019-03-26T21:32:40","guid":{"rendered":"https:\/\/centricconsulting.com\/?p=26366"},"modified":"2021-12-15T00:15:58","modified_gmt":"2021-12-15T05:15:58","slug":"part-3-building-a-fully-operational-devops-platform-on-aws-using-terraform_devops","status":"publish","type":"post","link":"https:\/\/centricconsulting.com\/blog\/part-3-building-a-fully-operational-devops-platform-on-aws-using-terraform_devops\/","title":{"rendered":"Part 3: Building a DevOps Platform on AWS using Terraform"},"content":{"rendered":"
In\u00a0Part 1<\/a> and Part 2<\/a>\u00a0of the series, we focused on building out the AWS infrastructure (networking, routing, etc.) and DevOps application servers (EC2 instances and configuration, database, Redis, etc.).<\/p>\n In Part 3, I will build additional Terraform scripts that create the Fargate cluster to run our application and an Elastic Container Registry (ECR) to store our application images. This sets us up for the final Part 4, where we will deploy our containerized application to AWS Fargate using the Jenkins scripts and our sample application code.<\/p>\n Let\u2019s get started\u2026<\/strong><\/p>\n This next set of scripts builds out an Elastic Container environment using the AWS Fargate service.<\/p>\n This script creates a CloudWatch log group where our \u201cHello World\u201d Fargate tasks will log all activities related to starting, stopping and configuring the containers.<\/p>\n <\/a><\/p>\n This script creates our Elastic Container Registry (ECR) repository \u201cmyapp-repo,\u201d where Jenkins will push our application images and from which Fargate will pull them.<\/p>\n <\/a><\/p>\n When we later deploy copies of our application into containers, we will need a load balancer to distribute traffic across them. Here we create an Application Load Balancer with an associated listener and target group in which to place our container apps.<\/p>\n Our load balancer will listen on port 80 and forward traffic to the containers on port 80 (HTTP). We will later reference the target group when building out our Fargate container service to ensure our apps are deployed behind this load balancer.<\/p>\n Later, when we build out a Fargate task, we will need to define two (2) roles for the task: an execution role, which ECS uses to pull the container image and write logs, and a task role, under which the application code itself runs.<\/p>\n Here we define those two roles with the required policies and permissions.<\/p>\n <\/a><\/p>\n <\/a><\/p>\n Here we define two security groups. 
The first allows TCP traffic on any port from the public subnet CIDR ranges. This allows our load balancer, which resides in the public subnet, to forward traffic to our target containers.<\/p>\n In a production environment, you would want to tighten this up to specific ports. The second security group will be attached to our load balancer and only allows TCP\/80 traffic to ingress into the load balancer.<\/p>\n <\/a><\/p>\n To run our application in AWS containers, we will use the Fargate launch type of the Elastic Container Service (ECS), which lets us avoid managing the underlying infrastructure by simply deploying our application containers as Fargate tasks. Here we create the Fargate cluster that will manage the infrastructure where our application runs.<\/p>\n <\/a><\/p>\n Now that we have a Fargate cluster defined (above), we can define how our application will be deployed within the cluster. We create an ECS task definition and an ECS service, then point the task at our Elastic Container Registry (ECR) repository URL where our Docker image resides.<\/p>\n The task definition specifies what image to deploy, the resource constraints per task and the role under which to execute the task. The service specifies how many copies of the task to run, the load balancer managing traffic to the tasks, which subnets to deploy to and which Fargate cluster to deploy our tasks into.<\/p>\n The service continuously monitors the tasks to ensure the proper number of tasks is running and will spawn additional tasks as needed to maintain the desired count.<\/p>\n <\/a><\/p>\n To build out the ECS tasks, a template file needs to be created for each Fargate (ECS) task. Create the following template in the \u201ctemplates\u201d sub-folder of your current working directory.<\/p>\n This template file defines the image to use for our container service, which is located in our AWS Elastic Container Registry (ECR). 
It defines the port mappings on the ECS host and how they map to the Docker container. We are mapping two ports: 22 and 80.<\/p>\n The other important configuration is the logging configuration, which directs logs to a specific log group in CloudWatch Logs in our region.<\/p>\n <\/a><\/p>\n Once all the scripts and templates are created, we can once again run terraform plan<\/em><\/strong> and terraform apply<\/em><\/strong> to build the remaining platform components and configure the EC2 instances.<\/p>\n <\/a><\/p>\n Because we are now using a terraform template<\/em><\/strong> in one of the new scripts, Terraform reports an error satisfying plugin requirements. To fix this, you will need to once again run terraform init<\/em><\/strong> to install the Terraform provider for processing templates, then run another plan.<\/p>\n <\/a><\/p>\n Let\u2019s try terraform plan<\/em><\/strong> again with the provider installed. Success\u2026 the output is very long, so I truncated it to show only the tail end.<\/p>\n <\/a><\/p>\n Run another \u2018plan\u2019. You will see all of the new resources that we just scripted. Again, truncated here for brevity.<\/p>\nfg_cloudwatch.tf<\/h4>\n
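The original post presented this script as a screenshot, which is not reproduced here. A minimal sketch of what fg_cloudwatch.tf might contain (the log group name "/ecs/myapp" and the retention period are assumptions):

```hcl
# Log group for the "Hello World" Fargate tasks.
# Name and retention are assumptions, not confirmed by the original screenshot.
resource "aws_cloudwatch_log_group" "myapp" {
  name              = "/ecs/myapp"
  retention_in_days = 14

  tags = {
    Name = "myapp-log-group"
  }
}
```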
fg_ecr.tf<\/h4>\n
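The screenshot of this script is missing, so here is a minimal sketch of what fg_ecr.tf might look like. Only the repository name "myapp-repo" comes from the article; the Terraform resource label and output are assumptions:

```hcl
# ECR repository that Jenkins pushes to and Fargate pulls from.
resource "aws_ecr_repository" "myapp" {
  name = "myapp-repo"
}

# Expose the repository URL so the Jenkins pipeline (Part 4) can tag and push images.
output "ecr_repository_url" {
  value = aws_ecr_repository.myapp.repository_url
}
```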
fg_elb.tf<\/h4>\n
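A sketch of fg_elb.tf consistent with the description above (listener on port 80, target group on port 80/HTTP). The resource labels and the references to the Part 1 VPC and subnets (`aws_vpc.main`, `aws_subnet.public`) are assumptions:

```hcl
# Application Load Balancer in the public subnets built in Part 1 (assumed names).
resource "aws_alb" "myapp" {
  name            = "myapp-alb"
  subnets         = aws_subnet.public.*.id
  security_groups = [aws_security_group.lb.id] # defined in fg_sg.tf
}

# Target group the Fargate tasks will register into.
resource "aws_alb_target_group" "myapp" {
  name        = "myapp-tg"
  port        = 80
  protocol    = "HTTP"
  vpc_id      = aws_vpc.main.id
  target_type = "ip" # required for Fargate (awsvpc) tasks
}

# Listen on TCP/80 and forward everything to the target group.
resource "aws_alb_listener" "myapp" {
  load_balancer_arn = aws_alb.myapp.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_alb_target_group.myapp.arn
  }
}
```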
fg_iam.tf<\/h4>\n
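The two screenshots for fg_iam.tf are missing. A sketch of the two roles described above, an execution role and a task role; role names and the use of the AWS-managed execution policy are assumptions:

```hcl
# Both roles are assumed by the ECS tasks service.
data "aws_iam_policy_document" "ecs_task_assume" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ecs-tasks.amazonaws.com"]
    }
  }
}

# Execution role: lets Fargate pull the image from ECR and write to CloudWatch Logs.
resource "aws_iam_role" "ecs_task_execution" {
  name               = "myapp-ecs-task-execution-role"
  assume_role_policy = data.aws_iam_policy_document.ecs_task_assume.json
}

resource "aws_iam_role_policy_attachment" "ecs_task_execution" {
  role       = aws_iam_role.ecs_task_execution.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}

# Task role: the identity the application code itself runs under.
resource "aws_iam_role" "ecs_task" {
  name               = "myapp-ecs-task-role"
  assume_role_policy = data.aws_iam_policy_document.ecs_task_assume.json
}
```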
fg_sg.tf<\/h4>\n
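A sketch of the two security groups described above. The `aws_vpc.main` reference and the `public_subnet_cidrs` variable are assumed carry-overs from Parts 1 and 2:

```hcl
# Attached to the load balancer: only TCP/80 may ingress.
resource "aws_security_group" "lb" {
  name   = "myapp-lb-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    protocol    = "tcp"
    from_port   = 80
    to_port     = 80
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Attached to the Fargate tasks: any TCP port from the public subnet CIDRs,
# so the ALB can reach the containers. Tighten to specific ports in production.
resource "aws_security_group" "ecs_tasks" {
  name   = "myapp-ecs-tasks-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    protocol    = "tcp"
    from_port   = 0
    to_port     = 65535
    cidr_blocks = var.public_subnet_cidrs
  }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```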
fg_ecs.tf<\/h4>\n
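The cluster definition itself is the simplest script in the set. A sketch of fg_ecs.tf (the cluster name is an assumption):

```hcl
# Fargate clusters are just ECS clusters; the launch type is chosen per service/task.
resource "aws_ecs_cluster" "myapp" {
  name = "myapp-fargate-cluster"
}
```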
fg_ecs_tasks.tf<\/h4>\n
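A sketch of fg_ecs_tasks.tf covering the task definition and service described above. It uses the `template_file` data source, which matches the article's note that `terraform init` must be re-run to install the template provider. Resource labels, the CPU/memory sizes, the desired count of 2, and the `aws_subnet.private` and `var.region` references are assumptions:

```hcl
# Render the container definition template, injecting the ECR image URL.
data "template_file" "hello_world" {
  template = file("templates/hello-world.json.tpl")

  vars = {
    repository_url = aws_ecr_repository.myapp.repository_url
    region         = var.region
  }
}

# What to run: image, per-task resources, and the two roles from fg_iam.tf.
resource "aws_ecs_task_definition" "hello_world" {
  family                   = "hello-world"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"
  execution_role_arn       = aws_iam_role.ecs_task_execution.arn
  task_role_arn            = aws_iam_role.ecs_task.arn
  container_definitions    = data.template_file.hello_world.rendered
}

# How to run it: copy count, cluster, subnets, and load balancer registration.
resource "aws_ecs_service" "hello_world" {
  name            = "hello-world"
  cluster         = aws_ecs_cluster.myapp.id
  task_definition = aws_ecs_task_definition.hello_world.arn
  desired_count   = 2
  launch_type     = "FARGATE"

  network_configuration {
    security_groups = [aws_security_group.ecs_tasks.id]
    subnets         = aws_subnet.private.*.id
  }

  load_balancer {
    target_group_arn = aws_alb_target_group.myapp.arn
    container_name   = "hello-world"
    container_port   = 80
  }

  depends_on = [aws_alb_listener.myapp]
}
```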
templates\/hello-world.json.tpl<\/h4>\n
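A sketch of the template file, consistent with the text above: it maps ports 22 and 80 and sends logs to a CloudWatch Logs group via the awslogs driver. The container name, image tag, and log group name are assumptions; `${repository_url}` and `${region}` are placeholders filled in by Terraform when the template is rendered:

```json
[
  {
    "name": "hello-world",
    "image": "${repository_url}:latest",
    "essential": true,
    "portMappings": [
      { "containerPort": 22, "hostPort": 22 },
      { "containerPort": 80, "hostPort": 80 }
    ],
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "/ecs/myapp",
        "awslogs-region": "${region}",
        "awslogs-stream-prefix": "hello-world"
      }
    }
  }
]
```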
Terraform plan<\/h4>\n
Terraform init<\/h4>\n
Terraform plan (take 2)<\/h4>\n
Terraform apply<\/h4>\n