Elastic Container Service (ECS) is a managed AWS container orchestration service that typically runs Docker containers, letting developers launch containers while keeping container instances isolated from each other.
ECS sits on top of Docker, allowing you to launch, set up, and monitor your Docker containers on your ECS cluster.
You need an infrastructure to run Docker containers. There are two options for it:
- Serverless option (with Fargate)
- Self-managed option (with EC2) – this comes with EC2 instances, which you rent and pay for by the hour.
ECS supports auto-scaling, which lets you handle variable volume. As your traffic rises and falls, you can set up auto-scaling on a specific metric (e.g., traffic, memory utilization, CPU utilization). Therefore, you can bring the number of containers up or down in response to fluctuations in the selected metric. This ensures your service always has enough infrastructure to serve the incoming traffic.
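As a concrete sketch, service auto-scaling can also be configured from the CLI with Application Auto Scaling. The cluster and service names, capacity limits, and the 70% CPU target below are illustrative placeholders, not values from this tutorial:

```shell
# Register the ECS service's desired count as a scalable target
aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/my-cluster/my-service \
  --min-capacity 1 \
  --max-capacity 4

# Track average CPU utilization and scale the task count to hold it near 70%
aws application-autoscaling put-scaling-policy \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/my-cluster/my-service \
  --policy-name cpu-target-tracking \
  --policy-type TargetTrackingScaling \
  --target-tracking-scaling-policy-configuration 'TargetValue=70.0,PredefinedMetricSpecification={PredefinedMetricType=ECSServiceAverageCPUUtilization}'
```

With a target-tracking policy, ECS adds tasks when the metric rises above the target and removes them when it falls below, within the min/max bounds.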
ECS is great for ad hoc jobs or full-scale services that require a certain number of containers up and running. Using ECS with Docker is also very cost-effective, as you can host multiple containers on a single compute resource.
For example, when using EC2, you can have multiple Docker tasks and containers running on a single instance. It's cost-effective because you can better utilize the available resources rather than spending them on operating system overhead.
Before You Begin
Before starting, you should have an AWS account with an IAM identity and privileges to manage the following services:
- EC2
- ECS
- ECR
- VPC
- Load balancer (EC2 feature)
- IAM
- S3
Elastic Container Registry
ECR is where we will later upload the Docker images using CodeBuild. The configuration is straightforward: search for ECR and create a repository. Then select whether the repository should be public or private and add the repository name. The repository name will be used in the buildspec file to identify where to upload the image.
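The same repository can be created from the CLI. The name `nestjs-graphql` here is an assumption that mirrors the container name used later in the buildspec; substitute your own repository name:

```shell
# Create a private ECR repository to receive the images built by CodeBuild
aws ecr create-repository --repository-name nestjs-graphql
```

The command's output includes the `repositoryUri`, which is the value the buildspec refers to as the ECR repo URI.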
Dockerfile and Buildspec
The first step is to create a Dockerfile so that CodeBuild can build the image and upload it to Elastic Container Registry.
```dockerfile
FROM node:12.22-alpine
WORKDIR /app
COPY package.json yarn.lock /app
RUN yarn install
COPY . /app
EXPOSE 3000
CMD npm run start:prod
```
We also need a buildspec file. It gives CodeBuild instructions on how to log in, run the build, and upload the Docker image to ECR. The build will run whenever we commit code to a specific branch.
```yaml
version: 0.2
phases:
  install:
    runtime-versions:
      docker: 18
  pre_build:
    commands:
      - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email)
      - REPOSITORY_URI={ECR repo URI}
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=${COMMIT_HASH:=latest}
  build:
    commands:
      - docker build -t $REPOSITORY_URI:latest -f Dockerfile .
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      - printf '[{"name":"nestjs-graphql","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
artifacts:
  files: imagedefinitions.json
```
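The tagging logic in `pre_build` can be sketched in isolation. This minimal shell example (the commit SHA is a made-up value) shows how the image tag is derived from the first seven characters of the CodeBuild source version, falling back to `latest` if the hash is empty:

```shell
# Simulated CodeBuild-provided commit SHA (illustrative value)
CODEBUILD_RESOLVED_SOURCE_VERSION="f4ca9d1e8b2c7a3d5e6f7a8b9c0d1e2f3a4b5c6d"
# Take the first 7 characters of the commit SHA as the short hash
COMMIT_HASH=$(echo "$CODEBUILD_RESOLVED_SOURCE_VERSION" | cut -c 1-7)
# Use the short hash as the tag, or "latest" if it is unset/empty
IMAGE_TAG=${COMMIT_HASH:=latest}
echo "$IMAGE_TAG"
```

One caveat: `aws ecr get-login` exists only in AWS CLI v1; if your build image ships AWS CLI v2, the login line would instead be `aws ecr get-login-password | docker login --username AWS --password-stdin <registry URI>`.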
Add both the Dockerfile and the buildspec file to the root of the repository.
CodeBuild
Go to CodeBuild and select Create build project.
1. Project Configuration
In project configuration, add the Project name.
2. Source
Define the source of the repository. In our case, the source provider is GitHub. Also add the Repository URL and the name of the branch in the source version.
3. Environment
We will use a Managed image with the Ubuntu operating system for the environment image.
We also need to enable the Privileged Flag for building Docker images. If this flag is not enabled, we will get the following error.
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Choose New service role and add a role name for now. We will add a few more permissions to the newly created role later.
We also need a few Additional configurations, like compute and Environment variables. For the compute option, we will choose 3 GB memory, 2 vCPUs. You can choose more RAM and CPU based on your needs.
We can add Environment variables in AWS Secrets Manager and use them in the build.
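Buildspec files can pull values from Secrets Manager directly via the `env` section. In this sketch, `DB_PASSWORD` and the secret name `prod/app` with JSON key `dbPassword` are hypothetical examples, not values from this project:

```yaml
env:
  secrets-manager:
    # Exposes the secret's dbPassword field as $DB_PASSWORD during the build
    # ("prod/app" and "dbPassword" are placeholder names)
    DB_PASSWORD: "prod/app:dbPassword"
```

This keeps credentials out of the build project configuration and the repository.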
4. Buildspec
Choose the Use a buildspec file option and specify a name if you have multiple buildspec files for staging and production.
5. Batch Configuration
We will not use batch configuration.
6. Artifacts
We will not add any artifacts, but we can use an Encryption key, found under Additional configuration. Go to the Key Management Service and add the key ARN so that the build can access configurations in the S3 bucket.
7. Logs
We are only enabling CloudWatch logs. After all the configurations, click Create build project.
8. Permissions
After creating a new service role in the CodeBuild environment, add:
- SecretsManagerReadWrite to access environment variables and other secrets
- AmazonS3ReadOnlyAccess, which stores project configuration
- CodeBuildAccessToECR to upload build images to Elastic Container Registry
- AmazonEC2ContainerRegistryFullAccess to deploy the image to EC2 instances using Elastic Container Service
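The AWS managed policies can also be attached from the CLI. The role name below is a placeholder for the service role created earlier; CodeBuildAccessToECR is a customer-managed policy, so its ARN is account-specific and omitted here:

```shell
# Attach the managed policies to the CodeBuild service role (placeholder name)
aws iam attach-role-policy \
  --role-name codebuild-service-role \
  --policy-arn arn:aws:iam::aws:policy/SecretsManagerReadWrite
aws iam attach-role-policy \
  --role-name codebuild-service-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
aws iam attach-role-policy \
  --role-name codebuild-service-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess
```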
Elastic Container Service Configuration
Go to Elastic Container Service and create a new cluster. We will create relevant services and tasks in the cluster.
1. Task Definition
A Task Definition defines which containers are present in the task and how they will communicate with each other. Create a new Task Definition. We will be selecting Fargate as a launch type because it's an AWS-managed infrastructure and has no EC2 instance to manage.
Configure task and container definitions:
- Add the definition name.
- Task role, an optional IAM role that tasks can use to make API requests to other AWS services.
- Network mode would be by default awsvpc for ECS Fargate.
- Task execution role, the IAM role that authorizes Amazon ECS to pull private images and publish logs for your task. This takes the place of the EC2 instance role when running tasks.
For the task size, we will choose 2 GB RAM and 0.5 vCPU. The memory and CPU should match the needs of the application.
Add a container: set the container name (specified in our buildspec file) and the image URL from ECR.
Port mappings allow containers to access ports on the host container instance to send or receive traffic. This should be the port we exposed in the Dockerfile.
We don’t need other configuration options for this setup. After this, create the task definition.
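For reference, the same task definition can be expressed as JSON and registered with `aws ecs register-task-definition --cli-input-json file://task-definition.json`. The account ID, region, role name, and task family below are placeholders; the container name `nestjs-graphql` and port 3000 come from the buildspec and Dockerfile above:

```json
{
  "family": "nestjs-graphql-task",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "512",
  "memory": "2048",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "nestjs-graphql",
      "image": "123456789012.dkr.ecr.eu-central-1.amazonaws.com/nestjs-graphql:latest",
      "portMappings": [{ "containerPort": 3000, "protocol": "tcp" }],
      "essential": true
    }
  ]
}
```

Note that for Fargate, `cpu` and `memory` are set at the task level and must be one of the supported combinations (512 CPU units with 2048 MB matches the 0.5 vCPU / 2 GB chosen above).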
2. Create Service
Go to the created cluster in ECS and create a new service. Service helps configure copies of the task definition we want to run and maintain in a cluster.
Step 1: Configure service
- Select Fargate as the launch type for running the task
- Specify the task definition created in the previous step
- Choose the cluster
- Add the service name
- The number of tasks would be 1 for this project
The rest of the configurations should be default for step 1.
Step 2: Configure network
Our containers will need external access to communicate with the ECS external endpoint. We also want to run our service and task definition in a private network. Therefore, we will configure a Virtual Private Cloud using the AWS documentation on Creating a VPC with Public and Private Subnets for your cluster, and then create:
- Virtual Private Cloud
- Public subnets in the VPC that the task scheduler should consider for placement
- Security group – a VPC security group is created by default with port 80 open to the internet, but we must add a Custom TCP rule for port 3000 because our container exposes port 3000.
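The service creation steps above have a CLI equivalent. The cluster, service, task definition, subnet, and security group identifiers below are illustrative placeholders:

```shell
# Run one copy of the task definition as a Fargate service in the VPC
aws ecs create-service \
  --cluster my-cluster \
  --service-name nestjs-graphql-service \
  --task-definition nestjs-graphql-task \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0abc1234],securityGroups=[sg-0abc1234],assignPublicIp=ENABLED}'
```

`assignPublicIp=ENABLED` is needed when tasks run in public subnets so they can reach ECR and other AWS endpoints without a NAT gateway.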
Elastic Load Balancing will help distribute all the incoming traffic between the running tasks. We can configure the load balancer and its target groups in the EC2 load balancing options.
We will create a target group because the load balancer routes requests to the target group and performs health checks on the targets.
Go to Load Balancers > Target Groups > Create target group
Next, specify group details:
- The target type should be IP because it supports routing to multiple network interfaces and IP addresses on the same instance.
- Add a target group name, protocol, and port.
- Select the created VPC that will host the load balancer. We will configure the load balancer after creating the target group.
To register targets, choose a network, specify IPs and defined ports, and create the target group.
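The target group can also be created from the CLI. The name and VPC ID are placeholders; the TCP protocol, port 3000, and `ip` target type match the settings described above:

```shell
# Create an IP-target group on TCP port 3000 for the Fargate tasks
aws elbv2 create-target-group \
  --name nestjs-graphql-tg \
  --protocol TCP \
  --port 3000 \
  --target-type ip \
  --vpc-id vpc-0abc1234
```

When the ECS service is attached to this target group, ECS registers and deregisters task IPs automatically, so manual target registration is only needed for standalone targets.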
We will create the Network Load Balancer because it distributes incoming TCP and UDP traffic across multiple targets such as Amazon EC2 instances, microservices, and containers. When the load balancer receives a connection request, it selects a target based on the protocol and port that are specified in the listener configuration and the routing rule specified as the default action.
- Add load balancer name.
- Select the internet-facing scheme because it routes requests from clients over the internet to targets.
- Select the IP address type that our subnets use, i.e., IPv4.
- Select the Virtual Private Cloud (VPC) for our targets. We will select the same VPC added to our target group.
- Select at least two Availability Zones and one subnet per zone, i.e., eu-central-1a and eu-central-1b. The load balancer routes traffic to targets in these Availability Zones only.
- Add a Listener with the TCP protocol on port 3000, which should forward to the target group we created. The listeners in your load balancer receive matching protocol and port requests and route them based on the default action you specify. You can use a TLS listener to offload the work of encryption and decryption to your load balancer.
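The load balancer and listener steps above map to two CLI calls. The load balancer name and subnet IDs are placeholders, and the ARN arguments must come from the outputs of the earlier create commands:

```shell
# Create an internet-facing Network Load Balancer across two subnets
aws elbv2 create-load-balancer \
  --name nestjs-graphql-nlb \
  --type network \
  --scheme internet-facing \
  --subnets subnet-0abc1234 subnet-0def5678

# Add a TCP:3000 listener that forwards to the target group
# (substitute the ARNs returned by the previous commands)
aws elbv2 create-listener \
  --load-balancer-arn <load-balancer-arn> \
  --protocol TCP \
  --port 3000 \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>
```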
This completes the ECS service and task definition configuration. The last step would be to create a CodePipeline.
CodePipeline
CodePipeline will help automate the software release process. We will add source, CodeBuild, and ECS deployment stages to our CodePipeline.
1. Choose Pipeline Settings
- Add pipeline name
- New Service role
2. Add Source Stage
- Select GitHub (version 2) as a source provider
- Connect to GitHub account
- Add the Repository and branch name; new code committed to that branch will trigger the pipeline
- Enable Start the pipeline on source code change option
3. Add Build Stage
- Select AWS CodeBuild as Build provider because we configured our builds using CodeBuild
- Add Region
- Select the Build project which we created using CodeBuild
4. Add Deploy Stage
- Select Amazon ECS as Deploy provider
- Add the cluster name where we created the service
- Select the service name which has the task definitions we created for the current project
5. Review
Review the configuration and create the Pipeline. After creating the CodePipeline, push some changes into the branch we selected, and the project should automatically build, upload the image to ECR, and then deploy to ECS. After a successful deployment, navigate to the load balancer, copy and paste the DNS name into the browser, and verify that the application deployed correctly. You should be able to see the application running.
This post was published under the JavaScript Community of Experts. Communities of Experts are specialized groups at Modus that consolidate knowledge, document standards, reduce delivery times for clients, and open up growth opportunities for team members.
Modus Create