Content posted here with the permission of the author Anil Kumar Maurya, who is currently employed at Josh Software. Original post available here.
This blog is the second part of the series. If you have not already read the first post, I recommend going through it first; there I explained why we chose a service-based architecture and how Docker helped us set up and start the application on a local machine with just one command.
In this post we will see how to deploy our app in multiple Docker containers using Amazon's ECS.
Why deploy a container for each service
Deploying all services on a single machine is possible, but we should refrain from it. If we deploy all services on a single machine, we lose most of the benefits of a service-based architecture (apart from a manageable, easy-to-upgrade codebase).
The two major benefits of deploying a container per service are:
- Isolation of Crash
- Independent Scaling
Isolation of Crash:
If one service in your application is crashing, then only that part of your application goes down. The rest of your application continues to work properly.
Independent Scaling:
The amount of infrastructure and the number of instances of each service can be scaled up and down independently.
Why we chose Amazon’s ECS
We mostly use AWS for deploying our applications, so our first preference was a service provided by Amazon AWS for deploying containers.
For container deployment, Amazon provides two services to choose from:
- EKS (Elastic Container Service for Kubernetes)
- ECS (Elastic Container Service)
Amazon charges $0.20 per hour for each EKS cluster. We did not want to pay for a service that does not directly impact our business, so we looked for alternatives.
Amazon does not charge for ECS itself; we pay only for the EC2 instances that are running. Another advantage of ECS is its learning curve, which is much gentler than that of EKS.
Therefore ECS is optimal for our use case.
Before we start using ECS, we should be familiar with its components.
Components of ECS
- Task Definition
- Task
- Service
- Cluster
- ECR
Task Definition:
A task definition is like a blueprint for your application. It tells Amazon ECS which Docker image to use for the containers, how many containers the task runs, and the resource allocation for each container. (A minimal example is sketched at the end of this section.)
Task:
A task is an instance of a task definition: a running container (or set of containers) with the settings defined in the task definition.
Service:
A service launches and maintains copies of the task definition in your cluster. For example, by running an application as a service, Amazon ECS will auto-recover any stopped tasks and maintain the number of copies you specify.
Cluster:
A logical group of EC2 instances. When an instance launches, the ecs-agent software on the server registers the instance with an ECS cluster.
ECR:
Amazon Elastic Container Registry (ECR) is a fully-managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images. Amazon ECR hosts your images in a highly available and scalable architecture, allowing you to reliably deploy containers for your applications.
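To make the task definition concept concrete, here is a minimal sketch of registering one through the AWS CLI. The family name, image, and resource values are illustrative placeholders, not the ones used later in this post:

# Register a minimal task definition with a single container.
# Family name, image and resource limits are placeholders.
aws ecs register-task-definition --cli-input-json '{
  "family": "sample-web",
  "containerDefinitions": [
    {
      "name": "sample-web",
      "image": "nginx:alpine",
      "memory": 256,
      "cpu": 128,
      "essential": true,
      "portMappings": [
        { "containerPort": 80, "hostPort": 0, "protocol": "tcp" }
      ]
    }
  ]
}'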
Launch Types:
Amazon ECS has two modes: Fargate launch type and EC2 launch type
- Fargate
- EC2
Fargate:
AWS Fargate is a compute engine for Amazon ECS that allows you to run containers without having to manage servers or clusters. With AWS Fargate, you no longer have to provision, configure, and scale clusters of virtual machines to run containers. All you have to do is package your application in containers, specify the CPU and memory requirements, define networking and IAM policies, and launch the application.
EC2:
EC2 launch type allows you to have server-level, more granular control over the infrastructure that runs your container applications. Amazon ECS keeps track of all the CPU, memory and other resources in your cluster, and also finds the best server for a container to run on based on your specified resource requirements. You are responsible for provisioning, patching, and scaling clusters of servers. You can decide which type of server to use, which applications and how many containers to run in a cluster to optimize utilization.
Choosing between Fargate & EC2
Fargate is more expensive than running and operating an EC2 instance yourself, even though Fargate prices were recently reduced by 50%. To start with, we wanted more control over our infrastructure, so we chose EC2 over Fargate. Maybe we will switch to Fargate in the future, when its cost is closer to EC2 and we have more experience managing ECS infrastructure.
Create ECS Cluster
Go to the Amazon ECS service and create a new cluster:




In a few minutes, your cluster will be created and you will see it under the ECS service.
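If you prefer the command line over the console, the same cluster can be created with the AWS CLI (the cluster name below is just an example). Note that this only creates the empty logical cluster; unlike the console wizard, it does not launch EC2 container instances for you, so those still have to be launched and registered to the cluster separately.

# Create an ECS cluster (name is an example) and verify it exists
aws ecs create-cluster --cluster-name my-cluster
aws ecs list-clusters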
Traefik (Load Balance & Proxy Server)
Traefik (open source & production proven) is a modern HTTP reverse proxy and load balancer that makes deploying microservices easy. Traefik integrates with your existing infrastructure components and configures itself automatically and dynamically. Traefik listens to your service registry/orchestrator API and instantly generates the routes so your microservices are connected to the outside world.
Traefik Overview

Traefik Web UI
Traefik provides a web UI that shows all running containers and the paths on which they are served. Example:

Deploy Traefik on ECS
Create a task definition for Traefik: click New Task Definition.


Click on Add Container.
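Since the screenshots are not reproduced here, a rough guide to the container settings: the container uses the official traefik image, maps ports 80 and 8080, and receives the ECS provider flags through its command. A local docker run equivalent of that configuration, assuming Traefik 1.x flag names, with cluster name and region as placeholders:

# Rough sketch of the Traefik container settings, shown as a local docker run.
# On ECS the same flags go into the Command field of the container definition;
# cluster name and region are placeholders.
docker run -d -p 80:80 -p 8080:8080 traefik:1.7 \
  --api \
  --ecs \
  --ecs.clusters=my-cluster \
  --ecs.region=us-east-1 \
  --ecs.exposedByDefault=false

On ECS, Traefik picks up AWS credentials from the instance (or task) IAM role, which needs read access to the ECS and EC2 describe/list APIs.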

Click Create to create the task definition.
Now we will create a service for running the Traefik task.





Click on Create Service. After the service is created, it will start running a task for the given task definition.
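The console steps above can also be expressed with the AWS CLI; the cluster, service, and task definition names below are placeholders for whatever you chose in the previous steps:

# Create an ECS service that keeps one copy of the Traefik task running
# (cluster and task definition names are placeholders)
aws ecs create-service \
  --cluster my-cluster \
  --service-name traefik \
  --task-definition traefik \
  --desired-count 1 \
  --launch-type EC2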
Edit the security group's inbound rules and open the ports Traefik listens on:
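The original screenshot of the rule is not reproduced here; based on the next step, the dashboard port 8080 must be reachable, and port 80 is the entrypoint through which the services themselves are routed. A sketch with the AWS CLI, where the security group ID and CIDR are placeholders:

# Open Traefik's ports on the instance's security group
# (group ID and CIDR are placeholders)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 8080 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 80 --cidr 0.0.0.0/0

Opening the dashboard to 0.0.0.0/0 is fine for a quick test, but you would normally restrict the CIDR.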
Now go to the public IP address of the EC2 instance, for example 192.12.31.12:8080.
You should see Traefik Dashboard.
Create ECR Repo for each service
Go to the Amazon ECR service and create a repository for each service:
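The same can be done from the CLI, one repository per service (the repository names below are examples):

# Create one ECR repository per service (names are examples)
aws ecr create-repository --repository-name rails-api
aws ecr create-repository --repository-name react-web
# The repositoryUri in the output is what the deploy script at the end of
# this post refers to as "path-to-ecr-repo"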


Logging
You can send each container instance’s ECS agent logs and Docker container logs to Amazon CloudWatch Logs to simplify issue diagnosis.
Edit the task definition to set the log configuration:
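The console screenshot is not shown here; the key point is that the container definition uses the awslogs log driver. A sketch of the pieces involved, with the log group name and region as placeholders:

# Create the CloudWatch Logs group the containers will write to
# (the group name is a placeholder)
aws logs create-log-group --log-group-name /ecs/rails-api

# In the task definition, the matching log configuration of the container
# definition looks roughly like:
#   "logConfiguration": {
#     "logDriver": "awslogs",
#     "options": {
#       "awslogs-group": "/ecs/rails-api",
#       "awslogs-region": "us-east-1",
#       "awslogs-stream-prefix": "rails-api"
#     }
#   }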

Deploying Rails API
- Create a Task Definition for Rails API




After creating the task definition, create a service to launch the container.
- Service

The other steps are similar to the Traefik service creation shown above.
The traefik.frontend.rule Docker label specifies the mapping between a URL and a service. Example: Host:example.com;PathPrefixStrip:/rails-api. Here the /rails-api path is mapped to our rails-api container running on ECS.
Once the service is live and the task is running, curl example.com/rails-api and the request will be served by the rails-api container we just deployed.
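In an ECS task definition, these Docker labels live under dockerLabels in the container definition. A hedged sketch of how the rails-api task definition might carry that rule; the image URI, port, and memory values are placeholders:

# Register a rails-api task definition whose container carries the Traefik
# frontend rule as Docker labels (image URI, port and memory are placeholders)
aws ecs register-task-definition --cli-input-json '{
  "family": "rails-api",
  "containerDefinitions": [
    {
      "name": "rails-api",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/rails-api:latest",
      "memory": 512,
      "essential": true,
      "portMappings": [
        { "containerPort": 3000, "hostPort": 0, "protocol": "tcp" }
      ],
      "dockerLabels": {
        "traefik.enable": "true",
        "traefik.frontend.rule": "Host:example.com;PathPrefixStrip:/rails-api"
      }
    }
  ]
}'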
Deploying React APP
The deployment steps for React are similar to those for the Rails app; the only difference is how the React image is built for production deployment.
My Dockerfile for react production deployment is:
FROM node:11.6.0-alpine
WORKDIR '/app'

# Install yarn and other dependencies via apk
RUN apk update && apk add yarn python g++ make && rm -rf /var/cache/apk/*

COPY package.json yarn.lock /app/
# Install JS dependencies before building
RUN yarn install
COPY . ./
RUN npm run build

# production environment
FROM nginx:1.13.9-alpine
ARG app_name
RUN rm -rf /etc/nginx/conf.d
COPY conf /etc/nginx
COPY --from=0 /app/build /usr/share/nginx/html/$app_name
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
conf is a directory with the following structure:
---conf
   |
   ---conf.d
        |
        --- default.conf
default.conf contains
server {
  listen 80;
  root /usr/share/nginx/html;
  index index.html;

  location /react-web {
    try_files $uri $uri/ /react-web/index.html;
  }

  error_page 500 502 503 504 /50x.html;
  location = /50x.html {
    root /usr/share/nginx/html;
  }
}
Here, I am serving my compiled HTML, CSS & JS through nginx.
My docker-compose-prod.yml
react-web:
  build:
    context: './react-web'
    dockerfile: $PWD/Dockerfile-React-Prod
    args:
      - app_name=react-web
  volumes:
    - $PWD/inventory-web/:/app/
  environment:
    - NODE_ENV=production
In package.json, I added:
"homepage": "/react-web"
and I added a Traefik frontend rule to map /react-web to the React container.
Now create the production image for react-web, push it to ECR, and deploy it like the Traefik service. After deployment, react-web should be accessible on the /react-web path.
Deployment Script
I have written a shell script for deployment on ECS. It requires the AWS Command Line Interface (AWS CLI) and ecs-deploy; an install sketch follows the script below.
#!/bin/sh

# Login to amazon ecr
eval $(aws ecr get-login --no-include-email)

# Build production image
docker-compose -f docker-compose-prod.yml -p prod build $1

# Tag image with latest tag
docker tag prod_$1:latest path-to-ecr-repo:latest

# Push image to ECR
docker push path-to-ecr-repo:latest

# Use ecs-deploy to deploy latest image from ECR
./ecs-deploy -c cluster-name -n $1 -i path-to-ecr-repo:latest
Save the above script in a file named deploy and make it executable.
For deployment:
./deploy NAME-OF-SERVICE
example: ./deploy rails-api
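The script assumes both tools are already installed. A minimal install sketch, assuming the commonly used silinternational/ecs-deploy helper script:

# Install the AWS CLI (pip shown as one option; any recent version works)
pip install awscli

# Fetch the ecs-deploy helper script (assuming silinternational/ecs-deploy)
curl -o ecs-deploy \
  https://raw.githubusercontent.com/silinternational/ecs-deploy/master/ecs-deploy
chmod +x ecs-deploy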
Summary
The learning curve for ECS is short and there is no extra cost for the ECS service itself (charges apply to the EC2 instances only), so if you are getting started with container deployment in production, ECS is a good fit.
In the next blog post I will describe how to deploy Redis and Elasticsearch containers on ECS and how to set up network discovery so that our Rails API container can communicate with Redis and Elasticsearch.