An Efficient way to manage API calls with RxRetrofit


Android is becoming more popular by the day, and thousands of apps are being published on the Play Store. To survive in this competition, user experience matters a lot, and an app crash is the worst user experience there is; you could call it a billion-dollar mistake. So I am going to explain how to use RxRetrofit efficiently when making API calls.


The most common mistake Android developers make when firing an API call is failing to handle the case where the user exits the app or presses the back button before the call returns. The result is an app crash!

In more detail: when an API is hit, the network call runs on a background thread, and based on the response the UI needs to be updated on the UI thread. If the back button is pressed while the background thread is still running, the current screen (Activity or Fragment) is popped off the stack. When the response then arrives and the code attempts to update the UI, the app crashes, because that screen is no longer on the stack.


Make API calls using RxRetrofit!


   loginApi.login(userName, password)            // hypothetical Retrofit service call
      .subscribeOn(Schedulers.io())              // run the request on a background thread
      .observeOn(AndroidSchedulers.mainThread()) // deliver the result on the UI thread
      .subscribe({ response ->
            // On success: the app will crash on the line below
            buttonLogin.setText("Verify OTP")
      }, { error ->
            // On error (could not reach the server)
      })
According to the above example, if you hit the login API and immediately press the back button (removing the Activity or Fragment from the stack), you still get the response in the subscribe() method, the code tries to update the UI, and the app crashes!

The fix is the Disposable interface of RxJava, which is used with RxRetrofit.

See the below example :

public interface Disposable {
   void dispose();       // Disposes the resource or operation.
   boolean isDisposed(); // Returns true if this resource has been disposed.
}

Let’s implement it in the above example.

1) Declare the disposable globally:

private var disposableLoginAPI: Disposable? = null

2) subscribe() returns a Disposable, so initialize the variable with it:

disposableLoginAPI = loginApi.login(userName, password) // hypothetical Retrofit service call
   .subscribeOn(Schedulers.io())              // background thread
   .observeOn(AndroidSchedulers.mainThread()) // UI thread
   .subscribe({ response ->
         buttonLogin.setText("Verify OTP")
   }, { error ->
         // On error (could not reach the server)
   })

3) Dispose it in onStop():

override fun onStop() {
    super.onStop()
    // If it's not disposed yet, dispose it
    if (disposableLoginAPI?.isDisposed == false) {
        disposableLoginAPI?.dispose() // Magic!
    }
}

So once you are about to leave the screen (Activity or Fragment), dispose() in onStop() cancels the API call. No response reaches the subscribe() method, the UI is not updated, and the app does not crash.
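The mechanism can be sketched in plain Java, without RxJava or Android (FakeApiCall and the other names below are illustrative, not part of any library): a disposed call simply never invokes its success callback, so nothing ever touches the dead screen.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Consumer;

// Illustrative stand-in for an RxJava subscription: once disposed,
// a pending result is silently dropped instead of reaching the UI.
public class DisposableSketch {
    interface Disposable {
        void dispose();
        boolean isDisposed();
    }

    static class FakeApiCall implements Disposable {
        private boolean disposed = false;
        public void dispose() { disposed = true; }
        public boolean isDisposed() { return disposed; }

        // Deliver the "network response" only if nobody disposed the call.
        void deliver(String response, Consumer<String> onSuccess) {
            if (!disposed) onSuccess.accept(response);
        }
    }

    public static void main(String[] args) {
        AtomicInteger uiUpdates = new AtomicInteger();
        FakeApiCall call = new FakeApiCall();

        call.dispose();                                  // user pressed back: onStop() ran
        call.deliver("OTP sent", r -> uiUpdates.incrementAndGet());

        System.out.println(uiUpdates.get());             // 0: the UI was never touched
    }
}
```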

For Multiple API calls

You can use CompositeDisposable to manage multiple disposables:

val listOfDisposables = CompositeDisposable() // Initialize once

// Add each disposable to it (disposableOtpAPI here is an illustrative second call)
listOfDisposables.add(disposableLoginAPI)
listOfDisposables.add(disposableOtpAPI)

And finally dispose them all in the onStop() method:

override fun onStop() {
    super.onStop()
    listOfDisposables.dispose() // Dispose all the disposables
}
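How CompositeDisposable-style bulk disposal works can also be sketched in plain Java (again without RxJava; the class names are illustrative): the composite simply fans one dispose() call out to everything that was added.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of composite disposal: one dispose() call
// cancels every subscription that was registered.
public class CompositeSketch {
    interface Disposable {
        void dispose();
        boolean isDisposed();
    }

    static class Subscription implements Disposable {
        private boolean disposed = false;
        public void dispose() { disposed = true; }
        public boolean isDisposed() { return disposed; }
    }

    static class Composite implements Disposable {
        private final List<Disposable> items = new ArrayList<>();
        private boolean disposed = false;

        void add(Disposable d) {
            if (disposed) d.dispose();   // late additions are disposed immediately
            else items.add(d);
        }
        public void dispose() {
            disposed = true;
            for (Disposable d : items) d.dispose();
            items.clear();
        }
        public boolean isDisposed() { return disposed; }
    }

    public static void main(String[] args) {
        Composite listOfDisposables = new Composite();
        Subscription login = new Subscription();
        Subscription otp = new Subscription();
        listOfDisposables.add(login);
        listOfDisposables.add(otp);

        listOfDisposables.dispose();     // e.g. from onStop()
        System.out.println(login.isDisposed() && otp.isDisposed());  // true
    }
}
```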

Happy Coding!


AWS EC2: Increase volume size on the fly (Using AWS Management Console)

As time passes, production requirements grow. This happens all the time, and recently it happened again: I got an alert from Nagios telling me that disk space was getting full. I immediately checked the server to find out what was consuming the space!

Initially I thought that perhaps unnecessary files were consuming the space and I could just delete them to get rid of the problem. But on checking, it turned out that everything on the EBS (Elastic Block Store) volume was important and nothing could be deleted.

The only solution was to increase the volume size. Thankfully, this is AWS: the console has been enhanced to a great extent, and increasing a volume's size is a piece of cake now. Here are the simple steps:

1) Check the current volume size with ‘lsblk’:

 nvme0n1     259:0    0   50G  0 disk  
 └─nvme0n1p1 259:1    0   50G  0 part / 

This shows my volume exactly: 50 GB is available and all 50 GB are allocated to the root partition.

To increase the volume size, follow the steps below:

2) Log in to your AWS console, go to Volumes in the EC2 dashboard, and find the volume you want to increase.

3) Select the volume, then click ‘Actions’ -> ‘Modify Volume’.

4) Edit the size field and enter the new volume size (in my case I want to increase it to 60 GB).

5) Click the ‘Modify’ button. The change is reflected in your AWS console within a minute, but the new space will not be in use until a reboot.

6) Log in to your server and check with ‘lsblk’; you will see output like the below.

nvme0n1     259:0    0   60G  0 disk  
└─nvme0n1p1 259:1    0   50G  0 part /

Note: this shows that the disk is now 60 GB while only 50 GB is still allocated to nvme0n1p1.

7) To allocate the full 60 GB to nvme0n1p1, reboot your instance.

CAUTION: rebooting the instance will take your service down.

8) After the reboot you are done. Check with ‘df -h’ and you will see the partition is now using 60 GB:

/dev/nvme0n1p1   59G  24G   35G  42% /

Deploy Digital Certificates to the iOS System keychain Store

Content posted here with the permission of the author Amit Dhadse, who is currently employed at Josh Software. Original post available here.

iOS devices are widely used in enterprise daily work nowadays, so we need guaranteed secure communication between devices and the associated enterprise services.

This raises the question of enterprise environment security, which can be covered with the help of a VPN or some other third-party tool. What a VPN actually does is create a secure tunnel between the device and a service hosted in the cloud or on partner networks.

That is one way of accessing an enterprise service, but what if we don’t want to use any VPN client and still want to access the service from a device, in our case an iOS device? One way of doing that is using digital certificates to encrypt/decrypt, sign, and authenticate communications and data. So how can we install a certificate on an iOS device at the system level, so that iOS can use it while consuming services? The advantage over a VPN is that the user has to log in to the VPN client every time to consume services, whereas deploying a certificate on the device is a one-time activity that keeps working until the user removes the certificate.


I was asked to create an iOS app to request and deploy digital certificates into the device's system keychain, so that they would be recognised by all iOS system apps when accessing enterprise services. After doing my research I concluded that iOS devices have two types of certificate keychain stores:

  • App certificate keychain store:
    • This keychain store is embedded in the app's sandbox and can be used only by that app (it is not exposed at the system level).
    • Apple offers APIs to insert/delete/update certificates.
  • System certificate keychain store:
    • This keychain is exposed at the system level. The store is located under Settings -> General -> Profiles and is used for VPN and Wi-Fi networks.
    • Apple does not offer any API to deploy/retrieve certificates from the system certificate keychain store (Profiles).

So there are four ways to deploy certificates to the system certificate keychain store (none of them programmatic):

  1. Configuration Profiles: pre-configured profiles used to distribute settings. (These can be created on a Mac and deployed to an iOS device.)
  2. Simple Certificate Enrolment Protocol (SCEP): this protocol enables over-the-air enrolment of digital certificates and is mainly used for routers and switches.
  3. Email attachment: send the certificate from a desktop or Mac to the iOS device as an email attachment. Open the attachment (the certificate) in the iOS mail client (which has system privileges) and it will automatically ask you to install it.
  4. Safari: browse to the certificate using Safari. Safari has system privileges that a normal UIWebView does not.

After I did my research I found a workaround that worked for me to deploy a certificate to the System Certificate Store from Xcode directly.

Here’s what you need to do: first, create a web server inside your app, then serve the certificate from this web server and open its URL.

When the page loads, Safari recognises that it is a certificate; the system Profiles screen pops up and asks the user to install it. After the user installs the certificate, it is saved to Profiles and can be used at the system level.

To create a web server inside an iOS app, there are a few libraries available on GitHub. I recommend the “GCDWebServer” library because of its easy implementation, its background support and, most importantly, its stability.

How to use GCDWebServer:

    let localServer = GCDWebDAVServer(uploadDirectory: path)
    localServer?.start(withPort: 8080, bonjourName: nil)
    let url = localServer?.serverURL
    • Here “path” is the directory containing your certificate (in my case the documents directory (NSDocumentDirectory), as my certificate was stored there).
    • “serverURL” returns the URL of the certificate hosted on the local server.
    • Open this URL in Safari; Safari recognises that it is a certificate and pops up an installation alert automatically.

Note: there is one problem in this scenario. As soon as the user opens the URL in Safari, they are no longer in the app; all control shifts to Safari, and the user has to come back to the app manually.

Solution: import the “SafariServices” framework, which provides a Safari view that opens Safari inside your app.

I spent a lot of time and research effort reaching this solution; I hope someone will surely benefit from my results. Please contact me with any questions.


Adding SSL certificate to Traefik on ECS

Content posted here with the permission of the author Anil Kumar Maurya, who is currently employed at Josh Software. Original post available here.

Traefik is an awesome reverse proxy and load balancer. If you are not using Traefik already, I recommend using it in your next project. I can guarantee that you will not regret it.

Setting up an SSL certificate on Traefik is a cakewalk. While adding SSL to Traefik, I realised how it outshines other reverse proxies (Nginx, HAProxy).

Traefik uses Let's Encrypt to automatically generate and renew SSL certificates.


Dockerfile:

FROM traefik:v1.7-alpine

COPY traefik_ecs.toml /etc/traefik/traefik.toml
RUN touch /etc/traefik/acme.json
# acme.json must be readable and writable by Traefik only
RUN chmod 600 /etc/traefik/acme.json


traefik_ecs.toml:

defaultEntryPoints = ["https", "http"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
  [entryPoints.bar]
  address = ":8080"

[api]
entryPoint = "bar"
dashboard = true

[ecs]
clusters = ["YOUR_ECS_CLUSTER_NAME"]
watch = true
autoDiscoverClusters = false
refreshSeconds = 15
exposedByDefault = true
region = "YOUR_AWS_REGION"

[acme]
email = "YOUR_EMAIL"
storage = "acme.json"
entryPoint = "https"
onHostRule = true
  [acme.httpChallenge]
  entryPoint = "http"

Replace the YOUR_* values with actual values, build the image using the Dockerfile, and deploy it on ECS. That's it: Traefik takes care of the rest, and an SSL certificate will be added to your domain. Isn't Traefik awesome? Let me know what you think in the comments below.





My Internship at Josh

It was during our sixth semester that the entire class received an email about the internship at Josh Software Pvt Ltd. We had to attend a coding round, and then two technical rounds followed by an HR round. Being a student of the Industrial Mathematics course, I was not much into programming, but still I decided to appear for the coding round. Overall, 40 students came for the internship drive. After the initial round, due to some glitch, my name was not on the list of those selected for the next round. This was a major blow to me because I was hoping to make it to the next rounds. As I was about to leave the place with a heavy heart, I was informed that I had also cleared the first round. I was literally on cloud nine by this time (so overwhelmed that I almost shouted at my friend). Then came the technical and HR rounds; I excelled in all of them and ultimately bagged my first-ever internship. When I look back, the whole selection process was itself a learning experience. I got to know more about myself, my goals, strengths, and weaknesses. The fact that I was the only student from my class, and the only female candidate, to be awarded the internship helped firmly establish my belief in myself.

After joining Josh as an intern, I went through rigorous 10-day training. I learned and relearned a lot of stuff. It was exhausting and scary at the same time; feelings of vulnerability and inadequacy would come now and then. One thing which helped me initially in gelling with the people at Josh was the new year's party.

My first office party!!

All the interns were introduced to the other team members at Josh. The environment at the party was very chill. From co-founders to new appointees, everyone was present and enjoying themselves freely. This was an ice-breaking moment for me because I realized that your work is all that matters. As long as you do your work sincerely, you don't need to be scared of anyone; it doesn't matter whether you are experienced or a fresher. Also, being an intern, all I wanted was an environment where there is room for mistakes and improvement, and where people are approachable. One thing that stands out about Josh is its amazing work culture; it is so conducive to one's overall growth.

After a while, I was assigned a project manager and a mentor. I was very nervous, especially when my partner and I were called by our project manager Anil Kumar and mentor Rahul Ojha. We were asked to complete an assignment that was full of technical jargon. Being a naive programmer, I was scared to death when asked to do this. I told them I needed time because I had never attempted such an assignment before. To my surprise, they understood and told me to start with elementary assignments. It took a lot of mistakes, learning, and brainstorming sessions with Rahul, but ultimately I was able to complete my assignment. This was a huge achievement for me. The way my mentor guided me was amazing: he was not spoon-feeding me but giving me hints about what to learn and where to look for the answers. It was like a puzzle, and I enjoyed it thoroughly.
The first project was finally given to me and my partner after our performances were carefully analyzed. We were working with Rails and were asked to build an app. From being a student, to an intern, to my first client meeting: it was a roller-coaster ride. I got the chance to communicate directly with the client, which was a humbling experience. For the first time, I felt like an integral part of Josh, with responsibilities for which I am accountable.

Presently my project is at the fag end of completion. I would be lying if I said it was an easy journey from the beginning to this end. There were many ups and downs. Often I found myself unable to decide how to approach a problem, and when one cannot even pinpoint the problem, finding the solution becomes a different problem altogether. But then a word with Anil or the others would dissipate the clouds of dismay. Things became easier for us because we could take our doubts to anyone, and everyone was ready to help with full enthusiasm.

I could mention so many incidents where my team helped me, like the time I was frustrated with the pace at which I was completing my work. It was hampering my performance, and I wondered what I was even doing here: this is not for mathematics students and is tailor-made for computer science students. Like a cold breeze on a hot day, Rahul's advice would come: stick with the problem, and with extra effort I could solve it. God only knows how much I have troubled him with my doubts. Shailesh, whom I had assumed to be very reserved and strict, proved me wrong by laughing at the memes I used to show him. Ganesh's attitude whenever he sees a problem often leaves me speechless; his statement “ab to isko solve karenge hi” works like glucose, providing immense energy to tackle any problem. Sahil's perseverance always pushed me to bring the same tenacity to my own work. I still remember a talk delivered by Mr. Gautam Rege (co-founder of Josh); it was accessible to everyone and very motivating. All one needs at the start of a career is ample support and motivation. I really look up to you, sir. Thank you for being so inspiring and motivating.

As I mentioned earlier, the one thing that stands out at Josh is its work culture. It is a perfect blend of professionalism and flexibility. One gets appreciated for good work, and at the same time, you can't take your work for granted. I remember the numerous times I was rebuked for my mistakes and also appreciated for my good work. My participation as a trainer during one of the Rails Girls meetups was appreciated in All Hands. These little acts worked as a catalyst for my growth.

How many times does an intern get the chance to share lunch with the co-founders? Not often, but at Josh the environment is very congenial. The monotony of office culture would sometimes take a toll on us, and to kill the boredom we, the music lovers, would start playing songs. Instead of stopping us, Neha and Sai would just advise us to lower the volume slightly. The other thing that came to our rescue was carrom. I am nowhere near qualified to call myself even a naive player; the probability of my missing the piece I was targeting was higher than of hitting it. Mr. Umesh and Mr. Swapnil casually made fun of my bad shots, but now it seems that along with programming skills, I have also honed my carrom skills.
This blog has been the hardest for me to write by far. In part, the challenge stems from trying to sum up months' worth of experiences in just a few paragraphs.

My internship at Josh Software has taught me more than I could have imagined. As an intern, I feel my duties were diverse and ever-changing. Sometimes it's tough to recall everything I have taken in over the past months, but these are some of the most beneficial lessons I have learned.

What I’ve Learned:

I’m not alone: Coming into this position, I felt that I had no idea where my career was going and I lacked confidence about what I could do and what I am really good at. My internship has definitely given me a better understanding of my skill set and where my career may take me, but most importantly, I’ve come to learn that I am not alone. This job has taught me that almost everybody is in the same position. Very few college students know what they want to do, and it is something that is simply not worth worrying about. Thanks to my internship I now know that if I continue to work hard things will fall into place.

How to behave in the office: This being my first position in an office atmosphere, I didn’t know exactly what to expect. The environment here at Josh is quite relaxed, yet it taught me how to behave in the workplace. Simply working in the office and getting used to everything here has definitely prepared me for whatever my next position may be. Just observing the everyday events has taught me more about teamwork, and how people can come together to get things done. Although sometimes I have to remind myself to use my inside voice, I feel I’ve adapted to office life relatively well.

How to build my resume: As I said, this internship has improved my skills a ton, both off the paper and on paper. I didn’t realize it all of this time, but this position served not only as a positive learning experience but a resume builder as well. I came into this with a resume that was basically naked, now I am leaving and I have lots of updating to do. My resume doesn’t need a makeover, it needs to be restarted from scratch, and that’s a good thing! I underestimated how much work I did that actually translates to my resume.

I’d like to thank everyone here at Josh who has helped me out. This has truly been a great learning experience, and I’ll be forever indebted to those who gave me a hand here. As far as future interns are concerned, I would advise you to always be friendly, work hard, and ask questions. Always ask questions. Hopefully, you will come away from your internship with as much as I did.


GoLang with Rails

Content posted here with the permission of the author Shweta Kale who is currently employed at Josh Software. Original post available here.

GoLang with Rails? Wondering why one would use GoLang with Rails?

Read on to find out!!

This is purely based on our requirements, but it can surely benefit others with a similar use case. We had a web app written in Rails that was facing a performance bottleneck while processing large chunks of data. The natural choice seemed to be to use the power of GoLang's concurrency.

In order to use GoLang with our Rails app, a few approaches came to mind, but I found a flaw in each of them:

  • Write APIs in the GoLang app and route requests from nginx based on the request URL. Simple, but with this approach we would also need to add authentication in the GoLang app, so authentication would live in Rails as well as in GoLang. This doesn't seem right: if I had to change the authentication mechanism, I would need to make changes in two apps.

  • Use RestClient and call the GoLang APIs from the Rails application: requests are routed to the Rails app, which calls the GoLang API and serves the response. This achieves some performance, but the Rails app still has to serve requests that the GoLang app could serve directly, and the response has to wait for the response from the GoLang app.

  • Use FFI. Using FFI we can call the GoLang binary directly. You can watch this video to see how it can be done. This seems fine at first, but what if I had to move the GoLang app to another server for load balancing?

So which approach did I follow?

We went with NONE of the above, but with a fourth idea using the rack_proxy gem.

Here is sample code for the middleware we wrote:

class EventServiceProxy < Rack::Proxy
  def initialize(app)
    @app = app
  end

  def call(env)
    original_host = env["HTTP_HOST"]
    rewrite_env(env)

    if env["HTTP_HOST"] != original_host
      perform_request(env)   # proxy to the GoLang app
    else
      @app.call(env)         # continue through the Rails stack
    end
  end

  def rewrite_env(env)
    request = Rack::Request.new(env)

    if request.path.match('/events')
      if env['warden'].authenticated?
        env["HTTP_HOST"] = "localhost:8000"
        env['HTTP_AUTHORIZATION'] = env['rack.session']['warden.user.user.key'][0]
      end
    end
    env
  end
end

And we inserted our middleware just after Warden (Devise uses Warden internally for authentication):

config.middleware.insert_after(Warden::Manager, EventServiceProxy)

In the above code snippet we are just proxying the request to localhost:8000, where the GoLang app is running, and setting the user_id in a header. Warden stores the authenticated user_id in env['rack.session']['warden.user.user.key'][0], so the GoLang app now knows from the header who is logged in.

We added middleware in GoLang which extracts the user_id from the header and sets the currentUser details in the context.

Important note:
Our GoLang application is exposed only to the Rails application, not to the whole world, which is why we can safely send the user_id in a header.

The main advantages we saw using this approach are:

  • We could reuse the existing authentication mechanism of the Rails application.
  • If needed, we can add a load balancer to our Rails and/or GoLang application, which is a microservice.
  • Had we used FFI, the binary would have to sit on the same machine; here the application and the GoLang service can live on different machines.
  • As the request is rewritten in Rack, we save a redirect and avoid going through the entire Rails stack.

This could be used with any framework similar to Rails.

By using the above approach, we can now have the power of GoLang when needed and the development speed of Rails 🙂


Deploying Service Based Architecture Application on Amazon’s ECS (Elastic Container Service)

Content posted here with the permission of the author Anil Kumar Maurya, who is currently employed at Josh Software. Original post available here.

This blog is the second part of an earlier post.

If you have not already read it, I recommend going through it first; there I explained why we chose a service-based architecture and how Docker helped us set up and start the application on a local machine with just one command.

In this post we will see how to deploy our app across multiple Docker containers using Amazon's ECS.

Why deploy container for each service

Deploying all services on a single machine is possible, but we should refrain from it. If we deploy all services on a single machine, we are not utilising the benefits of a service-based architecture (apart from a manageable, easy-to-upgrade codebase).

The 2 major benefits of per-service container deployment are:

  1. Isolation of Crash
  2. Independent Scaling

Isolation of Crash:

If one service in your application crashes, only that part of the application goes down; the rest continues to work properly.

Independent Scaling:

The amount of infrastructure and the number of instances of each service can be scaled up and down independently.

Why we chose Amazon’s ECS

We mostly use Amazon AWS for deploying our applications, so our first preference was an AWS service for deploying containers.

For container deployment, Amazon provides 2 services to choose from:

  1. EKS (Elastic Container Service for Kubernetes)
  2. ECS (Elastic Container Service)

Amazon charges $0.20 per hour for each EKS cluster. We didn't want to pay for a service that does not directly impact our business, so we looked for alternatives.

Amazon does not charge for ECS itself; we pay only for the EC2 instances that are running. Another advantage of ECS is its learning curve, which is much lower than that of EKS.

Therefore ECS was optimal for our use case.

Before we start using ECS, we should be familiar with its components.

Components of ECS

  • Task Definition
  • Task
  • Service
  • Cluster
  • ECR

Task Definition:

A task definition is like a blueprint for your application. In it you specify which Docker image Amazon ECS should use for the containers, how many containers the task uses, and the resource allocation for each container.


Task:

A task is an instance of a task definition: a running container with the settings defined in the task definition.


Service:

A service launches and maintains copies of the task definition in your cluster. For example, by running an application as a service, Amazon ECS will auto-recover any stopped tasks and maintain the number of copies you specify.


Cluster:

A logical group of EC2 instances. When an instance launches, the ecs-agent software on the server registers the instance with an ECS cluster.


ECR:

Amazon Elastic Container Registry (ECR) is a fully managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images. Amazon ECR hosts your images in a highly available and scalable architecture, allowing you to reliably deploy containers for your applications.

Launch Types:

Amazon ECS has two launch types:

  • Fargate
  • EC2


Fargate:

AWS Fargate is a compute engine for Amazon ECS that allows you to run containers without having to manage servers or clusters. With AWS Fargate, you no longer have to provision, configure, and scale clusters of virtual machines to run containers. All you have to do is package your application in containers, specify the CPU and memory requirements, define networking and IAM policies, and launch the application.


EC2:

The EC2 launch type allows you to have server-level, more granular control over the infrastructure that runs your container applications. Amazon ECS keeps track of all the CPU, memory and other resources in your cluster, and also finds the best server for a container to run on based on your specified resource requirements. You are responsible for provisioning, patching, and scaling clusters of servers. You can decide which type of server to use, and which applications and how many containers to run in a cluster to optimize utilization.

Choosing between Fargate & EC2

Fargate is more expensive than running and operating an EC2 instance yourself, although its price was recently reduced by 50%. To start with, we wanted more control over our infrastructure, so we chose EC2 over Fargate. Maybe we will switch to Fargate in the future, when its cost is closer to EC2's and we have more experience managing ECS infrastructure.

Create ECS Cluster

Go to the Amazon ECS service and create a cluster.

In a few minutes your cluster will be created and you will see it under the ECS service.

Traefik (Load Balance & Proxy Server)

Traefik (open source and production proven) is a modern HTTP reverse proxy and load balancer that makes deploying microservices easy. Traefik integrates with your existing infrastructure components and configures itself automatically and dynamically. It listens to your service registry/orchestrator API and instantly generates routes, so your microservices are connected to the outside world.

(Image: Traefik overview)

Traefik Web UI

Traefik provides a web UI showing all running containers and the paths on which they are served. Example:

(Image: Traefik web UI)

Deploy Traefik on ECS

Create a task definition for Traefik by clicking 'New Task Definition'.

Click on Add Container.

Then click 'Create' to create the task definition.

Now we will create a service to run the Traefik task.

Click on 'Create Service'. After the service is created, it will start running a task for the given task definition.

Edit the security group's inbound rules and add the following rule:

Now go to the public IP address of the EC2 instance.

You should see Traefik Dashboard.

Create ECR Repo for each service

Go to the Amazon ECR service and create a repository for each service.


CloudWatch Logs

You can send each container instance's ECS agent logs and Docker container logs to Amazon CloudWatch Logs to simplify issue diagnosis.

Edit the task definition to set the log configuration.

Deploying Rails API

  • Create a Task Definition for Rails API

After creating the task definition, create a service to launch the container.

  • Service

The other steps are similar to the Traefik service creation shown above.

The traefik.frontend.rule Docker label specifies the mapping between a URL and a service, for example PathPrefixStrip:/rails-api; here the /rails-api path is mapped to our rails-api container running on ECS.

Once the service is live and the task is running, curl the /rails-api path and it will be served by the rails-api container we just deployed.

Deploying React APP

The deployment steps for React are similar to those for the Rails app; the only difference is the creation of the React image for production deployment.

My Dockerfile for react production deployment is:

FROM node:11.6.0-alpine

WORKDIR '/app'

# Install yarn and other dependencies via apk
RUN apk update && apk add yarn python g++ make && rm -rf /var/cache/apk/*

COPY package.json yarn.lock /app/

# Install JS dependencies (the build step below needs node_modules)
RUN yarn install

COPY . ./

RUN npm run build

# production environment
FROM nginx:1.13.9-alpine
ARG app_name
RUN rm -rf /etc/nginx/conf.d
COPY conf /etc/nginx
COPY --from=0 /app/build /usr/share/nginx/html/$app_name
CMD ["nginx", "-g", "daemon off;"]

conf is a directory with the following structure:

conf
└── conf.d
    └── default.conf

default.conf contains

server {
  listen 80;
  root   /usr/share/nginx/html;
  index  index.html;

  location /react-web {
    try_files $uri $uri/ /react-web/index.html;
  }

  error_page   500 502 503 504  /50x.html;
  location = /50x.html {
    root   /usr/share/nginx/html;
  }
}
Here, I am serving my compiled HTML, CSS & JS through nginx.

My docker-compose-prod.yml contains a service entry like:

services:
  react-web:
    build:
      context: './react-web'
      dockerfile: $PWD/Dockerfile-React-Prod
      args:
        - app_name=react-web
    volumes:
      - $PWD/inventory-web/:/app/
    environment:
      - NODE_ENV=production

In package.json, I added:

"homepage": "/react-web"

and I added a Traefik frontend rule to map /react-web to the React container.

Now create the production image for react-web, push it to ECR, and deploy it like the Traefik service. After deployment, react-web should be accessible on the /react-web path.

Deployment Script

I have written a shell script for deployment on ECS. It requires the AWS Command Line Interface (AWS CLI) and ecs-deploy.


# Login to amazon ecr
eval $(aws ecr get-login --no-include-email)

# Build production image
docker-compose -f docker-compose-prod.yml -p prod build $1

# Tag image with latest tag
docker tag prod_$1:latest path-to-ecr-repo:latest

# Push image to ECR
docker push path-to-ecr-repo:latest

# Use ecs-deploy to deploy latest image from ECR
./ecs-deploy -c cluster-name -n $1 -i path-to-ecr-repo:latest

Save the above script in a file named deploy and make it executable.

To deploy a service, run the script with the service name, for example:

./deploy rails-api


The learning curve for ECS is short and there is no extra cost for the ECS service itself (charges apply only to the EC2 instances), so if you are getting started with container deployment in production, ECS is a good fit.

In the next blog post I will describe how to deploy Redis and Elasticsearch containers on ECS and how to set up service discovery so that our Rails API container can communicate with Redis and Elasticsearch.
