Conversations that matter: Driving digital disruption in Banking

Banking and Chatbots

Word on the street is that banking and finance are moving towards digital transformation more aggressively than ever. Disruptive technologies like artificial intelligence and machine learning are key focus areas for FinTech leaders today. Developing long-term solutions at scale to simplify finance operations is what will get developers the most brownie points!

Gartner predicts that companies offering personalization will outperform brands that don’t. With IoT in place, the number of connected devices in the market has only increased, and the world already has 2.5 billion smartphone users. More than 37% of the world is using messaging apps today, and approximately 20 million people already use smart speakers. The paradigm shift from non-personalized marketing to social media is representative of new-age interactive customer experiences, and that’s the big fish to catch. Conversational commerce is a key driver of this transformation, and its recent popularity is well deserved. This also includes text-based chatbots that increase consumer engagement, especially in service-based sectors. In the last year in particular, this has grown incredibly and enabled businesses to connect with five times as many customers as usual. The impact? Their revenue grew by 10-20%. That is massive! Imagine the kind of growth opportunities out there!

Conversational Banking: Embracing digital transformation
The key differentiator for conversational commerce is that it allows users to converse through a platform of their choice, along with greater transparency. It is cost effective for banks and financial institutions, as a chatbot is simply a conversational algorithm embedded within a chat interface, i.e. a one-time investment. Intelligent chatbots generate human-like conversations with consumers, provide businesses with a dynamic understanding of their needs and, while at it, optimize user data in real time. The more data your bot collects, the smarter it becomes. Initially, FinTech chatbots focused on customer experience, but more recently, investments in contextual, insights-driven communication have made bots the new-age contact executives, says PwC. Bots have also overtaken IVR and are helping users authenticate transactions seamlessly. In fact, chatbots can also provide CXOs with operational information and thus help them focus more on strategic business objectives, rather than remain caught up in day-to-day activities.

Adopting holistic, AI empowered support systems for back office

While optimizing customer experiences is a priority, chatbots can be used to solve operational back office problems too. Backed by advanced machine learning and natural language processing (NLP), chatbots are essentially conversational analytics platforms that initiate actions without human intervention. A well designed chatbot reduces turnaround time, provides instant information, enhances cross-selling, handles mundane queries and has the ability to provide omnichannel experiences. For example, if a customer wants their bank statements, all they need to do is send a message to the chatbot. The details will be furnished to them within seconds! Based on the customer’s history and digital profile, chatbots also recommend investment options, provide market-related news and suggest ways to utilize credit card points. Proactive suggestions for the win! That’s not all. Advanced chatbots can even analyze complex legal contracts much faster than lawyers, saving a large chunk of manpower and resources in the process. Granting access to software systems, resetting passwords and handling day-to-day IT operations are also achievable. Cognitive intelligence can further be utilized to pay down debt. Personal banking assistants are already anticipating thousands of common FAQs, reducing the need for time-consuming telephonic conversations.

Roadmap for the future: Intelligent conversations
An insights-driven bank complete with sales and marketing functions, and custom offerings based on global trends, is the future of FinTech. Imagine an institution empowered with technology that can engage with consumers in real time, bridge gaps in existing legacy infrastructure through predictive data analytics and keep track of everything, all in one place. It will benefit not only the end consumer, but the bank’s employees as well, by cutting down hours of repetitive work. For example, if a bank runs a loyalty program and wants to find the customers with the most transactions, it doesn’t need to manually look through its records. An AI and RPA powered chatbot can easily look through the customer data and respond via text! Scalable, high performance open source solutions implemented through apps add to dynamic UX as well. Mobile payments, digital wallets and UPI have also seen massive growth in recent years, and non-banking transactions like bill payments are adding to the boost. Soon, front office banking systems will be overtaken by mobile apps, and ticketing and back office systems will run on data analytics and blockchain. While certain processes like KYC and internal employee management will definitely rely on the human touch, disruptive tech is here to stay. Conversational commerce may be in its infancy right now, but it holds the power to build strong business-consumer relationships. The next era of FinTech will transform the banks of today into the cognitive financial institutions of tomorrow, and I, for one, can’t wait to see that happen!


What the next year looks like for InsurTech


Over the last two years, investments in the global InsurTech market have crossed $2.3 billion and the sector has continued to grow at an astounding rate of 3-4%. EY estimated the net income growth rate to cross 23%, as opposed to 14% in 2017, and all of this points towards one thing: the immense possibilities for the future. With more than 24 life insurance companies and 33 non-life insurance companies, the market is booming in India today. This is mind-blowing, but one of the biggest challenges the insurance sector faces is the way it is perceived. People no longer want to think of it as boring paperwork, long calls with agents and multiple forms to file for one claim. Consumers, especially millennials, are looking for specific policies with small premiums and maximized benefits that they can purchase conveniently, preferably online. Here’s where a combination of insurance and intelligent technology comes into play. InsurTech, led by disruptive tech like AI, has been much more than a buzzword in the last few years!

InsurTech in 2018

The insurance sector in India has seen many improvements in the past year, both in terms of innovation and premium growth. While digital innovations successfully disrupt the traditional ways of functioning, the government is taking more steps towards financial inclusion and a better-connected India. The income of the Indian middle class is also rising steadily, which means that insurance companies are better placed to plan their premium offerings. Life insurance premiums have been growing at a CAGR of 12.49%, and non-life premiums at a CAGR of 11.05%. More and more point-of-sale persons are becoming part of this industry, and consumers are becoming increasingly comfortable with the idea of buying insurance online. This landmark decision by the IRDAI will surely increase business potential quite drastically. Broking agencies like PolicyBazaar, QuickInsure and TurtleMint have seen investments of more than $250 million in 2018. There’s so much more to come!

Disrupting InsurTech with AI and IoT

With more than 337 million smartphones and millennials forming almost half of India’s population, IoT is also booming. This means only one thing: more consumer data. Data is the backbone of artificial intelligence, and going ahead, AI coupled with predictive analytics will pave the way towards mapping trends and consumer behavior better, in turn helping insurance companies make informed policy decisions. As IoT devices become cheaper and better integrated into our daily lives, AI will use this data for real-time risk evaluation and ensure the right premiums for every customer! Paperwork? Forms? Manual intervention? All gone! Policy issuance will become so much simpler, faster and more efficient. IoT will play a major role in the future, with real-time metrics being a major factor in issuing better premiums. Imagine getting a drive score that builds your driving profile and earns you a reduced premium, and if you ever need to claim the insurance, you don’t even have to stand in line because it gets credited to your account at the click of a button. Sounds awesome, right? AI is all set to disrupt traditional claims, distribution, underwriting and pricing, and this solution is closer than you think!

Predictions for 2019: Customized Insurance Plans, Cryptocurrency and more

While all insurance companies want to make their premiums as affordable as possible, that will always depend on the customer’s profile. However, in the near future, you could get customised insurance products, negotiated on the terms and benefits of a policy versus the premium, and this would all be automated for each customer separately. Customers want solutions, not services, and if you can give them that, you’ve won the race already. The emergence of online third-party platforms that build an entire insurance ecosystem for consumers to choose solutions from is going to be a key trend this year. India will also see microinsurance coming up, in line with the government’s goals of financial inclusion mentioned earlier. McKinsey states that item insurance, or the concept of ‘insurance as a service’, will escalate and allow people to insure items only when they are being used. Gamification, chatbots and mobile tools that ensure constant user engagement cannot be left out either. After all, it is the ‘connected generation’ that insurers have to target.

Adopting newer tech to gain an edge over competitors by improving operational efficiency is the mantra we all need to follow. Big Data, AI and IoT are here to make their mark, and in the near future, cryptocurrencies may disrupt the InsurTech sector as well. Here’s to a year full of tech innovations that matter!


Making Manufacturing Smart: Predictive Maintenance


A growing middle-class population, higher spending power and per capita income, and the increasing share of young professionals in India today have given way to initiatives like Make in India, which aims at increasing the contribution of the manufacturing sector to the country’s GDP. Manufacturing is growing at an astounding rate, and with the government’s support along with both domestic and foreign investments, predictions state that India is on its way to becoming the fifth largest manufacturing hub in the world. Many global companies and MNCs have set up their operational centers here. As manufacturing looks to play a larger role in our economy, technology advancement and tech intervention in this sector will continue to be a great opportunity for the entire IT industry.

Challenges in the manufacturing sector

The goal of every manufacturing organization is the same: to maximize machine efficiency. This is by no means easy, especially because of the rate at which the demand for goods is rising. This also means that the machines producing these goods have to be serviced periodically, as poor maintenance strategies single-handedly decrease efficiency. The biggest challenge faced by the manufacturing industry is to provide seamless, consistent performance, because routine failures and downtime are a very real threat to overall performance. When you have machines that perform repetitive tasks every single day, this is bound to happen. Sometimes, maximum utilization of machine parts (to the point where they break off!) may cause catastrophic, even permanent, damage and lead to longer downtime. Then, of course, there are the failures that we don’t see coming and the unplanned downtimes. If you change parts frequently, that’s an additional overhead cost, and it may cause unnecessary changes to a daily routine. Often, companies may end up with a spare parts surplus, which ultimately impacts the business’s bottom line, and not in a good way. The real question is, can there be something that helps professionals gauge how and when they should get machines serviced?

Prevention is better than cure: Predictive Maintenance

The answer is yes. Enter predictive maintenance! A new-age method backed by deep learning and advanced technology, predictive maintenance aims to safeguard the health of machines and make sure they are not being overused. It aims at avoiding unplanned downtime and minimizing planned downtime. We are now living in the fourth industrial revolution, and it is time for manufacturers to shift from ‘Why fix something that is not broken?’ to ‘Let’s prevent it from breaking down in the first place.’ In essence, the requirement of the industry is to move from a reactive chain of thought to an anticipatory one, and that is exactly what predictive maintenance offers.

Imagine how much easier life would be if you knew beforehand which machine part needed servicing. Instead of breaking open the entire machine (which by this time, in all probability, has stopped working), figuring out where the problem lies and ordering spare parts because you didn’t know which part would need replacing, you could just keep the required part ready. So much time, energy and money saved! This also means that your downtime is planned; rather, it’s optimized. Undertaking predictive maintenance regularly also means that equipment life increases significantly because it is well taken care of. Moreover, one of the greatest advantages predictive maintenance offers is a boost in employee productivity, since it lowers crucial callouts, saves time and, in turn, reduces stress. You are happy. Your machines are well serviced. Your team is at peace. Works like a charm, right?

 

Tech intervention as the base of Predictive Maintenance

This sounds pretty awesome, but it’s not that easy to implement. Predictive maintenance is far from being just a plug-and-play solution; it is so much more. Without technologies like IoT, data analysis and deep learning, predictive maintenance cannot function. There are hundreds of layers of data that need to be collected over time to keep this up and running, because only properly analyzed data from critical equipment sensors, ERP systems and computerized maintenance management systems can give you an accurate Human to Machine (H2M) interaction. Different organizations and machines may also be at different stages of maturity, but all of them need to be monitored constantly. IoT is the biggest piece of this puzzle because it translates physical actions from machines into digital signals that are analyzed along with this data. It is thus the key to a successful production network. Then come predictive algorithms and business intelligence tools that read this data, trigger reactions and close the digital-to-physical loop. Deep neural networks are also used in this approach to learn from data sequences and extract valuable insights.

All of this being put into place together provides you with your predictive maintenance strategy, which is then implemented by your organization’s task force. The true impact of these strategies is not immediate, but most definitely measurable. It is still in the early stages of development right now as organizations begin to realize the value that technological disruptions can bring about. Much like a good wine, predictive maintenance is also sure to get better with time. Here’s raising a glass to the future!

 


Changing the game: Technological disruptions in the Indian Insurance Sector

Did you know that there are more than 55 life insurance and non-life insurance companies operating in India alone? That’s a huge number, and it allows for fierce competition! Owing to individuals’ higher disposable income, increasing life expectancy, the economic growth of the country and the Government’s increased FDI limit, investments in the insurance sector have increased manifold and the horizon for growth has expanded even further. With a CAGR of 14.4 percent, research and predictions state that the Indian insurance industry will reach $280 billion by 2020. While the last decade saw a lot of scale, the upcoming decade is all about operational efficiency backed by technology! Insurance companies that do not leverage technology to reduce their overheads and increase operational efficiency will find it very difficult to sustain themselves. The most important aspects that will lead this growth are consumer behaviour, scalable distribution channels and lower overheads. In today’s technological day and age, most customers have turned to digital channels to understand more about premiums, compare products and analyse diverse insurance offerings. It is imperative that the insurance sector implements technology wisely to achieve holistic growth.

Retaining the human factor with point of sales persons (POSP)

I say that technology is one of the main drivers of innovation for almost every industry today. That, however, does not mean that we can let go of the human factor completely. It needs to be a combination of both, because the value of human experience and understanding is unparalleled. While we know that there are various offerings that each insurance company provides to its customers, the fact of the matter remains that the level of penetration in the country is still low. To increase penetration, we need distribution models that can explain to the masses the benefits of insurance and what it entails. Until recently, these distributors operated as “Insurance Agents”. Earlier, when people were not that aware of insurance, these agents would sell insurance policies on behalf of the insurance company. Awareness among the masses has risen and now consumers themselves want to compare insurance quotes. This means that insurance agents are now at a loss, as they are tied to only a single insurance company.

Recently, the IRDAI (Insurance Regulatory and Development Authority of India) allowed insurance broking agencies to appoint “Point of Sale Persons” or POSPs. A POSP is a registered agent of a broking agency, and these agencies can access live quotes from multiple insurance companies! Ever since the introduction of POSPs, the benefits of comparing and buying insurance have increased significantly. With training courses of at least 15 hours followed by certification, and relaxed basic qualifications, the number of POSPs is rising exponentially. The ‘survival of the fittest’ race has begun, because the IRDAI has standardized agent commissions. This has now forced companies to increase their operational efficiency and reduce overheads to achieve scalability and remain profitable. To remain relevant and tackle competition effectively, insurance companies will have to use technology to focus on empowering these POSPs along with keeping an eye on their customers. Without that, the chances of success are fairly slim.

Technology and talent: The perfect combination

Insurance companies now have two models to choose from: the B2C model and the B2B model. The first model involves empowering the end user to buy insurance online and bypass the agent model altogether. This requires a substantial advertising budget and branding, which means a higher customer acquisition cost and low rates of conversion. With the B2B model, companies can empower POSPs and help them compare different insurance premiums to help their customers buy the right policy. This has a significantly lower acquisition cost and a much higher chance of conversion. Which one do you think is better? I definitely think the second one, because it has a direct impact on the business revenue and its bottom line. The technology challenges insurers face are complex, including the need for flexibility, better cost control, robust data analysis capabilities, talent retention and adapting to mobile tech and social media. These challenges are all related to capacity, and McKinsey research states that these changes can all be brought about through a culture of continuous improvement. Think first, then move on to implementation. How do you do that? Through something called ‘Lean Management’.

Lean Management: Building a culture of efficiency with technology

Using the principles of lean management, scalable technology can be put into place. It manages a large workforce easily, across larger geographies, and delivers more customer value. Instead of employing a regional workforce to leverage scale, having mobile-technology-enabled POSPs with Regional Managers managing their respective circles is a starting point for lower operational overheads. Lean management for insurance companies means evaluating customer insurance needs, enabling price comparison, building larger scalable teams and analysing data in detail for effective customer acquisition and retention. Creating such a setup to empower POSPs is what will set successful insurers apart from those who fail to leave their mark.

This does not mean that transformation should happen at a large scale right at the beginning. Start with a smaller area like a city, evaluate the results, and then move on to larger areas. Insurance companies can thus scale up without having a regional office everywhere! Automating various functions like getting online and offline quotes from insurance vendors, and sending vehicle inspection reports and health reports using mobile technology, is a great start. Enabling instant policy issuance and instant commissions can give companies the edge in retaining and hiring POSPs. Rewards and recognition for POSPs, like gathering reward points for redemption and discount coupon codes, could easily be powered by technology while utilizing a minimal workforce. Adapting to technology has thus become a necessity and is not a choice anymore. Disruption is the only way ahead, and the sooner industries realize this, the better chances they will have to succeed!


Could InsureTech look at Crypto Currency as a premium payment alternative?


The blockchain has truly shaped up into one of the biggest technological disruptions of the decade. A digitized, distributed and secure ledger that guarantees immutable, transparent transactions, it gives both parties involved a proper breakdown of each transaction, thus ensuring credibility throughout the entire process. The most popular implementation of the blockchain is cryptocurrencies, the most well-known of which is Bitcoin. A few problems relating to cryptocurrencies have been brought to light recently, like the slow performance and processing of the public blockchain, excessive price volatility, energy consumption while mining, and scams involving fraudulent ICOs (Initial Coin Offerings). However, I think that all these problems can be solved with time. With increasing awareness, cryptocurrency regulations will fall into place, and as the security around blockchains becomes more robust, these issues should subside.

Is cryptocurrency here to stay?

It’s like this: if blockchain is an umbrella, cryptocurrency is only one of the spokes of that umbrella. The blockchain can be used for various other things too! There are many debates about whether cryptocurrency will be sustainable in the long run or simply be remembered as another great technological invention that wasn’t fruitful. I truly believe that cryptocurrency is here to stay. I also believe that it will definitely be used as an alternative currency source, if not a mainstream one, and it is only a matter of time before governments and financial institutions realise its potential and embrace it. While there are some international sanctions on cryptocurrencies in certain countries, fiat currencies also face this turbulence, and they have survived in these ecosystems. The combination of anonymity, ease of conversion to crypto, and the ability to move funds overseas makes cryptocurrencies a very attractive alternative and safety valve for citizens of any country. The sooner industries understand this, the more prepared they will be for the future. One industry that can benefit most from the blockchain and cryptocurrency is the insurance industry.

InsureTech: Becoming smarter with smart contracts

Leveraging blockchain as the distributed infrastructure can prevent fraud, and that is something InsureTech must implement. This can be done using smart contracts that help insurance companies and their clients come to common ground. A smart contract executes instantaneously when the constraints of all parties are met. How would it work? The consumer could set an upper limit for the insurance premium, along with the add-ons and special conditions that he/she is looking for from the insurance vendors. The insurance vendors could then bid for that contract as long as it is within their constraints. Only when the constraints on both ends are met will the contract be executed, with customers spending money on exactly the policy they want, issued instantly! This could help consumers identify the exact details of the insurance they want and cap their budget. Insurance agents can help customers facilitate these conditions and receive commissions instantly. Since all this is instantaneous, unmanned, digital and devoid of any security risk, it will also increase efficiency and save quite a bit of time for both parties. Smart contracts would also lead to better settlement of claims, since all past transactions would be recorded on the public blockchain and all processes would be completely transparent.
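To make the matching idea concrete, here is a minimal, illustrative sketch in Go. It is not an actual smart contract (one would live on a blockchain platform rather than in application code); the type names, fields and numbers are hypothetical, purely to show a bid executing only when it satisfies the consumer's constraints.

package main

import "fmt"

// Contract captures the consumer's constraints for a policy.
type Contract struct {
    MaxPremium     float64
    RequiredAddOns []string
}

// Bid is an insurer's offer for that contract.
type Bid struct {
    Insurer string
    Premium float64
    AddOns  []string
}

// matches reports whether a bid satisfies every consumer constraint.
func (c Contract) matches(b Bid) bool {
    if b.Premium > c.MaxPremium {
        return false
    }
    offered := map[string]bool{}
    for _, a := range b.AddOns {
        offered[a] = true
    }
    for _, want := range c.RequiredAddOns {
        if !offered[want] {
            return false
        }
    }
    return true
}

func main() {
    contract := Contract{MaxPremium: 12000, RequiredAddOns: []string{"zero-depreciation"}}
    bids := []Bid{
        {Insurer: "InsurerA", Premium: 12500, AddOns: []string{"zero-depreciation"}},
        {Insurer: "InsurerB", Premium: 11800, AddOns: []string{"zero-depreciation", "roadside-assistance"}},
    }
    for _, b := range bids {
        if contract.matches(b) {
            // Only a bid that meets both the premium cap and the add-on
            // constraints "executes"; here that is InsurerB.
            fmt.Printf("Contract executes with %s at premium %.0f\n", b.Insurer, b.Premium)
            break
        }
    }
}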

The future of cryptocurrency in the insurance sector

With cash acceptance declining around the globe, the potential for industries to take on cryptocurrencies is even higher now. Some insurance companies have already started implementing this. In April 2018, one of the world’s largest insurers, Allianz announced that it is testing the introduction of its own cryptocurrency in the form of an Allianz token. The intention is to increase efficiency while eliminating exchange rate risks in internal payment transactions. They feel that this will decrease their dependency on banking systems across the globe and also counter the challenge of converting and reconverting foreign currencies that they do not accept. This would result in saving a whole lot of commissions, and that money can be put to more optimal use.

Ryskex, a captech ecosystem founded in Berlin in 2017, specialises in solutions for captive companies, with a focus on saving insurance tax, easing capacity bottlenecks across various insurance lines, and creating new solutions for non-insurable risks. It uses the public Ethereum blockchain to facilitate risk hedging for captive owners and large corporates. The ecosystem has its own token to regulate payments, the Ryscoin. The company is currently working to cover cyber risks and recruitment problems, and to counter innovation failures.

With all of this being put into place, one thing is clear. Cryptocurrencies have moved way beyond the phase where they were considered part of a speculative bubble. They are fast becoming a reality, and one that all of us need to keep in mind and adapt to in the near future. There’s only one ground rule to succeed in matters of technology: to disrupt. And in my opinion, the future looks like a place where cryptocurrency is all set to disrupt InsureTech!


Adding SSL certificate to Traefik on ECS

Content posted here with the permission of the author Anil Kumar Maurya, who is currently employed at Josh Software. Original post available here.

Traefik is an awesome reverse proxy & load balancer. If you are not using Traefik already, then I recommend using it in your next project. I can guarantee that you will not regret it.

Setting up an SSL certificate on Traefik is a cakewalk. While adding SSL on Traefik, I realised how it outshines other reverse proxies (Nginx, HAProxy).

Traefik uses Let's Encrypt to automatically generate and renew SSL certificates.

Dockerfile

FROM      traefik:v1.7-alpine

COPY      traefik_ecs.toml /etc/traefik/traefik.toml
RUN touch /etc/traefik/acme.json
# The Let's Encrypt storage file should be readable/writable only by Traefik
RUN chmod 600 /etc/traefik/acme.json

traefik_ecs.toml

defaultEntryPoints = ["https", "http"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
  [entryPoints.https.tls]
  [entryPoints.bar]
  address = ":8080"

[api]
entryPoint = "bar"
dashboard = true

[ecs]
clusters = ["YOUR_ECS_CLUSTER_NAME"]
watch = true
domain = "YOUR_DOMAIN_NAME"
autoDiscoverClusters = false
refreshSeconds = 15
exposedByDefault = true
region = "YOUR_AWS_REGION"
accessKeyID = "YOUR_AWS_ACCESS_KEY_ID"
secretAccessKey = "YOUR_AWS_SECRET_ACCESS_KEY"
[acme]
email = "YOUR_EMAIL"
storage = "acme.json"
entryPoint = "https"
onHostRule = true
[acme.httpChallenge]
entryPoint = "http"

Replace the YOUR_* values with actual values, build the image using the Dockerfile and deploy it on ECS. That’s it! Traefik will take care of the rest and an SSL certificate will be added to your domain. Isn’t Traefik awesome? Let me know what you think in the comments below.
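For context, the build-and-deploy step could look something like the commands below. This is only a sketch: the ECR repository URL, cluster name and service name are placeholders, and it assumes the AWS CLI is configured and an ECS service for Traefik already exists.

# Authenticate Docker against ECR (AWS CLI v1)
eval $(aws ecr get-login --no-include-email)

# Build the Traefik image from the Dockerfile above, then tag and push it to a (hypothetical) ECR repo
docker build -t traefik-ssl .
docker tag traefik-ssl:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/traefik-ssl:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/traefik-ssl:latest

# Roll the ECS service so it picks up the new image
aws ecs update-service --cluster YOUR_ECS_CLUSTER_NAME --service traefik --force-new-deployment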

References:

  1. https://www.smarthomebeginner.com/traefik-reverse-proxy-tutorial-for-docker/
  2. https://blog.networkprofile.org/my-traefik-reverse-proxy-setup/
  3. https://github.com/netbears/traefik-cluster-ecs

 


My Internship at Josh

It was during our sixth semester that the entire class received an email about an internship at Josh Software Pvt Ltd. We had to attend a coding round, and then two technical rounds followed by an HR round. Being a student of the Industrial Mathematics course, I was not much into programming, but I still decided to appear for the coding round. Overall, 40 students came for the internship drive. After the initial round, due to some glitch, my name was not on the list of those selected for the next round. This was a major blow to me because I was hoping to make it to the next rounds. As I was about to leave the place with a heavy heart, I was informed that I had also cleared the first round. I was on cloud nine by this time (so overwhelmed that I almost shouted at my friend). Then came the technical and HR rounds; I excelled in all of them and ultimately bagged my first-ever internship. When I look back, the whole selection process itself was a learning experience. I got to know more about myself, my goals, strengths, and weaknesses. The fact that I was the only student from my class, and the only female candidate, who was awarded the internship helped in firmly establishing my belief in myself.

After joining as an intern at Josh, I went through a rigorous 10-day training. I learned and relearned a lot of stuff. It was exhausting and scary at the same time. The feeling of vulnerability and inadequacy would come now and then. One thing which helped me initially in gelling with the people at Josh was the New Year’s party.

My first office party!!

All the interns were introduced to the other team members at Josh. The environment at the party was very chill. From co-founders to new appointees, everyone was present and enjoying themselves freely. This was an ice-breaking moment for me because I realized that your work is all that matters. As long as you are doing your work sincerely, you don’t need to be scared of anyone. It doesn’t matter whether you are experienced or a fresher. Also, being an intern, all I wanted was an environment where there is room for mistakes and improvement, and where people are approachable. One thing that stands out about Josh is its amazing work culture. It’s so conducive to one’s overall growth.

After a while, I was assigned a project manager and a mentor. I was very nervous, especially when my partner and I were called by our project manager, Anil Kumar, and mentor, Rahul Ojha. We were asked to complete an assignment which was full of technical jargon. Being a naive programmer, I was scared to death when I was asked to do this. I told them that I needed time because I had never attempted such an assignment before. To my surprise, they understood and told me to start with elementary assignments. It took a lot of mistakes, learning and brainstorming sessions with Rahul, but ultimately I was able to complete my assignment. This was a huge achievement for me. The way my mentor guided me was amazing. He was not spoon-feeding me but giving me hints about what to learn and where to look for the answers. It was like a puzzle. I enjoyed it thoroughly.
Our first project was finally given to my partner and me after careful analysis of our performance. We were working with Rails and were asked to build an app. From being a student, to an intern, to my first client meeting, it was a roller coaster ride. I got a chance to directly communicate with the client. It was a humbling experience. For the first time, I felt like I was an integral part of Josh and had been given responsibilities for which I was accountable.

Presently, my project is nearing completion. I would be lying if I said it was an easy journey from the beginning to this end. There were many ups and downs. Often I found myself in a position where I couldn’t decide how to go about a problem. When one is not able to even pinpoint the problem, finding the solution is a different problem altogether. But then a word with Anil or others would dissipate the clouds of dismay. Things became easier for us because we could take our doubts to anyone, and everyone was ready to help us with full enthusiasm.

I could mention so many incidents where my team helped me, like the time I was frustrated with the pace at which I was completing my work. This was also hampering my performance. I kept thinking: what am I even doing here? This is not for mathematics students; it is tailor-made for computer science students. Like a cold breeze on a hot day, Rahul’s advice would come: stick with the problem, and with extra effort I could solve it. God only knows how much I have troubled him with my doubts. Shailesh, whom I had assumed to be very reserved and strict, proved me wrong by laughing at the memes I used to show him. Ganesh’s attitude whenever he sees a problem often leaves me speechless. His statement “ab to isko solve karenge hi” (“now we will definitely solve it”) works like glucose. It provides immense energy to tackle any problem. Sahil’s perseverant attitude always pushed me to have the same tenacity in my work too. I still remember one talk delivered by Mr. Gautam Rege (Co-founder of Josh); it was accessible to anyone and very motivating. All one needs at the start of one’s career is ample support and motivation. I really look up to you, sir. Thank you for being so inspiring and motivating.

As I have mentioned earlier, the one thing that stands out at Josh is its work culture. It’s a perfect blend of professionalism and flexibility. One gets appreciated for good work, and at the same time, you can’t take your work for granted. I remember the numerous times I have been rebuked for my mistakes and also appreciated for my good work. My participation as a trainer during one of the Rails Girls meetups was appreciated in the All Hands. These little acts worked as a catalyst for my growth.

How many times does an intern get the chance to share lunch with the co-founders? Not many, but at Josh the environment is very congenial. The monotonous office routine would sometimes take a toll on us, and to kill the boredom we, the music lovers, would start playing songs. Instead of stopping us, Neha and Sai would just advise us to lower the volume slightly. The other thing which came to our rescue was carrom. I am not even qualified to call myself a naive player; the probability of my missing the piece I was targeting was higher than that of hitting it. Mr. Umesh and Mr. Swapnil would casually make fun of my bad shots, but now it seems that along with my programming skills, I have also honed my carrom skills.
This blog has been the hardest for me to write by far. In part, the challenge stems from trying to sum up months’ worth of experiences in just a few paragraphs.
My internship at Josh Software has taught me more than I could have imagined. As an intern, I feel my duties were diverse and ever-changing. Sometimes it’s tough to recall everything I have taken in over the past months, but I feel that these are some of the most beneficial lessons I have learned.

What I’ve Learned:

I’m not alone: Coming into this position, I felt that I had no idea where my career was going, and I lacked confidence about what I could do and what I was really good at. My internship has definitely given me a better understanding of my skill set and where my career may take me, but most importantly, I’ve come to learn that I am not alone. This job has taught me that almost everybody is in the same position. Very few college students know what they want to do, and it is something that is simply not worth worrying about. Thanks to my internship, I now know that if I continue to work hard, things will fall into place.

How to behave in the office: This being my first position in an office atmosphere, I didn’t know exactly what to expect. The environment here at Josh is quite relaxed, yet it taught me how to behave in the workplace. Simply working in the office and getting used to everything here has definitely prepared me for whatever my next position may be. Just observing the everyday events has taught me more about teamwork, and how people can come together to get things done. Although sometimes I have to remind myself to use my inside voice, I feel I’ve adapted to office life relatively well.

How to build my resume: As I said, this internship has improved my skills a ton, both on paper and off it. I didn’t realize it all this time, but this position served not only as a positive learning experience but as a resume builder as well. I came into this with a resume that was basically naked; now I am leaving with lots of updating to do. My resume doesn’t need a makeover, it needs to be restarted from scratch, and that’s a good thing! I underestimated how much of the work I did actually translates to my resume.

I’d like to thank everyone here at Josh who has helped me out. This has truly been a great learning experience and I’ll be forever indebted to those who gave me a hand here. As far as future interns are concerned, I would advise you to always be friendly, work hard, and ask questions. Always ask questions. Hopefully, you come away from your internship with as much as I did.


GoLang with Rails

Content posted here with the permission of the author Shweta Kale who is currently employed at Josh Software. Original post available here.

GoLang with Rails? Wondering why one would use GoLang with Rails?

Read on to find out!!

This is purely based on our requirements but can surely benefit others looking at a similar use case. We had a web app written in Rails, but it was facing a performance bottleneck while processing large chunks of data. The natural choice seemed to be to use the power of GoLang's concurrency.

In order to use GoLang with our Rails app, a few approaches came to mind, but I found a flaw in each of them:

  • Write APIs in the GoLang app and route requests from nginx based on the request URL. Simple, but with this approach we would also need to add authentication in the GoLang app, so authentication would live in Rails as well as in GoLang. This doesn't seem right, because if I had to change the authentication mechanism, I would need to make changes in two apps.

  • Use RestClient and call the GoLang APIs from the Rails application. The request would be routed to the Rails app, which would call the API on the GoLang app and serve the response. I would gain some performance, but my Rails app would still have to serve requests that the GoLang app could serve directly, and the response would have to wait for the response from the GoLang app.

  • Use FFI. Using FFI, we can call a GoLang binary directly. You can watch this video to see how it can be done. This seems fine at first, but what if I had to load balance by moving the GoLang app to another server?

So which approach did I follow?

We went with NONE of the above, but a 4th idea using the rack_proxy gem.

Here is sample code for the middleware we wrote:

class EventServiceProxy < Rack::Proxy
  def initialize(app)
    @app = app
  end

  def call(env)
    original_host = env["HTTP_HOST"]
    rewrite_env(env)
    # If rewrite_env changed the host, proxy the request to the GoLang app;
    # otherwise let the Rails app handle it as usual.
    if env["HTTP_HOST"] != original_host
      perform_request(env)
    else
      @app.call(env)
    end
  end

  def rewrite_env(env)
    request = Rack::Request.new(env)

    if request.path.match('/events')
      if env['warden'].authenticated?
        # Point the request at the GoLang app and forward the user id.
        env["HTTP_HOST"] = "localhost:8000"
        env['HTTP_AUTHORIZATION'] = env['rack.session']['warden.user.user.key'][0]
      end

      env
    end
  end
end

And we inserted our middleware just after Warden (Devise uses this internally for authentication):

config.middleware.insert_after(Warden::Manager, EventServiceProxy)

In the above code snippet, we are just proxying our request to localhost:8000, where the GoLang app is running, and setting the user_id in a header. Warden stores the authenticated user_id in env['rack.session']['warden.user.user.key'][0], so the GoLang app now knows who is logged in from the header.

We added middleware in GoLang which extracts the user_id from the header and sets the currentUser details in the context.
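For completeness, here is a minimal sketch of what that GoLang middleware could look like using the standard net/http package. The header and context key names are assumptions based on the Rails snippet above (rack-proxy forwards env['HTTP_AUTHORIZATION'] as the Authorization header); it only stores the raw user_id, and anything further, like loading the user record, is left out.

package middleware

import (
    "context"
    "net/http"
)

type contextKey string

// CurrentUserKey is the context key under which the user id is stored.
const CurrentUserKey contextKey = "currentUser"

// WithCurrentUser reads the user_id forwarded by the Rails proxy in the
// Authorization header and puts it into the request context.
func WithCurrentUser(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        userID := r.Header.Get("Authorization")
        if userID == "" {
            // The Go app is only reachable through the Rails proxy, so a
            // missing header means the request is not authenticated.
            http.Error(w, "unauthorized", http.StatusUnauthorized)
            return
        }
        ctx := context.WithValue(r.Context(), CurrentUserKey, userID)
        next.ServeHTTP(w, r.WithContext(ctx))
    })
}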

Important Note
Our GoLang application is exposed only to the Rails application and not to the whole world, so we are sending the user_id in a header.

The main advantages we saw using this approach are:

  • We could use the existing authentication mechanism of the Rails application.
  • If needed, we can add a load balancer to our Rails application and/or the GoLang application, which is a microservice.
  • If we had used FFI, we would have had to put the binary on the same machine, but here we can have the Rails application and the GoLang service on different machines.
  • As the request is rewritten from Rack, it saves a redirect and avoids going through the entire stack of the Rails app.

This could be used with any framework similar to Rails.

By using the above approach, we can now use the power of GoLang when needed and the development speed of Rails 🙂


Deploying Service Based Architecture Application on Amazon’s ECS (Elastic Container Service)

Content posted here with the permission of the author Anil Kumar Maurya, who is currently employed at Josh Software. Original post available here.

This blog is the second part of a previous post.

If you have not already read it, then I recommend going through it first; there I explained why we chose a service-based architecture and how Docker helped us in setting up and starting the application on a local machine with just one command.

In this post, we will see how to deploy our app on multiple Docker containers using Amazon's ECS.

Why deploy a container for each service

Deploying all services on a single machine is possible, but we should refrain from it. If we deploy all services on a single machine, then we are not utilising the benefits of a service-based architecture (apart from a manageable, easy-to-upgrade codebase).

The 2 major benefits of container deployment for each service are:

  1. Isolation of Crash
  2. Independent Scaling

Isolation of Crash:

If one service in your application is crashing, then only that part of your application goes down. The rest of your application continues to work properly.

Independent Scaling:

The amount of infrastructure and the number of instances of each service can be scaled up and down independently.


Why we chose Amazon’s ECS

We mostly use Amazon's AWS services for deploying our applications, therefore our first preference for deploying containers is a service provided by Amazon AWS.

For container deployment, Amazon provides 2 services to choose from:

  1. EKS (Elastic Container Service for Kubernetes)
  2. ECS (Elastic Container Service)

Amazon charges $0.20 per hour for each EKS cluster. We didn't want to pay for services which do not directly impact our business, therefore we looked for alternatives.

Amazon does not charge for ECS; we have to pay only for the EC2 instances which are running. Another advantage of ECS is its learning curve, which is much gentler than that of EKS.

Therefore ECS is optimal for our use case.


Before we start using ECS, we should be familiar with the components of ECS.

Components of ECS

  • Task Definition
  • Task
  • Service
  • Cluster
  • ECR

Task Definition:

A task definition is like a blueprint for your application. In this step, you specify a task definition so Amazon ECS knows which Docker image to use for the containers, how many containers to use in the task, and the resource allocation for each container.

Task:

A task is an instance of a task definition. It is a running container with the settings defined in the task definition.

Service:

A service launches and maintains copies of the task definition in your cluster. For example, by running an application as a service, Amazon ECS will auto-recover any stopped tasks and maintain the number of copies you specify.

Cluster:

A logical group of EC2 instances. When an instance launches, the ecs-agent software on the server registers the instance with an ECS cluster.

ECR:

Amazon Elastic Container Registry (ECR) is a fully-managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images. Amazon ECR hosts your images in a highly available and scalable architecture, allowing you to reliably deploy containers for your applications.


Launch Types:

Amazon ECS has two modes: Fargate launch type and EC2 launch type

  • Fargate
  • EC2

Fargate:

AWS Fargate is a compute engine for Amazon ECS that allows you to run containers without having to manage servers or clusters. With AWS Fargate, you no longer have to provision, configure, and scale clusters of virtual machines to run containers. All you have to do is package your application in containers, specify the CPU and memory requirements, define networking and IAM policies, and launch the application

EC2:

EC2 launch type allows you to have server-level, more granular control over the infrastructure that runs your container applications. Amazon ECS keeps track of all the CPU, memory and other resources in your cluster, and also finds the best server for a container to run on based on your specified resource requirements. You are responsible for provisioning, patching, and scaling clusters of servers. You can decide which type of server to use, which applications and how many containers to run in a cluster to optimize utilization.

Choosing between Fargate & EC2

Fargate is more expensive than running and operating an EC2 instance yourself, although Fargate prices were reduced by 50% recently. To start with, we need more control over our infrastructure, therefore we chose EC2 over Fargate. Maybe we will switch to Fargate in the future, when its cost is similar to EC2 and we have more experience in managing ECS infrastructure.


Create ECS Cluster

Go to the Amazon ECS service and create a new cluster.

In a few minutes, your cluster will be created and you will see it under the ECS service.

Traefik (Load Balancer & Proxy Server)

Traefik (open source & production proven) is a modern HTTP reverse proxy and load balancer that makes deploying microservices easy. Traefik integrates with your existing infrastructure components and configures itself automatically and dynamically. Traefik listens to your service registry/orchestrator API and instantly generates the routes so your microservices are connected to the outside world.


Traefik Web UI

Traefik provides a web UI showing all running containers and the paths on which they are served.


Deploy Traefik on ECS

Create a task definition for Traefik: click on Create new Task Definition.

Click on Add Container.

Click Create to create the task definition.

Now we will create a service for running the Traefik task.

Click on Create Service. After the service is created, it will start running a task for the given task definition.

Edit the security group's inbound rules and allow inbound traffic on the ports Traefik uses (80, 443 and 8080 for the dashboard).

Now go to the public IP address of the EC2 instance on port 8080, for example: 192.12.31.12:8080.

You should see the Traefik dashboard.

Create ECR Repo for each service

Go to the Amazon ECR service and create a repository for each service.

Logging

You can send each container instance’s ECS agent logs and Docker container logs to Amazon CloudWatch Logs to simplify issue diagnosis.

Edit the task definition to set the log configuration.
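In the container definition, this boils down to a logConfiguration block using the awslogs driver; the log group, region and stream prefix below are placeholder values:

"logConfiguration": {
  "logDriver": "awslogs",
  "options": {
    "awslogs-group": "/ecs/rails-api",
    "awslogs-region": "us-east-1",
    "awslogs-stream-prefix": "ecs"
  }
}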

Deploying Rails API

  • Create a Task Definition for Rails API

After creating the task definition, create a service to launch the container.

  • Service

The other steps are similar to the Traefik service creation, as shown above.

The traefik.frontend.rule Docker label specifies the mapping between a URL and a service. Example: Host:example.com;PathPrefixStrip:/rails-api. Here, the /rails-api path is mapped to our rails-api container which is running on ECS.
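Concretely, the label lives in the container definition of the task definition. A stripped-down, hypothetical example for the rails-api container could look like the fragment below; the image URL, port and memory values are placeholders, and traefik.port assumes the Rails app listens on port 3000 inside the container:

{
  "family": "rails-api",
  "containerDefinitions": [
    {
      "name": "rails-api",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/rails-api:latest",
      "memory": 512,
      "essential": true,
      "portMappings": [
        { "containerPort": 3000, "hostPort": 0 }
      ],
      "dockerLabels": {
        "traefik.frontend.rule": "Host:example.com;PathPrefixStrip:/rails-api",
        "traefik.port": "3000"
      }
    }
  ]
}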

Once the service is live and the task is running, curl example.com/rails-api and it will be served through the rails-api container which we just deployed.

Deploying React APP

The deployment steps for React are similar to those for the Rails app; the only difference is the creation of the React image for production deployment.

My Dockerfile for react production deployment is:

FROM node:11.6.0-alpine

WORKDIR /app

# Install yarn and other dependencies via apk
RUN apk update && apk add yarn python g++ make && rm -rf /var/cache/apk/*

COPY package.json yarn.lock /app/

# Install JS dependencies before copying the rest of the source so the build step can run
RUN yarn install

COPY . ./

RUN npm run build

# production environment
FROM nginx:1.13.9-alpine
ARG app_name
RUN rm -rf /etc/nginx/conf.d
COPY conf /etc/nginx
COPY --from=0 /app/build /usr/share/nginx/html/$app_name
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

conf is a directory with the following structure:

---conf
  |
  ---conf.d
     |
     --- default.conf

default.conf contains:

server {
  listen 80;
  root   /usr/share/nginx/html;
  index  index.html;
  location /react-web {
    try_files $uri $uri/ /react-web/index.html;
  }
  error_page   500 502 503 504  /50x.html;
  location = /50x.html {
    root   /usr/share/nginx/html;
  }
}

Here, I am serving my compiled HTML, CSS & JS through nginx.

My docker-compose-prod.yml:

react-web:
    build:
      context: './react-web'
      dockerfile: $PWD/Dockerfile-React-Prod
      args:
        - app_name=react-web
    volumes:
      - $PWD/react-web/:/app/
    environment:
      - NODE_ENV=production

In package.json, I added:

"homepage": "/react-web"

and I added a traefik.frontend.rule to map /react-web to the React container.

Now create the production image for react-web, push it to ECR and deploy it like the Traefik service. After deployment, react-web should be accessible on the /react-web path.


Deployment Script

I have written a shell script for deployment on ECS. My shell script requires AWS Command Line Interface (AWS CLI) & ecs-deploy.

#!/bin/sh

# Login to amazon ecr
eval $(aws ecr get-login --no-include-email)

# Build production image
docker-compose -f docker-compose-prod.yml -p prod build $1

# Tag image with latest tag
docker tag prod_$1:latest path-to-ecr-repo:latest

# Push image to ECR
docker push path-to-ecr-repo:latest

# Use ecs-deploy to deploy latest image from ECR
./ecs-deploy -c cluster-name -n $1 -i path-to-ecr-repo:latest

Save the above script in a file named deploy and make it executable.

For deployment:

./deploy NAME-OF-SERVICE
example: ./deploy rails-api


Summary

The learning curve for ECS is short and there is no extra cost for the ECS service itself (charges apply only to the EC2 instances), therefore if you are getting started with container deployment in production, ECS is a good fit.

In the next blog post, I will write about how to deploy Redis & Elasticsearch containers on ECS and how to set up service discovery so that our Rails API container can communicate with Redis & Elasticsearch.


Understanding Repaint and Reflow in JavaScript

Content posted here with the permission of the author Suhas More, who is currently employed at Josh Software. Original post available here.

Recently, while researching what makes React’s virtual DOM so fast, I realized how little we are aware of JavaScript performance. So I’m writing this article to help raise awareness about repaint and reflow, and about JavaScript performance in general.

Before we dig deeper, do we know how a browser works?

A picture is worth a thousand words. So, let’s have a high-level view of how a browser works!

Hmm… what are a “browser engine” and a “rendering engine”?

The primary job of a browser engine is to transform HTML documents and other resources of a web page into an interactive visual representation on a user’s device.

Besides “browser engine”, two other terms are in common use regarding related concepts: “layout engine” and “rendering engine”. In theory, layout and rendering (or “painting”) could be handled by separate engines. In practice, however, they are tightly coupled and rarely considered separately.


Let’s understand how browsers draw a user interface on the screen.

When you hit enter on some link or URL, the browser makes an HTTP request to that page and the corresponding server provides (often) an HTML document in response. (A hell of a lot of things happen in between.)

Step by step processing
  • The browser parses the HTML source code and constructs a DOM tree: a data representation where every HTML tag has a corresponding node in the tree, and the text chunks between tags get a text node representation too. The root node in the DOM tree is the documentElement (the <html> tag).
  • The browser parses the CSS code and makes sense of it. The styling information cascades: the basic rules are in the User Agent stylesheets (the browser defaults), then there could be user stylesheets, author (as in author of the page) stylesheets – external, imported, inline – and finally styles that are coded into the style attributes of the HTML tags.
  • Then comes the interesting part — constructing a render tree. The render tree is sort of like the DOM tree, but doesn’t match it exactly. The render tree knows about styles, so if you’re hiding a div with display: none, it won’t be represented in the render tree. The same goes for other invisible elements, like head and everything in it. On the other hand, there might be DOM elements that are represented with more than one node in the render tree – like text nodes, for example, where every line in a <p> needs a render node. A node in the render tree is called a frame, or a box (as in a CSS box, according to the box model). Each of these nodes has the CSS box properties – width, height, border, margin, etc.
  • Once the render tree is constructed, the browser can paint (draw) the render tree nodes on the screen.

Here is a snapshot of how the browser draws the user interface on the screen.

It all happens in a fraction of a second; we don’t even notice that it happened.

Look closely at how the browser draws the layout, detecting the root element, siblings and their child elements as nodes arrive, and rearranging the layout accordingly.


Let’s take one example:

<html>
<head>
  <title>Repaint And Reflow</title>
</head>
<body>
    
  <p>
    <strong>How's The Josh?</strong>
    <strong><b> High Sir...</b></strong>
  </p>
  <div style="display: none">
    Nothing to display
  </div>
  <div>
    <img src="..." />
  </div>
  ...
</body> 
</html>

The DOM tree that represents this HTML document basically has one node for each tag and one text node for each piece of text between nodes (for simplicity, let’s ignore the fact that whitespace is text nodes too):

documentElement (html)
    head
        title
    body
        p
            strong
                [text node]
            strong
                b
                    [text node]
        div 
            [text node]
        
        div
            img
        
        ...

The render tree would be the visual part of the DOM tree. It is missing some stuff — the head and the hidden div, but it has additional nodes (aka frames, aka boxes) for the lines of text.

root (RenderView)
    body
        p
            line 1
            line 2
            line 3
            ...
        div
            img
        ...

The root node of the render tree is the frame (the box) that contains all other elements. You can think of it as being the inner part of the browser window, as this is the restricted area where the page could spread. Technically, WebKit calls the root node RenderView, and it corresponds to the CSS initial containing block, which is basically the viewport rectangle from the top of the page at (0, 0) to (window.innerWidth, window.innerHeight).

Figuring out what and how exactly to display on the screen involves a recursive walk down (a flow) through the render tree.
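You can poke at the difference between the DOM tree and the render tree from the browser console. A minimal sketch, assuming the example page above is loaded (the attribute selector is just for illustration):

const hidden = document.querySelector('div[style*="display: none"]');

console.log(hidden !== null);                // true: the node exists in the DOM tree
console.log(hidden.getBoundingClientRect()); // all zeros: no box was generated for it
console.log(hidden.offsetParent);            // null: it takes no part in layout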

Repaint and Reflow

There’s always at least one initial page layout together with a paint (unless, of course, you prefer your pages blank :)). After that, changing the input information that was used to construct the render tree may result in one or both of these:

  1. parts of the render tree (or the whole tree) will need to be revalidated and the node dimensions recalculated. This is called a reflow, or layout, or layouting. Note that there’s at least one reflow — the initial layout of the page
  2. parts of the screen will need to be updated, either because of changes in geometric properties of a node or because of stylistic change, such as changing the background color. This screen update is called a repaint, or a redraw.

Repaints and reflows can be expensive; they can hurt the user experience and make the UI appear sluggish.

Repaint
As the name suggests, a repaint is nothing but repainting an element on the screen when its “skin” changes: a change that affects the visibility of the element but does not affect its layout.
For example, any of the following will trigger a repaint:
1. Changing the visibility of an element.
2. Changing the outline of an element.
3. Changing the background.

According to Opera, a repaint is an expensive operation because it forces the browser to verify/check the visibility of all other DOM nodes.
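In code, repaint-only changes look like the following. A minimal sketch; .box is a hypothetical element on the page:

const box = document.querySelector('.box');

box.style.visibility = 'hidden';          // element keeps its space: repaint only
box.style.outline = '2px solid red';      // outline takes no layout space: repaint only
box.style.backgroundColor = 'papayawhip'; // purely cosmetic: repaint only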

Reflow
Reflow means re-calculating the positions and geometries of elements in the document, for the purpose of re-rendering part or all of the document. Because reflow is a user-blocking operation in the browser, it is useful for developers to understand how to improve reflow time and also to understand the effects of various document properties (DOM depth, CSS rule efficiency, different types of style changes) on reflow time. Sometimes reflowing a single element in the document may require reflowing its parent elements and also any elements which follow it.
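A common way to trigger needless reflows is to interleave style writes with layout reads (offsetHeight, offsetTop, getComputedStyle and friends): every such read forces the browser to recalculate layout synchronously. A minimal sketch, with .item as a hypothetical class:

const items = document.querySelectorAll('.item');

// Bad: write then read per element; each read forces a synchronous reflow
items.forEach(el => {
  el.style.width = '200px';     // write: invalidates the current layout
  console.log(el.offsetHeight); // read: forces the browser to reflow right now
});

// Better: batch all writes first, then all reads; layout is recalculated once
items.forEach(el => { el.style.width = '200px'; });
const heights = [...items].map(el => el.offsetHeight);
console.log(heights);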


Virtual DOM VS Real DOM

Every time the DOM changes, the browser needs to recalculate the CSS, do layout and repaint the web page. This is what takes time with the real DOM.

To minimize this time, Ember uses a key/value observation technique and Angular uses dirty checking. With these techniques, they update only the DOM nodes that changed or, in Angular’s case, the nodes marked as dirty.

If this were not the case, you would not be able to see a new email appear as soon as it arrives while you are writing another email in Gmail.

Browsers are also becoming smarter nowadays; they try to shorten the time it takes to repaint the screen. Still, the biggest thing that can be done is to minimize and batch the DOM changes that cause repaints.

The strategy of reducing and batching DOM changes, taken to another level of abstraction, is the idea behind React’s Virtual DOM.
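One hand-rolled way to batch DOM changes is to build new nodes off-DOM in a DocumentFragment and attach them in a single operation, so the browser lays out and paints once instead of once per item. A minimal sketch; #list is a hypothetical element:

const list = document.querySelector('#list');
const fragment = document.createDocumentFragment();

for (let i = 0; i < 100; i++) {
  const li = document.createElement('li');
  li.textContent = 'Item ' + i;
  fragment.appendChild(li); // happens off-DOM: no reflow or repaint yet
}

list.appendChild(fragment); // one DOM change: one layout pass, one paint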

What makes React’s virtual DOM so fast?

React doesn’t really do anything new; it’s just a strategic move. It stores a replica of the real DOM in memory. When you modify the DOM, it first applies these changes to the in-memory copy. Then, using its diffing algorithm, it figures out what has really changed.

Finally, it batches those changes and applies them to the real DOM in one go, thus minimizing reflow and repaint.

Interested in reading more on that? Well, that’s a topic for another post.



Docker Setup for Service Based Architecture Application

Content posted here with the permission of the author Anil Kumar Maurya, who is currently employed at Josh Software. Original post available here.

What is Service Based Architecture ?

At first, Microservice Architecture and Service Based Architecture look similar, but they are different from each other.

Microservice architecture advocates smaller components: an application can consist of hundreds or thousands of microservices. Service Based Architecture, on the other hand, advocates breaking the code apart in a domain-centric way: an application consists of around 10–12 deployable services. These services may have separate databases, or they may share the same database.

Managing a few microservices is easy, but as the number of microservices increases, managing them is no longer an easy task. The number of network calls also increases.

In the case of Service Based Architecture, the number of services is limited, so managing them is not a challenge. The number of network calls is also smaller, which should give better performance.

ThoughtWorks director Neal Ford argued in a talk that organizations transition more easily from a monolithic architecture to a service-based architecture than to a microservices architecture.

Ref: https://www.infoq.com/news/2016/10/service-based-architecture

Why we chose Service Based Architecture Over Micro Services

Background: We are building ERP software. It is going to be used by 50–100 people at a time. We are a team of 3 developers, and we need to deliver the first release in 3 months.

We aim to build a scalable and maintainable product, so a monolith is out of the question. That left 2 options: Microservices or Service Based Architecture. Microservices require a complex setup and would double our efforts. As we have a limited team size and our timelines are fixed, Service Based Architecture with a common database made more sense for us.

Challenges we faced

We had 8 repositories, one for each service. Setting up the project locally for a new developer was very time consuming; every service needed to be set up separately.

Apart from setting up all the services, we needed to install Postgres, Redis & Elasticsearch. If you get stuck while installing any one of them, it may eat up a whole day.

Also, starting up the application required starting all 8 services manually (which is not an interesting thing to do every day).

Docker for our rescue

We created a single repository for all services. Now getting all the latest changes locally is just a git pull away.

With Docker, we can set up all services, with all their dependencies, using just one command:

docker-compose build

And we start our application (all services) with:

docker-compose up

Setting up Docker Compose for an application which consists of 8 services (4 Rails API backends & 4 React frontends)

Application Directory structure looks like:

project
├───service-1-api
├───service-1-web
├───service-2-api
├───service-2-web
├───service-3-api
├───service-3-web
├───service-4-api
├───service-4-web
├───docker-compose.yml
├───Dockerfile
└───Dockerfile-React

* Dockerfile is for the API images
* Dockerfile-React is for the React application images

Of course, our services are not really named service-1 and service-2; I have changed the names deliberately for privacy.

Our docker-compose.yml:

version: '3.6'

services:
  db:
    image: postgres

  redis:
    image: 'redis:latest'

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200

  service-1-api:
    build:
      context: './service-1-api'
      dockerfile: $PWD/Dockerfile
    volumes:
      - $PWD/service-1-api:/app
    command: bundle exec puma -p 3000
    ports:
      - 3000:3000
    depends_on:
      - db

  service-1-web:
    build:
      context: './service-1-web'
      dockerfile: $PWD/Dockerfile-React
    volumes:
      - $PWD/service-1-web/:/app/
    ports:
      - 3001:3001
    environment:
      NODE_ENV: development
      CHOKIDAR_USEPOLLING: 'true'

  service-2-sidekiq:
    depends_on:
      - db
      - redis
      - elasticsearch
    build:
      context: './service-2-api'
      dockerfile: $PWD/Dockerfile
    command: bundle exec sidekiq -C config/sidekiq.yml
    volumes:
      - $PWD/service-2-api:/app

  service-2-api:
    build:
      context: './service-2-api'
      dockerfile: $PWD/Dockerfile
    volumes:
      - $PWD/service-2-api:/app
    command: bundle exec puma -p 3002
    ports:
      - 3002:3002
    depends_on:
      - db
      - elasticsearch
      - service-2-sidekiq
    stdin_open: true
    tty: true

  service-2-web:
    build:
      context: './service-2-web'
      dockerfile: $PWD/Dockerfile-React
    volumes:
      - $PWD/service-2-web/:/app/
    command: npm start
    ports:
      - 3003:3003
    environment:
      - NODE_ENV=development
      - CHOKIDAR_USEPOLLING=true

  service-3-sidekiq:
    depends_on:
      - db
      - redis
      - elasticsearch
    build:
      context: './service-3-api'
      dockerfile: $PWD/Dockerfile
    command: bundle exec sidekiq -C config/sidekiq.yml
    volumes:
      - $PWD/service-3-api:/app

  service-3-api:
    build:
      context: './service-3-api'
      dockerfile: $PWD/Dockerfile
    volumes:
      - $PWD/service-3-api:/app
    command: bundle exec puma -p 3004
    ports:
      - 3004:3004
    depends_on:
      - db
      - elasticsearch
      - service-3-sidekiq
    stdin_open: true
    tty: true

  service-3-web:
    build:
      context: './service-3-web'
      dockerfile: $PWD/Dockerfile-React
    volumes:
      - $PWD/service-3-web/:/app/
    command: npm start
    ports:
      - 3005:3005
    environment:
      - NODE_ENV=development
      - CHOKIDAR_USEPOLLING=true

  service-4-api:
    build:
      context: './service-4-api'
      dockerfile: $PWD/Dockerfile
    volumes:
      - $PWD/service-4-api:/app
    command: bundle exec puma -p 3006
    ports:
      - 3006:3006
    depends_on:
      - db
    stdin_open: true
    tty: true

  service-4-web:
    build:
      context: './service-4-web'
      dockerfile: $PWD/Dockerfile-React
    volumes:
      - $PWD/service-4-web/:/app/
    working_dir: /app
    command: npm start
    ports:
      - 3007:3007
    environment:
      - NODE_ENV=development
      - CHOKIDAR_USEPOLLING=true

volumes:
  esdata1:
    driver: local

* Using this docker-compose.yml configuration, a service restart is not required on code change.

Dockerfile:

FROM ruby:2.5.3-alpine

RUN apk add --update bash build-base postgresql-dev tzdata
RUN gem install rails -v '5.1.6'

WORKDIR /app
ADD Gemfile Gemfile.lock /app/
RUN bundle install
COPY . /app/

Dockerfile-React

FROM node:11.6.0-alpine

WORKDIR '/app'

# Install yarn and other dependencies via apk
RUN apk update && apk add yarn python g++ make && rm -rf /var/cache/apk/*

COPY package.json yarn.lock /app/

RUN yarn install
RUN yarn global add react-scripts

COPY . ./

CMD ["npm", "run", "start"]

To add a new gem to a Rails API service, add the gem to its Gemfile and build a new image for that service, for example:

docker-compose build service-1-api

To add a new package to a React app service, use:

docker-compose run service-1-web yarn add `package-name`

Conclusion:

Service Based Architecture is a good alternative for applications where manpower & time are constraints.

In the next blog, I will write about deploying this application on Amazon ECS (Elastic Container Service).
