Docker Setup for Service Based Architecture Application

Content posted here with the permission of the author Anil Kumar Maurya, who is currently employed at Josh Software. Original post available here.

What is Service Based Architecture?

At first glance, Micro Service Architecture and Service Based Architecture look similar, but they are different from each other.

Micro service architecture advocates smaller components: an application can consist of hundreds or thousands of micro services. Service based architecture, on the other hand, advocates breaking the code apart in a domain-centric way, so an application consists of roughly 10–12 deployable services. These services may each have a separate database, or they may share the same database.

Managing a few micro services is easy, but as the number of micro services grows, managing them is no longer an easy task. The number of network calls also increases.

In the case of Service Based Architecture, the number of services is limited, so managing them is not a challenge. The number of network calls is also lower, which should give better performance.

ThoughtWorks director Neal Ford argued in a talk that organizations transition more easily from a monolithic architecture to a service-based architecture than to a microservices architecture.

Ref: https://www.infoq.com/news/2016/10/service-based-architecture

Why we chose Service Based Architecture Over Micro Services

Background: We are building an ERP software product. It is going to be used by 50–100 people at a time. We are a team of 3 developers and we need to deliver the first release in 3 months.

We aim to build a scalable and maintainable product, so a monolith is out of the question. That left 2 options: Micro Services or Service Based Architecture. Micro Services require a complex setup and would double our effort. As we have a limited team size and our timelines are fixed, Service Based Architecture with a common database made more sense for us.

Challenges we faced

We had 8 repositories, one for each service. Setting up the project locally for a new developer was very time consuming, because every service needed to be set up separately.

Apart from setting up all the services, we needed to install postgres, redis & elasticsearch. If you get stuck while installing any one of them, it can eat up a whole day.

Also, starting up the application required starting all 8 services manually (which is not an interesting thing to do every day).

Docker to the rescue

We created a single repository for all services. Now getting all changes locally is just a git pull away.

With docker, we can set up all services with all their dependencies with just one command:

docker-compose build

And we start our application (all services) by running:

docker-compose up

Setting up docker-compose for an application consisting of 8 services (4 Rails API backends & 4 React frontends)

Application Directory structure looks like:

project
│
├───service-1-api
├───service-1-web
├───service-2-api
├───service-2-web
├───service-3-api
├───service-3-web
├───service-4-api
├───service-4-web
├───docker-compose.yml
├───Dockerfile
└───Dockerfile-React

* Dockerfile is for the API images
* Dockerfile-React is for the React application images

Of course our services are not actually named service-1 & service-2; I have changed the names deliberately for privacy.

Our docker-compose.yml:

version: '3.6'

services:
  db:
    image: postgres

  redis:
    image: 'redis:latest'

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200

  service-1-api:
    build:
      context: './service-1-api'
      dockerfile: $PWD/Dockerfile
    volumes:
      - $PWD/service-1-api:/app
    command: bundle exec puma -p 3000
    ports:
      - 3000:3000
    depends_on:
      - db

  service-1-web:
    build:
      context: './service-1-web'
      dockerfile: $PWD/Dockerfile-React
    volumes:
      - $PWD/service-1-web/:/app/
    ports:
      - 3001:3001
    environment:
      NODE_ENV: development
      CHOKIDAR_USEPOLLING: 'true'

  service-2-sidekiq:
    depends_on:
      - db
      - redis
      - elasticsearch
    build:
      context: './service-2-api'
      dockerfile: $PWD/Dockerfile
    command: bundle exec sidekiq -C config/sidekiq.yml
    volumes:
      - $PWD/service-2-api:/app

  service-2-api:
    build:
      context: './service-2-api'
      dockerfile: $PWD/Dockerfile
    volumes:
      - $PWD/service-2-api:/app
    command: bundle exec puma -p 3002
    ports:
      - 3002:3002
    depends_on:
      - db
      - elasticsearch
      - service-2-sidekiq
    stdin_open: true
    tty: true

  service-2-web:
    build:
      context: './service-2-web'
      dockerfile: $PWD/Dockerfile-React
    volumes:
      - $PWD/service-2-web/:/app/
    command: npm start
    ports:
      - 3003:3003
    environment:
      - NODE_ENV=development
      - CHOKIDAR_USEPOLLING=true

  service-3-sidekiq:
    depends_on:
      - db
      - redis
      - elasticsearch
    build:
      context: './service-3-api'
      dockerfile: $PWD/Dockerfile
    command: bundle exec sidekiq -C config/sidekiq.yml
    volumes:
      - $PWD/service-3-api:/app

  service-3-api:
    build:
      context: './service-3-api'
      dockerfile: $PWD/Dockerfile
    volumes:
      - $PWD/service-3-api:/app
    command: bundle exec puma -p 3004
    ports:
      - 3004:3004
    depends_on:
      - db
      - elasticsearch
      - service-3-sidekiq
    stdin_open: true
    tty: true

  service-3-web:
    build:
      context: './service-3-web'
      dockerfile: $PWD/Dockerfile-React
    volumes:
      - $PWD/service-3-web/:/app/
    command: npm start
    ports:
      - 3005:3005
    environment:
      - NODE_ENV=development
      - CHOKIDAR_USEPOLLING=true

  service-4-api:
    build:
      context: './service-4-api'
      dockerfile: $PWD/Dockerfile
    volumes:
      - $PWD/service-4-api:/app
    command: bundle exec puma -p 3006
    ports:
      - 3006:3006
    depends_on:
      - db
    stdin_open: true
    tty: true

  service-4-web:
    build:
      context: './service-4-web'
      dockerfile: $PWD/Dockerfile-React
    volumes:
      - $PWD/service-4-web/:/app/
    working_dir: /app
    command: npm start
    ports:
      - 3007:3007
    environment:
      - NODE_ENV=development
      - CHOKIDAR_USEPOLLING=true

volumes:
  esdata1:
    driver: local

* With this docker-compose.yml configuration, a service restart is not required on code change, because each service's source directory is mounted into its container as a volume.

Dockerfile:

FROM ruby:2.5.3-alpine

# System packages needed to build native gems and connect to Postgres
RUN apk add --update bash build-base postgresql-dev tzdata
RUN gem install rails -v '5.1.6'

WORKDIR /app
# Copy only the Gemfiles first so the bundle install layer stays cached until they change
ADD Gemfile Gemfile.lock /app/
RUN bundle install
COPY . /app/

Dockerfile-React

FROM node:11.6.0-alpine

WORKDIR '/app'

# Install yarn and other dependencies via apk
RUN apk update && apk add yarn python g++ make && rm -rf /var/cache/apk/*

COPY package.json yarn.lock /app/

RUN yarn install
RUN yarn global add react-scripts

COPY . ./

CMD ["npm", "run", "start"]

To add a new gem to a Rails API service, add the gem to its Gemfile and build a new image for that service, for example:

docker-compose build service-1-api

To add a new package to a React app service, use:

docker-compose run service-1-web yarn add `package-name`
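
In the same spirit, other one-off tasks can be run inside a service's container with docker-compose run. The commands below are a sketch, assuming a standard Rails setup for the API services; the exact commands depend on how each service is configured and are not part of the original setup.

# Run pending migrations for one API service (example only)
docker-compose run service-1-api bundle exec rails db:migrate

# Open a Rails console inside that service's container
docker-compose run service-1-api bundle exec rails console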

Conclusion:

Service Based Architecture is a good alternative for applications where manpower & time are constraints.

In the next blog I will write about deploying this application on Amazon ECS (Elastic Container Service).


Conversations that matter: Driving digital disruption in Banking

Banking and Chatbots

The word on the street says that banking and finance are moving towards digital transformation more aggressively than ever. Disruptive technologies like artificial intelligence and machine learning are key focus areas for FinTech leaders today. Developing long term solutions at scale to simplify finance operations is what will get developers the most brownie points!

Gartner predicts that companies offering personalization will outperform brands who don’t. With IoT in place now, the number of connected devices in the market has only increased, and the world already has 2.5 billion smartphone users. More than 37% of the world is using messaging apps today, and approximately 20 million already use smart speakers. The paradigm shift from non-personalized marketing to social media is representative of new age interactive customer experiences, and that’s the big fish to catch. Conversational commerce is a key driver of this transformation, and its recent popularity is well deserved. This also includes text-based chatbots that increase consumer engagement, especially in service-based sectors. In the last year, especially, this has grown incredibly and enabled businesses to connect with 5 times as many customers as usual. Impact? Their revenue grew by 10-20%. That is massive! Imagine the kind of growth opportunities out there!

Conversational Banking: Embracing digital transformation
The key differentiator for conversational commerce is that it allows users to converse through a platform of their choice, along with greater transparency. It is cost effective for banks and financial institutions, as a chatbot is simply a conversational algorithm embedded within a chat interface, i.e. a one-time investment. Intelligent chatbots generate a human like conversation with consumers, provide businesses with a dynamic understanding of their needs and while at it, optimize user data in real time. The smarter your bot becomes, the more data it collects. Initially, FinTech chatbots focused on customer experience but more recently, investments in contextual insights driven communication have made bots the new age contact executives, says PWC. Bots have also overtaken IVR and are helping users authenticate transactions seamlessly. In fact, chatbots can also provide CXOs with operational information and thus help them focus more on strategic business objectives, rather than remain caught up in day to day activities.

Adopting holistic, AI empowered support systems for back office

While optimizing customer experiences is priority, chatbots can be used to solve operational back office problems too. Backed by advanced machine learning and natural language processing (NLP), chatbots are essentially conversational analytics platforms that initiate actions without human intervention. A well designed chatbot reduces turn around time, provides instant information, enhances cross selling, improvises mundane queries and has the ability to provide omni channel experiences. For example, if a customer wants their bank statements, all they need to do is send a message to the chatbot. The details will be furnished to them within seconds! Based on the customer’s history and digital profile, chatbots also recommend investment options, provide market related news and suggest ways to utilize credit card points. Proactive suggestions for the win! That’s not all. Advanced chatbots can even analyze complex legal contracts much faster than lawyers, saving a large chunk of manpower and resources in the process. Granting access to software systems, resetting passwords and handling day to day IT operations is also achievable. Cognitive intelligence can further be utilized to pay down debt. Personal banking assistants are already anticipating questions for thousands of common FAQs, reducing the need for time consuming telephonic conversations.

Roadmap for the future: Intelligent conversations
An insights-driven bank complete with sales and marketing functions, and custom offerings based on global trends is the future of FinTech. Imagine an institution empowered with technology that can engage with consumers in real time, bridge gaps between existing legacy infrastructure through predictive data analytics and keep track of everything, all in one place. It will benefit not only the end consumer, but the bank’s employees as well by cutting down hours of repetitive work. For example, if a bank runs a loyalty program and wants to find out customers with the most number of transactions, they don’t need to manually look through their records. An AI and RPA powered chatbot can easily look through the customer data and respond via text! Scalable, high performance open source solutions implemented through apps add to dynamic UX as well. Mobile payments, digital wallets and UPI have also seen a massive escalation in the recent years, and non banking transactions like bill payments are adding to the boost. Soon, front office banking systems will be overtaken by mobile apps, and ticketing and back office systems will run on data analytics and blockchain. While certain processes like KYCs and internal employee management will definitely rely on the human touch, disruptive tech is here to stay. Conversational commerce may be in its infancy stage right now, but it holds the power to build strong business-consumer relationships.. The next era of FinTech will transform banks of today into cognitive financial institutions of tomorrow, and I, for one, can’t wait to see that happen!


Android SMS Retriever​ API: To Auto Verify SMS

Content posted here with the permission of the author Chandrashekhar Sahu, who is currently employed at Josh Software. Original post available here.

The Android app needs SMS receive/read permission to retrieve SMS content.

Imagine an application where the use case is to read an SMS only for validating the user with an OTP, and the rest of the app never uses the SMS reading feature again. In this case, requesting SMS permissions is a waste of resources, time and, of course, the code needed to check those permissions.

To solve this problem, Google has introduced the SMS Retriever API. This API allows you to retrieve the OTP without needing the SMS permission in your application.


Dependency for SMS Retriever API

implementation 'com.google.android.gms:play-services-base:16.0.1'
implementation 'com.google.android.gms:play-services-identity:16.0.0'
implementation 'com.google.android.gms:play-services-auth:16.0.1'
implementation 'com.google.android.gms:play-services-auth-api-phone:16.0.0'

Obtain the user’s phone number (Phone Selector API)

First, we need the user’s phone number, on which the OTP will be received. We create a hint request object and set the phone number identifier supported field to true.

HintRequest hintRequest = new HintRequest.Builder()
        .setHintPickerConfig(new CredentialPickerConfig.Builder().setShowCancelButton(true).build())
        .setPhoneNumberIdentifierSupported(true)
        .build();

Then, we get a pending intent from that hint request for the phone number selector dialogue.

GoogleApiClient apiClient = new GoogleApiClient.Builder(getContext())
        .addApi(Auth.CREDENTIALS_API)
        .enableAutoManage(getActivity(), GoogleApiHelper.getSafeAutoManageId(),
                new GoogleApiClient.OnConnectionFailedListener() {
                    @Override
                    public void onConnectionFailed(@NonNull ConnectionResult connectionResult) {
                        Log.e(TAG, "Client connection failed: " + connectionResult.getErrorMessage());
                    }
                })
        .build();

PendingIntent intent = Auth.CredentialsApi.getHintPickerIntent(apiClient, hintRequest);
try {
    // Use the same request code that onActivityResult() checks below
    startIntentSenderForResult(intent.getIntentSender(), RC_PHONE_HINT, null, 0, 0, 0);
} catch (IntentSender.SendIntentException e) {
    Log.e(TAG, "Could not start hint picker intent", e);
}

Once the user selects the phone number, that phone number is returned to our app in the onActivityResult().

@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == RC_PHONE_HINT) {
        if (data != null) {
            Credential cred = data.getParcelableExtra(Credential.EXTRA_KEY);
            if (cred != null) {
                final String unformattedPhone = cred.getId();
            }
        }
    }
}

Start the SMS retriever

When we are ready to verify the user’s phone number, we get an instance of the SmsRetrieverClient object, call startSmsRetriever and attach success and failure listeners to the SMS retrieval task:

SmsRetrieverClient client = SmsRetriever.getClient(mContext);

// Starts SmsRetriever, waits for ONE matching SMS message until timeout
// (5 minutes).
Task<Void> task = client.startSmsRetriever();

// Listen for success/failure of the start Task. 
task.addOnSuccessListener(new OnSuccessListener<Void>() {
    @Override
    public void onSuccess(Void aVoid) {
        // Log.d(TAG,"Successfully started retriever");
    }
});

task.addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(@NonNull Exception e) {
        Log.e(TAG, "Failed to start retriever");
    }
});

Our server can then send the message to the phone using existing SMS infrastructure or service. When this message is received, Google Play services broadcasts an intent which contains the text of the message.

public class MySMSBroadcastReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        if (SmsRetriever.SMS_RETRIEVED_ACTION.equals(intent.getAction())) {
            Bundle extras = intent.getExtras();
            Status status = (Status) extras.get(SmsRetriever.EXTRA_STATUS);
            switch (status.getStatusCode()) {
                case CommonStatusCodes.SUCCESS:
                    // Get SMS message contents
                    String message = (String) extras.get(SmsRetriever.EXTRA_SMS_MESSAGE);
                    // Extract one-time code from the message and complete verification
                    // by sending the code back to your server for SMS authenticity.
                    break;
                case CommonStatusCodes.TIMEOUT:
                    // Waiting for SMS timed out (5 minutes)
                    // Handle the error ...
                    break;
            }
        }
    }
}

We need to register this BroadcastReceiver in our Manifest file as follows

<receiver
    android:name=".MySMSBroadcastReceiver"
    android:exported="true">
    <intent-filter>
        <action android:name="com.google.android.gms.auth.api.phone.SMS_RETRIEVED" />
    </intent-filter>
</receiver>

Construct a verification message:

When our server receives a request to verify a phone number, first construct the verification message that you will send to the user’s device. This message must:

* Be no longer than 140 bytes
* Begin with the prefix <#>
* Contain a one-time code that the client sends back to your server to complete the verification flow
* End with an 11-character hash string that identifies your app

Otherwise, the contents of the verification message can be whatever you choose. It is helpful to create a message from which you can easily extract the one-time code later on. For example, a valid verification message might look like the following:

<#> Use 123456 as your verification code 
FC+7qAH5AZu

Optional: Save the phone number with Smart Lock for Passwords

Optionally, after the user has verified their phone number, we can prompt the user to save this phone number with Smart Lock for Passwords, so it will be available automatically in other apps and on other devices without them having to type or select the phone number again.
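
The post does not include code for this step, so here is a minimal sketch of how the save could look with the Credentials API, reusing apiClient and unformattedPhone from earlier; RC_SAVE is a hypothetical request code, and the exact calls may differ in your setup.

// Sketch only: save the verified phone number as a credential with Smart Lock
Credential credential = new Credential.Builder(unformattedPhone).build();
Auth.CredentialsApi.save(apiClient, credential).setResultCallback(new ResultCallback<Status>() {
    @Override
    public void onResult(@NonNull Status status) {
        if (status.isSuccess()) {
            // Phone number saved; other apps and devices can now offer it as a hint
        } else if (status.hasResolution()) {
            try {
                // Ask the user to confirm saving the credential (RC_SAVE is our own request code)
                status.startResolutionForResult(getActivity(), RC_SAVE);
            } catch (IntentSender.SendIntentException e) {
                Log.e(TAG, "Failed to show Smart Lock save prompt", e);
            }
        }
    }
});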

Click here to get the source code.

Happy Coding 🙂


What the next year looks like for InsurTech


Over the last two years, investments in the global InsurTech market have crossed $2.3 billion and the sector has continued to grow at an astounding rate of 3-4%. EY estimated the net income growth rate to cross 23% as opposed to 14% in 2017. All of this points towards one thing: the immense possibilities for the future. With more than 24 life insurance companies and 33 non-life insurance companies, the market is booming in India today. This is mind-blowing, but one of the biggest challenges that the insurance sector faces is the way it is perceived. People no longer want to think of it as boring paperwork, long calls with agents and multiple forms to file for one claim. Consumers, especially millennials, are looking for specific policies with small premiums and maximized benefits that they can purchase conveniently, preferably online. Here’s where a combination of insurance and intelligent technology comes into play. InsurTech led by disruptive tech like AI sure has been more than a buzzword in the last few years!

InsurTech in 2018

The Insurance sector in India has seen many improvements in the past year, both in terms of innovation and premium growth. While digital innovations successfully disrupt the traditional ways of functioning, the government is taking more steps towards financial inclusion and a better-connected India. The income of the Indian middle class is also rising steadily, and this means that insurance companies are better suited to plan their premium offerings. Life Insurance premiums have been growing at a CAGR of 12.49%, and Non-life premium at a CAGR of 11.05%. More and more Point of Sale people are becoming part of this industry, and thus, consumers are becoming more and more comfortable with the idea of buying insurance online. This landmark decision by the IRDAI will surely increase business potential quite drastically. Broking Agencies like PolicyBazaar, QuickInsure and TurtleMint have seen investments of more than $250 million in 2018. There’s so much more to come!

Disrupting InsurTech with AI and IoT

With more than 337 million smartphones and millennials who form almost half of India’s population, IoT is also booming. This means only one thing: more consumer data. Data is the backbone of artificial intelligence, and going ahead, AI coupled with predictive analytics will pave the way towards mapping trends and consumer behavior better, in turn, helping insurance companies make informed policy decisions. As IoT devices becomes cheaper and better integrated in our daily life cycle, AI will use this data for real-time risk evaluation and ensure the right premiums for every customer! Paperwork? Forms? Manual intervention. All gone! Policy issuance will become so much simpler, faster and more efficient. IoT will play a major role in the future with real-time metrics being a major factor to issue better premiums. Imagine if you get a drive-score to build your driving profile that can get you a reduced premium, and if you ever need to claim the insurance, you don’t even have to stand in line because it gets credited to your account through the click of a button. Sounds awesome, right? AI is all set to disrupt traditional claims, distribution, underwriting, and pricing, and this solution is closer than you think!

Predictions for 2019: Customized Insurance Plans, Cryptocurrency and more

While all insurance companies want to make their premiums as affordable as possible, that will always depend on the customer’s profile. However, in the near future, you could get customised insurance products, negotiated on the terms and benefits of a policy Vs the premium, and this would all be automated for each customer separately. Customers want solutions, not services, and if you can give them that, you’ve won the race already. The emergence of online third party platforms that build an entire insurance ecosystem online for consumers to choose solutions from is going to be a key trend this year. India will also see microinsurance coming up, in line with the government’s goals of financial inclusion mentioned earlier. McKinsey states that item insurance, or the concept of ‘insurance as a service’ will escalate and allow people to insure items only when they are being used. Gamification, chatbots and mobile tools to ensure constant user engagement cannot be missed out. After all, it is the ‘connected generation’ that insurers have to target.

Adopting newer tech to gain an edge over competitors by improving operational efficiency is the mantra we all need to follow. Big Data, AI, and IoT are here to make their mark, and in the near future, crypto-currencies may disrupt the InsurTech sector as well. Here’s to a year full of tech innovations that matter!


BE CAREFUL WHILE QUERYING INNER OBJECTS IN ELASTICSEARCH

Content posted here with the permission of the author Anuj Verma, who is currently employed at Josh Software. Original post available here.

In elasticsearch we can store closely related entities within a single document. For example, we can store a blog post and all of its comments together, by passing an array of comments.

{
  "title": "Invest Money",
  "body": "Please start investing money as soon...",
  "tags": ["money", "invest"],
  "published_on": "18 Oct 2017",
  "comments": [
    {
      "name": "William",
      "age": 34,
      "rating": 8,
      "comment": "Nice article..",
      "commented_on": "30 Nov 2017"
    },
    {
      "name": "John",
      "age": 38,
      "rating": 9,
      "comment": "I started investing after reading this.",
      "commented_on": "25 Nov 2017"
    },
    {
      "name": "Smith",
      "age": 33,
      "rating": 7,
      "comment": "Very good post",
      "commented_on": "20 Nov 2017"
    }
  ]
}

So we have an elasticsearch document describing a post, with an inner object comments containing all the comments on the post. But inner objects in elasticsearch do not work the way we expect. How? We will see soon.

Problem

Now suppose we want to find all blog posts on which the user {name: john, age: 34} has commented. So let’s again look at our sample document above and find the users who have commented.

Name       Age
William    34
John       38
Smith      33

From the list we can clearly see that there is no user John who is 34 years old. For simplicity, consider that we have only 1 document in the elasticsearch index. Let’s verify this by querying the index:

curl -XGET 'localhost:9200/blog/_search?pretty' -H 'Content-Type: application/json' -d'
{
  "query": {
    "bool": {
      "must": [
        { "match": { "comments.name": "John" }},
        { "match": { "comments.age":  34 }}
      ]
    }
  }
}
'

Our sample document is returned in the response. Surprised? Now that is why I said:

inner objects in elasticsearch do not work as expected

The problem here is that the library used by elasticsearch (Lucene) has no concept of inner objects, so inner objects are flattened into a simple list of field names and values. Our document is internally stored as:

{
  "title":                    [ invest, money ],
  "body":                     [ as, investing, money, please, soon, start ],
  "tags":                     [ invest, money ],
  "published_on":             [ 18 Oct 2017 ]
  "comments.name":            [ smith, john, william ],
  "comments.comment":         [ after, article, good, i, investing, nice, post, reading, started, this, very ],
  "comments.age":             [ 33, 34, 38 ],
  "comments.rating":          [ 7, 8, 9 ],
  "comments.commented_on":    [ 20 Nov 2017, 25 Nov 2017, 30 Nov 2017 ]
}

As you can clearly see above, the relationship between comments.name and comments.age has been lost. That is why our document matches a query for john and 34.

Solution

To solve this problem we just need to make a small change in the mapping of our elasticsearch index. If you have a look at the mapping of the index, you will find that the type of the comments field is object. We need to change it to type nested.

We can define this mapping by running the query below (note that an existing field cannot be changed from object to nested in place, so the index needs to be created with this mapping and the data reindexed):

curl -XPUT 'localhost:9200/blog' -H 'Content-Type: application/json' -d'
{
  "mappings": {
    "blog": {
      "properties": {
        "title": { "type": "text" },
        "body": { "type": "text" },
        "tags": { "type": "text" },
        "published_on": { "type": "text" },
        "comments": {
          "type": "nested",
          "properties": {
            "name":         { "type": "text"  },
            "comment":      { "type": "text"  },
            "age":          { "type": "short" },
            "rating":       { "type": "short" },
            "commented_on": { "type": "text"  }
          }
        }
      }
    }
  }
}
'

After changing the mapping to type nested, there is a slight change in the way we query the index: we need to use a nested query. Given below is an example of a nested query:

curl -XGET 'localhost:9200/blog/_search?pretty' -H 'Content-Type: application/json' -d'
{
  "query": {
    "bool": {
      "must": [
        {
          "nested": {
            "path": "comments",
            "query": {
              "bool": {
                "must": [
                  {
                    "match": {
                      "comments.name": "john"
                    }
                  },
                  {
                    "match": {
                      "comments.age": 34
                    }
                  }
                ]
              }
            }
          }
        }
      ]
    }
  }
}
'

The above query will return no document in response, as there is no match for user {name: john, age: 34}.

Surprised again? Just a small change solved the problem in no time. It may be a small change on our side, but a lot has changed in the way elasticsearch stores our document. Internally, nested objects index each object in the array as a separate hidden document, meaning that each nested object can be queried independently of the others.

Given below is the internal representation of sample document after changing mapping:

{
  {
    "comments.name":    [ john ],
    "comments.comment": [ after i investing started reading this ],
    "comments.age":     [ 38 ],
    "comments.rating":  [ 9 ],
    "comments.date":    [ 25 Nov 2017 ]
  },
  {
    "comments.name":    [ william ],
    "comments.comment": [ article, nice ],
    "comments.age":     [ 34 ],
    "comments.rating":   [ 8 ],
    "comments.date":    [ 30 Nov 2017 ]
  },
  {
    "comments.name":    [ smith ],
    "comments.comment": [ good, post, very],
    "comments.age":     [ 33 ],
    "comments.rating":   [ 7 ],
    "comments.date":    [ 20 Nov 2017 ]
  },
  {
    "title":            [ invest, money ],
    "body":             [ as, investing, money, please, soon, start ],
    "tags":             [ invest, money ],
    "published_on":     [ 18 Oct 2017 ]
  }
}

As you can see each inner object is stored as a separate hidden document internally. This maintains the relationship between their fields.

Conclusion:

So if you are using inner objects in an index and querying them, verify that the type of the inner object is nested. Otherwise the query may return documents that do not actually match.
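
One quick way to check is to inspect the index mapping directly; a minimal sketch, assuming the blog index from this post is running on localhost:9200 (the comments field should show "type": "nested"):

curl -XGET 'localhost:9200/blog/_mapping?pretty'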

Thanks for reading. Please like and share so that it can reach out to other valuable readers too.


Making Manufacturing Smart: Predictive Maintenance


A growing middle-class population, higher spending power and per capita income and the increasing share of young professionals in India today have given way to initiatives like Make in India that aims at increasing the contribution of the manufacturing sector to the country’s GDP. Manufacturing is growing at an astounding rate, and with the government’s support along with both, domestic and foreign investments, predictions state that India is on its way to becoming the fifth largest manufacturing hub in the world. Many global companies and MNCs have set up their operational centers here. As manufacturing looks to play a larger role in our economy, technology advancement and tech intervention in this sector will continue to be a great opportunity for the entire IT industry.

Challenges in the manufacturing sector

The goal of every manufacturing organization is the same: to maximize machine efficiency. This is by no means easy, especially because of the rate at which the demand for goods is rising. This also means that machines producing these goods will have to be serviced periodically, as poor maintenance strategies single-handedly decrease efficiency. The biggest challenge faced by the manufacturing industry is to provide seamless, consistent performance, because routine failures and downtime are a very real threat to the overall performance. When you have machines that perform repetitive tasks every single day, this is bound to happen. Sometimes, maximum utilization of machine parts (to the point where they break!) may lead to catastrophic, even permanent, damage and longer downtime. Then, of course, there are the failures that we don’t see coming and the unplanned downtimes. If you change parts frequently, that’s an additional overhead cost, and may cause unnecessary changes to a daily routine. Often, companies may end up with a spare parts surplus, which ultimately impacts the business’s bottom line, and not in a good way. The real question is, can there be something that helps professionals gauge how and when they should get machines serviced?

Prevention is better than cure: Predictive Maintenance

The answer is yes. Enter Predictive Maintenance! A new age method backed by deep learning and advanced technology, its purpose is to safeguard the health of machines and make sure they are not being overused. It aims at avoiding unplanned downtime and minimizing planned downtime. We are now living in the fourth industrial revolution, and it is time for manufacturers to shift from ‘Why fix something that is not broken’ to ‘Let’s prevent it from breaking down in the first place.’ In essence, the requirement of the industry is to move from a reactive chain of thought to an anticipatory one, and that is exactly what predictive maintenance offers.

Imagine how much easier life would be if you knew beforehand which machine part needed servicing. Instead of breaking open the entire machine (which by this time, in all probability, has stopped working), figuring out where the problem lies and ordering spare parts because you didn’t know which part would need replacing, you could just keep the required part ready. So much time, energy and money saved! This also means that your downtime is planned; rather, it’s optimized. Undertaking predictive maintenance regularly also means that equipment life increases significantly because it is well taken care of. Moreover, one of the greatest advantages predictive maintenance offers is a boost in employee productivity since it lowers crucial callouts, saves time and, in turn, reduces stress. You are happy. Your machines are well serviced. Your team is at peace. Works like a charm, right?

 

Tech intervention as the base of Predictive Maintenance

This sounds pretty awesome, but it’s not that easy to implement. Predictive maintenance is far from being only a plug-and-play solution; it is so much more. Without technologies like IoT, data analysis and deep learning, predictive maintenance cannot function. There are hundreds of layers of data that need to be collected over time to keep this up and running, because only properly analyzed data from critical equipment sensors, ERP systems and computerized maintenance management systems can give you an accurate Human to Machine (H2M) interaction. Different organizations and machines may also be at different stages of maturity, but all of them need to be monitored constantly. IoT is the biggest piece of this puzzle because it translates physical actions from machines into digital signals that are analyzed along with this data. It is thus the key to a successful production network. Then come predictive algorithms and business intelligence tools that read this data, trigger reactions and close the digital-to-physical loop. Deep neural networks are also used in this approach to learn from data sequences and extract valuable insights.

All of this being put into place together provides you with your predictive maintenance strategy, which is then implemented by your organization’s task force. The true impact of these strategies is not immediate, but most definitely measurable. It is still in the early stages of development right now as organizations begin to realize the value that technological disruptions can bring about. Much like a good wine, predictive maintenance is also sure to get better with time. Here’s raising a glass to the future!

 


Girls Rule — and we’ve just taken over Go!

Content posted here with the permission of the author Bhuvana Prabhu, who is currently employed at Josh Software. Original post available here.

On the 25th of November, I had the honour of hosting the first ever Golang Girls. I’d like to share my experience of the whole event, and why one should look forward to the next Golang Girls. Because I certainly am.

For starters — What is Golang Girls?

It is an initiative to introduce girls to the power of the Go language and give them a platform to meet various Golang enthusiasts who share one common goal: to get stuck into Go and meet other like-minded individuals. Girls do run the world! But women, ladies and even boys are allowed to attend, no matter whether you are a beginner, intermediate or advanced.

Pioneers of Golang!

So, the first ever Golang girls took place in Pune at Josh Software (Love saying ‘first ever’ — first time is always special, right? :P). Among the 45 participants, including coaches, there were not only students but also working professionals and even a few boys! The most interesting attendee was a boy still in school, in the 8th grade! (Note to self: It’s never too early to start!).

We had teams of 4–5 attendees and had 2 coaches per team. It was lovely to see how people chose not to slip into their blankets on Sunday and rather learn something new.

Pretty Gophers aka coaches!

It all started off with a super motivating talk by Gautam Rege, a big Golang enthusiast (and also someone I look up to). He enlightened us about why Go is sky-rocketing in popularity and why we really need to dive into Go. The eagerness of the audience after the talk to get Go-ing was incredible!

Then there was a session by Varsha where attendees did a deep dive on the Go Playground. Getting your hands dirty with any language teaches you the nuances and magic of that language. And that’s what really happened — attendees got so engrossed that they were delaying lunch and I had to announce * times (I don’t really remember the count) to get their attention to food!

Once the lunch was over, we all did something crazy. All of us took up the Didi challenge (mind it, it’s Didi, not the Kiki challenge 😛). It felt a little idiotic in the beginning but trust me, the fun that it brings is BOMB! It also took away the drowsiness after the delectable lunch.

Finally, it was time to build an app in Go. And what could be more engaging and interesting than a chat application? (Something that we can’t stop doing.) The idea was simple: build a serverless, terminal-based chat app with goroutines and gRPC, peer-to-peer and broadcast chat, etc. Attendees were briefed about these concepts, and then they started off with their app. Coaches were constantly around to guide them whenever needed.

After a couple of hours of struggling with completing the TODOs in the code, guess what? EVERYONE finished the application — even the 8th grade student! It really left me in awe. To reward everyone for their excellent effort and enthusiasm, we had lovely Gopher cupcakes for them (freakishly cute!).

Tasted as good as they look!

What did I learn at the end of the workshop?

First, it’s never too early to start learning something new 😛 (Thanks to that kid).

Second, Go is the next BIG thing. Definitely going to motivate my peers to dive into it.

Third, being a part of such an informative and motivating initiative gives me another level of contentment.

I am now eagerly looking forward to the next Golang Girls!
