Girls Rule — and we’ve just taken over Go!

Content posted here with the permission of the author Bhuvana Prabhu, who is currently employed at Josh Software. Original post available here.

On the 25th of November, I had the honour of hosting the first ever Golang Girls. I'd like to share my experience of the whole event and why one should look forward to the next Golang Girls, because I certainly am.

For starters — what is Golang Girls?

It is an initiative to introduce girls to the power of the Go language and give them a platform to meet Golang enthusiasts who share one common goal: to get stuck into Go and meet other like-minded individuals. Girls do run the world! But women, ladies and even boys are welcome to attend, no matter whether you are a beginner, intermediate or advanced.

Pioneers of Golang!

So, the first ever Golang Girls took place in Pune at Josh Software (love saying 'first ever' — first time is always special, right? :P). Among the 45 participants, including coaches, there were not only students but also working professionals and even a few boys! The most interesting attendee was a boy still in school, in the 8th grade! (Note to self: It's never too early to start!)

We had teams of 4–5 attendees, with 2 coaches per team. It was lovely to see how people chose not to slip into their blankets on a Sunday and instead learn something new.

Pretty Gophers aka coaches!

It all started off with a super motivating talk by Gautam Rege, a big Golang enthusiast (and also someone I look up to). He enlightened us about why Go is sky-rocketing in popularity and why we really need to dive into it. The audience's eagerness to get Go-ing after the talk was incredible!

Then there was a session by Varsha where attendees did a deep dive into the Go Playground. Getting your hands dirty with any language teaches you the nuances and magic of that language. And that's what really happened — attendees got so engrossed that they kept delaying lunch, and I had to make the announcement who knows how many times (I don't really remember the count) to get their attention onto food!

Once lunch was over, we all did something crazy. All of us took up the Didi challenge (mind it, it's the Didi, not the Kiki challenge 😛). It felt a little idiotic in the beginning, but trust me, the fun it brings is BOMB! It also took away the drowsiness after the delectable lunch.

Finally, it was time to build an app in Go. And what could be more engaging and interesting than a chat application? (Something we can't stop doing.) The idea was simple: build a serverless, terminal-based chat app with goroutines and gRPC, supporting peer-to-peer and broadcast chat. Attendees were briefed about these concepts and then started off with their app. Coaches were constantly around to guide them whenever needed.

After a couple of hours of struggling with completing the TODOs in the code, guess what? EVERYONE finished the application — even the 8th grade student! It really left me in awe. To reward everyone for their excellent effort and enthusiasm, we had lovely Gopher cupcakes for them (freakishly cute!).

Tasted as good as they look!

What did I learn at the end of the workshop?

First, it’s never too early to start learning something new 😛 (Thanks to that kid).

Second, Go is the next BIG thing. Definitely going to motivate my peers to dive into it.

Third, being a part of such an informative and motivating initiative gives me another level of contentment.

I am now eagerly looking forward to the next Golang Girls !


Changing the game: Technological disruptions in the Indian Insurance Sector

Did you know that there are more than 55 life and non-life insurance companies operating in India alone? That's a huge number, and it allows for fierce competition! Owing to individuals' higher disposable income, increasing life expectancy, the country's economic growth and the Government's increased FDI limit, investments in the insurance sector have increased manifold and the horizon for growth has expanded even further. With a CAGR of 14.4 percent, research and predictions state that the Indian insurance industry will reach $280 billion by 2020. While the last decade saw a lot of scale, the upcoming decade is all about operational efficiency backed by technology! Insurance companies that do not leverage technology to reduce their overheads and increase operational efficiency will find it very difficult to sustain themselves. The most important factors driving this growth will be consumer behaviour, scalable distribution channels and lower overheads. In today's technological day and age, most customers have turned to digital channels to understand more about premiums, compare products and analyse diverse insurance offerings. It is imperative that the insurance sector implements technology wisely to achieve holistic growth.

Retaining the human factor with point of sales persons (POSP)

I say that technology is one of the main drivers of innovation for almost every industry today. That, however, does not mean that we can let go of the human factor completely. It needs to be a combination of both, because the value of human experience and understanding is unparalleled. While we know that there are various offerings that each insurance company provides to its customers, the fact of the matter remains that the level of insurance penetration in the country is still low. To increase penetration, we need distribution models that can explain to the masses the benefits of insurance and everything it entails. Until recently, these distributors operated as "Insurance Agents". Earlier, when people were not that aware of insurance, these agents would sell insurance policies on behalf of the insurance company. Awareness amongst the masses has risen and now consumers themselves want to compare insurance quotes. This means that insurance agents are now at a loss, as each is tied to only a single insurance company.

Recently, the IRDAI (Insurance Regulatory and Development Authority of India) allowed insurance broking agencies to appoint "Point of Sale Persons" or POSPs. A POSP is a registered agent of a broking agency, and these agencies can access live quotes from multiple insurance companies! Ever since the introduction of POSPs, the benefits of comparing and buying insurance have increased significantly. With smart training courses of at least 15 hours with certification, and with the basic qualifications relaxed, the number of POSPs is rising exponentially. The 'survival of the fittest' race has begun, because IRDAI has standardized agent commissions. This has forced companies to increase their operational efficiency and reduce overheads to achieve scalability and remain profitable. To remain relevant and tackle competition effectively, insurance companies will have to use technology to focus on empowering these POSPs while keeping an eye on their customers. Without that, the chances of success are fairly slim.

Technology and talent: The perfect combination

Insurance companies now have two models to choose from: the B2C model and the B2B model. The first model involves empowering the end user to buy insurance online and bypass the agent model altogether. This requires a substantial advertising budget and branding, which means a higher customer acquisition cost and low rates of conversion. With the B2B model, companies can empower POSPs and help them compare different insurance premiums so they can help their customers buy the right policy. This has a significantly lower acquisition cost and a much higher chance of conversion. Which one do you think is better? I definitely think the second one, because it has a direct impact on the business revenue and its bottom line. The technology challenges insurers face are complex, including the need for flexibility, better cost control, robust data analysis capabilities, talent retention and adapting to mobile tech and social media. These challenges are all related to capacity, and McKinsey research states that these changes can all be driven through a culture of continuous improvement. Think first, then move on to implementation. How do you do that? Through something called 'Lean Management'.

Lean Management: Building a culture of efficiency with technology

Using the principles of lean management, scalable technology can be put in place that manages a large workforce across larger geographies and delivers more customer value. Doing away with a regional workforce employed just to achieve scale, and instead equipping POSPs with scalable mobile technology, with Regional Managers managing their respective circles, is a starting point for lowering operational overheads. Lean management for insurance companies means evaluating customer insurance needs, enabling price comparison, building larger scalable teams and analysing data in detail for effective customer acquisition and retention. Creating such a setup to empower POSPs is what will set successful insurers apart from those who fail to leave their mark.

This does not mean that transformation should happen at a large scale right at the beginning. Start with a smaller area like a city, evaluate the results, and then move on to larger areas. Insurance companies can thus scale up without having a regional office everywhere! Automating functions like getting online and offline quotes from insurance vendors and sending vehicle inspection reports and health reports using mobile technology is a great start. Enabling instant policy issuance and instant commission can give companies the edge in retaining and hiring POSPs. Rewards and recognition for POSPs, like accumulating reward points for redemption and discount coupon codes, could easily be powered by technology with a minimal workforce. Adapting to technology has thus become a necessity, not a choice. Disruption is the only way ahead, and the sooner industries realize this, the better their chances of success!


JobIntentService Android: How to Example

Content posted here with the permission of the author Sambhaji Karad, who is currently employed at Josh Software. Original post available here.

What is JobIntentService?

Helper for processing work that has been enqueued for a job/service. When running on Android O or later, the work will be dispatched as a job via JobScheduler.enqueue. When running on older versions of the platform, it will use Context.startService.

You must publish your subclass in your manifest for the system to interact with. This should be published as a JobService, as described for that class, since on O and later platforms it will be executed that way.

Use enqueueWork(Context, Class, int, Intent) to enqueue new work to be dispatched to and handled by your service. It will be executed in onHandleWork(Intent).

One of Android's greatest strengths is its ability to let apps use system resources in the background regardless of whether the app is running in the foreground. Sometimes, though, this turned into apps using system resources excessively.

Check out the Background Execution Limits introduced in Android O.

Let's start

IntentService is an extensively used service class in Android because of its simplicity and robustness. The release of Oreo, however, made it more difficult for developers to use this class to its full extent in their applications. For those who relied on IntentService, from Oreo onwards you cannot simply use this class any more. I was also searching for an exact and efficient alternative so that I could migrate my old IntentService classes to it. The search ended at JobIntentService, which does exactly the same job as IntentService by using the new Job APIs of Oreo. This class is available in the support library from SDK 26.

Implementing JobIntentService is a simple process, and you can easily migrate from an IntentService class to a JobIntentService class. On devices running Android O (API 26) or later, its work is dispatched via the JobScheduler class; on API 25 or below it uses Context.startService() (same as IntentService). First, add the dependency to your app-level Gradle file.

implementation 'com.android.support:support-compat:27.0.0'

or a later version

Steps to implement JobIntentService

1. Create a subclass of JobIntentService

2. Override onHandleWork() method

3. Expose enqueueWork() method

4. Write code in Manifest file

– For Pre-Oreo devices

  • Add the WAKE_LOCK permission in the Manifest

– For Oreo devices or above

  • Allow JobIntentService to use the JobScheduler API
  • Declare the android.permission.BIND_JOB_SERVICE permission

1) Create a JobService.java that extends JobIntentService

private static final String TAG = JobService.class.getSimpleName();
public static final String RECEIVER = "receiver";
public static final int SHOW_RESULT = 123;
/**
 * Result receiver object to send results
 */
private ResultReceiver mResultReceiver;
/**
 * Unique job ID for this service.
 */
static final int DOWNLOAD_JOB_ID = 1000;
/**
 * Actions download
 */
private static final String ACTION_DOWNLOAD = "action.DOWNLOAD_DATA";

/**
 * Convenience method for enqueuing work in to this service.
 */
public static void enqueueWork(Context context, ServiceResultReceiver workerResultReceiver) {
    Intent intent = new Intent(context, JobService.class);
    intent.putExtra(RECEIVER, workerResultReceiver);
    intent.setAction(ACTION_DOWNLOAD);
    enqueueWork(context, JobService.class, DOWNLOAD_JOB_ID, intent);
}

@SuppressLint("DefaultLocale")
@Override
protected void onHandleWork(@NonNull Intent intent) {
    Log.d(TAG, "onHandleWork() called with: intent = [" + intent + "]");
    if (intent.getAction() != null) {
        switch (intent.getAction()) {
            case ACTION_DOWNLOAD:
                mResultReceiver = intent.getParcelableExtra(RECEIVER);
                for (int i = 0; i < 10; i++) {
                    try {
                        Thread.sleep(1000);
                        Bundle bundle = new Bundle();
                        bundle.putString("data", String.format("Showing From JobIntent Service %d", i));
                        mResultReceiver.send(SHOW_RESULT, bundle);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
                break;
        }
    }
}

2) Add the WAKE_LOCK permission to Manifest.xml

<uses-permission android:name="android.permission.WAKE_LOCK" />

3) Add JobIntentService class to Manifest.xml

<service
    android:name=".JobService"
    android:permission="android.permission.BIND_JOB_SERVICE"
    android:exported="true"/>

4) Create a ServiceResultReceiver.java to communicate with Activity from JobIntentService

private Receiver mReceiver;

/**
 * Create a new ResultReceive to receive results.  Your
 * {@link #onReceiveResult} method will be called from the thread running
 * <var>handler</var> if given, or from an arbitrary thread if null.
 *
 * @param handler the handler object
 */

public ServiceResultReceiver(Handler handler) {
    super(handler);
}

public void setReceiver(Receiver receiver) {
    mReceiver = receiver;
}


@Override
protected void onReceiveResult(int resultCode, Bundle resultData) {
    if (mReceiver != null) {
        mReceiver.onReceiveResult(resultCode, resultData);
    }
}

public interface Receiver {
    void onReceiveResult(int resultCode, Bundle resultData);
}

5) Create MainActivity.java to initialise the ServiceResultReceiver, enqueue work to the JobIntentService, and show data from the service

public class MainActivity extends AppCompatActivity implements ServiceResultReceiver.Receiver {

    private ServiceResultReceiver mServiceResultReceiver;
    private TextView mTextView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        mServiceResultReceiver = new ServiceResultReceiver(new Handler());
        mServiceResultReceiver.setReceiver(this);
        mTextView = findViewById(R.id.textView);
        showDataFromBackground(MainActivity.this, mServiceResultReceiver);
    }

    private void showDataFromBackground(MainActivity mainActivity, ServiceResultReceiver mResultReceiver) {
        JobService.enqueueWork(mainActivity, mResultReceiver);
    }

    public void showData(String data) {
        mTextView.setText(String.format("%s\n%s", mTextView.getText(), data));
    }

    @Override
    public void onReceiveResult(int resultCode, Bundle resultData) {
        switch (resultCode) {
            case JobService.SHOW_RESULT:
                if (resultData != null) {
                    showData(resultData.getString("data"));
                }
                break;
        }
    }
}

Download source code from Github

For any questions or suggestions, please leave comments.

Thank you. Happy coding!


Journey from NoSQL to SQL (Part II) – Data Transfer

Content posted here with the permission of the author Meenakshi Kumari, who is currently employed at Josh Software. Original post available here.

In my last blog, I shared my experience of gem changes and preparing the schema for the PostgreSQL database. In this blog I'll cover the process of transferring data from MongoDB to PostgreSQL without any inconsistency, along with the challenges we faced and their solutions.

Data transfer from MongoDB to PostgreSQL database.

With our project updated for PostgreSQL and the database schema prepared, the next hurdle in front of us was how to import the data from MongoDB into PostgreSQL without affecting data quality or disturbing the old data.

After a lot of research, we found the gem Sequel. It is a simple, flexible, and powerful SQL database access toolkit for Ruby. It can connect to many different databases, read from and write to them, and it includes a comprehensive Object Relational Mapping (ORM) layer for mapping records to Ruby objects and handling associated records. We used it to connect to the destination PostgreSQL DB and write the data that was read from our MongoDB (through the existing Mongoid models).
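
To make the pattern concrete, here is a minimal sketch of the read-with-Mongoid, write-with-Sequel flow (the credentials are illustrative and the name field is hypothetical, not taken from the actual project):

require 'sequel'

## Connect to the destination PostgreSQL database (hypothetical credentials)
DB = Sequel.postgres('app_db', user: 'username', password: 'password', host: 'localhost')

## Read each record through the existing Mongoid model on the MongoDB branch,
## then write it into PostgreSQL through a Sequel dataset
Company.each do |company|
  DB[:companies].insert(
    mongo_id: company.id.to_s,  ## keep the Mongo id for cross-checking later
    name:     company.name      ## illustrative field
  )
end

The actual rake tasks below follow exactly this shape, with exception handling and field mapping added on top.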

So let's start with the data transfer. The steps we followed were:

  1. Switch to the MongoDB branch and install the sequel and pg gems there. In my previous blog I mentioned that we had two separate branches for the MongoDB and PostgreSQL code in the same GitHub project repository; all of this data transfer was done on the MongoDB branch.
  2. Now we have to write rake tasks for the data mapping. I'll take the previous blog's table example, i.e. Address. You have already seen the table schema in the previous post; now I'll show how to write a rake task that imports the data for this particular table using Sequel. The rake task below covers the following:
  • The database connection is established using:
DB = Sequel.postgres('database_name', user: 'username', password: 'password', host: 'localhost', port: port_number)
  • Extensions for pg_hstore and pg_array are added to give Sequel support for the PostgreSQL hstore and array types.
DB.extension :pg_array
DB.extension :pg_hstore
  • Loop over the table in batches (for fast execution; see the batching sketch after the consolidated task below) and build a hash containing a one-to-one mapping of the MongoDB record fields to the respective PostgreSQL table fields.
  • In the rake task you will see a "safe_task" method; it handles exceptions and prints the error message. Below is the code for safe_task:
def safe_task(&proc)
  begin
    yield
    return true
  rescue StandardError => e
    puts "Exception Message: #{e.message}"
    puts "Exception Class: #{e.class.name}" 
  end
  false
end
  • Handle associations: in our schema, Address belongs_to Company and Company has_many Addresses. To store company_id inside the addresses table we have to import the company records before the address data. We then fetch the associated company id from the PostgreSQL companies table and update the address row accordingly, and we also keep the mongo_id of that company in the relation_ids field of the address.
## Retrieve associated company data from the PostgreSQL database
company = DB[:companies].select(:id).where(mongo_id: address.company_id.to_s).first
  • Hstore and array fields are first mapped directly to their PostgreSQL fields, and since they cannot be inserted into the tables directly through Sequel, we handle them as shown in the following piece of code:
## Delete keys which have empty array values
mapping.delete_if { |key, value| value.is_a?(Array) && value.empty? }
## Convert hash values to hstore so Sequel can insert them
mapping.each do |key, value|
  value.is_a?(Hash) && mapping.update(key => Sequel.hstore(value))
end
  • Finally, we insert the mapping hash into the PostgreSQL database.
record_no = DB[:addresses].insert(mapping)

Below is the consolidated rake task code for the Address table:

desc 'Address Data Migrate'
task :addresses => :environment do

  ## Establish connection
  ## Add extensions for hstore and array

  ## Loop over the address table (in batches in practice; see the sketch below)
  Address.all.each do |address|
    ## Retrieve associated company data from the PostgreSQL database
    company = DB[:companies].select(:id).where(mongo_id: address.company_id.to_s).first

    task = safe_task do
      mapping = {
        mongo_id:    address.id.to_s,          ## Actual table mongo_id
        ## Map the remaining PostgreSQL fields: MongoDB fields
        company_id:  company[:id],             ## Storing the PostgreSQL parent id

        ## For storing the associated table's mongo_id
        relation_ids: {
          company_id: address.company_id.to_s  ## Store the company's mongo_id
        }
      }

      ## Handle hstore and array fields (if any) before insertion

      ## Insert record
      record_no = DB[:addresses].insert(mapping)
    end
  end
end
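
As for looping over the table in batches, the post does not spell out the batched iteration itself; a minimal sketch of the idea (the batch size of 1000 is arbitrary) could look like this:

## Iterate over the Mongoid criteria in slices instead of one record at a time
Address.all.each_slice(1000) do |batch|
  batch.each do |address|
    safe_task do
      ## ...one-to-one field mapping and DB[:addresses].insert(mapping) as shown above...
    end
  end
end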

You will have observed the following things in the above rake task:

  • The "mongo_id" and "relation_ids" fields are used as mentioned in my previous blog: we store the MongoDB id of the record in mongo_id, and the mongo_ids of the associated tables in relation_ids, for future reference. So if anything goes wrong during the data transfer task it can be handled by checking these fields, and they also help in cross-checking the consistency of the records transferred from MongoDB to PostgreSQL.

TIP: You can print the number of records imported along with their respective mongo_ids, and the reason for any failed record insertions, just to keep track of the task's status.

NOTE: We needed to store the PostgreSQL id of the parent table in its child tables, so the parent data had to be populated first. That's why we started with the rake tasks of the parent tables, then moved on to their children, and so on down to the leaf tables. We followed the same sequence while running these rake tasks.

One of the biggest challenges we faced: we had a table in our system with more than 10 million records, and its rake task was taking more than 30 hours to import the data, breaking at many points with a 'cursor not found' error and SIGHUP as we tried to fetch the data in batches over this huge set of records. It couldn't be imported in a single pass because loading the whole table took too much time, and even loading in batches didn't help.

Solution: Initially we were iterating over the whole table, mapping the fields and then inserting. Instead of this, we switched to a top-down approach. Let me explain with the following example:

class Grandparent < ApplicationRecord
  has_many :parents
end
class Parent < ApplicationRecord
  has_many   :children
  belongs_to :grandparent
end
class Child < ApplicationRecord
  belongs_to :parent
end

Let's say we have to transfer the 'Child' table records from MongoDB to PostgreSQL. Instead of looping directly over the 'Child' table, we start with the Grandparent table, and for each grandparent we loop over its parents and then proceed to their children. Have a look at the pseudo code below for more explanation:

desc 'Child Data Migrate'
task :children => :environment do

  puts "Rake task 'children' start time #{Time.now}"

  ## Establish connection

  ## Add extensions for hstore and array

  total_count = Child.count
  success_count = 0

  Grandparent.each do |grandparent|
    grandparent.parents.each do |parent|
      parent.children.each do |child|
        ## Map all the fields of the Child table and then insert into the DB
      end
    end
  end
end

We followed the above approach and this reduced the time for data import to 5 hours. And we were done with our data transfer. Sequel really made this whole procedure very easy for us.

After this phase we faced many issues related to the updated versions of Ruby on Rails and the changed database, which I'll cover in the next part of this blog.


Could InsureTech look at Crypto Currency as a premium payment alternative?

cryptocurrency

The blockchain has truly shaped up into one of the biggest technological disruptions of the decade. A digitized, distributed and secure ledger that guarantees immutable, transparent transactions, it gives both parties involved a proper breakdown of each transaction, thus ensuring credibility throughout the entire process. The most popular implementation of the blockchain is cryptocurrency, the most well-known being Bitcoin. A few problems relating to cryptocurrencies have been brought to light recently, like the slow performance and processing of public blockchains, excessive price volatility, energy consumption while mining and scams involving fraudulent ICOs (Initial Coin Offerings). However, I think that all these problems can be solved with time. With increasing awareness, cryptocurrency regulations will fall into place, and as the security around blockchains becomes robust, these issues should subside.

Is cryptocurrency here to stay?

It's like this: if blockchain is an umbrella, cryptocurrency is only one of the spokes of that umbrella. The blockchain can be used for various other things too! There are many debates about whether cryptocurrency will sustain itself in the longer run or simply be remembered as another great technological invention that wasn't fruitful. I truly believe that cryptocurrency is here to stay. I also believe that it will definitely be used as an alternative currency source, if not a mainstream one, and it is only a matter of time before governments and financial institutions realise its potential and embrace it. While there are some international sanctions on cryptocurrencies in certain countries, fiat currencies also face this turbulence, and they have survived in these ecosystems. The combination of anonymity, ease of conversion to crypto, and the ability to move funds overseas makes cryptocurrencies a very attractive alternative and safety valve for the citizens of any country. The sooner industries understand this, the more prepared they will be for the future. One industry that can benefit most from the blockchain and cryptocurrency is the insurance industry.

InsureTech: Becoming smarter with smart contracts

Leveraging blockchain as the distributed infrastructure can prevent fraud, and that is something InsureTech must implement. This can be done using smart contracts that help insurance companies and their clients come to common ground. A smart contract executes instantaneously when the constraints of all parties are met. How would it work? The consumer could set an upper limit for the insurance premium, add-ons and special conditions that he/she is looking for from the insurance vendors. The insurance vendors could then bid for that contract as long as it is within their constraints. Only when the constraints on both ends are met will the contract be executed, with the customer spending money on the policy they want, which would be issued instantly! This could help consumers identify the exact details of the insurance they want and cap their budget. Insurance agents can help customers facilitate these conditions and receive commissions instantly. Since all of this is instantaneous, unmanned, digital and devoid of security risk, it will also increase efficiency and save quite a bit of time for both parties. Smart contracts would also lead to better settlement of claims, since all past transactions would be recorded on the public blockchain and all processes would be completely transparent.

The future of cryptocurrency in the insurance sector

With cash acceptance declining around the globe, the potential for industries to take on cryptocurrencies is even higher now. Some insurance companies have already started down this path. In April 2018, one of the world's largest insurers, Allianz, announced that it is testing the introduction of its own cryptocurrency in the form of an Allianz token. The intention is to increase efficiency while eliminating exchange rate risks in internal payment transactions. They feel that this will decrease their dependency on banking systems across the globe and also counter the challenge of converting and reconverting foreign currencies that they do not accept. This would save a whole lot of commissions, and that money can be put to more optimal use.

Ryskex, a captech ecosystem founded in Berlin in 2017, specialises in solutions for captive companies, with a focus on saving insurance tax, capacity bottlenecks of various insurance lines, and the creation of new solutions for non-insurable risks. It uses the public Ethereum blockchain to support risk hedging for captive owners and large corporates. The ecosystem has its own token to regulate payments, the Ryscoin. The company is currently working to cover cyber risks, recruitment problems and innovation failures.

With all of this being put into place, one thing is clear. Cryptocurrencies have moved way beyond the phase where they were considered part of a speculative bubble. They are fast becoming a reality, and one that all of us need to keep in mind and adapt to in the near future. There’s only one ground rule to succeed in matters of technology: to disrupt. And in my opinion, the future looks like a place where cryptocurrency is all set to disrupt InsureTech!


Journey from NoSQL to SQL (Part I) – Schema Designing

Content posted here with the permission of the author Meenakshi Kumari, who is currently employed at Josh Software. Original post available here.

My project is a B2B (business-to-business) website where vendors can sell their products to shopkeepers directly or with the help of agents, and vice-versa. When I started on this project, it had a MongoDB database (NoSQL) along with the Rails (v4.1.1) framework written in Ruby (v2.1.0).

MongoDB is a fast NoSQL database. Unfortunately, it wasn't the cure for all our performance troubles. Many issues regarding our site's unavailability were reported, caused by MongoDB's slow querying over associations and indexes. One particular case: we were exporting reports from our site, which retrieved data from many associated and embedded documents, and this was a very slow process. Data update and create tasks were also taking more time because of complex transactions over highly associated data. MongoDB is not ACID compliant; consistency and availability are incompatible in Mongo due to the CAP theorem. (Tip: MongoDB ACID compliance, NoSQL vs SQL)

So we wanted to switch to a database which had transaction support and is ACID compliant. To improve our website's speed and availability, our team decided to migrate the project database to PostgreSQL. It is an object-relational database management system (ORDBMS) with an emphasis on extensibility that also supports NoSQL features. Along with the database migration we also upgraded our Ruby on Rails versions.

After this successful migration, our site's availability and resilience improved, as PostgreSQL performed much better on indexes and joins, and our service became faster and snappier as a result. Our database size also reduced, since PostgreSQL stores information more efficiently.

I'll be sharing my experience of the project migration in the following series of blogs:

  1. Gem changes and Preparation of schema for PostgreSQL database.
  2. Data transfer from MongoDB to PostgreSQL database, without any inconsistency.
  3. Problems faced before launching updated project.

In this blog I'll explain how to update the Gemfile and how to design the PostgreSQL schema from the MongoDB one.

So come along with me on my journey of this migration.


Gem changes and Preparation of schema for PostgreSQL database.

NOTE: We had two separate branches for the MongoDB and PostgreSQL code in the same GitHub project repository.

We have to update our project Gemfile for PostgreSQL by replacing all the Mongo-related gems with their PostgreSQL counterparts, for example:

  • mongoid with pg
  • mongoid_tree with ltree_hierarchy
  • mongoid_search with pg_search
  • mongoid_observers with rails_observers
  • mongoid_audit with audited
  • carrierwave_mongoid with carrierwave

The next step was the preparation of a schema for our PostgreSQL database from the MongoDB collections. Replace the mongoid.yml file with a database.yml file and create the database using the rake db:create command. We had to make several changes to data types, relations, etc. in the PostgreSQL database, some of which are as follows:

  1. Symbol type fields in a MongoDB document were changed to string, and when retrieving the data from the DB they have to be converted with to_sym explicitly.
  2. MongoDB has 'embeds_one' and 'embeds_many' relations, which were converted to 'has_one' and 'has_many' relations in PostgreSQL. For example:
##### MONGODB CODE #####
class Company
  embeds_many :addresses
end
##### POSTGRESQL CODE #####
class Company < ApplicationRecord
  has_many :addresses, dependent: :destroy
end

3. For the has_and_belongs_to_many relation, a third join table was created in the PostgreSQL schema, where the ids of both tables are stored. For example:

##### MONGODB CODE #####
class Company
  has_and_belongs_to_many :users
end

class User
  has_and_belongs_to_many :companies
end
##### POSTGRESQL CODE #####
class Company < ApplicationRecord
  has_and_belongs_to_many :users, association_foreign_key: 'user_id', 
    join_table: 'companies_users'
end

class User < ApplicationRecord
  has_and_belongs_to_many :companies, join_table: 'companies_users'
end
##### POSTGRESQL SCHEMA FOR COMPANIES_USERS TABLE #####
create_table "companies_users", force: :cascade do |t|
  t.bigint "company_id"
  t.bigint "user_id"
  t.index ["company_id"], name: "index_companies_users_on_company_id"
  t.index ["user_id"], name: "index_companies_users_on_user_id"
end
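
The post does not include the migration that creates this join table; a sketch of one that would produce the schema above (assuming Rails 5.2, as in the Address migration later in this post) is:

##### POSTGRESQL MIGRATION FOR COMPANIES_USERS (sketch) #####
class CreateCompaniesUsers < ActiveRecord::Migration[5.2]
  def change
    ## Creates the companies_users table with indexed company_id and user_id columns
    create_table :companies_users do |t|
      t.belongs_to :company, index: true
      t.belongs_to :user, index: true
    end
  end
end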

NOTE: The primary key default was changed from Integer to BIGINT for PostgreSQL from Rails 5.1.

TIP: To check that the correct data is imported from MongoDB to PostgreSQL, we stored the mongo_id of the imported Mongo record in a string field named mongo_id, and the mongo_ids of all the associated tables of that record in a field called relation_ids, which is of type hstore. So if anything goes wrong during the data transfer task it can be handled by checking these fields, and they also help in cross-checking the consistency of the records transferred from MongoDB to PostgreSQL. Both of these fields are for future reference and can be removed later once you are sure about the imported data.

Likewise, we had to write migrations for each table in order to prepare our schema. A sample of the MongoDB model and the corresponding PostgreSQL migration for the Address table is as follows:

##### MONGODB MODEL #####
class Address
  field :flat_no, type: Integer
  field :pincode, type: Symbol
  field :city,  type: String
  field :state,  type: String, default: ''
  
  belongs_to :company
end
##### POSTGRESQL MIGRATION #####
class CreateAddresses < ActiveRecord::Migration[5.2]
  def change
    create_table :addresses do |t|
      t.integer     :flat_no
      t.string      :pincode
      t.string      :city 
      t.string      :state, default: ''
     
      ## newly introduced fields ##
      t.string :mongo_id, default: ''
      t.hstore :relation_ids, default: {}

      t.timestamps

      t.belongs_to :company, index: true
    end
  end
end
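
One small detail not shown above: the t.hstore column type requires PostgreSQL's hstore extension to be enabled on the database. If it isn't already enabled, a dedicated migration can take care of it, for example:

##### ENABLE HSTORE EXTENSION (sketch) #####
class EnableHstoreExtension < ActiveRecord::Migration[5.2]
  def change
    ## Without this, migrations that add t.hstore columns will fail
    enable_extension 'hstore'
  end
end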

Now our schema was ready and we were all geared up for the next phase of this procedure: the data transfer from MongoDB to PostgreSQL, which I'll explain in the next part of this blog.


 


Would blockchain and AI re-define the future of InsureTech?

Insurtech

While the insurance sector is increasingly adopting new technologies, there still exists a huge gap between the efficiency of processing different insurance product offerings and customer expectations. For example, it is still very frustrating for customers to wait on phone calls to get through to executives even when they need to file emergency claims. However, disruptive technologies like artificial intelligence have the power to improve such situations. Backed by predictive analytics and data collected through interconnected consumer devices, there is huge potential for the insurance sector to improve its efficiency and accuracy, and ultimately provide consumers with better plans and potentially cheaper premiums.

Moving from assumptions to concrete results with data analysis

Consumer data is a goldmine waiting to be tapped into, especially for insurance companies. Right from customer behaviour, personal information, shopping patterns, locations, health patterns, driving patterns and lifestyle history, everything is stored in databases somewhere! This can drive artificial intelligence engines to make great advances in decision making for insurance policies and premiums. Interconnected devices keep track of the minutest changes that customers undertake, and even these changes can have a huge impact on the way these customers purchase their insurance. A wearable that is connected to an actuarial database could power an AI engine to calculate a consumer’s personal risk score based on daily activities, their fitness patterns and risk patterns.

This will work in favour of consumers too, not just insurance companies. The insurance industry seems to think along these lines as well. Research states that more than 80 percent of insurance executives believe that technology will disrupt the sector by leaps and bounds. How? Imagine not having to make cold calls anymore, simply because the need will not exist. An AI engine or chatbot will pull out data on its own, and once this data has been processed and the analysis results predicted, insurance research executives can spend time personalizing marketing and sales strategies, thus working towards building sustainable client relationships. This will replace the traditional underwriting strategies, which are very human-centric: underwriting will be based on facts and accurate data, not assumptions. This will in turn allow the addition of a host of new insurance products that are specific to each customer's needs rather than just generic offerings to the masses!

Harnessing the power of experiential learning

Being human means having the power to learn from experience. Taking this experience one step further while combining it with technology has given us machine learning abilities. NLP (Natural Language Processing) backed by an AI engine and ML has enabled creating experiences that are intuitive, conversational and real time. A Gartner report predicts that by 2020, 85% of customer interactions will be managed without a human through digital assistants. Customers also prefer interacting with companies through modern technologies rather than outdated processes, because this saves time, is more reliable and most importantly, very convenient. So, while historically, the insurance sector was driven by mathematics and in most cases, human instinct, it will now be driven by concrete data and insights.

Blockchain: The disruptor for InsureTech

One of the biggest technological disruptions of our time has also been the blockchain – the distributed serverless ledger. The first problem area that blockchain can help eradicate is insurance fraud. Insurance fraud has caused enough sleepless nights for executives. On a distributed ledger, insurers can record permanent, immutable transactions while protecting data integrity. This would help insurers collaborate and identify suspicious behaviour across the ecosystem, for example the validity of a "no claim bonus" which reduces the premium. This reduces the margin for fraud, raises margins for companies overall, and helps them come up with better premium plans for customers.

The blockchain could also provide a better means of facilitating policy issuance! Using smart contracts, the payment of the premium can be triggered only after the underwriting constraints and the customer's requirements have been met. The agent's commission would be paid automatically and instantly, and the policy issuance would also be accurate and immediate! This would drastically change the way the insurance industry works by reducing the policy issuance time and ensuring that it's accurate and beneficial for both customers and insurance companies.

Another example relates to the renewal of insurance. A customer could budget a certain amount of money for various insurance renewals and set constraints on them; the smart contract would be executed only when these constraints are met, which could give consumers a better bargaining chip. For example, a smart contract could have the customer specify an upper bound of ₹15,000 for the car insurance, subject to specific add-ons and benefits. Similarly, the insurance vendors could "bid" for the insurance based on the condition of the car (age, no claim bonus, add-ons etc.). The contract would be executed instantly when both parties' constraints are met, and every party gets their due! This could give the consumer the power to choose the right insurance and could help agents set up the right constraints for their customers. On the other hand, the insurance company is satisfied because its criteria have also been met before policy issuance!

These ideas could also be the beginning of insurance companies using Cryptocurrency for insurance. While the idea may seem to be too far-fetched right now, I assure you, it isn’t, because there’s only one rule to follow if you want to succeed as a part of the ever-evolving technological ecosystem.

Break all rules and choose to disrupt!

 

 
