Journey from NoSQL to SQL (Part I) – Schema Designing

Content posted here with the permission of the author Meenakshi Kumari, who is currently employed at Josh Software. Original post available here.

My project is a B2B (business-to-business) website where vendors can sell their products to shopkeepers directly or through agents, and vice versa. When I started on this project, it used a MongoDB database (NoSQL) with the Rails (v4.1.1) framework, written in Ruby (v2.1.0).

MongoDB is a fast NoSQL database. Unfortunately, it wasn't the cure for all our performance troubles. Many issues regarding our site's unavailability were reported, caused by MongoDB's slow querying on associations and indexes. One particular case: we were exporting reports from our site, which retrieved data from many associated and embedded documents, and this was a very slow process. Data update and create tasks also took more time because of complex transactions over highly associated data. Moreover, MongoDB is not ACID compliant, and due to the CAP theorem, consistency and availability are incompatible in Mongo.

So we wanted to switch to a database that had transaction support and was ACID compliant. To improve our website's speed and availability, our team decided to migrate the project database to PostgreSQL. PostgreSQL is an object-relational database management system (ORDBMS) with an emphasis on extensibility, and it also supports NoSQL features. Along with the database migration, we also upgraded our Ruby and Rails versions.

After this successful migration, our site's availability and resilience improved: PostgreSQL performed much better on indexes and joins, and our service became faster and snappier as a result. Our database size also shrank, since PostgreSQL stores information more efficiently.

I’ll be sharing my experience of project migration in following series of blogs:

  1. Gem changes and Preparation of schema for PostgreSQL database.
  2. Data transfer from MongoDB to PostgreSQL database, without any inconsistency.
  3. Problems faced before launching updated project.

In this post I'll explain how to update the Gemfile and how to design the PostgreSQL schema from the MongoDB one.

So come along with me on my journey of this migration.

Gem changes and Preparation of schema for PostgreSQL database.

NOTE: We had two separate branches for the MongoDB and PostgreSQL code in the same GitHub project repository.

We had to update our project Gemfile for PostgreSQL by replacing all Mongo-related gems with their PostgreSQL counterparts, for example:

  • mongoid with pg
  • mongoid_tree with ltree_hierarchy
  • mongoid_search with pg_search
  • mongoid_observers with rails_observers
  • mongoid_audit with audited
  • carrierwave_mongoid with carrierwave

The next step was preparing a schema for our PostgreSQL database from the MongoDB collections. Replace the mongoid.yml file with a database.yml file and create the database using the rake db:create command. We had to make several changes to data types, relations, etc. in the PostgreSQL database; some of them are as follows:

  1. Symbol-type fields of MongoDB documents were changed to string, and when retrieving the data from the DB they had to be converted back explicitly with to_sym.
  2. MongoDB has 'embeds_one' and 'embeds_many' relations, which were converted to 'has_one' and 'has_many' relations in PostgreSQL. For example:

##### MONGODB CODE #####
class Company
  embeds_many :addresses
end

##### POSTGRESQL CODE #####
class Company < ApplicationRecord
  has_many :addresses, dependent: :destroy
end

3. For the has_and_belongs_to_many relation, a third join table was created in the PostgreSQL schema, storing the ids of both tables. For example:

##### MONGODB CODE #####
class Company
  has_and_belongs_to_many :users
end

class User
  has_and_belongs_to_many :companies
end

##### POSTGRESQL CODE #####
class Company < ApplicationRecord
  has_and_belongs_to_many :users, association_foreign_key: 'user_id',
    join_table: 'companies_users'
end

class User < ApplicationRecord
  has_and_belongs_to_many :companies, join_table: 'companies_users'
end

##### SCHEMA #####
create_table "companies_users", force: :cascade do |t|
  t.bigint "company_id"
  t.bigint "user_id"
  t.index ["company_id"], name: "index_companies_users_on_company_id"
  t.index ["user_id"], name: "index_companies_users_on_user_id"
end

NOTE: From Rails 5.1, the default primary key type for PostgreSQL changed from integer to bigint.

TIP: To verify that the correct data was imported from MongoDB to PostgreSQL, we stored the Mongo _id of each imported record in a string field named mongo_id, and the Mongo ids of all the record's associated tables in a field called relation_ids of type hstore. If anything goes wrong during the data-transfer task, it can be handled by checking these fields, and they also allow cross-checking the consistency of the records transferred from MongoDB to PostgreSQL. Both fields are for future reference and can be removed later, once you are sure about the imported data.

Likewise, we had to write migrations for each table in order to prepare our schema. A sample migration and the corresponding schema for the Address model follows:

##### MONGODB MODEL #####
class Address
  field :flat_no, type: Integer
  field :pincode, type: Symbol
  field :city, type: String
  field :state, type: String, default: ''
  belongs_to :company
end

##### POSTGRESQL MIGRATION #####
class CreateAddresses < ActiveRecord::Migration[5.2]
  def change
    create_table :addresses do |t|
      t.integer :flat_no
      t.string  :pincode
      t.string  :city
      t.string  :state, default: ''

      ## newly introduced fields ##
      t.string :mongo_id, default: ''
      t.hstore :relation_ids, default: {}

      t.belongs_to :company, index: true
    end
  end
end

Now our schema was ready and we were all geared up for the next phase of this procedure: transferring the data from MongoDB to PostgreSQL without any inconsistency, which I'll explain in the next part of this blog series.


Posted in General | Leave a comment

Would blockchain and AI re-define the future of InsureTech?


While the insurance sector is increasingly adopting new technologies, there is still a huge gap between the efficiency with which different insurance product offerings are processed and what customers expect. For example, it is still very frustrating for customers to wait on phone calls to get through to executives, even when they need to file emergency claims. However, disruptive technologies like artificial intelligence have the power to improve such situations. Backed by predictive analytics and data collected through interconnected consumer devices, the insurance sector has huge potential to improve its efficiency and accuracy, and ultimately provide consumers with better plans and potentially cheaper premiums.

Moving from assumptions to concrete results with data analysis

Consumer data is a goldmine waiting to be tapped into, especially for insurance companies. Right from customer behaviour, personal information, shopping patterns, locations, health patterns, driving patterns and lifestyle history, everything is stored in databases somewhere! This can drive artificial intelligence engines to make great advances in decision making for insurance policies and premiums. Interconnected devices keep track of the minutest changes that customers undertake, and even these changes can have a huge impact on the way these customers purchase their insurance. A wearable that is connected to an actuarial database could power an AI engine to calculate a consumer’s personal risk score based on daily activities, their fitness patterns and risk patterns.

This will work in favour of consumers too, not just insurance companies. The insurance industry seems to think along these lines: research states that more than 80 percent of insurance executives believe technology will disrupt the sector by leaps and bounds. How? Imagine not having to make cold calls anymore, simply because the need will not exist. An AI engine or chatbot will pull out data on its own, and once this data has been processed and the analysis results predicted, insurance research executives can spend time personalizing marketing and sales strategies, thus working towards building sustainable client relationships. This will replace traditional underwriting strategies, which are very human-centric; instead, underwriting will be based on facts and accurate data, not assumptions. This will in turn allow a host of new insurance products that are specific to each customer's needs, rather than just generic offerings to the masses!

Harnessing the power of experiential learning

Being human means having the power to learn from experience. Taking this experience one step further and combining it with technology has given us machine learning. NLP (Natural Language Processing), backed by an AI engine and ML, has enabled experiences that are intuitive, conversational and real-time. A Gartner report predicts that by 2020, 85% of customer interactions will be managed without a human, through digital assistants. Customers also prefer interacting with companies through modern technologies rather than outdated processes, because it saves time, is more reliable and, most importantly, is very convenient. So, while historically the insurance sector was driven by mathematics and, in most cases, human instinct, it will now be driven by concrete data and insights.

Blockchain: The disruptor for InsureTech

One of the biggest technological disruptions of our time has been the blockchain, the distributed serverless ledger. The first problem area that blockchain can help eradicate is insurance fraud, which has caused enough sleepless nights for executives. On a distributed ledger, insurers can record permanent, immutable transactions while protecting data integrity. This would help insurers collaborate and identify suspicious behaviour across the ecosystem, for example when validating the 'no claim bonus' that reduces a premium. This reduces the margin for fraud, raises margins for companies overall, and helps them come up with better premium plans for customers.

The blockchain could also provide a better means of facilitating policy issuance! Using smart contracts, the premium payment can be triggered only after the underwriting constraints and the customer's requirements have been met. The agent's commission would be paid automatically and instantly, and policy issuance would be accurate and immediate! This would drastically change the way the insurance industry works by reducing policy issuance time and ensuring that it's accurate and beneficial for both customers and insurance companies.

Another example relates to insurance renewal: a customer could budget a certain amount of money for various insurance renewals and set constraints on them; the smart contract would execute only when these constraints are met, which could give consumers a better bargaining chip. For example, a smart contract could let the customer specify an upper bound of ₹15,000 for car insurance, subject to specific add-ons and benefits. Similarly, insurance vendors could 'bid' for the insurance based on the condition of the car (age, no claim bonus, add-ons, etc.). The contract would execute instantly when both parties' constraints are met, and every party gets their due! This could empower consumers to choose the right insurance and could help agents set up the right constraints for their customers. On the other hand, the insurance company is satisfied because its criteria have also been met before policy issuance!

These ideas could also be the beginning of insurance companies using cryptocurrency for insurance. While the idea may seem far-fetched right now, I assure you it isn't, because there's only one rule to follow if you want to succeed as part of the ever-evolving technological ecosystem.

Break all rules and choose to disrupt!



Posted in General

Wearable technology empowering the sports industry

  Wearable tech in sports

From the introduction of wireless trackers for heartbeat monitoring in the 1980s, to the apps and websites that came up in 2005, to the introduction of Fitbit, the biggest gamechanger in the wearable tech industry, in 2007, it has been a long journey. Now smart fabrics have been introduced, and smart clothing is driving the revolution ahead. The development and miniaturization of sensors has made wearable devices possible. Smartphones have been instrumental in the evolution of smaller, lightweight wearable gadgets that can relay information easily via Bluetooth or BLE. Cloud-based AI tools managing large amounts of data through predictive analytics have made it possible for this data to provide people with recommendations for a healthier life!


Real-time tracking for fitness

Keeping track of fitness in real time is better than post-training analysis, especially for sportspeople. It is also far more accessible than having sportspeople sit in labs after their performance. Devices that collect biometric data can provide a rich source of information to all stakeholders, and may determine the success of a sportsperson's training. For athletes, preventing the onset of injury is a constant battle, as motivation drives them to push harder in pursuit of better performance. The adoption of wearable devices may help change that, as coaches can now monitor a sportsperson's performance instantaneously. Creating personalized exercise plans for athletes based on this monitoring data can also help them improve.


Use of wearable tech by sportspeople influences masses

The oddly wired and clunky devices once worn only by competitive athletes have transformed into stylish everyday accessories worn by casual joggers and stroller-pushing parents who just want to keep track of their daily fitness, be it something as simple as the number of steps they walk in a day. There is a reason wearable devices have become a favourite gift for employers to give employees: healthier employees mean lower healthcare costs. Companies that provide medical insurance benefits have started using employee health metrics to negotiate lower insurance premiums! And there are now apps and devices for just about any activity, including tennis, golf, skiing, and swimming.


Wearable tech and data protection: drawing a line

Transparency presents a practical challenge in the context of wearables: user interfaces are generally small, and it may not be reasonable to expect users to read full privacy notices provided in physical or online user manuals. To ensure compliance, wearable providers should consider using standardized icons for better communication. Most concern around the use of wearables stems from uncertainty about the third parties with whom personal data may be shared. Wearable technology often involves a complex network of data controllers sharing personal data with each other, and user consent is required before this data is disseminated. Providers need to take care of data accessibility and set privacy policies according to regulatory standards, because what matters most is maintaining credibility and trust within your consumer base. Everything else is secondary.


Wearable technology in sports is a HUGE benefit to professional athletes, a good source of motivation for the masses to improve their health, and a potential goldmine for insurance companies and employee-benefit programmes. Global revenues for sports, fitness and activity monitors are expected to grow from $1.9bn in 2013 to $2.8bn in 2019, according to technology industry analysis firm IHS Technology. Just as Formula 1 technology related to speed, safety and efficiency slowly makes its way into cars driven on the roads, data from wearable technology will soon make its way into predictive analysis for the healthcare industry on a global level as well!

Posted in Artificial Intelligence, General, Healthcare


Content posted here with the permission of the author Anuj Verma, who is currently employed at Josh Software. Original post available here.

As a Go programmer, or while learning Go, you might have heard that everything in Go is passed by value.

The official FAQ also says:

As in all languages in the C family, everything in Go is passed by value. That is, a function always gets a copy of the thing being passed, as if there were an assignment statement assigning the value to the parameter. For instance, passing an int value to a function makes a copy of the int, and passing a pointer value makes a copy of the pointer, but not the data it points to.

What does that mean?

It means that if you pass a variable to a function, the function always gets a copy of it. Remember: always. So the caller and callee have two independent variables with the same value. Hence, if the callee modifies the parameter variable, the effect is not visible to the caller.

Let's prove it with an example

package main

import "fmt"

type person struct {
    Name string
}

func main() {
    p := person{Name: "Smith"}
    fmt.Println("Value of name before calling updateName() is: ", p.Name)
    updateName(p)
    fmt.Println("Value of name after calling updateName() is: ", p.Name)
}

// updateName receives a copy of p; the change below is local to this function.
func updateName(p person) {
    p.Name = "John"
}

Value of name before calling updateName() is:  Smith
Value of name after calling updateName() is:  Smith

As you can clearly see, the value of Name is unchanged even after calling the updateName function.
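If we do want the callee's change to be visible, the idiom is to pass a pointer explicitly; the function still gets a copy, but it is a copy of the pointer, and both copies point at the same underlying value. A small sketch of that variant (updateNamePtr is my illustrative name, not from the original post):

```go
package main

import "fmt"

type person struct {
	Name string
}

// updateNamePtr receives a copy of the pointer, but both copies
// point at the same person value, so the change is visible.
func updateNamePtr(p *person) {
	p.Name = "John"
}

func main() {
	p := person{Name: "Smith"}
	updateNamePtr(&p)
	fmt.Println(p.Name) // John
}
```

Passing &p copies one word regardless of the struct's size, which is also why pointers are often preferred for large structs.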

Let's try the same with Go slices

I tried the same on Go slices, and the behaviour was quite interesting, or you could say surprising. Let's quickly have a look at the code below:

package main

import "fmt"

func main() {
    greetings := []string{"Hi", "Welcome", "Hola"}
    updateGreetings(greetings)
    fmt.Println(greetings)
}

func updateGreetings(greetings []string) {
    greetings[0] = "नमस्ते"
}

[नमस्ते Welcome Hola]

As you can see, the first element of the greetings slice has changed. This is the complete opposite of what we saw with the Go struct.

Why do slices behave differently?

We see this change in behaviour because of the way Go slices are implemented internally. Let's take a minute to understand how Go slices are implemented. Have a look at the diagram below:

So when we make a slice of strings, Go internally creates two separate data structures.

The first is what we refer to as the slice. The slice is a data structure that has 3 elements inside it:

  1. Pointer to array: a pointer to the underlying array that holds the actual list of items.
  2. Capacity: how many elements the slice can contain at present.
  3. Length: the number of elements referred to by the slice.

The second is the actual array that holds the list of items. So let's have a look at what happens in memory when we declare a slice.

As you can see, the slice at address 0002 points to the array stored at address 0003. Let's have a look at what happens when we pass the greetings slice to the function updateGreetings.

As you can see, Go is still behaving as a pass-by-value language: it makes a copy of the slice data structure at address 0005. Now here is the very important part: even though the slice data structure is copied, it still points at the original array in memory at address 0003.

When we modify the slice inside the function, we are modifying the same array that both copies of the slice point to. This is why slices in Go are referred to as reference types.
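This sharing is easy to demonstrate without even crossing a function boundary; assigning a slice to another variable copies only the three-word header, a quick sketch:

```go
package main

import "fmt"

func main() {
	a := []string{"Hi", "Welcome", "Hola"}
	b := a // copies only the slice header, not the backing array

	b[0] = "Hey"
	fmt.Println(a[0]) // Hey: both headers point at the same array
}
```

One caveat worth knowing: if an append on b grows it beyond its capacity, Go allocates a new backing array for b, and the two slices stop sharing storage.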

Are there any more reference types in Go?

Slices are not the only data structure that behaves in this fashion; other types behave exactly the same way. In the diagram below, I have segregated the value types (int, float, string, bool, structs, arrays) and the reference types (slices, maps, channels, pointers, functions) in Go.

The point to note here is that when passing reference types, we do not need to pass the address of the value; Go handles it, and any change to the variable is reflected in the caller function. When we pass value types like int, bool, etc. and expect changes to be reflected, we must use pointers.
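Maps, for example, behave just like slices in this respect; a small sketch (addGreeting is an illustrative helper):

```go
package main

import "fmt"

// addGreeting mutates the map it receives; the map header is copied,
// but the underlying hash table is shared with the caller.
func addGreeting(m map[string]string) {
	m["fr"] = "Bonjour"
}

func main() {
	greetings := map[string]string{"en": "Hello"}
	addGreeting(greetings)
	fmt.Println(greetings["fr"]) // Bonjour
}
```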




This is one Go gotcha that can lead to many issues when you start Go programming. Just keep in mind the diagram of value and reference types. I hope this post helps you avoid such issues in your programs. Thanks for reading. Please like and share the post so that it can reach other readers too.


Posted in General

Does Sports Technology impact the Healthcare Sector?

Wearable Tech & Preventative Healthcare

In the ever-evolving technological landscape, emerging disruptive technologies like machine learning, deep learning and artificial intelligence have empowered industries like healthcare and sports significantly. The two industries are interconnected: both cannot function without human involvement, and both depend on informative data from each other due to the increasing demand for predictive analytics. Predictive analytics applications use metrics that can be measured and analyzed to predict the likely behavior of individuals, machinery or other entities. Today's tech-savvy audience is constantly on the lookout for technology that is efficient, quick and time-saving.

The rising data revolution with digital transformation

Data analytics is becoming increasingly popular among healthcare, sports and life-sciences professionals. All these industries are also currently embracing innovations such as wearable-based technologies, faster computing and smaller form-factor devices.

“Paradoxically, the evolution of machine learning, which aims to raise the threshold of intelligent analysis beyond that of the human brain, can teach us more about what it means to be human.”

Today our smartphones can not only be used as biometric devices but also serve as a platform from which to deliver tailored algorithmic analysis that can optimize personal metrics in real time.

The need for real-time data analysis is more real than ever. Analytics-based reports and surveys are becoming increasingly popular with researchers because they help them monitor trends in real time, which directly drives product innovation. The entry of chatbots into these industries is an example of a perfect solution that brings all the initial data together with absolute accuracy and less time consumption. This data can be collected against different parameters, such as age, gender, location, medical history, diet and fitness regime, and can in turn be used by, let's say, insurance companies when they chart out premium plans. Pretty amazing, right?


Emergence of Telemedicine 

Telemedicine is gaining popularity with the masses, but at the same time it can safely be assumed that telemedicine is not going to replace visits to the doctor completely. Extending healthcare access into the home will lower healthcare costs. The departure from treating illnesses towards a renewed focus on prevention is emblematic of the new 2020 healthcare patient. Healthcare is now more about patient outcomes and less about elaborate fee structures. Technologies, primarily involving chatbots, have paved their way into the healthcare industry, allowing automation of services and leading to increased productivity with optimum results.

Precision medicine

Treating individuals with therapies specific to them, with the help of pools of data collected through smartphone apps and mobile biometrics, is the backend of precision medicine. This provides patients with information about their health while simultaneously analyzing data. Misuse of this data and personal information is prevented by adhering to health standards like HIPAA. What precision medicine does is this: instead of viewing patients as end users of healthcare services, it engages with them as partners, a role that is key to accelerating such initiatives further. It integrates patient-generated health data from different devices to better understand disease and how it can be not only a physical but also a mental burden. Using data analytics to improve patient care is a sure-shot way of moving towards efficient, sustainable models of care that are driven by data and technology and are mutually beneficial for healthcare professionals and patients alike.

There are cases where innovative techniques are being used to gather information, for example information from parents about infants in the Intensive Care Nursery (or NICU). Parents refrain from filling out a survey every day because it can be quite stressful and repetitive. Instead, UCSF Benioff Children's Hospital now uses an intelligent chatbot to communicate with parents. This has reduced stress for the parents, as it's now personal and they feel they are talking to someone. The chatbot also converses intelligently to gather the baby's symptoms for the doctor's diagnosis later on. It also educates the parents with videos and web links so that they can learn more about their baby's medical condition and be more aware!

Data analytics for healthcare: Learning from sports technologies!

There is an incredible wealth of data available in sports, but capturing and using that data in a way that leads to better outcomes for the team remains a major problem. It has also been observed that many sports organizations find traditional data-science methods to be out of their league.

Let's take the example of the NBA. In the last decade, the NBA has undergone a data-science revolution that has entirely changed the game. It has used data to optimize performance in real time and built strategies to increase teams' chances of winning. Basketball is an incredibly difficult game to study, simply because it's quick and hard to keep track of compared to cricket or baseball. But the NBA didn't give up. Sophisticated tracking systems that kept their eyes on every player, machine learning and cartography helped analyse which players were helping their teams win. From rebounds to three-pointers to assists, every move was analysed. Almost every NBA team had a data analyst on board to make sure this was taking place. And why did all of this happen? Because they had a senior leadership team that was invested more in the future than the present.

In healthcare, once a treatment or chain of thought becomes popular, it is hard to dislodge. It is hard to disrupt. But disruption must prevail. The medical fraternity needs to learn from the above example and take that leap of faith. Leaders need to be educated about the gold mine that data analytics can prove to be. Healthcare is still somewhat stuck in the data-collection phase: plenty of raw data collection is happening, involving private data sets, health surveys, billing records and medical sensors, but not all of it is being shared freely across organizations, so we are losing out on many insights that could be obtained. While systems are being modernized and the need for expert data scientists is more real than ever, there are still not enough of these people on board. The result is a huge missed opportunity to deploy data in a meaningful way. It's still not too late, though, and the earlier this is recognized, the closer we will be to unleashing the true power that technology holds for the future of healthcare!

Posted in Artificial Intelligence, General, Healthcare

AI-led Chatbots – A boon to Healthcare Industry

The Dawn of Bots

The demand for healthcare services is higher than ever, and this industry is deeply sensitive and complex in nature. In recent years, innovations in AI-led technologies have tried to maximize productivity and save time and effort for healthcare professionals. Technologies, primarily involving chatbots, have paved their way into the healthcare industry, allowing automation of services and leading to increased productivity with optimum results.

Chatbots: First level support for doctors

Chatbots are a great first-level support tool for doctors, for example for gathering initial symptoms, conducting surveys and helping doctors make informed decisions about a patient's condition. While the healthcare industry is sensitive to patient privacy and carries the risk of incomplete or incorrect diagnosis, technologies like chatbots can ensure that all information is gathered accurately and in good time. This saves a doctor's time and helps them provide patients with a better-informed diagnosis. However, it is important that chatbots be used only for first-level support and not for generating a diagnosis or treatment, because nothing can replace a doctor's analysis, their expertise and the required human intervention.

Optimum use of Chatbots in Health insurance

Popular names like Siri, Alexa, Cortana and Google Assistant are all chatbots that have seeped into the field of healthcare to enhance its functioning. Healthcare insurance demands an immense amount of data gathering and analysis, with systematic mapping of patients, records, medical history, etc. The traditional methods of health insurance lack the efficiency that AI-based chatbots bring to the table. A decision-tree-based chatbot can help health insurance companies gather data from customers or potential customers using interesting, innovative surveys and friendly chats that encourage people to give out more data in a recurring manner. In a world guided by disruptive technology, these developments in healthcare, and especially in services like health insurance, are nothing less than a revolution. As chatbots evolve, they will make data management and patient assistance far more convenient for health-tech companies.

Usage of Chatbots is scalable and agile

It's always more convenient to chat with someone than to fill out a form. Smartphones, messaging and texting have become so common that chatbots are now an ideal AI-led tool for gathering data from an audience without being intrusive. The data received is priceless for many healthtech companies, as they get not only past medical history but also current health statistics. It also helps a company's brand, since it lets them keep better track of customers and chart out strategies for proper customer retention management.

Huge business potential for Chatbots in the future

There is huge business potential for chatbots in the healthcare industry, as massive change is incoming with digital transformation. Being completely tech-driven for real-time results is the need of the hour, and the growing implementation of chatbots in the healthcare sector is not a temporary development. With demand for frequent data-analysis reports, such as research studies and surveys, there are initial steps of product creation that also need to be monitored. Chatbots' entry into this industry is a perfect solution that helps in receiving all the initial data with absolute accuracy and less time consumption.

According to an insights study, the global chatbot market is estimated to reach $1.23 billion by 2025. These staggering numbers signal developments that will shape the future of the healthcare industry, allowing both doctors and patients smoother interpretation and implementation of things.

These advancements have paved the way for an enticing future in which technological developments like chatbots take a turn for the better in the healthcare industry. By 2019, up to 40 percent of large businesses are likely to integrate virtual assistants like Microsoft Cortana, Apple's Siri, Amazon Alexa or Google Assistant into their day-to-day workflows. With these established figures witnessing constant growth, healthcare, in terms of technology and efficiency, has the potential to redefine the field at a pace that once could not be imagined.




9 Awesome Tips for Go Developer

Content posted here with the permission of the author Anuj Verma, who is currently employed at Josh Software. Original post available here.

I have just started learning Go and found it to be a very interesting language. It bridges the gap between rapid development and performance by offering performance close to C and C++ along with rapid development like Ruby and Python.

Through this blog, I wanted to share some behaviours of Go that I found tricky, along with some style guidelines.

Un-exported fields in struct can be a mystery

Yes, it was a mystery for me when I started. My use case was simple: I had an object of the Person struct and wanted to marshal it using the encoding/json package.

package main

import (
    "encoding/json"
    "fmt"
)

type Person struct {
    name string
    age  int
}

func main() {
    p := Person{name: "Anuj Verma", age: 25}
    b, err := json.Marshal(p)
    if err != nil {
        fmt.Printf("Error in marshalling: %v", err)
        return
    }
    fmt.Println(string(b)) // prints: {}
}

Oh, things worked fine without any error. But wait, why is the response empty? I thought it must be some typo. I checked and checked and checked…
I had no idea why things were not working. Then I asked every developer’s god (Google). You will not believe me, but this was the first time I understood the real importance of exported and un-exported identifiers in Go.
Since encoding/json is a package outside main, and the fields inside our struct, name and age, are un-exported (i.e. they begin with a lower-case letter), the encoding/json package does not have access to the Person struct’s fields and cannot marshal them.

So to solve this problem, I renamed the fields of the Person struct to Name and Age, and it worked like a charm. Check here

json.Decode vs json.Unmarshal ?

I was once writing an application that makes HTTP calls to the GitHub API, which responds in JSON format.
To receive and use the response, I created a Go struct (GithubResponse) matching the format of the API response. The next step was to deserialise it. After looking it up on the internet, I came up with two possible ways to do it:

// Option 1: using json.Decode
var response GithubResponse
err = json.NewDecoder(req.Body).Decode(&response)

// Option 2: using json.Unmarshal
var response GithubResponse
bodyBytes, _ := ioutil.ReadAll(req.Body)
err := json.Unmarshal(bodyBytes, &response)

Both do exactly the same thing: they de-serialise a JSON payload into our Go struct. So I was confused about which one to use. After some research I was surprised to learn that using json.Decode to de-serialise a single JSON response is not the recommended way, because it is designed explicitly for JSON streams.

I have heard of JSON, but what is a JSON stream?
Example JSON:

{
  "total_count": 3,
  "items": [
    {"language": "ruby"},
    {"language": "go"}
  ]
}
Example JSON stream:

{"language": "ruby"}
{"language": "go"}
{"language": "c"}
{"language": "java"}

So JSON streams are just JSON objects concatenated together. If you have a use case where you are streaming structured data live from an API, go for json.Decode, as it has the ability to de-serialise an input stream.
If you are working with a single JSON object at a time (like our example JSON shown above), go for json.Unmarshal.

var declaration vs :=

So this one is just a cosmetic suggestion. When declaring a variable that does not need an initial value, prefer:

var list []string

over

list := []string{}

There is little practical difference between them, except that the former may be used at package level (i.e. outside a function) while the latter may not. Still, if you are inside a function where you have the choice of both, the recommended style is the former.

Rule of thumb is to avoid using shorthand syntax if you are not initialising a variable.

Imports using blank identifier

In one of my applications we are using a postgres database. I am using lib/pq, which is a Go postgres driver for database/sql. I was going through the documentation here and I saw this:

import (
    "database/sql"
    "log"

    _ "github.com/lib/pq"
)

func main() {
    connStr := "user=pqgotest dbname=pqgotest sslmode=verify-full"
    db, err := sql.Open("postgres", connStr)
    if err != nil {
        log.Fatal(err)
    }
    _ = db
}
Is this correct? Why are we using an underscore in front of a package import? Checking on the internet, I found that it is a blank (anonymous) import: it imports the package but does not give you access to its exported entities.

So the next question is very obvious:
If I do not have access to the package’s entities, why are we importing it at all?

Remember when I said Go is an interesting language? In Go we can define an init() function in each source file, which lets us set things up before the program executes. So sometimes we import a package solely so that its init() function gets called, without using the package directly in code.

Now let’s understand why the package in the code snippet above is imported with a blank identifier. Package database/sql has a function

func Register(name string, driver driver.Driver)

which must be called to register a driver for a database. If you have a look at this line from the lib/pq library, things become clearer: lib/pq calls the Register function to register the appropriate database driver even before our main function executes.

So even though we are not using lib/pq directly in our code, we need it to register the postgres driver before calling sql.Open().
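To see the pattern in miniature, here is a toy sketch of the Register-plus-init mechanism. The registry below is invented for illustration; in real code the init() would live in the driver package (such as lib/pq) and run automatically on import:

```go
package main

import "fmt"

// A toy driver registry mimicking database/sql's Register pattern.
var drivers = map[string]string{}

// Register records a driver under a name, like sql.Register does.
func Register(name, desc string) {
	drivers[name] = desc
}

// init runs before main. In a real program this would sit in the
// driver package, so a blank import alone is enough to trigger it.
func init() {
	Register("postgres", "toy postgres driver")
}

func main() {
	desc, ok := drivers["postgres"]
	fmt.Println(ok, desc) // true toy postgres driver
}
```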

Naked Returns

In Go, return values can be named. When we name a return value, it is treated as a variable defined at the top of the function.

func Insert(list []string) (err error) {
    // Do stuff here; assigning to err sets the return value
    return // naked return: returns the current value of err
}
This creates a function-local variable named err, and if you call return with no arguments, the current value of err is returned.

The rule of thumb is to use naked returns only if the function is short (a handful of lines); they harm readability in longer functions.
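A complete runnable sketch of a naked return (the sum function is my own example, not from the original post):

```go
package main

import "fmt"

// sum uses a named return value. The bare `return` at the end
// returns whatever `total` holds at that point.
func sum(list []int) (total int) {
	for _, n := range list {
		total += n
	}
	return // naked return: returns total
}

func main() {
	fmt.Println(sum([]int{1, 2, 3})) // 6
}
```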

Use shorter variable names in limited scope

In most languages you may have been advised to use descriptive variable names, for example index instead of i. In Go, however, it is idiomatic to use shorter names for variables with limited scope.

For a method receiver, one or two letters is sufficient. Common variables such as loop indices and readers can be a single letter (i, r). More unusual things and global variables need more descriptive names.

Rule of thumb is:

The further from its declaration that a name is used, the more descriptive the name must be.


Good Style

// Global variable: use a descriptive name as it can be used anywhere in the file
var shapesMap map[string]interface{}

// Method
// c for the receiver is fine because it has limited scope
// r for radius is also fine
func (c circle) Area(r float64) float64 {
    return math.Pi * r * r
}

Explicitly ignore a json field

If you want to ignore a field of struct while serialising/de-serialising a json, you can use json:"-". Have a look at an example below:

type Person struct {
    ID      int    `json:"-"`
    Name    string `json:"name"`
    Age     int    `json:"age"`
    Address string `json:"address"`
}
In the above struct, the ID field will be ignored while serialising/de-serialising.

Backquotes to the rescue

Back quotes are used to create raw string literals, which can contain any type of character. So if you want to create a multi-line string in Go, or a string full of quotes, you can use back quotes. This saves you the effort of using escape characters inside the string.

For example, suppose you want to define a string containing a JSON body:

{"name": "anuj verma", "age": 25}

See the below two ways:

b := "{\"name\": \"anuj verma\", \"age\": 25}" // Bad style
b := `{"name": "anuj verma", "age": 25}`       // Good style

Comparing strings can be tricky

If in your code you need to compare a string with the empty string, do not forget to trim spaces before the comparison.
resultString == "" may produce incorrect results, as resultString can contain extra spaces ("    ").

strings.TrimSpace(resultString) == "" // good style


What am I missing here? Let me know in the comments and I’ll add it in. If you enjoyed this post, I’d be very grateful if you’d help it spread by sharing. Thank you.
