Would blockchain and AI redefine the future of InsureTech?


While the insurance sector is increasingly adopting new technologies, there is still a wide gap between how efficiently insurance products are processed and what customers expect. For example, it is still very frustrating for customers to wait on phone calls to reach executives, even when they need to file emergency claims. Disruptive technologies like artificial intelligence have the power to improve such situations. Backed by predictive analytics and data collected through interconnected consumer devices, the insurance sector has huge potential to improve its efficiency and accuracy, and ultimately provide consumers with better plans and potentially cheaper premiums.

Moving from assumptions to concrete results with data analysis

Consumer data is a goldmine waiting to be tapped, especially for insurance companies. Customer behaviour, personal information, shopping patterns, locations, health patterns, driving patterns, lifestyle history: everything is stored in a database somewhere! This data can drive artificial intelligence engines to great advances in decision making for insurance policies and premiums. Interconnected devices keep track of the minutest changes customers make, and even these changes can have a huge impact on the way those customers purchase insurance. A wearable connected to an actuarial database could power an AI engine to calculate a consumer's personal risk score based on their daily activities, fitness patterns and risk patterns.

This will work in favour of consumers too, not just insurance companies. The insurance industry seems to think along these lines: research states that more than 80 percent of insurance executives believe technology will disrupt the sector by leaps and bounds. How? Imagine not having to make cold calls anymore, simply because the need will not exist. An AI engine or chatbot will pull out data on its own, and once this data has been processed and results predicted, insurance executives can spend their time personalizing marketing and sales strategies, working towards sustainable client relationships. This will replace traditional underwriting strategies, which are very human-centric; instead, underwriting will be based on facts and accurate data rather than assumptions. This will in turn allow a host of new insurance products tailored to each customer's needs rather than generic offerings to the masses!

Harnessing the power of experiential learning

Being human means having the power to learn from experience. Taking this one step further and combining it with technology has given us machine learning. NLP (Natural Language Processing), backed by an AI engine and ML, has enabled experiences that are intuitive, conversational and real-time. A Gartner report predicts that by 2020, 85% of customer interactions will be managed without a human, through digital assistants. Customers also prefer interacting with companies through modern technologies rather than outdated processes, because this saves time, is more reliable and, most importantly, is very convenient. So while the insurance sector was historically driven by mathematics and, in most cases, human instinct, it will now be driven by concrete data and insights.

Blockchain: The disruptor for InsureTech

One of the biggest technological disruptions of our time has been the blockchain: the distributed, serverless ledger. The first problem area blockchain can help eradicate is insurance fraud. Insurance fraud has caused enough sleepless nights for executives. On a distributed ledger, insurers can record permanent, immutable transactions while protecting data integrity. This would help insurers collaborate and identify suspicious behaviour across the ecosystem, for example by verifying the validity of a "no claim bonus" that reduces the premium. This reduces the margin for fraud, raises margins for companies overall, and helps them come up with better premium plans for customers.

The blockchain could also provide a better means of policy issuance! Using smart contracts, the premium payment can be triggered only after the underwriting constraints and the customer's requirements have been met. The agent's commission would be paid automatically and instantly, and policy issuance would be accurate and immediate! This would drastically change the way the insurance industry works by reducing policy issuance time and ensuring it is accurate and beneficial for both customers and insurance companies.

Another example relates to insurance renewal. A customer could budget a certain amount of money for various insurance renewals and set constraints on them; the smart contract would execute only when these constraints are met, which could give consumers a better bargaining chip. For example, a smart contract could let the customer specify an upper bound of ₹15,000 for car insurance, subject to specific add-ons and benefits. Similarly, insurance vendors could "bid" for the insurance based on the condition of the car (age, no claim bonus, add-ons etc.). The contract would execute instantly when both parties' constraints are met, and every party gets their dues! This could give consumers the power to choose the right insurance and could help agents set up the right constraints for their customers. On the other hand, the insurance company is satisfied because its criteria have also been met before policy issuance!
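To make the idea concrete, here is a rough sketch in plain Go of the matching logic such a renewal contract might encode. This is illustrative code only, not actual on-chain smart-contract code, and all the type names and figures are hypothetical:

package main

import "fmt"

// CustomerConstraints captures what the customer is willing to accept
// (hypothetical types, for illustration only).
type CustomerConstraints struct {
    MaxPremium int      // upper bound, e.g. ₹15,000
    AddOns     []string // add-ons that must be included
}

// InsurerBid is one vendor's offer for the renewal.
type InsurerBid struct {
    Premium int
    AddOns  map[string]bool
}

// matches reports whether a bid satisfies the customer's constraints;
// the "contract" would execute only when this returns true.
func matches(c CustomerConstraints, b InsurerBid) bool {
    if b.Premium > c.MaxPremium {
        return false
    }
    for _, addOn := range c.AddOns {
        if !b.AddOns[addOn] {
            return false
        }
    }
    return true
}

func main() {
    c := CustomerConstraints{MaxPremium: 15000, AddOns: []string{"zero-depreciation"}}
    b := InsurerBid{Premium: 14500, AddOns: map[string]bool{"zero-depreciation": true}}
    fmt.Println("contract executes:", matches(c, b)) // true
}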

These ideas could also be the beginning of insurance companies using cryptocurrency for insurance. While the idea may seem far-fetched right now, I assure you it isn't, because there's only one rule to follow if you want to succeed as part of the ever-evolving technological ecosystem.

Break all rules and choose to disrupt!

 

 


Wearable technology empowering the sports industry


From the introduction of wireless trackers for heartbeat monitoring in the 1980s, to the apps and websites that appeared in 2005, to the launch of Fitbit, the biggest gamechanger in the wearable tech industry, in 2007, it has been a long journey. Now smart fabrics have been introduced, and smart clothing is driving the revolution ahead. The development and miniaturization of sensors has made wearable devices possible. Smartphones have been instrumental in the evolution of smaller, lightweight wearable gadgets that can transfer information easily via Bluetooth or BLE. Cloud-based AI tools that manage large amounts of data through predictive analytics have made it possible for this data to give people recommendations for a healthier life!

 

Real-time tracking for fitness

Keeping track of fitness in real time is better than post-training analysis, especially for sportspeople. It is also far more accessible than having sportspeople sit in labs after their performance. Devices that collect biometric data can provide a rich source of information to all stakeholders, and this may determine the success of a sportsperson's training. For athletes, preventing the onset of injury is a constant battle, as motivation drives them to push harder in pursuit of better performance. The adoption of wearable devices may help change that, as coaches can now monitor a sportsperson's performance instantaneously. Personalized exercise plans built on this monitoring data can also help athletes improve.

 

Use of wearable tech by sportspeople influences masses

Oddly wired, clunky devices once worn only by competitive athletes have transformed into stylish everyday accessories worn by casual joggers and stroller-pushing parents who just want to keep track of their daily fitness, be it something as simple as the number of steps they walk in a day. There is a reason wearable devices have become a favourite gift for employers to give employees: healthier employees mean lower healthcare costs. Companies that provide medical insurance benefits have started using employee health metrics to negotiate lower insurance premiums! And there are now apps and devices for just about any activity, including tennis, golf, skiing, and swimming.

 

Wearable tech and data protection: drawing a line

Transparency presents a practical challenge in the context of wearables: user interfaces are generally small, and it may not be reasonable to expect users to read full privacy notices provided in physical or online user manuals. To ensure compliance, wearable providers should consider using standardized icons for better communication. Most concern around the use of wearables stems from uncertainty about the third parties with whom personal data may be shared. Wearable technology often involves a complex network of data controllers sharing personal data with one another, and user consent is required before this data is disseminated. Providers need to take care of data accessibility and set privacy policies according to regulatory standards, because what matters most is maintaining credibility and trust within your consumer base. Everything else is secondary.

 

Wearable technology in sports is a HUGE benefit to professional athletes, a good source of motivation for the masses to improve their health, and a potential goldmine for insurance companies and employee benefits. Global revenues for sports, fitness and activity monitors are expected to grow from $1.9bn in 2013 to $2.8bn in 2019, according to technology industry analysis firm IHS Technology. Just as Formula 1 technology for speed, safety and efficiency slowly makes its way into cars driven on the roads, data from wearable technology will soon make its way into predictive analysis for the healthcare industry on a global level!


IS EVERYTHING REALLY PASSED BY VALUE IN GO?

Content posted here with the permission of the author Anuj Verma, who is currently employed at Josh Software. Original post available here.

As a Go programmer or while learning Go you might have heard that everything in Go is passed by value.

The official FAQ also says:

As in all languages in the C family, everything in Go is passed by value. That is, a function always gets a copy of the thing being passed, as if there were an assignment statement assigning the value to the parameter. For instance, passing an int value to a function makes a copy of the int, and passing a pointer value makes a copy of the pointer, but not the data it points to.

What does that mean?

It means that if you pass a variable to a function, the function always gets a copy of it. Remember: always. So the caller and callee have two independent variables with the same value. Hence, if the callee modifies the parameter variable, the effect is not visible to the caller.

Let's prove it with an example

package main

import "fmt"

type person struct {
    Name string
}

func main() {
    p := person{Name: "Smith"}
    fmt.Println("Value of name before calling updateName() is: ", p.Name)
    updateName(p)
    fmt.Println("Value of name after calling updateName() is: ", p.Name)
}

func updateName(p person) {
    p.Name = "John"
}

Output:

Value of name before calling updateName() is:  Smith
Value of name after calling updateName() is:  Smith

As you can clearly see, the value of Name is unchanged even after calling the updateName function.

Let's try the same with Go slices

I tried the same on Go slices and it demonstrated some pretty interesting, or you could say surprising, behaviour. Let's quickly have a look at the code below:

package main

import (
    "fmt"
)

func main() {
    greetings := []string{"Hi", "Welcome", "Hola"}
    updateGreetings(greetings)
    fmt.Println(greetings)
    
}

func updateGreetings(greetings []string) {
    greetings[0] = "नमस्ते"
}

Output:

[नमस्ते Welcome Hola]

As you can see, the value of the first element of the greetings slice has changed. This is completely opposite to what we saw with the Go struct.

Why do slices behave differently?

We see this change in behaviour because of the way Go slices are implemented internally. Let's take a minute to understand how Go slices work.

So when we make a slice of strings, Go internally creates two separate data structures.

The first is what we refer to as the slice. The slice is a data structure that has three elements inside it:

  1. Pointer to array: a pointer to the underlying array that holds the actual list of items.
  2. Capacity: how many elements the underlying array can contain at present.
  3. Length: the number of elements referred to by the slice.

The second is the actual array that holds the list of items. So let's have a look at what happens in memory when we declare a slice.

The slice header is stored at one address and points to the backing array stored at a different address. Now let's have a look at what happens when we pass the greetings slice to the function updateGreetings.

Go is still behaving as a pass-by-value language: it makes a copy of the slice data structure. Now here is the very important thing: even though the slice data structure is copied, it still points at the original array in memory.

When we modify the slice inside the function, we are modifying the same array that both copies of the slice point to. So in Go, slices are what are referred to as reference types.
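A quick way to verify this yourself: fmt's %p verb, applied to a slice, prints the address of the slice's first element, so we can check that the caller and the callee share the same backing array. A minimal sketch:

package main

import "fmt"

func main() {
    greetings := []string{"Hi", "Welcome", "Hola"}
    // %p on a slice prints the address of the backing array's first element.
    fmt.Printf("backing array in caller: %p\n", greetings)
    updateGreetings(greetings)
}

func updateGreetings(greetings []string) {
    // Prints the same address as in the caller, even though the
    // slice header itself was copied.
    fmt.Printf("backing array in callee: %p\n", greetings)
}

Both lines print the same address, confirming that only the slice header was copied.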

Are there any more reference types in Go?

So slices are not the only data structure that behaves in this fashion; other types behave exactly the same way. Broadly, the value types in Go are int, float, string, bool and structs, while the reference types are slices, maps, channels, pointers and functions.

The point to note here is that when passing reference types, we do not need to pass the address of the variable; Go handles it, and any change to the variable will be reflected in the caller. When we pass value types like int, bool or structs and expect the changes to be reflected, we must use pointers, as in the sketch below.
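Revisiting the earlier person example, here is a minimal sketch of using a pointer so the change is visible to the caller:

package main

import "fmt"

type person struct {
    Name string
}

func main() {
    p := person{Name: "Smith"}
    // Pass the address of p so the callee can modify the original.
    updateName(&p)
    fmt.Println(p.Name) // John
}

func updateName(p *person) {
    p.Name = "John"
}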

Reference:

  1. https://www.udemy.com/go-the-complete-developers-guide/
  2. https://blog.golang.org/go-slices-usage-and-internals

Conclusion

This is one Go gotcha which can lead to many issues when we start Go programming. Just keep in mind the segregation of value and reference types. I hope this post helps you avoid such issues in your programs. Thanks for reading. Please like and share the post so that it can reach other valuable readers too.

 


Does Sports Technology impact the Healthcare Sector?


In the ever-evolving technological landscape, emerging disruptive technologies like machine learning, deep learning and artificial intelligence have significantly empowered industries like healthcare and sports. The two industries are interconnected: neither can function without human involvement, and each depends on informative data from the other due to the increasing demand for predictive analytics. Predictive analytics applications use metrics that can be measured and analyzed to predict the likely behaviour of individuals, machinery or other entities. Today's tech-savvy audience is constantly on the lookout for technology that is efficient, quick and time-saving.

The rising data revolution with digital transformation

Data analytics is becoming increasingly popular among healthcare, sports and life sciences professionals. All these industries are also embracing innovations such as wearable technologies, faster computing and smaller form-factor devices.

“Paradoxically, the evolution of machine learning, which aims to raise the threshold of intelligent analysis beyond that of the human brain, can teach us more about what it means to be human.”

Today our smartphones can not only be used as biometric devices, but also as a platform for delivering tailored algorithmic analysis that optimizes personal metrics in real time.

The need for real-time data analysis is more real than ever. Analytics-based reports and surveys are becoming increasingly popular with researchers because they help monitor trends in real time and directly influence product innovation. The entry of chatbots into these industries is an example of a perfect solution for bringing all the initial data together with absolute accuracy and less time consumption. This data can be collected by different parameters, such as age, gender, location, medical history, diet and fitness regime, and in turn be used by, say, insurance companies when they chart out plans for a premium. Pretty amazing, right?

 

Emergence of Telemedicine 

Telemedicine is gaining popularity among the masses, but at the same time it can safely be assumed that telemedicine is not going to completely replace visits to the doctor. Extending healthcare access within the home will lower healthcare costs. The shift from treating illness to a renewed focus on prevention is emblematic of the new 2020 healthcare patient. Healthcare is now more about patient outcomes and less about elaborate fee structures. Technologies, primarily involving chatbots, have paved their way into the healthcare industry, allowing automation of services and leading to increased productivity with optimum results.

Precision medicine

Treating individuals with therapies specific to them, with the help of pools of data collected through smartphone apps and mobile biometrics, is the backend of precision medicine. This provides patients with information about their health while the data is simultaneously analyzed. Misuse of this data and personal information is prevented by adhering to health standards like HIPAA. What precision medicine does is this: instead of viewing patients as end users of healthcare services, it engages with them as partners, a role that is key to accelerating such initiatives further. It integrates patient-generated health data from different devices to better understand a disease and how it can be not only a physical but also a mental burden. Using data analytics to improve patient care is a surefire way of moving towards efficient, sustainable models of care that are driven by data and technology and are mutually beneficial for healthcare professionals and patients alike.

There are cases where innovative techniques are being used to gather information, for example information from parents about infants in the intensive care nursery (NICU). Parents refrain from filling out a survey every day because it can be quite stressful and repetitive. Instead, UCSF Benioff Children's Hospital now uses an intelligent chatbot to communicate with parents. This has reduced stress for the parents, as the interaction is now personal and they feel they are talking to someone. The chatbot also converses intelligently to gather the baby's symptoms for the doctor's diagnosis later on. It also educates parents with videos and web links so that they can learn more about their baby's medical condition and be more aware!

Data analytics for healthcare: Learning from sports technologies!

There is an incredible wealth of data available in sports, but capturing and using that data in a way that leads to better outcomes for the team remains a major problem. It has also been observed that many sports organizations find traditional data science methods to be out of their league.

Let's take the example of the NBA. In the last decade, the NBA has undergone a data science revolution that has entirely changed the game. Teams have used data to optimize performance in real time and built strategies to increase their chances of winning. Basketball is an incredibly difficult game to study, simply because it is quick and hard to track compared with cricket or baseball. But the NBA didn't give up. Sophisticated tracking systems that kept their eyes on every player, machine learning and cartography helped analysts work out which players were helping their teams win. From rebounds to three-pointers to assists, every move was analysed. Almost every NBA team had a data analyst on board to make sure this was happening. And why did all of this happen? Because they had senior leadership teams that were invested more in the future than the present.

In healthcare, once a treatment or chain of thought becomes popular, it is hard to dislodge. It is hard to disrupt. But disruption must prevail. The medical fraternity needs to learn from the above example and take that leap of faith. Smart leaders need to be educated about the gold mine that data analytics can prove to be. Healthcare is still somewhat stuck in the data collection phase: a great deal of raw data collection is happening, involving private data sets, health surveys, billing records and medical sensors, but not all of it is shared freely across organizations, so many potential insights are lost. While systems are being modernized and the need for expert data scientists is more real than ever, there are still not enough of them on board. The result is a huge missed opportunity to deploy data in a meaningful way. It's not too late though, and the sooner this is recognized, the closer we will be to unleashing the true power that technology holds for the future of healthcare!


AI-led Chatbots – A boon to Healthcare Industry


The demand for healthcare services is higher than ever, and the industry is deeply sensitive and complex in nature. In recent years, innovations in AI-led technologies have sought to maximize productivity and save healthcare professionals time and effort. Technologies, primarily involving chatbots, have paved their way into the healthcare industry, allowing automation of services and leading to increased productivity with optimum results.

Chatbots: First level support for doctors

Chatbots are a great first-level support tool for doctors: for example, gathering initial symptoms, taking surveys and helping doctors make informed decisions about a patient's condition. While the healthcare industry is sensitive to patient privacy and carries the risk of incomplete or incorrect diagnosis, technologies like chatbots can ensure that all information is gathered accurately and in time. This saves a doctor's time and helps them provide patients with a better-informed diagnosis. However, chatbots should be used only for first-level support, not for generating a diagnosis or treatment, because nothing can replace a doctor's analysis, their expertise and the required human intervention.

Optimum use of Chatbots in Health insurance

Popular names like Siri, Alexa, Cortana and Google Assistant are all built on chatbot technology that has seeped into healthcare to enhance its functioning. Healthcare insurance demands an immense amount of data gathering and analysis, with systematic mapping of patients, records, medical history and so on. The traditional methods of health insurance lack the efficiency that AI-based chatbots bring to the table. A decision-tree-based chatbot can help health insurance companies gather data from customers or potential customers using interesting, innovative surveys and friendly chats that encourage people to share more data on a recurring basis. In a world guided by disruptive technology, these developments in healthcare, and especially in services like health insurance, are nothing less than a revolution. As chatbots evolve, they will make data management and patient assistance far more convenient for health-tech companies.

Usage of Chatbots is scalable and agile

It's always more convenient to chat with someone than to fill out a form. Smartphones, messaging and texting have become so common that chatbots are an ideal AI-led tool for gathering data from an audience without being intrusive. The received data is priceless for many healthtech companies, as they get not only past medical history but also current health statistics. It also helps a company's brand, since it lets them keep better track of customers and chart out strategies for proper customer retention management.

Huge business potential for Chatbots in the future

There is huge business potential for chatbots in the healthcare industry, as massive change is coming with digital transformation. Being completely tech-driven for real-time results is the need of the hour. The growing implementation of chatbots in the healthcare sector is not a temporary development. With demand for frequent data analysis reports, such as research studies and surveys, there are initial steps of product creation that also need to be monitored. Chatbots' entry into this industry is a perfect solution that helps gather all the initial data with absolute accuracy and less time consumption.

According to an insights study, it is estimated that the global chatbot market will reach $1.23 billion by 2025. These staggering numbers are a sign of the developments that shall shape the future of the healthcare industry, allowing both the doctor and the patient to have a fluent interpretation and implementation of things.

These advancements have paved the way for an enticing future in which technologies like chatbots change the healthcare industry for the better. By 2019, up to 40 percent of large businesses are likely to integrate virtual assistants like Microsoft Cortana, Apple's Siri, Amazon Alexa or Google Assistant into their day-to-day workflows. With these figures witnessing constant growth, healthcare, in terms of technology and efficiency, has the potential to be redefined at a pace that once could not be imagined.

 

 


9 Awesome Tips for Go Developers

Content posted here with the permission of the author Anuj Verma, who is currently employed at Josh Software. Original post available here.

I have just started learning Go and found it to be a very interesting language. It bridges the gap between rapid development and performance by offering performance close to C and C++ along with rapid development like Ruby and Python.

Through this blog, I want to share some behaviours of Go that I found tricky, along with some style guidelines.

Un-exported fields in struct can be a mystery

Yes, it was a mystery for me when I started. My use case was simple: I had an object of the Person struct and wanted to marshal it using the encoding/json package.

package main

import (
    "encoding/json"
    "fmt"
)

type Person struct {
    name string
    age  int
}

func main() {
    p := Person{name: "Anuj Verma", age: 25}
    b, err := json.Marshal(p)
    if err != nil {
        fmt.Printf("Error in marshalling: %v", err)
    }
    fmt.Println(string(b))
}

Output:

{}

Oh, things worked fine without any error. But wait, why is the response empty? I thought it must be a typo. I checked and checked and checked…
I had no idea why things were not working. Then I asked every developer's god (Google). You will not believe me, but this was the first time I understood the real importance of exported and un-exported identifiers in Go.
Since encoding/json is a package outside main, and the fields inside our struct, name and age, are un-exported (i.e. they begin with a lowercase letter), the encoding/json package does not have access to the Person struct fields and cannot marshal them.

So to solve this problem, I renamed the fields of the Person struct to Name and Age, and it worked like a charm, as shown below.
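The working version simply exports the fields:

type Person struct {
    Name string
    Age  int
}

With this change, json.Marshal produces {"Name":"Anuj Verma","Age":25}.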

json.Decode vs json.Unmarshal?

I was once writing an application that makes HTTP calls to the GitHub API. The API response was in JSON format.
To receive the response and use it, I created a Go struct (GithubResponse) matching the format of the API response. The next step was to deserialise it. After looking it up on the internet, I came up with two possible ways to do it:

// Option 1: json.Decode
var response GithubResponse
err = json.NewDecoder(req.Body).Decode(&response)

// Option 2: json.Unmarshal
var response GithubResponse
bodyBytes, _ := ioutil.ReadAll(req.Body)
err := json.Unmarshal(bodyBytes, &response)

Both do exactly the same thing and deserialise a JSON payload into our Go struct. So I was confused about which one to use. After some research I was surprised to learn that using json.Decode to deserialise a single JSON response is not recommended, because it is designed explicitly for JSON streams.

I have heard of JSON, but what is a JSON stream?
Example JSON:

{
  "total_count": 3,
  "items": [
    {
        "language": "ruby"
    },
    {
        "language": "go"
    }
  ]
}

Example JSON stream:

{"language": "ruby"}
{"language": "go"}
{"language": "c"}
{"language": "java"}

So JSON streams are just JSON objects concatenated together. If you have a use case where you are streaming structured data live from an API, go for json.Decode, as it has the ability to deserialise an input stream.
If you are working with a single JSON object at a time (like our example JSON shown above), go for json.Unmarshal. A sketch of the streaming case follows below.
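Here is a minimal, self-contained sketch of consuming a JSON stream with json.Decoder; the stream is an inline string here, but it could just as well be an HTTP response body:

package main

import (
    "encoding/json"
    "fmt"
    "io"
    "strings"
)

type entry struct {
    Language string `json:"language"`
}

func main() {
    stream := `{"language": "ruby"}
{"language": "go"}
{"language": "c"}`
    dec := json.NewDecoder(strings.NewReader(stream))
    for {
        var e entry
        // Decode reads the next JSON object from the stream.
        if err := dec.Decode(&e); err == io.EOF {
            break
        } else if err != nil {
            fmt.Println("decode error:", err)
            return
        }
        fmt.Println(e.Language)
    }
}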

var declaration vs :=

So this one is just a cosmetic suggestion. When declaring a variable that does not need an initial value, prefer:

var list []string

over

list := []string{}

The former may be used at package level (i.e. outside a function), while the latter may not. There is also a subtle semantic difference: var list []string declares a nil slice, while list := []string{} allocates an empty, non-nil slice; both have length zero and append works on both, but they marshal differently to JSON (null versus []). Still, inside a function where you could use either, the recommended style is the former.

Rule of thumb is to avoid using shorthand syntax if you are not initialising a variable.
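A small runnable sketch of the nil-versus-empty distinction mentioned above:

package main

import (
    "encoding/json"
    "fmt"
)

func main() {
    var a []string  // nil slice
    b := []string{} // empty, non-nil slice

    fmt.Println(a == nil, b == nil) // true false
    fmt.Println(len(a), len(b))     // 0 0

    aj, _ := json.Marshal(a)
    bj, _ := json.Marshal(b)
    fmt.Println(string(aj), string(bj)) // null []
}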

Imports using blank identifier

In one of my application we are using postgres database. I am using “lib/pq” which is a go postgres driver for database. I was going through the documentation here and I saw this:

import (
    "database/sql"
    "log"

    _ "github.com/lib/pq"
)

func main() {
    connStr := "user=pqgotest dbname=pqgotest sslmode=verify-full"
    db, err := sql.Open("postgres", connStr)
    if err != nil {
        log.Fatal(err)
    }
    // Release the handle when main returns.
    defer db.Close()
}

Is this correct? Why are we using an underscore in front of a package import? Checking on the internet, I found that it is an anonymous (blank) import. It imports the package but does not give you access to its exported entities.

So the next question is very obvious:
If I do not have access to the package's entities, why are we importing it?

Remember when I said Go is an interesting language? In Go we can define an init() function in each source file, which allows us to set things up before the program executes. So sometimes we need to import a package just so that its init() function gets called, without using the package directly in code.
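A minimal sketch of init() in action; the runtime calls it before main():

package main

import "fmt"

func init() {
    // Runs automatically before main().
    fmt.Println("init runs first")
}

func main() {
    fmt.Println("main runs second")
}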

Now let's understand why github.com/lib/pq is imported with the blank identifier in the code snippet above. Package database/sql has a function

func Register(name string, driver driver.Driver)

which needs to be called to register a driver for a database. If you have a look at the lib/pq source, things become clearer: lib/pq calls the Register function from its init() to register the postgres driver even before our main function executes.

So even though we are not using lib/pq directly in our code, we need to import it so that the postgres driver is registered before we call sql.Open().

Naked Returns

In Go, return values can be named. When we name a return value, it is treated as a variable defined at the top of the function.

func Insert(list []string) (err error) {
    // Do Stuff here
    return
}

This creates a function-local variable named err, and if you just call return with no arguments, it returns the local variable err.

The rule of thumb is to use naked returns only if the function is short (a handful of lines); they can harm readability in longer functions.

Use shorter variable names in limited scope

In most languages it is advised to use descriptive variable names, for example index instead of i. In Go, however, it is advised to use shorter names for variables with limited scope.

For a method receiver, one or two letters is sufficient. Common variables such as loop indices and readers can be a single letter (i, r). More unusual things and global variables need more descriptive names.

Rule of thumb is:

The further from its declaration that a name is used, the more descriptive the name must be.

Examples:

Good Style

// Global Variable: Use descriptive name as it can be used anywhere in file
var shapesMap map[string]interface{}

// Method
// c for receiver is fine because it has limited scope
// r for radius is also fine
func (c circle) Area(r float64) float64 {
  return math.Pi * r * r
}

 

Explicitly ignore a json field

If you want to ignore a field of a struct while serialising/deserialising JSON, you can use json:"-". Have a look at the example below:

type Person struct {
    ID      int    `json:"-"`
    Name    string `json:"name"`
    Age     int    `json:"age"`
    Address string `json:"address"`
}

In the above struct, the ID field will be ignored while serialising/deserialising.

Backquotes to the rescue

Back quotes create raw string literals, which can contain any type of character. So if you want to create a multi-line string in Go, you can use back quotes. This saves you the effort of using escape characters inside the string.

For example, suppose you want to define a string containing a JSON body:

{"name": "anuj verma", "age": 25}

See the below two ways:

b := "{\"name\": \"anuj verma\", \"age\": 25}" // Bad style
b := `{"name": "anuj verma", "age": 25}`      // Good style

Comparing strings can be tricky

If your code needs to compare a string with the empty string, do not forget to trim spaces before the comparison.
resultString == "" may produce incorrect results, as resultString can contain extra spaces ("    ").

strings.TrimSpace(resultString) == "" // good style

Conclusion

What am I missing here? Let me know in the comments and I’ll add it in. If you enjoyed this post, I’d be very grateful if you’d help it spread by sharing. Thank you.


What I learned from my first ever software development internship

Content posted here with the permission of the author Viraj Chavan, who is currently employed at Josh Software. Original post available here.

I was a student at an engineering college in India. After three and a half years of learning computer science academically, I finally had a chance to test my knowledge in the real world through an internship.

In this article, I’ll be sharing my internship experience at Josh Software, Pune with the hope that it is helpful to other IT and computer engineering students that are looking for internships.

Like most of my colleagues at the college, I had a very limited view about software development in general and didn’t know what to expect from an internship.

Lucky for me, I was assigned a live project, which was based on Ruby on Rails, something that I had already developed an interest for.

After I had learned PHP and MySQL in the 2nd year of my studies, I built a basic web app, and all it did was some CRUD (Create, Read, Update, Destroy) operations. I remember talking with a friend who had similar skills to mine and saying, "Even we can build Facebook now that we know PHP and MySQL!"

How ridiculously simple things seemed at that time. Now I understand how complex building and maintaining software can be.

So here’s what I learned from my Internship while working on a live project.

 

General lessons

Scale Makes a huge difference

  • How many users are going to use the software?
  • How much data will be processed?
  • What are the expected response times for a function?

These are questions that we, as college students, hardly think about. Our college projects were usually short-sighted. In real-world projects though, the above questions fundamentally affect decisions about hardware, technologies/tools to be used, system architecture, algorithms, and so on.

Working with a large codebase

Back in college, we used to work on projects that had like 15 – 20 files or so. Built in under a week, the whole project could be understood in a few hours.

Now the project I’m working on has hundreds of files spread across dozens of folders. It can take months to understand the whole project, and hours to debug a bug that’s spread across multiple files. And the first time you look at the whole project directory, you don’t know where to start understanding the code.

Writing maintainable code

Knowing that the code you write will be read, understood, and improved/changed by someone else (or even yourself) in the future makes you write code that’s maintainable.

In college, all I focused on was getting the expected functionality to be complete, and never considered whether the code I wrote was maintainable.

This resulted in scrambled pieces of code that somehow worked at the time. But two days later even I wouldn’t understand why I had written a certain piece of code that way. And changing some part of the code almost always broke other parts. 😆

Code Maintainability is easier to recognise by its absence, like when something you thought should take an hour ends up taking a week.

Using a version control system – properly

When I first started building small software, all the files existed on my own development machine, and maybe they were backed up to Google Drive as regular files.

Then I got to know about GitHub, but I merely used it as a safe storage place for my code. I used the GitHub desktop app to commit all changes on just the master branch. I even hesitated using it through the command line.

Now not a day goes by that I don’t use Git. It’s such a great tool for collaboratively writing code, distributed development, branching out for new features, pull requests, and so on.

Here’s a little article on why version control systems are awesome!

The importance of using a Test Driven Development approach

During my internship, I was assigned to work on a new feature that was to be added to the main project.

I wrote the code and tested whether it was working the way it was supposed to. It worked perfectly, or so I thought. I confidently deployed the feature to production and moved on to work on something else.

After a few hours, Rollbar, a real-time error reporting tool, burst with a number of errors in our code deployed to production. I checked the errors and they seemed unrelated to anything I had ever worked on.

After some debugging, all of those errors traced back to a single method. A method that was called in numerous places, and in which I had modified just a single line, and hadn’t checked where else it was used.

Now this could’ve been avoided if the code that used that method had test cases written for it, and if I had checked if all the test cases ran successfully before deploying the code. That made me realize the importance of test driven development.

Here’s an article to understand why writing test cases is important.

Things specific to Ruby on Rails/ Web Development

The MVC Architecture

Back in my college days, when I developed applications in PHP, I had no clue what Model, View, and Controller were. Every project was so complexly scrambled that I couldn't find which file held an important piece of logic. HTML was embedded in PHP scripts at odd places, and I had placed all the files in just one folder.

Then I learned about the Rails framework, and got accustomed with the MVC architecture.

Model-View-Controller (MVC) is an architectural pattern that separates an application into three main logical components – Model, View, and Controller. Each of these components are built to handle specific development aspects of an application (source)

MVC really simplifies things and is an important part of many major frameworks.

Dealing with Databases

In the last 6 months, I haven't written a single direct SQL database query. Yet I deal with databases every day, even doing some complex operations. This is thanks to the ORM (Object Relational Mapper) that Ruby on Rails uses.

ORMs convert an object-oriented programming language such as Ruby into database lingo in which to perform operations. This makes data access more portable and abstracts it away from the database queries that would otherwise be necessary when manipulating data.

Thanks to ORM, it’s much much easier to query the database. This gives a big advantage to beginners, who can start writing applications without even knowing SQL.

Writing/Using REST APIs (Application Programming Interfaces)

APIs make it easier for one application to talk to another.

APIs make another application's functionality easily accessible to our application. For example, I once developed a Road Trip Planner application that used the Google Maps API to show various places on a map that a user could visit on a particular route.

APIs can also be used to separate the front-end and the back-end completely. For example, we can write the back-end as an API-only Rails application that can be used by a web site, an Android/iOS application, or even some third party applications.

Using ElasticSearch for searching

Although I don't know much about Elasticsearch so far, I've learned that it's a NoSQL, distributed, full-text database. It acts as a distributed search engine that is incredibly easy to scale and returns results at lightning speed.

Why would we need it for searching? Because having millions of records in a regular database can make efficient searching really complex.
With Elasticsearch, we can index the documents that need to be searched, and it can run queries across all those millions of documents and return accurate results in a fraction of a second.

Elasticsearch has a RESTful API, which makes it really easy to run search queries and get results.

Here’s a tutorial that helped me, and here are some use cases of Elasticsearch.

Using asynchronous/background tasks

Sometimes the user will perform an action on our application that takes a considerable amount of time to complete. We don’t want the user to sit there waiting for this action to complete, so we send it off to a background worker.

Here’s a link that explains it better.

In Ruby On Rails, I came across Sidekiq, which makes it easy to handle background tasks efficiently.


Thanks for reading! If you found this article helpful, give me some claps. 👏

There’s still a long way to go!

Check out my Github profile here.
