Preventing Machine Downtime by Predicting it Beforehand

For the past few months, I have been observing the growth of the manufacturing sector in India, and how its contribution to India’s gross domestic product (GDP) is projected to increase from the current level of ~16% to 25% by 2022.

One of the major challenges in maintaining seamless manufacturing output is preventing unfavorable machine performance. Machines degrade over time, so manufacturing companies, prior to advanced technology intervention, focused on preventive and reactive maintenance of their machines’ health. The use of deep learning, however, is leading to a new method of safeguarding machine health, coined in the industry as predictive maintenance.

Predictive maintenance approaches can help the manufacturing sector find the optimal inflection point between maintenance costs and machine failures. But predictive maintenance is not a simple plug-and-play solution: machine learning requires layers of historical data collected over time.

Consider the life cycle of a CNC machine. Today, most CNC manufacturers define maintenance cycles based on the type of work the machine does for the customer, relying on their individual experience and judgement. However, if we were to not only show real-time data on the display but also store and analyze the historical data and usage of the CNC machine, deep learning algorithms could learn the patterns of use and predict the maintenance needs and remaining life of the machine.

False positives will occur, i.e. situations where the algorithm predicts maintenance incorrectly given the parameters it has to work with. With some human intervention, these errors are corrected, learned from, and applied to the following data set to improve the result. The algorithm can thus learn from its mistakes and give more relevant and accurate results over time.

Using cloud-based scalable technologies, we could reduce the infrastructure requirements at each premise and even customize the maintenance cycle for each CNC machine based on the customer’s usage patterns. This will not only reduce the cost of maintenance but also improve efficiency: a win-win for both the CNC machine manufacturer and their customer!

Deep neural networks are used in this approach to learn from sequences of data. Unscheduled machine downtime can be damaging for any business. Preemptive identification of these issues can enhance the quality of production and significantly improve supply chain processes, and predictive maintenance strategies can raise overall operational efficiency.

Predictive maintenance is built on the foundations of the Internet of Things (IoT), and IoT is not functional without data and machine learning. This approach is not only about gathering data, but about creating an ecosystem that predicts and makes decisions in response to the sequences of data collected. Predictive maintenance will become a larger opportunity as global economies progress, and IT solution providers should look at this opportunity to innovate further and help manufacturing companies disrupt their industries.


10 Signs of a good Ruby on Rails Developer

Content posted here with the permission of the author Pramod Shinde, who is currently employed at Josh Software. Original post available here.

I have been working as a Ruby on Rails developer for the last five years at Josh Software, and I felt I should write down my learnings about the best practices followed by RoR developers. How did I learn? Of course, to learn something you need to make mistakes; that’s how we learn, right?

Let’s see what you should follow to be a ‘good’ Ruby on Rails developer.

1. Your migrations are “thoughtful” …

Whenever you design a database table schema, do you think through all the aspects, like:

  • Where is the table going to be used? How much might it grow in terms of data size? (Imagine the worst future for your design.)
  • Have I chosen the correct data types, defaults, and constraints, if any? Most of the time we don’t really need integer columns; we can use smallint for smaller ranges of integers. Similarly, consider varchar(10) vs varchar(255) vs text.
  • Have I added indexes wherever necessary, thinking through what kind of queries this table is going to handle?

A special point: do you write multiple migrations for the same table? If yes, it’s a bad habit.

Often we don’t think through all the points mentioned above and end up creating multiple migrations for the same table, which makes the codebase look scary.

Instead, you should use up and down on the migration to fix or alter the table; a change in requirements is an exception to this.
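As an illustration, here is a sketch of a reversible migration (the table and column names are made up): the types, defaults, constraints and indexes are decided up front, and up and down keep the change in one place instead of stacking new migrations on the same table.

```ruby
# A hypothetical, thought-through migration: data types, defaults,
# constraints and indexes are all decided in one place.
class CreateOrders < ActiveRecord::Migration[5.1]
  def up
    create_table :orders do |t|
      t.integer :quantity, limit: 2, null: false, default: 1   # maps to smallint
      t.string  :status, limit: 10, null: false, default: 'pending'
      t.text    :notes
      t.timestamps
    end
    # Index chosen by thinking about the queries this table will handle.
    add_index :orders, :status
  end

  def down
    drop_table :orders
  end
end
```

If the table later needs a fix, alter this migration’s up and down (or add one well-named migration for the new requirement) rather than sprinkling small migrations across the codebase.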

2. You always follow the single responsibility principle

We all know the convention of “skinny controller, fat model”; some of us already follow it, but do we follow it wisely?

We are living in the Rails 5 era, so why overload models?

Why not “keep everything skinny and move the extra fat to concerns or service objects”? The classes in the codebase should be designed to handle a single responsibility.

There are several good posts about how to organise controllers and use service objects in Rails.

3. You write test cases to test the “code”

I have seen many applications whose CI builds take ages to finish. What exactly are they testing?

Your test cases should test the “code”, not machine performance. Better test suites:

  • Share objects between different examples.
  • Use method stubs and avoid repetitive calls to the same methods.
  • Don’t test the same code twice: if a shared piece of code is used in multiple places, don’t write test cases for it in multiple places.
  • Do not create unnecessary test records; unknowingly, many developers end up creating them.

If you are using gems like faker, factory_bot_rails and database_cleaner to create and clean test records, then creating unnecessary records can cost you time and speed.

Simple example,

create_list(:user, 10)

Much better: reduce the list size if you are not doing anything special with 10 users.

create_list(:user, 2)

To learn how to write better RSpec, this guide is for you.
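As a small sketch of the points above in RSpec (the class under test and its methods are hypothetical): shared objects via let, and a stub instead of a repeated expensive call.

```ruby
# Hypothetical spec illustrating shared objects (`let`) and method stubs.
describe OrderSummary do
  # `let` is lazy and memoized: the object is shared within an example
  # and rebuilt per example, without creating unnecessary records.
  let(:order) { 'Pramod') }
  let(:summary) { }

  it 'includes the buyer name' do
    expect(summary.text).to include('Pramod')
  end

  it 'does not hit the slow tax service repeatedly' do
    # Stub the expensive call instead of invoking it for real.
    allow(summary).to receive(:tax_total).and_return(42)
    expect(summary.grand_total).to eq(summary.subtotal + 42)
  end
end
```

The stub keeps the example focused on the logic being tested and keeps the suite fast.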

4. You keep production environment healthy

If you are an engineer who builds utilities that reduce the efforts of others, then you should also use the utilities of other engineers to reduce your own.

A healthy Rails production environment always has:

  • Monit – is everything up and running? If not, get notified.
  • logrotate – rotates, compresses, and mails system logs.
  • crontabs with whenever – schedules work for you.
  • Database backup scripts running in the maintenance window.
  • Exception notifiers like Sentry or Rollbar, or anything that suits you.

5. You follow basic git etiquette

If you are working in a team and using git, then you follow git etiquette, like:

  • Don’t commit untracked files – we often have untracked files lying around, like ‘something.swp’, ‘backup.sql’, ‘schema.rb or structure.sql backups’, ‘some.test.script’; you should not commit such files.
  • Branch naming – naming something is always difficult, but you have to do it. Feature branches should have sensible names; don’t use names like ‘something-wip’ or ‘something-test’.
  • Delete feature branches after merge – no explanation required.
  • Commit messages – your commit messages must have the ‘GitHub issue number’ or ‘any project management story number/link’, plus a brief description of the feature/task.
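For the first point, a typical way to keep such files out of the repository is a .gitignore entry; the patterns below are only examples of the kinds of files mentioned above:

```shell
# Example .gitignore entries for files that should never be committed
*.swp
backup.sql
structure.sql.backup
*.test.script
.env
```

Files already tracked by git are not affected by .gitignore; untrack them first with `git rm --cached <file>`.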

6. You don’t ignore documentation

Remember, you are not the only one who is going to work on a particular application for its lifetime. Someone will take over from you, and they should not have to waste time figuring out how to set things up.

Your application repository must be kept updated with detailed steps for setting up the application for the first time.

7. Secrets are “really” secrets for you

We often use credentials for database configs, secrets.yml, and third-party APIs like AWS, payment gateways, Sentry, etc.

You should not commit such credentials/secrets/environment variables to GitHub; instead, keep them secure with gems like dotenv-rails or figaro, or with simple dotfiles that are not committed to the repository.

A sample file of such credentials (without the real values) should be committed and kept up to date.
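For example, with dotenv-rails you keep real values in an uncommitted .env file and commit only a placeholder sample. The variable names below are illustrative:

```shell
# .env.sample — committed to the repository; copy to .env and fill in real values
DATABASE_PASSWORD=changeme
AWS_ACCESS_KEY_ID=your-key-here
AWS_SECRET_ACCESS_KEY=your-secret-here
PAYMENT_GATEWAY_API_KEY=your-key-here
```

The real .env stays in .gitignore, so secrets never reach the repository while every developer can see which variables the app expects.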

8. You do code reviews and discuss feature with team

While working in a team, you should get your feature reviewed by another teammate, or discuss it thoroughly with the team before starting. The advantage of code reviews and feature discussions is that you will come across many scenarios you had not thought of.

If you are the only one working on an application, then you must criticise your own code and cover all the scenarios in test cases.

9. You are up-to-date and keep updating

In the open source community we get frequent updates and releases of Ruby, Rails and gems. You must keep yourself aware and informed by subscribing to the repositories or mailing lists, and keep your application libraries updated.

You should also stay alert to security fixes for the production operating system and database, so you can take the necessary action on time.

10. Needless to say…

You write clean and maintainable code, and your codebase shows it.

Well, there are many more points that could be included in this list, but I feel these are the most important to start with. If you find that I have missed anything more important, you can comment on this post.

Thanks for reading up to here; I hope this helps you become a ‘good’ developer.


Android Accessibility Service Customization For KeyPress Event

Content posted here with the permission of the author Shekhar Sahu, who is currently employed at Josh Software. Original post available here.

Accessibility services are a feature of the Android framework designed to provide alternative navigation feedback to the user on behalf of applications installed on Android devices.

An accessibility service runs in the background and receives callbacks from the system when an accessibility event is fired, provided accessibility is enabled on the device.

Examples of common accessibility services

  • Voice Assistance.
  • Switch-Access: Allows Android users with mobility limitations to interact with devices using one or more switches.
  • Talkback: A screen reader commonly used by visually impaired or blind users.

Sometimes there are unique requirements. For instance, let’s say that on pressing “Caps Lock”, instead of relying on TalkBack (which speaks “Caps Lock on” and “Caps Lock off”), we want to play an audio file instead. This is more relevant when the user does not know English, so the default TalkBack output, which is in English, is not going to work. The solution is to use an audio file in the localized language.

Creating an accessibility service

We can build our own accessibility service as per application requirements to make it more accessible.

Let’s take the example of a typing tutor app, where we may need to override a hardware keyboard event using an accessibility service.

In this example, we are going to override the Windows-key press event to open our app’s home menu instead of the device start menu (by default, Android opens Google Assistant).


  • To register your accessibility service, create a service class which receives accessibility events:
public class MyAccessibilityService extends AccessibilityService {
    @Override
    public void onAccessibilityEvent(AccessibilityEvent event) {
        // your code...
    }

    @Override
    public void onInterrupt() {
    }
}
  • Like any other service, you also have to register it in the manifest file. Remember to specify that it handles the android.accessibilityservice.AccessibilityService intent action, so that the service is called when applications fire an AccessibilityEvent.
<service android:name=".MyAccessibilityService"
    android:permission="android.permission.BIND_ACCESSIBILITY_SERVICE">
    <intent-filter>
        <action android:name="android.accessibilityservice.AccessibilityService" />
    </intent-filter>
    . . .
</service>

Configuring the service

An accessibility service can be configured to receive specific types of accessibility events; in our case, it should be key events.

We can also add filters: the service can listen only to a specific app (package name), for a specific time duration, work only with a particular activity, etc.

There are two ways to configure the service’s event settings:

  1. Via meta-data entry in the manifest file.
  2. Programmatically, by calling setServiceInfo(AccessibilityServiceInfo)

Example for XML configuration:

<accessibility-service xmlns:android=""
    android:canRequestFilterKeyEvents="true"
    android:accessibilityFlags="flagRequestFilterKeyEvents"
    android:packageNames="com.test.accessebility" />

Here I used android:canRequestFilterKeyEvents="true" and android:accessibilityFlags="flagRequestFilterKeyEvents" to get key events from the system. We also have to override the onKeyEvent() method inside our service class:

@Override
protected boolean onKeyEvent(KeyEvent event) {
    return super.onKeyEvent(event);
}

That’s it. We are done with the service configuration. Don’t forget that the service declared in the manifest must be protected with the permission below, so that only the system can bind to it.

android:permission="android.permission.BIND_ACCESSIBILITY_SERVICE"

Now, to get this event to our Activity class, we are going to use LocalBroadcastManager. It’s an Android component which allows you to send or receive system or application events locally within your app.

@Override
protected boolean onKeyEvent(KeyEvent event) {
    int keyCode = event.getKeyCode();

    // Handle the key event for the Windows (Meta) key.
    if (keyCode == KeyEvent.KEYCODE_META_LEFT || keyCode == KeyEvent.KEYCODE_META_RIGHT) {
        // Send a broadcast intent to the main activity.
        // In the main activity you can take any desired action.
        // ("keypress_event" is an example action name.)
        Intent intent = new Intent("keypress_event");
        LocalBroadcastManager.getInstance(this).sendBroadcast(intent);
    }
    return super.onKeyEvent(event);
}

Then register that local broadcast receiver in your activity class. The activity will then be notified whenever the event occurs, and you can write your own action for it.
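A minimal sketch of that registration might look like this, assuming the service broadcasts an intent with a hypothetical action name "keypress_event" (use whatever action name your service actually sends):

```java
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;

public class MainActivity extends AppCompatActivity {
    // Receives the broadcasts sent by the accessibility service.
    private final BroadcastReceiver keyReceiver = new BroadcastReceiver() {
        @Override
        public void onReceive(Context context, Intent intent) {
            // Take the desired action here, e.g. open the app's home menu.
        }
    };

    @Override
    protected void onResume() {
        super.onResume();
        // Register while the activity is visible.
        LocalBroadcastManager.getInstance(this)
            .registerReceiver(keyReceiver, new IntentFilter("keypress_event"));
    }

    @Override
    protected void onPause() {
        // Unregister to avoid leaks when the activity goes to the background.
        LocalBroadcastManager.getInstance(this).unregisterReceiver(keyReceiver);
        super.onPause();
    }
}
```

Registering in onResume and unregistering in onPause keeps the receiver alive only while the activity is in the foreground.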

You are done!!



Content posted here with the permission of the author Bandana Pandey, who is currently employed at Josh Software. Original post available here.

Today, performance is what comes first when we developers build web services. One issue is that when a web service interacts with a database, getting the result may take time depending on the number of records.


For this blog, I am assuming that you have knowledge about Rails and basic idea about Redis.

Getting Started

Let’s imagine we are building the back-end for an online movie app. Customers will use this app to view all the movies and their details, resulting in a huge load on the database. So what if we could reduce the load on the database by caching the movies data? But what should we use for caching?

There comes REDIS to our rescue.


Redis is a key-value store which we can use for caching to speed things up and improve our performance.

But Redis is not just a plain key-value store; it is a data structures server, meaning it is not limited to strings as values but also supports more complex data structures, such as hashes, lists, sets and sorted sets. For detailed information, refer this.


Strings:

Strings are the most basic data type that we use for caching in Redis. They are binary safe and easy to use, so we mostly go for them.

But in our scenario the string data type was not enough, as I had to store the whole list of movies and their respective details in Redis. Strings work, but they store the whole list as a single string value. So, before sending the data, I have to parse it into JSON format so that it can be used by the views to present it to the user. But if the data is huge, parsing strings to JSON (or any other required format) is time consuming. So strings are not what we can use in our case.
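To make that overhead concrete, here is a tiny plain-Ruby sketch of what string storage implies: the whole list is serialized once, and every read must parse it back. The movie data is made up for illustration.

```ruby
require 'json'

movies = [
  { 'name' => 'Movie A', 'city' => 'Pune' },
  { 'name' => 'Movie B', 'city' => 'Mumbai' }
]

# Storing under a string key means serializing the whole list...
serialized = movies.to_json

# ...and every read pays the cost of parsing it back before use.
restored = JSON.parse(serialized)

puts restored == movies  # => true; the round trip is lossless, but not free
```

With a large list this parse step runs on every request, which is exactly the cost the hash-based approach below avoids.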

By reading this memory optimization blog and the documentation, I found that Redis supports another data type which can be helpful: hashes.


Hashes:

Hashes are the perfect data structure to represent objects. They are maps between string fields and string values, stored in attribute: value format, much like how table data is mapped to objects by ActiveRecord in Rails. Small hashes are encoded in a very small space, so we should always try to represent our data using hashes.

In this way, using hashes, our data parsing issue is solved: we fetch the data as-is from Redis, and no conversion of data format is involved.

Memory consumption and read/write performance can also be improved by the optimized storage of hashes over the string data type.

Now let’s check the above theory using Benchmark in Rails. Here we are going to use redis-namespace and the Redis service which is explained later in this section.

Setting Data in Redis:

Benchmark.bm do |x|
  # Here data is in JSON format

  # Setting data using hash (value will be stored as a hash) { Event).set_list(key: CMS_MOVIE_LIST, data: data) }

  # Setting data using string (value will be stored as a string) { Event).set(key: MOVIE_LIST, data: data) }
end

#   user     system   total    real
#   0.030000 0.010000 0.040000 ( 0.011480) # Hashes
#   0.150000 0.000000 1.150000 ( 0.447619) # Strings

Fetching Data from Redis:

Benchmark.bm do |x|
  # Fetching data using hash (value is stored as a hash) { Event).get_list(key: CMS_MOVIE_LIST) }

  # Fetching data using string (value is stored as a string) { Event).get(key: MOVIE_LIST) }
end

#   user     system   total    real
#   0.010000 0.000000 0.010000 ( 0.008200) # Hashes
#   0.090000 0.000000 0.090000 ( 0.032398) # Strings

This demonstrates how performance can be improved by using hashes over strings in Redis.

So, in order to use the same approach in our Rails application, we are going to use redis-namespace. For detailed information about this, refer to Redis::Namespace.

Initializing Redis in Rails

We instruct our Rails app to use Redis as a cache store and set the Redis host in an ENV variable like this:

REDIS_HOST: 'redis://localhost:6379'

Now, initialize a wrapper around Redis using redis-namespace, or have a service redis_service.rb using redis-namespace, so that we can interact with Redis.

class RedisService
  def initialize(klass:)
    redis = ENV['REDIS_HOST'], timeout: 1)
    @namespaced_redis =, redis: redis)
  end

  def set(key:, data:, expire: nil)
    # Set the value of a key (stored as JSON)
    @namespaced_redis.set(key, data.to_json)

    # Expire the Redis key (default: in 1 week)
    @namespaced_redis.expire(key, expire || 1.week)
  end

  def set_list(key:, data:, expire: nil)
    # Set a list of data on a Redis key (stored as a Marshal byte stream)
    @namespaced_redis.set(key, Marshal.dump(data))
    @namespaced_redis.expire(key, expire || 1.week)
  end

  def get(key:)
    # Get the value of a key, parsed from JSON
    data = @namespaced_redis.get(key)
    data && JSON.parse(data)
  end

  def get_list(key:)
    # Get a list of data from Redis
    data = @namespaced_redis.get(key)
    data && Marshal.load(data)
  end

  def del(key:)
    # Delete keys from Redis
  end

  def keys(pattern: nil)
    @namespaced_redis.keys(pattern || '*')
  end
end

In the above code, we are using Marshal. It is a Ruby library which converts a collection of Ruby objects into a byte stream, and it is the fastest option available in Ruby for data serialization. For detailed information, refer this.

Now we have a generic Redis service which we can use to perform different operations, like adding, deleting and fetching data from Redis, in our Rails application.

Advantages of writing this service class:

  • Code is DRY.
  • All the Redis commands are in one place, and we can use them whenever and wherever we want in our Rails app.

Now, we are going to use this service to fetch movies on the basis of city.

Managing Redis Cache in Rails

Here the whole idea is this: when a customer wants the list of movies in a particular city, first we fetch the movies by querying the database directly. Then we cache the response using the redis-namespace wrapper, so that on subsequent queries the data is fetched from Redis and not from the database, thus improving our application’s performance.

class MoviesController < ApplicationController
  # Here we are going to use the RedisService to perform operations on Redis
  def index
    # Check if the list of movies is already in Redis
    movies = Movie).get_list(key: "movies:#{params[:city]}")

    # If there are no movies in Redis
    if movies.blank?
      # Load the movies from the database
      movies = Movie.where(city: params[:city])

      # Serialize the data
      movies = serialize_resource(movies, V1::MoviesSerializer)

      # Cache the serialized response in Redis, so that it can be used again Movie).set_list(key: "movies:#{params[:city]}",
                                          data: movies, expire: 1.week)
    end

    # Return the response
    mobile_success_response(data: movies)
  end
end

The above code works, but there is one loophole: if any movie is added or updated in the database, the change will not be shown to the customer while the data is being served from Redis.

So, what do we have to do to solve this issue?

We’ll write a callback such that whenever any movie is added or updated, we delete the keys corresponding to the movie list. So, after a movie is updated, if a user wants the data, it will be fetched directly from the database and then stored in the Redis cache again; on subsequent calls it will be fetched from Redis. Below is the callback to achieve this:

class Movie < ApplicationRecord
  after_commit :update_in_redis, on: [:create, :update]
  after_commit :delete_from_redis, on: [:destroy]

  def update_in_redis
    redis = self.class)

    # Delete all the keys matching the movies:* pattern
    redis.del(key: redis.keys(pattern: "movies:*"))
  end

  def delete_from_redis
    redis = self.class)

    # Invalidate the cached movie lists when a movie is deleted from the database
    redis.del(key: redis.keys(pattern: "movies:*"))
  end
end

Hope this blog will be useful. For more information like this, Stay tuned 🙂


Postgres – A new NoSQL

Content posted here with the permission of the author Tejaswini Gambhire, who is currently employed at Josh Software. Original post available here.

In today’s world of database technologies, there are two major database types: SQL and NoSQL. Basically, both SQL and NoSQL do the same thing, but in different ways. Depending on our project needs, we have to find the better fit. If you need to handle a large amount of data with little or even no structure, then NoSQL is the best fit; but if you need transactional support and structured data, then you should go for SQL. We will not go into the details here, but for a quick comparison of the two, you can visit here.

For any project we either go for SQL or NoSQL. For our project we needed ACID compliance and transactional support, so we had opted for a SQL database (Postgres). Now, keeping this in mind and looking at the title of this blog, you may wonder: why would I even want to store unstructured data in my database, and that too in Postgres? Isn’t it better to go for a NoSQL database?

Nothing complicated in that. This is the use case I had in my project:

In our project we were storing dishes. As a dish has many ingredients, we wanted to keep track of some of them, like whether it contains nuts, milk, eggs, gluten or soy sauce, in order to handle the allergies and preferences of customers. By the conventional approach, assuming a relational database, we would have created a separate column for each. But thinking about it a little, is that a scalable solution? Of course not! Here Postgres came into the picture with its hstore support. PostgreSQL provides a great platform by incorporating hstore, json and jsonb, which lets us use unstructured data in a structured database. So we created just one column, ‘contains’, which stored this data as key-value pairs with the ingredients as keys and boolean-like values.

There are some other use cases where it makes a lot of sense to incorporate a JSON document into your model. For example, it’s perfect when you need to maintain data that comes from an external service in the same structure and format (as JSON) in which it arrived. Instead of trying to normalize this data across multiple tables, you can store it as it is (and still query against it).

In this blog we will have a quick overview about the NoSQL capabilities of postgres and learn how to use hstore in detail.


Postgres Key-Value Store:

  • Hstore is a schemaless key-value store.
  • The best part is that it’s ACID compliant.
  • It is useful for storing sparse attributes, like a product description.
  • The advantage of hstore is that we can store very different types of records, with different attributes, in the same table, and still query them with SQL.
  • The downside of hstore is that all values are stored as strings.

Postgres Document Store:

  • JSON is the most popular data interchange format on web.
  • Postgres has a native JSON data type and a variety of JSON functions.
  • It is a hierarchical document model.
  • Postgres also supports JSONB column type which is the binary version of JSON.
  • JSONB is faster and more robust than JSON.
  • The key difference between JSON and JSONB is that JSON stores exact copy of the text input, which must be reparsed again and again. However, JSONB stores a binary representation that avoids reparsing the data structure.

If you want to learn how to use jsonb with ruby on rails you can visit here

So, we can use NoSQL capabilities with the same syntax, in the same ACID transactional environment, relying on the same query planner, optimizer and indexing technologies as conventional SQL-only queries.


To use hstore, you must first enable the extension:

CREATE EXTENSION hstore;

We can simply create a table, like any other table, with hstore as the column type:

CREATE TABLE dishes (
  id serial PRIMARY KEY,
  name TEXT,
  recipe TEXT,
  contains HSTORE
);

Inserting the data has nothing magical to do with hstore; we can simply use the conventional syntax:

INSERT INTO dishes (name, recipe, contains) VALUES (
'Green Beans, Tomato and Potato Salad',
'Organic Potatoes red 2 cups, Hot house Tomatoes 1/2 cup, Organic Green Beans 1/2 cup, 1 tbsp Parsley, 1 tsp Lemon, 1/2 clove Garlic, 1/4 cup Extra Virgin Olive Oil, Salt & Pepper red onoin 1/4 cup kalamata olives 2 tbsp 1 tsp capers',
'"nuts"=>"yes", "dairy"=>"no", "gluten"=>"no", "sesame"=>"no", "egg"=>"no"');

It’s typical for every row to have the same key names, or at least some minimum number of overlapping key names, but you can, of course, use any keys and values you like. It may be the case that there are totally different keys in many of the rows.

Now, let’s see a simple query to retrieve all the dishes containing nuts

SELECT name FROM dishes WHERE contains->'nuts'='yes';

Notice several things here. First, the name of the column remains without any quotes, just as when you retrieve the full contents of the column. Second, you put the name of the key after the -> arrow. Finally, the returned value will always be of type TEXT. There are numerous operators and functions provided by Postgres, which you can always look up in the official documentation.

Hstore with Rails:

In Rails, you can use enable_extension in your migration. Let’s see how to add the contains column to our dishes table by writing a migration:

class AddContainsToDish < ActiveRecord::Migration[5.1]
  def change
    enable_extension 'hstore'
    add_column :dishes, :contains, :hstore, default: {}
  end
end

Now you have to declare this column on your model with store_accessor, as below:

class Dish < ApplicationRecord
  store_accessor :contains
end

We can now store any kind of attributes in the contains column.

Dish.create(name: 'Green Beans, Tomato and Potato Salad', contains: {'nuts'=>'yes', 'dairy'=>'no', 'gluten'=>'no', 'sesame'=>'no', 'egg'=>'no'})

Not only does hstore allow us to store arbitrary key-value pairs, it also allows us to query them quickly.

# Find all dishes that have a key 'nuts' in contains

Dish.where("contains ? :key", :key => 'nuts')

# Find all dishes having sesame

Dish.where("contains @&gt; (:key =&gt; :value)", :key =&gt; 'sesame', :value =&gt; 'yes')

If you’re going to query this column frequently, you must add an index. There are two types you can use: GiST and GIN.

  • GIN indexes are three times faster to search, but they take more time to index. They also take more disk space. Use it when you have more than 100K unique terms.
  • GiST indexes are slower than GIN indexes, but they’re faster to update. Use it when you have up to 100K unique terms.

You can define the index on your migration file with the :using option.

class AddContainsToDish < ActiveRecord::Migration[5.1]
  def change
    enable_extension 'hstore'
    add_column :dishes, :contains, :hstore, default: {}
    add_index :dishes, :contains, using: :gin
  end
end

This is how you can use hstore. If you want to dig into the details, more information is available in the postgres hstore docs.

So, you can say that Postgres is a bridge between SQL and NoSQL. You can convert hstore to JSON as well, make a SQL table look like a JSON document and vice versa, and easily combine SQL and JSON queries in the ACID-compliant environment of Postgres. You can start with structured data in your database and then integrate unstructured data, or start with an unstructured dataset and adjust the balance between structured and unstructured data very easily with Postgres. To know more about the NoSQL capabilities, visit the official site of EnterpriseDB.


Testing React-Redux App with Jest

Content posted here with the permission of the author, Kiran Deshmukh, who is currently employed at Josh Software. Original post available here.

We often get confused when selecting a testing framework for an application. Currently, I am working on a React-Redux based project. While selecting the testing framework, we compared some of the popular JavaScript testing frameworks and found that Jest is the best fit for testing our application.

Jest is not limited to ReactJS testing; we can test any JavaScript code using Jest, including asynchronous code.

In a React-Redux project, we have a single store containing the state of the application. We have actionCreators which return an action type and payload (which may be the response from an API). The reducer contains the actual logic to update the store for a particular action. Components listen to the reducer, so when the state of the reducer changes, the component is re-rendered.

Here, we will discuss how Jest helped us test the actionCreators, reducers and components in our project. Actions are just the plain data returned by actionCreators, so we are not testing actions themselves.

Suppose we have a file friendListActions.js which contains string literals for the actions:

const friendListActions = {
  fetchFriendList: 'FETCH_FRIEND_LIST',
  fetchingFriendListSucceeded: 'FETCHING_FRIEND_LIST_SUCCEEDED',
  fetchingFriendListFailed: 'FETCHING_FRIEND_LIST_FAILED'
};

export default friendListActions;

Suppose we have the following file friendListReducer.js. We change the state in the reducer based on the actions.

import friendListActions from 'friendListActions.js';

//Set the initial state for this reducer.
const initialState = {
  isLoading: false,
  errorMsg: null,
  friendList: []
};

//Here is our business logic to change state in the reducer.
const friendListReducer = (state = initialState, action) => {
  switch (action.type) {
    case friendListActions.fetchFriendList:
    case friendListActions.fetchingFriendListSucceeded:
    case friendListActions.fetchingFriendListFailed:
      return { ...state, ...action.payload };
      return state;

export default friendListReducer;

Suppose we have the following file friendListActionCreators.js, containing the action creators for fetching the friend list. We handle the success as well as the error response while fetching the friend list.

import friendListActions from 'friendListActions.js';

//This actionCreator initialises the fetching of the friend list.
export const fetchingFriendListInitiated = () => {
  //actionCreator returns an action object.
  return {
    type: friendListActions.fetchFriendList,
    payload: {
      isLoading: true

//This actionCreator is used when the friend list is fetched successfully.
export const fetchingFriendListSucceeded = ( friendList ) => {
  //actionCreator returns an action object.
  return {
    type: friendListActions.fetchingFriendListSucceeded,
    payload: {
      isLoading: false,
      errorMsg: null,

//This actionCreator is used when fetching the friend list fails.
export const fetchingFriendListFailed = ( errorMsg ) => {
  //actionCreator returns an action object.
  return {
    type: friendListActions.fetchingFriendListFailed,
    payload: {
      isLoading: false,

export const fetchFriendList = () => {
  return( dispatch => {
    dispatch( fetchingFriendListInitiated() )

    //Here we fetch the friend list for the user having 23 as id.
    return fetch("")
    .then(successResponse => {
      dispatch( fetchingFriendListSucceeded(successResponse) )
    .catch(errorResponse => {
      dispatch( fetchingFriendListFailed(errorResponse.message))

We are making ‘fetch’ call to the respective API.

Let us observe the test cases for the reducer. We have the following file friendListReducer.test.js:

import reducer from 'friendListReducer.js';
import friendListActions from 'friendListActions.js';

const expectedInitialState = {
  isLoading: false,
  errorMsg: null,
  friendList: []
};

//'describe' is used to create a 'test suite' containing multiple test cases.
describe('Friend List Reducer', () => {
  it('returns the state of the reducer when fetching the friend list succeeds', () => {
    let expectedPayload = {
      isLoading: false,
      errorMsg: null,
      friendList: [
        'John', 'Emraan', 'Sukanya'
      ]
    };

    expect(
      // "reducer" takes 2 arguments:
      // first argument: state of the reducer before applying the action
      // second argument: plain JavaScript object containing "type" and "payload"
      reducer(expectedInitialState, {
        type: friendListActions.fetchingFriendListSucceeded,
        payload: expectedPayload
      })
    ).toEqual({ ...expectedInitialState, ...expectedPayload });
  });
});

The state in the reducer should change when some action is performed. Here, we are testing whether the state changes according to the expected payload.

Here, we are using the expect() and toEqual() methods provided by Jest. Jest also provides describe to create a test suite and it to create an individual test case.

Let us test which actions are dispatched when the friend list is fetched successfully from the API. Suppose we have the file friendListActionCreators.test.js:

import configureMockStore from 'redux-mock-store';
import thunk from 'redux-thunk';
import fetch from 'jest-fetch-mock';

import friendListActions from 'friendListActions.js';
import { fetchFriendList } from 'friendListActionCreators.js';

//We are creating a mock store here.
const middlewares = [thunk];
const mockStore = configureMockStore(middlewares);

describe('fetchFriendListSucceeded()', () => {

  it('returns friend list in response', () => {
    const getFriendList = [
      'John', 'Emraan', 'Sukanya'
    ];

    const jsonResponse = {
      "method": "getFriendList",
      "response": getFriendList
    };

    //We are mocking only one http fetch response.
    fetch.mockResponseOnce(JSON.stringify(jsonResponse));

    let store = mockStore({
      friendList: {}
    });

    let expectedActions = [
      {
        type: friendListActions.fetchFriendList,
        payload: {
          isLoading: true
        }
      },
      {
        type: friendListActions.fetchingFriendListSucceeded,
        payload: {
          isLoading: false,
          errorMsg: null,
          friendList: getFriendList
        }
      }
    ];

    //We are returning a 'promise' due to asynchronous actions.
    return (
      store.dispatch(fetchFriendList())
      .then(() => {
        expect(store.getActions()).toEqual(expectedActions);
      })
    );
  });
});

Here, while testing, we should mock the Redux store. Since we are making a fetch call to the API, our store receives responses from asynchronous actions. As Redux itself only supports synchronous data flow, we need middleware so that it supports asynchronous code. While writing test cases, we should set up this middleware in the mock store as well.

Out of the box, Jest does not provide mocks for http fetch calls, the Redux store, or the middleware used in the store. So, we need to add some other packages for mocking these things.

Here, we have used ‘redux-mock-store’ to create a mock store and ‘redux-thunk’ to provide the middleware for it. We are using the ‘jest-fetch-mock’ package to mock http fetch calls, and its ‘mockResponseOnce()’ method since we want to mock only one API call.

After mocking the API call, it returns a static value. But our actual code expects a promise object to be returned from the API. So, to simulate the same behaviour, we return the promise object in the test case.
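Why returning the promise matters can be seen in a plain-JavaScript sketch (the names below are illustrative stand-ins, not the real test code): if the test body did not return the promise, the runner would finish before the assertions inside .then() ever ran.

```javascript
// Stand-in for a dispatched async action that resolves later.
function fakeFetchFriendList() {
  return Promise.resolve(['John', 'Emraan', 'Sukanya']);
}

// The test body returns the promise, so the runner waits for the
// .then() callback (and the assertions inside it) to complete
// before reporting the test result.
function testCase() {
  return fakeFetchFriendList().then(friendList => {
    if (friendList.length !== 3) throw new Error('unexpected friend list');
    return friendList;
  });
}

testCase().then(friendList => console.log(friendList[0])); // John
```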

Snapshot testing:

Snapshot testing is useful when we don’t want UI components to change unexpectedly. Jest provides this feature out of the box.

A snapshot for a component is created the first time its test case runs. So, when I run snapshot test cases for the first time, they pass unconditionally. This shows that snapshot testing is not Test-Driven Development (TDD). To make it TDD, we can use the enzyme package along with it.

Jest creates a new folder __snapshots__ under the current working folder of the test cases, and the snapshots are stored there in a human-readable format. When I run the test cases afterwards, the component is compared with its existing snapshot. If the component has been modified, the test case fails. If these changes are desired, we can update the existing snapshot (for example, by running Jest with the --updateSnapshot flag).

Let us have a List component in the file list.js:

import React, { Component } from 'react';

class List extends Component {

  render() {
    const flowers = ['Lily', 'Lotus', 'Rose'];
    return (
      <div>
        <h2>List of flowers</h2>
        <ul>
          { flowers.map(( flower, index ) =>
            <li key={ index }>{ flower }</li>
          )}
        </ul>
      </div>
    );
  }
}

export default List;

We will write a test case for this component to create its snapshot:

import React from 'react';
import renderer from 'react-test-renderer';
import List from 'list.js';

it('renders list component correctly', () => {
  const tree = renderer.create(<List />).toJSON();
  expect( tree ).toMatchSnapshot();
});

To create a snapshot of a component, we first need a serializable representation of it. Jest alone cannot render a component into such a JavaScript object, so we use ‘react-test-renderer’: its create() method renders the component, and toJSON() converts the result into a serializable tree.

Jest provides the ‘toMatchSnapshot()’ method, which creates a snapshot for the component if one is absent. The next time I test the List component, it is compared with the existing snapshot.

If there is any change in the List component, the test case fails. We can update the snapshot to reflect the desired changes. We should commit these snapshot files along with the other test files. You can find more information in the Jest documentation.

If we want to add assertions in component testing, or check manipulations of the components, we can use the enzyme package along with the Jest framework. enzyme adds TDD to our UI component testing.

In short, Jest creates the component tree structure, and we can traverse this component tree with the help of enzyme. enzyme doesn’t have its own assertion library, so we use the assertion library provided by Jest.
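To make the "component tree" idea concrete: react-test-renderer's toJSON() produces a plain object of the shape { type, props, children }. The tree below is a hand-written stand-in for the rendered List component (not real renderer output), and the traversal is a bare sketch of what libraries like enzyme do with much richer APIs:

```javascript
// Hand-written stand-in for a serialized component tree; in a real
// test this object would come from react-test-renderer's toJSON().
const tree = {
  type: 'ul',
  props: {},
  children: [
    { type: 'li', props: {}, children: ['Lily'] },
    { type: 'li', props: {}, children: ['Lotus'] },
    { type: 'li', props: {}, children: ['Rose'] }
  ]
};

// Collect the text content of every <li> node in the tree.
function collectListItems(node, out = []) {
  if (typeof node === 'string') return out;   // text nodes have no children
  if (node.type === 'li') out.push(node.children.join(''));
  (node.children || []).forEach(child => collectListItems(child, out));
  return out;
}

console.log(collectListItems(tree)); // [ 'Lily', 'Lotus', 'Rose' ]
```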

We can test following things with the help of Jest and Enzyme:

  • We can test state changes in the components.
  • We can test conditional parameters passed to the component. E.g., suppose the className of a div tag is calculated at run-time based on the received props; we can test that.
  • We can test event handling in the component.
  • We can test component life cycle callbacks. Here we can test whether desired function is called from that life cycle hook or not.

Here are my observations about testing React – Redux application with Jest:

  1. Jest provides a very good assertion and mocking library. We can test asynchronous code with the help of it. If you are new to testing ReactJs application, Jest will be the best choice. Due to parallel testing, it is a great choice for large projects.
  2. We cannot mock http fetch calls with Jest. We can use package like ‘jest-fetch-mock’ for it.
  3. We cannot mock Redux store with Jest. We can use packages like ‘redux-mock-store‘ to create mock store and ‘redux-thunk’ to provide middleware for the store.
  4. Snapshot testing is one of the best features provided by Jest. It is useful to check whether UI is changed unexpectedly or not.
  5. Snapshot testing creates a component tree. We are unable to traverse through this component tree using Jest only. We can use enzyme package along with Jest for it.

To sum it up, I think Jest is a really good framework for testing the ReactJS part of an application. Using packages like ‘redux-mock-store’, ‘redux-thunk’, ‘jest-fetch-mock’, and ‘enzyme’ along with Jest, we can test an entire React-Redux application.
