9 Awesome Tips for Go Developers

Content posted here with the permission of the author Anuj Verma, who is currently employed at Josh Software. Original post available here.

I have just started learning Go and found it to be a very interesting language. It bridges the gap between rapid development and performance, offering performance close to C and C++ along with the development speed of Ruby and Python.

Through this blog, I want to share some behaviours of Go that I found tricky, along with some style guidelines.

Un-exported struct fields can be a mystery

Yes, it was a mystery for me when I started. My use case was simple: I had a value of a Person struct and wanted to marshal it using the encoding/json package.

package main

import (
    "encoding/json"
    "fmt"
)

type Person struct {
    name string
    age  int
}

func main() {
    p := Person{name: "Anuj Verma", age: 25}
    b, err := json.Marshal(p)
    if err != nil {
        fmt.Printf("Error in marshalling: %v", err)
    }
    fmt.Println(string(b))
}

Output:

{}

Things worked fine without any error. But wait, why is the response empty? I thought it must be some typo. I checked and checked and checked…
I had no idea why things were not working. Then I asked every developer's god (Google). You will not believe me, but this was the first time I understood the real importance of exported and un-exported identifiers in Go.
Since encoding/json is a package outside main, and the fields inside our struct, name and age, are un-exported (i.e. they begin with a lowercase letter), the encoding/json package does not have access to the Person struct's fields and cannot marshal them.

So to solve this problem, I renamed the fields of the Person struct to Name and Age, and it worked like a charm. Check here

json.Decode vs json.Unmarshal?

I was once writing an application that makes an HTTP call to the GitHub API. The API response was in JSON format.
To receive and use the response, I created a Go struct (GithubResponse) matching the format of the API response. The next step was to deserialise it. After looking around the internet, I came up with two possible ways to do it.

// Option 1: json.Decode
var response GithubResponse
err = json.NewDecoder(req.Body).Decode(&response)

// Option 2: json.Unmarshal
var response GithubResponse
bodyBytes, _ := ioutil.ReadAll(req.Body)
err := json.Unmarshal(bodyBytes, &response)

Both do exactly the same thing: de-serialise a JSON payload into our Go struct. So which one should we use? After some research I was surprised to learn that json.Decode is not the recommended way to de-serialise a single JSON response, because it is designed explicitly for JSON streams.

I have heard of JSON, but what is a JSON stream?
Example JSON:

{
  "total_count": 3,
  "items": [
    {
        "language": "ruby"
    },
    {
        "language": "go"
    }
  ]
}

Example JSON stream:

{"language": "ruby"}
{"language": "go"}
{"language": "c"}
{"language": "java"}

So JSON streams are just JSON objects concatenated together. If you have a use case where you are streaming structured data live from an API, go for json.Decode, as it can de-serialise an input stream.
If you are working with a single JSON object at a time (like our example JSON shown above), go for json.Unmarshal.

var declaration vs :=

This one is just a cosmetic suggestion. When declaring a variable that does not need an initial value, prefer:

var list []string

over

list := []string{}

They behave almost identically, except that the former may be used at package level (i.e. outside a function) while the latter may not, and the former yields a nil slice while the latter yields an empty, non-nil slice. Still, if you are inside a function where you have the choice, the recommended style is the former.

Rule of thumb is to avoid using shorthand syntax if you are not initialising a variable.

Imports using blank identifier

In one of my applications we are using a Postgres database. I am using lib/pq, which is a Go Postgres driver. I was going through the documentation here and I saw this:

import (
    "database/sql"
    "log"

    _ "github.com/lib/pq"
)

func main() {
    connStr := "user=pqgotest dbname=pqgotest sslmode=verify-full"
    db, err := sql.Open("postgres", connStr)
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()
}

Is this correct? Why are we using an underscore in front of a package import? Checking on the internet, I found that it is a blank (anonymous) import: it imports the package but does not give you access to its exported identifiers.

So the next question is very obvious:
If I do not have access to the package's entities, why are we importing it at all?

Remember when I said Go is an interesting language? In Go we can define an init() function in each source file, which lets us set things up before the program executes. So sometimes we need to import a package just so that its init() function gets called, without using the package directly in code.
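A minimal sketch of the mechanism (the drivers map here is invented for illustration, standing in for the registry that database/sql keeps internally): init() runs automatically, once per package, before main().

```go
package main

import "fmt"

// drivers is a hypothetical registry, standing in for what
// database/sql maintains internally.
var drivers = map[string]string{}

// init runs once, before main, when the package is initialised.
func init() {
	drivers["postgres"] = "registered"
}

func main() {
	fmt.Println(drivers["postgres"]) // registered
}
```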

Now let's understand why, in the code snippet above, github.com/lib/pq is imported with the blank identifier. Package database/sql has a function

func Register(name string, driver driver.Driver)

which needs to be called to register a driver for the database. If you have a look at this line from the lib/pq library, things become clearer: lib/pq calls the Register function in its init() to register the appropriate database driver even before our main function executes.

So even though we are not using lib/pq directly in our code, we need to import it so that it registers the postgres driver before we call sql.Open().

Naked Returns

In Go, return values can be named. When we name a return value, it is treated as a variable defined at the top of the function.

func Insert(list []string) (err error) {
    // Do Stuff here
    return
}

This creates a function-local variable named err, and a bare return with no arguments returns the current value of err.
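A short example (parsePort is invented here for illustration) showing how each bare return picks up whatever the named values currently hold:

```go
package main

import (
	"fmt"
	"strconv"
)

// parsePort uses named return values; each bare `return`
// returns whatever port and err currently hold.
func parsePort(s string) (port int, err error) {
	port, err = strconv.Atoi(s)
	if err != nil {
		return // returns (0, err)
	}
	return // returns (port, nil)
}

func main() {
	p, err := parsePort("8080")
	fmt.Println(p, err) // 8080 <nil>
}
```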

Rule of thumb: use naked returns only if the function is short (a handful of lines). They can harm readability in longer functions.

Use shorter variable names in limited scope

In most languages you have probably been advised to use descriptive variable names, for example index instead of i. In Go, however, it is advised to use shorter names for variables with limited scope.

For a method receiver, one or two letters is sufficient. Common variables such as loop indices and readers can be a single letter (i, r). More unusual things and global variables need more descriptive names.

Rule of thumb is:

The further from its declaration that a name is used, the more descriptive the name must be.

Examples:

Good Style

// Global variable: use a descriptive name, as it can be used anywhere in the file
var shapesMap map[string]interface{}

// Method
// c for the receiver is fine because it has limited scope
// r for radius is also fine
func (c circle) Area(r float64) float64 {
  return math.Pi * r * r
}

 

Explicitly ignore a JSON field

If you want to ignore a field of a struct while serialising/de-serialising JSON, you can use json:"-". Have a look at the example below:

type Person struct {
    ID      int    `json:"-"`
    Name    string `json:"name"`
    Age     int    `json:"age"`
    Address string `json:"address"`
}

In the above struct, the ID field will be ignored while serialising/de-serialising.

Backquotes to the rescue

Backquotes create raw string literals, which can contain any character except a backquote. So if you want to create a multi-line string in Go, you can use backquotes. This saves you the effort of escaping characters inside the string.

For example, suppose you want to define a string containing a JSON body:

{"name": "anuj verma", "age": 25}

See the below two ways:

b := "{\"name\": \"anuj verma\", \"age\": 25}" // Bad style
b := `{"name": "anuj verma", "age": 25}`       // Good style

Comparing strings can be tricky

If your code needs to compare a string with the empty string, do not forget to trim spaces before the comparison.
resultString == "" may produce incorrect results, because resultString can contain extra spaces ("    ").

strings.TrimSpace(resultString) == "" // good style

Conclusion

What am I missing here? Let me know in the comments and I’ll add it in. If you enjoyed this post, I’d be very grateful if you’d help it spread by sharing. Thank you.


What I learned from my first ever software development internship

Content posted here with the permission of the author Viraj Chavan, who is currently employed at Josh Software. Original post available here.

I was a student at an engineering college in India. After three and a half years of learning computer science academically, I now had a chance to test my knowledge in the real world through an internship.

In this article, I’ll be sharing my internship experience at Josh Software, Pune with the hope that it is helpful to other IT and computer engineering students that are looking for internships.

Like most of my colleagues at the college, I had a very limited view about software development in general and didn’t know what to expect from an internship.

Lucky for me, I was assigned a live project, which was based on Ruby on Rails, something that I had already developed an interest for.

After I had learned PHP and MySQL in the 2nd year of my studies, I built a basic web app, and all it did was some CRUD (Create, Read, Update, Destroy) operations. I remember talking with a friend who had similar skills to mine and saying, "Even we can build Facebook now that we know PHP and MySQL!"

How ridiculously simple things seemed at that time. Now I understand how complex building and maintaining software can be.

So here’s what I learned from my Internship while working on a live project.

 

General lessons

Scale makes a huge difference

  • How many users are going to use the software?
  • How much data will be processed?
  • What are the expected response times for a function?

These are questions that we, as college students, hardly think about. Our college projects were usually short-sighted. In real-world projects though, the above questions fundamentally affect decisions about hardware, technologies/tools to be used, system architecture, algorithms, and so on.

Working with a large codebase

Back in college, we used to work on projects that had 15–20 files or so. Built in under a week, the whole project could be understood in a few hours.

Now the project I’m working on has hundreds of files spread across dozens of folders. It can take months to understand the whole project, and hours to debug a bug that’s spread across multiple files. And the first time you look at the whole project directory, you don’t know where to start understanding the code.

Writing maintainable code

Knowing that the code you write will be read, understood, and improved/changed by someone else (or even yourself) in the future makes you write code that’s maintainable.

In college, all I focused on was getting the expected functionality to be complete, and never considered whether the code I wrote was maintainable.

This resulted in scrambled pieces of code that somehow worked at the time. But two days later even I wouldn’t understand why I had written a certain piece of code that way. And changing some part of the code almost always broke other parts. 😆

Code Maintainability is easier to recognise by its absence, like when something you thought should take an hour ends up taking a week.

Using a version control system – properly

When I first started building small software, all the files existed on my own development machine, and maybe they were backed up to Google Drive as regular files.

Then I got to know about GitHub, but I merely used it as a safe storage place for my code. I used the GitHub desktop app to commit all changes on just the master branch. I even hesitated using it through the command line.

Now not a day goes by that I don’t use Git. It’s such a great tool for collaboratively writing code, distributed development, branching out for new features, pull requests, and so on.

Here’s a little article on why version control systems are awesome!

The importance of using a Test Driven Development approach

During my internship, I was assigned to work on a new feature that was to be added to the main project.

I wrote the code and tested whether it was working the way it was supposed to. It worked perfectly, or so I thought. I deployed the feature to production confidently, and moved on to work on something else.

After a few hours, Rollbar, a real-time error reporting tool, lit up with a number of errors in our code deployed to production. I checked the errors, and they seemed unrelated to anything I had ever worked on.

After some debugging, all of those errors traced back to a single method: a method that was called in numerous places, in which I had modified just a single line without checking where else it was used.

Now this could’ve been avoided if the code that used that method had test cases written for it, and if I had checked if all the test cases ran successfully before deploying the code. That made me realize the importance of test driven development.

Here’s an article to understand why writing test cases is important.

Things specific to Ruby on Rails/ Web Development

The MVC Architecture

Back in my college days, when I developed applications in PHP, I had no clue what Model, View, and Controller were. Every project was so scrambled that I couldn't find which file held an important piece of logic. HTML was embedded in PHP scripts at odd places, and I had placed all the files in just one folder.

Then I learned about the Rails framework, and got accustomed to the MVC architecture.

Model-View-Controller (MVC) is an architectural pattern that separates an application into three main logical components – Model, View, and Controller. Each of these components are built to handle specific development aspects of an application (source)

MVC really simplifies things and is an important part of many major frameworks.

Dealing with Databases

In the last 6 months, I haven’t written a single direct SQL database query. Yet I deal with databases everyday, even doing some complex operations. This is thanks to the ORM (Object Relational Mapper) that Ruby On Rails uses.

An ORM maps an object-oriented programming language such as Ruby onto database operations. This makes data access more portable and abstracted from the database queries that would otherwise be necessary when manipulating data.

Thanks to the ORM, it's much, much easier to query the database. This is a big advantage for beginners, who can start writing applications without even knowing SQL.

Writing/Using REST APIs (Application Programming Interfaces)

APIs make it easier for one application to talk to another.

APIs make another application's functionality easily accessible to our application. For example, I once developed a Road Trip Planner application that used the Google Maps API to show various places on a map that a user could visit on a particular route.

APIs can also be used to separate the front-end and the back-end completely. For example, we can write the back-end as an API-only Rails application that can be used by a web site, an Android/iOS application, or even some third party applications.

Using ElasticSearch for searching

Although I don't know much about Elasticsearch so far, I've learned that it's a NoSQL, distributed full-text database. It acts as a distributed search engine that is incredibly easy to scale and returns results at lightning speed.

Why would we need it for searching? Because having millions of records in a regular database can make efficient searches really complex.
With Elasticsearch, we can index the documents that need to be searched, and it can run queries across all those millions of documents and return accurate results in a fraction of a second.

Elasticsearch has a RESTful API, which makes it really easy to run search queries and get the results.

Here’s a tutorial that helped me, and here are some use cases of Elasticsearch.

Using asynchronous/background tasks

Sometimes the user will perform an action on our application that takes a considerable amount of time to complete. We don’t want the user to sit there waiting for this action to complete, so we send it off to a background worker.

Here’s a link that explains it better.

In Ruby On Rails, I came across Sidekiq, which makes it easy to handle background tasks efficiently.


Thanks for reading! If you found this article helpful, give me some claps. 👏

There’s still a long way to go!

Check out my Github profile here.


Rails: Conserve your database by “audit”ing its space the right way!!!

Content posted here with the permission of the author Ganesh Sagare, who is currently employed at Josh Software. Original post available here.

In most Rails applications, we track important data for auditing. Most of the time, the database table in which these audit records are stored lives in the same database as our application.

Keeping this table in the same database is fine until it grows tremendously. We use this table mostly for analysis and sometimes for recovering data. As its size increases, it starts to impact space consumption: the database size and backup size grow, and so does the time taken for database backups.

There are multiple reasons for the size of this table to increase, such as:

  • tracking lots of columns from different tables
  • tracking more actions happening on data

So, to optimize our database and backup storage usage and to speed up the backup process, we thought: what if we store these history/audit records in another database? We found it is very easy to do so.

First, let's see the advantages of this.

  • Avoid rapidly growing database size.
  • Reduced database backup size.
  • Speed up in backup process.
  • Data isolation.

Now let's see how to store audit records in a second database.

1. Update your Gemfile

We used the gem audited to keep track of our data, so add the entry below to your Gemfile.

gem "audited", "~> 4.7"

2. Create configuration for second database.

We can configure our application to connect to the second database using a YAML file, similar to our database.yml.

# config/audited.yml

development:
  encoding: utf8
  adapter: postgresql
  database: audit_development
  port: 5432

production:
  encoding: utf8
  adapter: postgresql
  database: audit_production
  port: 5432

The purpose of this configuration file is to have a nice clean place to store our database connection configuration options.

Note: This assumes the database has already been created and is running on the default Postgres port, i.e. 5432.

3. Connect to the second database.

Using the ActiveRecord::Base.establish_connection method, we can connect to the second database. Let's create the connection using our YAML configuration.

Also, let the Audited::Audit model (i.e. the table which stores audit records) read and write its data from the second database.

# config/initializers/audited.rb

AUDIT_DB = YAML.load_file(
  File.join(Rails.root, "config", "audited.yml")
)[Rails.env.to_s]

# Configure Audited to read/write to second database
Audited::Audit.class_eval do
  establish_connection AUDIT_DB
end

4. Create “audits” table in second database

The audited gem uses an audits table to store model-related changes. You can generate the migration for the audits table using the command below,

rails generate audited:install

For more information refer to gem documentation.

# db/migrate/20180629113852_install_audited.rb

class InstallAudited < ActiveRecord::Migration[5.2]
  def self.up
    create_table :audits, :force => true do |t|
      t.column :auditable_id, :integer
      t.column :auditable_type, :string
      t.column :associated_id, :integer
      t.column :associated_type, :string
      t.column :user_id, :integer
      t.column :user_type, :string
      t.column :username, :string
      t.column :action, :string
      t.column :audited_changes, :text
      t.column :version, :integer, :default => 0
      t.column :comment, :string
      t.column :remote_address, :string
      t.column :request_uuid, :string
      t.column :created_at, :datetime
    end

    add_index :audits, [:auditable_type, :auditable_id], :name => 'auditable_index'
    add_index :audits, [:associated_type, :associated_id], :name => 'associated_index'
    add_index :audits, [:user_id, :user_type], :name => 'user_index'
    add_index :audits, :request_uuid
    add_index :audits, :created_at
  end

  def self.down
    drop_table :audits
  end
end

Wait, we don't want to simply run this migration, because it would update the schema of our Rails application's own database; instead we want it to execute against our second database.

Hence we need to update the generated migration so that it connects to the second database, as below (note the Audited::Audit.connection calls).

# db/migrate/20180629113852_install_audited.rb

class InstallAudited < ActiveRecord::Migration[5.2]
  def self.up
    Audited::Audit.connection.create_table :audits, :force => true do |t|
      t.column :auditable_id, :integer
      t.column :auditable_type, :string
      t.column :associated_id, :integer
      t.column :associated_type, :string
      t.column :user_id, :integer
      t.column :user_type, :string
      t.column :username, :string
      t.column :action, :string
      t.column :audited_changes, :text
      t.column :version, :integer, :default => 0
      t.column :comment, :string
      t.column :remote_address, :string
      t.column :request_uuid, :string
      t.column :created_at, :datetime
    end

    Audited::Audit.connection.add_index :audits,
      [:auditable_type, :auditable_id], :name => 'auditable_index'
    Audited::Audit.connection.add_index :audits,
      [:associated_type, :associated_id], :name => 'associated_index'
    Audited::Audit.connection.add_index :audits,
      [:user_id, :user_type], :name => 'user_index'
    Audited::Audit.connection.add_index :audits, :request_uuid
    Audited::Audit.connection.add_index :audits, :created_at
  end

  def self.down
    Audited::Audit.connection.drop_table :audits
  end
end

Then execute the migration to create our table in the second database.

rake db:migrate

That's it: now all your audit records will be stored in the second database.

Happy auditing !! 😃


Preventing Machine Downtime by Predicting it Beforehand

For the past few months, I have been observing the growth of the manufacturing sector in India, and how its contribution to India's gross domestic product (GDP) is expected to increase from the current level of ~16% to 25% by 2022.

One of the major challenges in achieving seamless manufacturing output is preventing unfavorable machine performance. On the assumption that machines degrade over time, manufacturing companies, prior to advanced technology intervention, focused on preventive and reactive maintenance of their machines' health. The use of deep learning technology, however, is leading to a new method of safeguarding machine health, coined in the industry as predictive maintenance.

Predictive maintenance technology can help the manufacturing sector find the optimal inflection point between costs and machine failures. But predictive maintenance is not a simple plug 'n' play solution, since machine learning requires layers of historical data collected over time.

Consider the life-cycle of a CNC machine. Today, most CNC manufacturers define the maintenance cycles based on the type of work the CNC machine does for their customer. It is based on their individual experience and judgement. However, if we were to get not just real-time data on the display but also store and analyze the historical data and use of the CNC machine, deep learning algorithms could find out the pattern of use and predict the maintenance and life of the CNC machine.

False positives will occur, i.e. situations where the algorithm predicts maintenance incorrectly based on the parameters it has to work with. With some human intervention, such patterns are corrected, learnt, and applied to the following data set to improve the results. So the algorithm can learn from its mistakes and give more relevant and accurate results over time.

Using cloud-based scalable technologies, we could reduce the infrastructure requirements at each premise and even customize the maintenance cycle for each CNC machine based on the customer's usage patterns. This would not only reduce the cost of maintenance but also improve efficiency: a win-win for both the CNC machine manufacturer and their customer!

Deep neural networks are used in this approach to learn from sequences of data. Unscheduled machine downtime can be damaging for any business, and preemptive identification of these issues can enhance production quality and significantly improve supply chain processes. Predictive maintenance strategies can thus enhance overall operational efficiency.

A predictive maintenance strategy is built on the fundamental methodology of the Internet of Things (IoT), and IoT is not functional without data and machine learning. This approach is not only about gathering data, but also about creating an ecosystem that predicts and makes decisions in response to the sequences of data collected. Predictive maintenance will become a larger opportunity as global economies progress, and IT solution providers need to look at this opportunity to further innovate and help manufacturing companies disrupt their industries.


10 Signs of a good Ruby on Rails Developer

Content posted here with the permission of the author Pramod Shinde, who is currently employed at Josh Software. Original post available here.

I have been working as a Ruby on Rails developer for the last five years with Josh Software, and I felt I should write down what I have learned about the best practices followed by RoR developers. How did I learn them? Of course, to learn something you need to make mistakes; that's how we learn, right?

Let's see what you should follow to be a 'Good' Ruby on Rails developer.

1. Your migrations are “thoughtful” …

Whenever you come across a database table schema design, do you think through all the aspects, like:

  • Where is the table being designed going to be used? How much might it grow in terms of data size? (Imagine the worst future of your design.)
  • Have I chosen correct data types, defaults, and constraints? Most of the time we don't really need integer columns; we can use smallint for smaller sets of integers, and similarly consider varchar(10) vs varchar(255) vs text.
  • Have I added indexes wherever necessary, thinking through what kind of queries this table is going to handle?

A special point… do you write multiple migrations for the same table? If yes, it's a bad habit.

Often we don't think through all the points mentioned above and end up creating multiple migrations for the same table, which makes the codebase look scary.

Instead, you should use up and down in the migration to fix or alter the table; a change in requirements is an exception to this.

2. You always follow the single responsibility principle

We all know the convention of "skinny controller and fat model". Some of us already follow it, but do we follow it wisely?

We are living in the Rails 5 era, so why overload models?

Why not "keep everything skinny and move the extra fat to concerns or service objects"? The classes in the codebase should be designed to handle a single responsibility.

I came across the following posts about how to organise controllers and using service objects in Rails.

3. You write test cases to test the “code”

I have seen many applications whose CI builds take ages to finish. What exactly are they testing?

Your test cases should test the "code", not the machine's performance. Better test suites:

  • Share objects between different examples.
  • Use method stubs and avoid repetitive calls to methods.
  • Don't test the same code twice: if you have a shared piece of code used in multiple places, don't write test cases for it in multiple places.
  • Don't create unnecessary test records; unknowingly, many developers end up creating them.

If you are using gems like faker, factory_bot_rails, and database_cleaner to create and clean test records, then creating unnecessary records can cost you time and speed.

A simple example:

create_list(:user, 10)

It is much better to reduce the list size if you are not doing anything special with 10 users.

create_list(:user, 2)

To learn how to write better RSpec, this guide is for you.

4. You keep production environment healthy

If you are an engineer, you build utilities to reduce the efforts of others; so use the utilities of other engineers to reduce your own.

A healthy Rails production environment always has:

  • Monit – is everything up and running? If not, get notified.
  • logrotate – rotates, compresses, and mails system logs.
  • crontabs with whenever – schedules work for you.
  • Database backup scripts running in a maintenance window.
  • Exception notifiers like Sentry or Rollbar or 'anything that suits you'.

5. You follow basic git etiquettes

If you are working in a team and using git, then you follow git etiquette, like:

  • Don't commit untracked files – we often have untracked files like 'something.swp', 'backup.sql', 'schema.rb or structure.sql backups', 'some.test.script'; you should not commit such files.
  • Branch naming – naming something is always difficult, but you have to do it; feature branches should have sensible names, so don't use names like 'something-wip', 'something-test'.
  • Delete feature branches after merge – no explanation required.
  • Commit messages – your commit messages must include the GitHub issue number (or a project management story number/link) and a brief description of the feature/task.

6. You don’t ignore README.md

Remember, you are not the only one who is going to work on a particular application for its lifetime. Someone will take over from you, and they should not have to waste time figuring out how to set things up.

Your application repository must have an updated README.md with detailed steps for setting up the application for the first time.

7. Secrets are “really” secrets for you

We often use credentials for database configs, secrets.yml, and third-party APIs like AWS, payment gateways, Sentry, etc.

You should not commit such credentials/secrets/environment variables to GitHub; instead, keep them secure with gems like dotenv-rails or figaro, or with simple dotfiles that are not committed to the repository.

A sample file of such credentials should be committed and updated regularly.

8. You do code reviews and discuss feature with team

While working in a team, you should get your feature reviewed by another teammate, or discuss it thoroughly with the team before starting. The advantage of code reviews and feature discussions is that you will come across many scenarios you had not thought of.

If you are the only one working on an application, then you must criticise your own code and cover all the scenarios in test cases.

9. You are up-to-date and keep updating

In the open source community we get frequent updates and releases of Ruby, Rails, and gems. You must keep yourself aware and informed by subscribing to the repositories or mailing lists, and update your application's libraries.

You should also stay alert to security fixes for the production operating system and database, so you can take the necessary action on time.

10. Needless to say…

You write clean and maintainable code, and your codebase is…

There are many more points that could be included in this list, but I feel these are the most important to start with. If you find that I have missed anything important, you can comment on this post.

Thanks for reading this far; I hope this helps you become a 'Good' developer.


Android Accessibility Service Customization For KeyPress Event

Content posted here with the permission of the author Shekhar Sahu, who is currently employed at Josh Software. Original post available here.

Accessibility services are a feature of the Android framework designed to provide alternative navigation feedback to the user on behalf of applications installed on Android devices.

An accessibility service runs in the background and receives callbacks from the system when an accessibility event is fired, provided accessibility is enabled on the device.

Examples of common accessibility services

  • Voice Assistance.
  • Switch-Access: Allows Android users with mobility limitations to interact with devices using one or more switches.
  • Talkback: A screen reader commonly used by visually impaired or blind users.

Sometimes there are unique requirements. For instance, let’s say that on pressing “Caps Lock”, instead of relying on TalkBack (which speaks out “Caps Lock On” and “Caps Lock Off”), we want to play an audio file. This is more relevant when the user does not know English, since the default TalkBack output is in English and is not going to work for them. The solution is to play an audio file in the localized language.

Creating an accessibility service

We can build our own accessibility service as per application requirements to make it more accessible.

Let’s take the example of a Typing Tutor app, where we may need to override hardware keyboard events using an accessibility service.

In this example, we are going to override the Windows key press event to open our app’s home menu instead of the device start menu (by default, Android opens Google Assistant).

Steps

  • To register your accessibility service, create a service class that receives accessibility events:
public class MyAccessibilityService extends AccessibilityService {
...
    @Override
    public void onAccessibilityEvent(AccessibilityEvent event) {
         // your code...
    }

    @Override
    public void onInterrupt() {
    }

...
}
  • Like any other service, you also have to register it in the manifest file. Remember to declare an intent filter for the android.accessibilityservice.AccessibilityService action, so that the service is called when the system fires an AccessibilityEvent.
<service android:name=".MyAccessibilityService"
    android:permission="android.permission.BIND_ACCESSIBILITY_SERVICE">
    <intent-filter>
        <action android:name="android.accessibilityservice.AccessibilityService" />
    </intent-filter>
    . . .
</service>

Configuring the Service

An accessibility service can be configured to receive specific types of accessibility events; in our case, we are interested in key events.

We can also add filters, for example to listen only to a specific app (by package name), for a specific time duration, or only with a particular activity.

There are two ways to configure the service’s event settings:

  1. Via meta-data entry in the manifest file.
  2. Programmatically, by calling setServiceInfo(AccessibilityServiceInfo)

Example for XML configuration:

<accessibility-service xmlns:android="http://schemas.android.com/apk/res/android"
    android:accessibilityEventTypes="typeAllMask"
    android:accessibilityFeedbackType="feedbackSpoken"
    android:notificationTimeout="0"
    android:canRequestFilterKeyEvents="true"
    android:accessibilityFlags="flagRequestFilterKeyEvents"
    android:description="@string/message_accessibility_service_details"
    android:packageNames="com.test.accessibility" />

Here I used android:canRequestFilterKeyEvents="true" and android:accessibilityFlags="flagRequestFilterKeyEvents" to receive key events from the system. We also have to override the onKeyEvent() method inside our service class:

@Override
protected boolean onKeyEvent(KeyEvent event) {
    return super.onKeyEvent(event);
}

That’s it. We are done with the service configuration. Don’t forget to add the below permission in your manifest file.

<uses-permission android:name="android.permission.BIND_ACCESSIBILITY_SERVICE"/>

Now, to get this event to our Activity class, we are going to use a local broadcast manager. It’s an Android component that allows you to send or receive system or application events within your app.

@Override
protected boolean onKeyEvent(KeyEvent event) {
    int keyCode = event.getKeyCode();

    // Handle the key event for the Windows (Meta) key
    if (keyCode == KeyEvent.KEYCODE_META_LEFT || keyCode == KeyEvent.KEYCODE_META_RIGHT) {
        // Send a broadcast intent to the main activity.
        // On the main activity you can take any desired action.
    }
    return super.onKeyEvent(event);
}

Then register a receiver for that local broadcast in your activity class. This way, the activity gets notified whenever the event occurs, and you can write your own action for it.

You are done!!


EXPLORING REDIS AND ITS DATATYPES

Content posted here with the permission of the author Bandana Pandey, who is currently employed at Josh Software. Original post available here.

Today, performance comes first when we developers build web services. One issue is that when a web service interacts with the database, fetching the result can take time depending on the number of records.

Prerequisites

For this blog, I am assuming that you have knowledge about Rails and basic idea about Redis.

Getting Started

Let’s imagine we are building the back-end for an online movie app. Customers will use this app to view all the movies and their details, resulting in a huge load on the database. So what if we could reduce the load on the database by caching the movie data? But what should we use for caching?

There comes REDIS to our rescue.

Redis

Redis is a key-value store that we can use for caching to speed things up and improve our performance.

But Redis is not just a plain key-value store; it is a data structures server, meaning it is not limited to strings as values but also supports more complex data structures, such as Hashes, Lists, Sets, and Sorted Sets. For detailed information, refer to this.

Strings

Strings are the most basic data type that we use for caching in Redis. They are binary safe and easy to use, so we mostly go for them.

But in our scenario the String data type was not enough, as I had to store the whole list of movies and their respective details in Redis. Strings work, but they store the whole list as a single string value. So, before serving the data, I had to parse it into JSON format so that the views could present it to the user. But if the data is huge, parsing strings into JSON (or any other required format) is time consuming. So strings are not what we can use in our case.
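A plain-Ruby sketch of that overhead (the movie list here is hypothetical): with the String data type, every read pays a JSON parse before the data is usable.

```ruby
require 'json'

movies = [{ "name" => "Dunkirk", "year" => 2017 }]

# With the String data type, SET writes the whole list as one JSON string...
stored = movies.to_json

# ...so after every GET the text must be re-parsed before the views can use it:
reloaded = JSON.parse(stored)

reloaded == movies # => true, but only after paying the parse cost on each read
```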

By reading this memory optimization blog and the documentation, I found that Redis supports another datatype that can be helpful here: Hashes.

Hashes

Hashes are a perfect data structure to represent objects. They are maps between string fields and string values, stored in attribute: value format, much like how table data is mapped to objects by ActiveRecord in Rails. Small hashes are encoded in a very small amount of space, so we should always try to represent our data using hashes.

In this way, using hashes, our data parsing issue is solved. Now we fetch the data as-is from Redis, and no conversion of data format is involved.

Memory consumption and read/write performance can also be improved by the optimized storage of hashes over the String data type.

Now let’s check the above theory using Benchmark in Rails. Here we are going to use redis-namespace and a Redis service, which is explained later in this section.

Setting Data in Redis:

Benchmark.bm do |x|
  #here data is in the json format

  #Setting data using hash(value will be stored as hash)
  x.report { RedisService.new(klass: Event).set_list(key: CMS_MOVIE_LIST, data: data) }

  #Setting data using string(value will be stored as string)
  x.report { RedisService.new(klass: Event).set(key: MOVIE_LIST, data: data) }
end

#user     system   total    real
#0.030000 0.010000 0.040000 ( 0.011480) #Hashes
#0.150000 0.000000 0.150000 ( 0.447619) #Strings

Fetching Data from Redis:

Benchmark.bm do |x|

  #Fetching data using hash(value will be stored as hash)
  x.report { RedisService.new(klass: Event).get_list(key: CMS_MOVIE_LIST) }

  #Fetching data using string(value will be stored as string)
  x.report { RedisService.new(klass: Event).get(key: MOVIE_LIST) }
end

#user     system   total    real
#0.010000 0.000000 0.010000 ( 0.008200) #Hashes
#0.090000 0.000000 0.090000 ( 0.032398) #Strings

This demonstrates how our performance can be improved by using Hashes over Strings in Redis.

In order to use the same approach in our Rails application, we are going to use redis-namespace. For detailed information about this, refer to Redis::Namespace.

Initializing Redis in Rails

We instruct our Rails app to use Redis as a cache store and set the Redis host in an ENV variable like this:


REDIS_HOST: 'redis://localhost:6379'
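With that host set, the Rails cache store can be pointed at it. A sketch for config/environments/production.rb (the exact store name depends on your Rails version; :redis_cache_store is built in from Rails 5.2):

```ruby
# config/environments/production.rb (sketch)
config.cache_store = :redis_cache_store, { url: ENV['REDIS_HOST'] }
```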

Now, initialize a wrapper around Redis using redis-namespace, or write a service redis_service.rb that uses redis-namespace, so that we can interact with Redis.


class RedisService
  def initialize(klass:)
    redis = Redis.new(url: ENV['REDIS_HOST'], timeout: 1)
    @namespaced_redis = Redis::Namespace.new(klass, redis: redis)
  end

  def set(key:, data:, expire: nil)
    # Set the value of a key as a JSON string
    @namespaced_redis.set(key, data.to_json)

    # Expire the Redis key (defaults to 1 week)
    @namespaced_redis.expire(key, (expire || 1.week).to_i)
  end

  def set_list(key:, data:, expire: nil)
    # Set a list of data on a Redis key as a marshalled byte stream
    @namespaced_redis.set(key, Marshal.dump(data))
    @namespaced_redis.expire(key, expire.to_i) if expire
  end

  def get(key:)
    # Get the value of a key, parsed from JSON (nil if the key is missing)
    value = @namespaced_redis.get(key)
    JSON.parse(value) if value
  end

  def get_list(key:)
    # Get a list of data back from its marshalled form (nil if the key is missing)
    value = @namespaced_redis.get(key)
    Marshal.load(value) if value
  end

  def del(key:)
    # Delete a key (or list of keys) from Redis
    @namespaced_redis.del(key)
  end

  def keys(pattern: nil)
    @namespaced_redis.keys(pattern)
  end
end

Marshal

In the above code, we are using Marshal. It is a Ruby library that converts a collection of Ruby objects into a byte stream, and it is the fastest option available in Ruby for data serialization. For detailed information, refer to this.
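A quick plain-Ruby illustration of the round trip (the movie hash here is hypothetical):

```ruby
movie = { "name" => "Inception", "year" => 2010 }

# Marshal serializes the Ruby object graph into a compact byte stream...
bytes = Marshal.dump(movie)

# ...and restores it as the same Ruby objects, with no text parsing step:
restored = Marshal.load(bytes)

restored == movie # => true
```

Note that Marshal byte streams are Ruby-specific and tied to the Marshal format version, so they only suit a cache that is read back by the same application.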

Now we have a generic RedisService that we can use to perform different operations, like adding, deleting, and fetching data from Redis, in our Rails application.

Advantages of writing this service class:

  • The code stays DRY.
  • All the Redis commands live in one place, and we can use them whenever and wherever we want in our Rails app.

Now, we are going to use this service to fetch movies on the basis of city.

Managing Redis Cache in Rails

The whole idea here is that when a customer wants the list of movies in a particular city, we first fetch the movies by querying the database directly. We then cache the response using the redis-namespace wrapper, so that on subsequent queries the data is fetched from Redis rather than the database, improving our application's performance.


class MoviesController < ApplicationController
  # Here we are going to use the RedisService to perform operations on Redis
  def index
    # Check if the list of movies is already in Redis
    movies = RedisService.new(klass: Movie).get_list(key: "movies:#{params[:city]}")

    # If the movies are not in Redis
    if movies.blank?
      # Load the movies from the database and serialize them
      movies = serialize_resource(load_movies, V1::MoviesSerializer)

      # Cache the serialized response in Redis, so that it can be used again
      RedisService.new(klass: Movie).set_list(key: "movies:#{params[:city]}", data: movies, expire: 1.day)
    end

    # Return the response
    mobile_success_response(data: movies)
  end
end

The above code works, but there is one loophole: if any movie is added or updated in the database, the change will not be shown to the customer while the data is fetched from Redis.

So what do we have to do to solve this issue?

We’ll write a callback such that whenever any movie is added or updated, we delete the keys corresponding to the movie lists. So while a movie is being updated, if a user wants the data, it will be fetched directly from the database and then stored in the Redis cache; on subsequent calls it will be fetched from Redis again. Below is the callback to achieve this:


class Movie < ApplicationRecord
  after_commit :update_in_redis, on: [:create, :update]
  after_commit :delete_from_redis, on: [:destroy]

  def update_in_redis
    redis = RedisService.new(klass: self.class)

    # Delete all the keys matching the movies:* pattern
    cached_keys = redis.keys(pattern: "movies:*")
    redis.del(key: cached_keys) if cached_keys.present?
  end

  def delete_from_redis
    redis = RedisService.new(klass: self.class)

    # Delete a movie from Redis if it is deleted from the database
    redis.del(key: self.id)
  end
end

Hope this blog will be useful. For more information like this, stay tuned 🙂
