Data Race Detector in Golang

Content written by author Rahul Shewale, who is currently employed at Josh Software.

As we know, Golang is a powerful programming language with built-in concurrency. We can execute a function concurrently with other functions by creating a goroutine using the go keyword. When multiple goroutines share data or variables, we can face hard-to-predict race conditions.

In this blog, I am covering the following points:

  • What is a data race condition and how can it occur?
  • How can we detect race conditions?
  • Typical data race examples and how can we solve them?

What is the data race condition?
A data race occurs when two goroutines access the same variable concurrently, and at least one of the accesses is performing a write operation.

Following is a basic example of a race condition:

package main
import (
    "fmt"
    "sync"
)
func main() {
    var wg sync.WaitGroup
    wg.Add(5)
    for i := 0; i < 5; i++ {
        go func() {
            fmt.Println(i)
            wg.Done()
        }()
    }
    wg.Wait()
}


In the above example, you must have noticed that we invoke 5 goroutines which access the i variable, but here we face a data race condition: all goroutines read the i variable concurrently while, at the same time, the for loop writes a new value into it.

Program output (may vary between runs):

5 5 5 5 5

How can we detect race conditions?

Now that we know what a race condition is, let’s dive into how to detect these conditions in your Golang project. Golang provides a powerful built-in race detector tool for checking possible race conditions in a program.

To use the built-in race detector you simply need to add the -race flag to your go run command:
$ go run -race main.go

This command finds data race conditions in the program, if any, and prints an error stack showing where each race condition occurs.

Sample Output
$ go run -race  race_loop_counter.go

==================
WARNING: DATA RACE
Read at 0x00c0000a8020 by goroutine 7:
  main.main.func1()
      /home/-/goworkspace/src/example/race_loop_counter.go:13 +0x3c
Previous write at 0x00c0000a8020 by main goroutine:
  main.main()
      /home/-/goworkspace/src/example/race_loop_counter.go:11 +0xfc
Goroutine 7 (running) created at:
  main.main()
      /home/-/goworkspace/src/example/race_loop_counter.go:12 +0xd8
==================
==================
WARNING: DATA RACE
Read at 0x00c0000a8020 by goroutine 6:
  main.main.func1()

Goroutine 6 (running) created at:
  main.main()
      /home/-/goworkspace/src/example/race_loop_counter.go:12 +0xd8
==================
2 2 4 5 5 Found 2 data race(s)
exit status 66

How can we solve it?

Once you finally find a race condition, you will be glad to know that Go offers multiple options to fix it.

Rob Pike has very aptly stated the following phrase; the solution to our problem lies in this simple statement:

“Do not communicate by sharing memory; instead, share memory by communicating.” -Rob Pike

  1. Use a channel for data sharing

Following is a simple program where the goroutine accesses a variable declared in main, increments the same and then closes the wait channel.

Meanwhile, the main thread also attempts to increment the same variable, waits for the channel to close and then prints the variable value.

However, here a race condition is generated between main and the goroutine, as they both try to increment the same variable.

Problem example:

package main
import "fmt"

func main() {
    wait := make(chan int)
    n := 0
    go func() {
        n++
        close(wait)
    }()
    n++
    <-wait
    fmt.Println(n)
}

To solve the above problem we will use a channel.

Solution:

package main
import "fmt"
func main() {
    ch := make(chan int)
    go func() {
        n := 0
        n++
        ch <- n
    }()
    n := <-ch
    n++
    fmt.Println(n)
}

Here the goroutine increments the variable and passes its value through the channel to the main function; when the channel receives the data, main performs the next operation.

  2. Use sync.Mutex

Following is a program to get the total number of even and odd numbers from an array of integers, numberCollection, and store them into a struct.

Problem example:
package main
import (
    "fmt"
    "sync"
)

type Counter struct {
    EvenCount int
    OddCount  int
}

var c Counter
func main() {
    numberCollection := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11}
    fmt.Println("Start Goroutine")
    var wg sync.WaitGroup
    wg.Add(11)
    for _, number := range numberCollection {
        go setCounter(&wg, number)
    }
    wg.Wait()
    fmt.Printf("Total Even Number is %v and Odd Number is %v\n", c.EvenCount, c.OddCount)
}
func setCounter(wg *sync.WaitGroup, number int) {
    defer wg.Done()
    if number%2 == 0 {
        c.EvenCount++
        return
    }
    c.OddCount++
}

Output:

Total Even Number is 5 and Odd Number is 6

If the program is checked with the race detector flag, we notice that the lines c.EvenCount++ and c.OddCount++ generate a race condition, because all goroutines write to the struct object concurrently.

Solution:

To solve this problem, we can use sync.Mutex to lock access to the struct object as in the following example:

package main

import (
    "fmt"
    "sync"
)

type Counter struct {
    EvenCount int
    OddCount  int
    mux       sync.Mutex
}

var c Counter

func main() {
    numberCollection := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11}
    fmt.Println("Start Goroutine")
    var wg sync.WaitGroup
    wg.Add(11)
    for _, number := range numberCollection {
        go setCounter(&wg, number)
    }
    wg.Wait()
    fmt.Printf("Total Even Number is %v and Odd Number is %v\n", c.EvenCount, c.OddCount)
}
func setCounter(wg *sync.WaitGroup, number int) {
    defer wg.Done()
    c.mux.Lock()
    defer c.mux.Unlock()
    if number%2 == 0 {
        c.EvenCount++
        return
    }
    c.OddCount++
}
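For simple numeric counters like these, the standard sync/atomic package is another option, as it avoids locking entirely. The sketch below is my own illustration of that alternative (the function and variable names are mine, not from the original program), assuming the same even/odd counting task:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// countEvenOdd counts even and odd numbers concurrently,
// using atomic increments instead of a mutex.
func countEvenOdd(numbers []int) (even, odd int64) {
	var wg sync.WaitGroup
	for _, n := range numbers {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			if n%2 == 0 {
				atomic.AddInt64(&even, 1) // safe concurrent write
			} else {
				atomic.AddInt64(&odd, 1)
			}
		}(n)
	}
	wg.Wait()
	return even, odd
}

func main() {
	even, odd := countEvenOdd([]int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11})
	fmt.Printf("Total Even Number is %v and Odd Number is %v\n", even, odd)
}
```

Atomics work well when each field is updated independently; once several fields must change together as one unit, a mutex remains the safer choice.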

  3. Make a copy of the variable if possible

Problem example:
package main
import (
    "fmt"
    "sync"
)
func main() {
    var wg sync.WaitGroup
    wg.Add(5)
    for i := 0; i < 5; i++ {
        go func() {
            fmt.Printf("%v ", i)
            wg.Done()
        }()
    }
    wg.Wait()
}

In the above problem, we can see five goroutines invoked in a for loop, accessing the value of i from inside each goroutine. Every goroutine is called asynchronously and waits until the for loop is completed or a blocking operation occurs.

After the for loop completes, all goroutines start execution and try to access the i variable. This results in a race condition.

For this problem, we can simply pass a copy of the variable as an argument to the goroutine, so every goroutine gets its own copy. As shown in the example below, we use the argument j instead of accessing i from within the goroutine.

Solution:

package main
import (
    "fmt"
    "sync"
)
func main() {
    var wg sync.WaitGroup
    wg.Add(5)
    for i := 0; i < 5; i++ {
        go func(j int) {
            fmt.Printf("%v ", j)
            wg.Done()
        }(i)
    }
    wg.Wait()
}

Conclusion:
The preferred way to handle concurrent data access in Go is to use channels, and to use the -race flag for generating data race reports, which helps to avoid race conditions.


Async Action handling in Javascript

Content posted here with the permission of the author Pragati Garud, who is currently employed at Josh Software. Original post available here

As you know, JavaScript is a synchronous language. To deal with async operations, JavaScript provides three ways:
1] Promises
2] Async & await
3] Generator functions

Promises: Whenever we ask for some async operation, like fetching data from a database, instead of waiting for it to complete, JavaScript just returns you a promise; on that promise object you can add your success and error handlers.

let promise = new Promise((resolve, reject) => {
    // do a thing, if everything is fine then resolve
    // otherwise reject
  })

  //Success handler executed on promise resolve
  promise.then((data) => console.log(data));

  //Error handler executed on promise reject
  promise.catch((error) => console.log(error));

If the async operation that you requested completes successfully, then your success handler will get executed; otherwise the error handler will get executed.

Example: Suppose you want to fetch a list of users from a database. (Note: I am assuming that you are aware of the `fetch` function and its error handling.)

const fetchData = () => {
  fetch('https://jsonplaceholder.typicode.com/users')
  .then((response) => {
    console.log(response);
  })
  .catch((error) => {
    console.log(error);
  })
}

fetchData();

In the above code we first requested the users list from the server using fetch(). It is an async operation, so we get a promise as a result, and I added success and error handlers on that promise. The response returned by fetch needs to be converted into JSON format.

const fetchData = () => {
  fetch('https://jsonplaceholder.typicode.com/users')
  .then((response) => {
    return response.json();
  })
  .then((data) => {
    console.log(data);
  })
  .catch((error) => {
    console.log(error);
  })
}
fetchData();

Here, we called json() on the response to convert it into JSON. This is also an async operation that results in a promise, so from the success handler of one promise I am returning another promise; this is called promise chaining.


So in the case of promise chaining, the code can look a little confusing; hence ES2017 introduced async/await.

Async/Await: async/await is just syntactic sugar over promises. It provides a way to write your async operations so that they look synchronous. An async function always returns a promise; even if you return some non-promise value from your async function, JavaScript just wraps it inside a promise and returns it.

async function f() {
  // await works only inside async functions
  let value = await promise;
  return 1;
}

async function f() {
  // await works only inside async functions
  let value = await promise;
  return Promise.resolve(1);
}


await instructs JavaScript to wait until the provided operation finishes. The same example that we saw using promises looks like this using async/await:

const fetchData = async () => {
  try {
    let apiEndpoint = 'https://jsonplaceholder.typicode.com/users'
    let response = await fetch(apiEndpoint);
    let data = await response.json();
    console.log(data);
  } catch (error) {
    console.log(error);
  }
}
fetchData();

Here, I defined an async function and used await for my async operations, so JavaScript will wait until each one finishes. If an async operation fails, then await will throw an exception; that’s why try/catch is used.

Generator Function: It is a special kind of function which can stop its execution at a point and resume from the same point later. Basically, it’s a function which can pause/resume on our demand.

function* generator() {
   // do something
   yield expression;
}

The * after the function keyword denotes that it’s a generator function. Inside a generator function, yield is used; this is the point where execution of the generator function stops, and it yields an object { value: result, done: true/false } to the caller. value contains the result of the expression given to yield, and done indicates whether the generator has finished (true/false).

Calling a generator function is different from calling a normal JavaScript function. A normal function just starts its execution once we invoke it; it follows the run-to-completion model. But when we call a generator function, it does not start executing the function body; it just returns an iterator object on which you can call the next() and throw() methods. When you call the next() method on the iterator object for the first time, it starts the execution of the function body.

Basic Example for generator:

function* generator() {
   yield 5+3;
   yield 7*2;
}

let it = generator(); // does not start execution of function body
console.log(it.next()); // { value: 8, done: false }
console.log(it.next()); // { value: 14, done: false }
console.log(it.next()); // { value: undefined, done: true }

We can use a generator function for async operation handling: we just pause at our async operation and resume the generator when that async operation is done.

Example: fetching list of users from server:

function* fetchData() {
  try {
    let apiEndpoint = 'https://jsonplaceholder.typicode.com/users'
    let response = yield fetch(apiEndpoint);
    let data = yield response.json();
    console.log(data);
  } catch (error) {
    console.log(error);
  }
}

let it = fetchData();

// { value: promise returned by fetch(), done: false }

let promise1 = it.next().value

promise1.then((response) => {
  // Resume the generator by calling next() as your async operation is fulfilled now.
  // We can pass values to the generator from here, e.g. response is accessible here
  // but we need it in the generator; the passed value gets assigned to the
  // response variable of the generator.
  // { value: promise returned by json(), done: false }

  let promise2 = it.next(response).value

  promise2.then((data) => {
    it.next(data);
  });
});

Generator functions are fun and really a good concept. As a generator function gives the caller the power to control the execution of the function, many libraries use this concept to handle async actions.
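Such a library-style runner can be sketched in a few lines. This is an illustrative helper under my own naming (run is not the code of any particular library): it repeatedly calls next() on the iterator, waits for each yielded promise, and feeds the resolved value (or the rejection error) back into the generator until it is done.

```javascript
// run() drives a generator that yields promises, resuming it with each
// resolved value, and returns a promise for the generator's return value.
function run(genFn) {
  const it = genFn();
  return new Promise((resolve, reject) => {
    function step(method, arg) {
      let result;
      try {
        result = method.call(it, arg);
      } catch (err) {
        return reject(err); // generator threw synchronously
      }
      if (result.done) return resolve(result.value);
      // Promise.resolve lets plain (non-promise) values be yielded too
      Promise.resolve(result.value).then(
        (value) => step(it.next, value),
        (err) => step(it.throw, err) // route rejections into the generator's try/catch
      );
    }
    step(it.next, undefined);
  });
}

// Usage: a generator that yields promises reads like synchronous code
run(function* () {
  const a = yield Promise.resolve(2);
  const b = yield a * 3; // plain values work as well
  return a + b;
}).then((result) => console.log(result)); // logs 8
```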

Conclusion:

When to use which one?

  1. Async/Await – when you want synchronous behaviour with async operations.
  2. Promises – when you want async nature. i.e. execution should continue while you wait for results.
  3. Generators – when you don’t want function to run to completion.


Using Rack::Proxy to serve multiple React Apps on the same domain.

Content posted here with the permission of the author Rahul Ojha, who is currently employed at Josh Software. Original post available here

Background

We had 4 React apps running on the local server. All apps were running on different ports because we cannot start multiple React apps on the same port. We had to build a single sign-on server. Out of the 4 React apps, one is responsible for managing authentication and, based on roles or department, redirecting to the other apps. The Auth app gets a token from the Auth API and stores it in local storage, and the other apps use this token for authentication while calling APIs.

Problem Faced:

Now, while calling an API from any React app, we need to send the token with it. Since the token is stored in local storage by the Auth app, which is using a different port, the local storage is not accessible to the other React apps.

You may think, “Why didn’t you go with cookies?” Yes, cookies are shareable among HTTP ports, but their size limit is very small: 4 KB. So if you have to store permissions or anything else shareable that exceeds 4 KB, cookies will not be useful.

Solution:

We will use a single domain and access the different React apps using a namespace in the URL.

Example:

localhost:9292/app1 => app1 => localhost:3001
localhost:9292/app2 => app2 => localhost:3002
localhost:9292/app3 => app3 => localhost:3003
localhost:9292/app4 => app4 => localhost:3004

Here we need a reverse proxy server to do this. We will use Ruby Rack to create a proxy server. Why not use Nginx here? You can, but there is a problem: React does not add the namespace to the assets path, so assets will not load and you will always see a blank white page, and in development there is no way to add the namespace to the asset path without ejecting CRA. If you have specified a value for the homepage key in the package.json file, it will be automatically prefixed to the assets path while creating a build in production mode.

So if you are a Ruby developer, you are going to enjoy the rest of the blog. If you are not, there is no issue, because Ruby is nice and easily understandable. You just need to follow the steps.

Step 1:

Install Ruby.
You can use the link below to install RVM and Ruby. RVM is the Ruby version manager: https://rvm.io/rvm/install

Step 2:

gem install rack

Step 3:

gem install rack-proxy

Step 4:

Create a file ./proxy_server.rb and add the below code.

require 'rack-proxy'
class ProxyServer < Rack::Proxy
 def rewrite_env(env)
  request = Rack::Request.new(env)  
  if request.path.match('/app1')
    env["HTTP_HOST"] = "localhost:3001"
    @port = 3001
  elsif request.path.match('/app2')
    env["HTTP_HOST"] = "localhost:3002"
    @port = 3002
  elsif request.path.match('/app3')
    env["HTTP_HOST"] = "localhost:3003"
    @port = 3003
  elsif request.path.match('/app4')
    env["HTTP_HOST"] = "localhost:3004"
    @port = 3004
  # The below line is important to load assets.
  # So it detects app name from the first request to the app
  # which is stored in an instance variable and redirects
  # to that app for assets.
  elsif request.path.match('/static') || request.path.match('/assets')
    env['HTTP_HOST'] = "localhost:#{@port}"
  else
    env["HTTP_HOST"] = "localhost:3001"
    @port = 3001
  end
  env
 end
end

Step 5:

Add a file ./config.ru and add below code

require_relative './proxy_server'
run ProxyServer.new

Step 6:

Run the below command on the command line where you have stored the above files. This command starts your Rack server.

rackup

Now your Rack server has started and is ready to proxy pass. You just need to go to localhost:9292, and your default app, which is app1 as per the proxy_server file, will be loaded in the browser. Port 9292 is the default port for the Rack server.

Conclusion:

By using a Rack server you can run multiple React apps on the same domain in the development environment, with local storage and cookies shareable among all applications. This solution is not restricted to React apps; it can be used with any other client-side framework too.


How to make image loading lightning fast on web and mobile App

Content posted here with the permission of the author Shivam Kumar Singh, who is currently employed at Josh Software. Original post available here

Since the internet was born, we have come a long way in improving web page loading time. The faster it gets, the better it is. The most time-consuming part of loading a web or mobile app is loading images; therefore, faster image loading is a major challenge today.

One of the applications we worked on recently has lots of images to render. As the number of images increased, the loading time of the application increased as well.

So, to tackle this problem, we decided to compress images. We were using ImageMagick with CarrierWave for all image processing and remote file uploads, so we decided to compress the images using the ImageMagick libraries to reduce the file size. But it compresses the image at the cost of image quality, and we cannot compromise on image quality, so we were looking for alternatives.

We came across the WebP format developed by Google, which supports both lossy and lossless compression. WebP lossless images are 26% smaller in size compared to PNGs.

Can you figure out the size of the below images by looking at them?

https://www.dropbox.com/s/aes6q2vldawol2u/seeds-4306035.webp?dl=0 (Image 2)

Image 1 is of type jpg and Image 2 is of type webp.

Quality wise they look very similar, right? But size wise, Image 1 is 4 times larger than Image 2.

Image 1 size is 2.8 MB while Image 2 size is 551 KB.

How to configure it for Rails Application

Considering we already have ImageMagick installed on our system.

We have to install libwebp, a library used to add WebP encoding and decoding to your programs. Before that:

Download dependency for libwebp.

sudo apt-get install libjpeg-dev libpng-dev libtiff-dev libgif-dev

Download latest libwebp and extract it.

wget http://downloads.webmproject.org/releases/webp/libwebp-0.4.3.tar.gz
tar xf libwebp-0.4.3.tar.gz

Run the following commands to install libwebp.

cd libwebp-0.4.3/
 ./configure
make
sudo make install

Run following command on the console to confirm successful installation.

2.0.0p0 :002 > WebP.decoder_version  => "0.4.3" 
2.0.0p0 :003 > WebP.encoder_version  => "0.4.3"

Now we need to add the carrierwave-webp gem, which is a Ruby wrapper for libwebp.

gem 'carrierwave-webp'

Now that we have all of our prerequisites out of the way, we are now ready to convert our image to WebP.

Customize Carrierwave uploader to generate images in webp format.

class MediaUploader < CarrierWave::Uploader::Base
  include CarrierWave::MiniMagick
  include CarrierWave::WebP::Converter

  version :jpeg do
    process resize_to_fit: [720, 450]
    process convert: :jpeg
  end

  version :webp do
    process :convert_to_webp
  end
end

This way we have generated two different versions of the images: one in JPEG at the given resolution (for fallback) and the other in WebP format.

Format Specific remote image URL

def return_url(poster, format)
  if poster.image.send(format).exists?
    poster.image.send(format).url
  else
    poster.image.jpeg.url # JPEG as fallback option
  end
end

There are a few limitations of WebP as well:

  1. Not supported in iOS and Safari client.
  2. More effective for substantially large files

Conclusion:

With the WebP format, we decreased our image sizes by 4 times, thereby increasing page load speed. If you have any questions, let us know in the comments.


How is data secure over https?


STEP1: Client HELLO

STEP2: Server HELLO

STEP3: Server authentication

STEP4: Secret key exchange

STEP5: Client HELLO finished

STEP6: Server HELLO finished

STEP7: Data exchange

Thanks for reading and hope you learned something new. Feel free to comment if you have any suggestions or corrections.


Rails Sidekiq configuration for microservices on a reverse proxy

Content posted here with the permission of the author Rahul Ojha, who is currently employed at Josh Software. Original post available here.

This blog describes how to configure multiple Sidekiq web UIs for API-only apps behind a reverse proxy server.

Background

We have 4 API services running on ECS. We used a single domain for all the services and segregated them with a namespace in the routing. For namespace routing and pointing to Docker containers, we used Traefik. We can do the same thing using Nginx or some other reverse proxy too.

Example:

www.domain.com/rails-api-1/REST_ROUTES
www.domain.com/rails-api-2/REST_ROUTES
www.domain.com/rails-api-3/REST_ROUTES
www.domain.com/rails-api-4/REST_ROUTES

Problem Faced

Now, the problem was when we tried to access the Sidekiq web UIs with these namespace routes: the Rails assets do not load, because the namespace path ‘rails-api-1’ is not considered part of the root path, and Sidekiq tries to find assets under `www.domain.com/sidekiq/`, which doesn’t actually exist.

Solution

Initially, we mounted sidekiq path in config/routes.rb like this.

mount Sidekiq::Web => '/sidekiq'

Later, we changed it to

mount Sidekiq::Web => '/rails-api-1-sidekiq'

Configuration for Traefik:

You need to add a new rule in docker labels

Key => traefik.sidekiq.frontend.rule 
Value => Host:www.domain.com;PathPrefix:/rails-ap1-1-sidekiq

Configuration for Nginx:

If you are using Nginx as a reverse proxy, then you need to add this configuration for each service:

location /rails-ap1-1-sidekiq/ {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://your_domain/rails-ap1-1-sidekiq/;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
  }
  location /rails-ap1-2-sidekiq/ {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://your_domain/rails-ap1-2-sidekiq/;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
  }

Now you can access your sidekiq web at

https://www.domain.com/rails-api-1-sidekiq/retries

After that, Sidekiq will look for assets under the path /rails-api-1-sidekiq/. As per the Docker label, rails-api-1-sidekiq points to the rails-api-1 container, because it is configured for the rails-api-1 task, and within that container Sidekiq will look for the assets under the /rails-api-1-sidekiq/ path.

Similarly, you can set up multiple Sidekiq web UIs on your reverse proxy.

Thank you!


Building Decentralised Link Shortner on Ethereum blockchain using Truffle

Original post available here.

Blockchain is an emerging technology and needs no introduction. If by any chance you are left behind and don’t know about blockchain, then I recommend reading about it first before reading this article. You can read about blockchain here or here, or search the internet and you will find plenty of articles to read.

What is Ethereum ?

Launched in 2015, Ethereum is the world’s leading programmable blockchain. It is a global, open-source platform for decentralized applications. These decentralized applications (or “dapps”) gain the benefits of cryptocurrency and blockchain technology. Read more about ethereum here.

What is Truffle ?

Truffle is a framework for blockchain development; it streamlines smart contract creation, compilation, testing, and deployment onto Ethereum. Read more about Truffle here.

What is Decentralised App ?

A decentralised application does not require a centralised server to work (hence no maintenance cost). It interacts with smart contracts deployed on a blockchain network.

What is Smart Contract ?

Smart contracts are programs which govern the behaviour of accounts within the Ethereum state. We will write the smart contract in the Solidity language; Solidity is an object-oriented, high-level language for implementing smart contracts. Read more about Solidity here.

Getting Started:

A) Install npm, node & Truffle

Follow https://docs.npmjs.com/downloading-and-installing-node-js-and-npm for installing npm & node.

Then install truffle

npm install -g truffle

Check if Truffle installed successfully or not:

$ truffle version
Truffle v5.0.21 (core: 5.0.21)
Solidity v0.5.0 (solc-js)
Node v11.0.0
Web3.js v1.0.0-beta.37

B) Create Project

Create a new folder for the project & initialise it with Truffle. We will use the React Truffle box.

$ mkdir link_shortner
$ cd link_shortner/
$ truffle unbox react
✔ Preparing to download
✔ Downloading
✔ Cleaning up temporary files
✔ Setting up box
Unbox successful. Sweet!
Commands:
Compile:        truffle compile
  Migrate:        truffle migrate
  Test contracts: truffle test

If you are new to Truffle then read about created directory from https://www.trufflesuite.com/docs/truffle/getting-started/creating-a-project

C) Install Ganache for blockchain setup on local machine https://www.trufflesuite.com/docs/ganache/overview


Link Shortner Smart Contract

Create a LinkShortner.sol file inside the contracts/ folder and write the following content in it.

pragma solidity ^0.5.0;

contract LinkShortner {
  event LinkAdded(uint linkId, string url);
  uint lastLinkId;

  struct LinkTemplate {
    address userAddress;
    string url;
  }

  mapping (uint => LinkTemplate) public linkMapping;

  constructor() public {
    lastLinkId = 0;
  }

  function createNewLink(string memory url) public returns (uint, string memory) {
    lastLinkId++;
    linkMapping[lastLinkId] = LinkTemplate(msg.sender, url);
    emit LinkAdded(lastLinkId, url);
    return (lastLinkId, url);
  }

  function getLink(uint linkId) public view returns (address, string memory) {
    LinkTemplate memory link = linkMapping[linkId];
    return (link.userAddress, link.url);
  }

  function getLastLink() public view returns (address, string memory, uint) {
    LinkTemplate memory link = linkMapping[lastLinkId];
    return (link.userAddress, link.url, lastLinkId);
  }
}


Now deploy this contract on local blockchain network:


$ truffle compile
$ truffle migrate
Ganache Screenshot after contract deployment

React Application for interaction with Smart Contract

Open client/src/App.js file & Replace

import SimpleStorageContract from "./contracts/SimpleStorage.json";

with

import SimpleStorageContract from "./contracts/LinkShortner.json";

Creating new link

contract.methods.createNewLink(this.state.url).send({ from: accounts[0] })

Install Metamask chrome extension

and run React app

cd client
npm run start

Deploying contract on Ropsten test network

- Register new account on infura.io
- Create new project
- Get project api and connection link:
ROPSTEN_URL=https://ropsten.infura.io/v3/<your-api-key>

Goto Truffle project, install truffle-hdwallet-provider

npm install truffle-hdwallet-provider --save

Create a `.env` file, and put MNEMONIC and <network>_URL in the file:

MNEMONIC=wallet mnemonic 12 words
ROPSTEN_URL=https://ropsten.infura.io/v3/<your-api-key>

Update truffle-config with following content

const path = require("path");
require('dotenv').config()
const HDWalletProvider = require('truffle-hdwallet-provider')
const MNEMONIC = process.env.MNEMONIC
const ROPSTEN_URL = process.env.ROPSTEN_URL

module.exports = {
  // See <http://truffleframework.com/docs/advanced/configuration>
  // to customize your Truffle configuration!
  contracts_build_directory: path.join(__dirname, "client/src/contracts"),
  networks: {
    ropsten: {
      provider: function() {
        return new HDWalletProvider(MNEMONIC, ROPSTEN_URL);
      },
      network_id: '3',
    },
    development: {
      host: "127.0.0.1",
      port: 7545,
      network_id: "*",
    },
    test: {
      host: "127.0.0.1",
      port: 7545,
      network_id: "*",
    }
  }
};

Run following command to deploy

truffle migrate --network ropsten

Sinatra API for reading Short Link on ethereum network

Create a folder backend and add the following content in backend/app.rb:

# Require the bundler gem and then call Bundler.require to load in all gems
# listed in Gemfile.
require 'bundler'
Bundler.require

require 'sinatra'

require 'ethereum'

before do
  content_type 'application/json'
end

class Contract
  def initialize
    @client = Ethereum::HttpClient.new("https://ropsten.infura.io/v3/<API-KEY>")
    contract_json = JSON.parse(File.read('LinkShortner.json'))
    @contract_abi = contract_json['abi']
    @address = contract_json["networks"]["3"]["address"]
    @client.default_account = "0x3b8B0b23C4850FA8289da815a6abEE4Fc2DF941A"
  end

  def result(id)
    return nil unless id
    contract_instance.call.get_link(id.to_i)[1]
  end

  def contract_instance
    Ethereum::Contract.create(name: "LinkShortner", address: @address, abi: @contract_abi,
                              client: @client)
  end
end

class App < Sinatra::Base
  get '/url' do
    response.headers["Access-Control-Allow-Origin"] = "*"
    return {url: Contract.new.result(params[:id])}.to_json
  end
end

Deploy the Sinatra API on Heroku:

heroku create
heroku buildpacks:set https://github.com/timanovsky/subdir-heroku-buildpack
heroku buildpacks:add heroku/ruby
heroku config:set PROJECT_PATH=backend
git push heroku master

Now use deployed API for reading short link

fetch("https://<heroku-app-url>/url?id="+id).then((response) => {
  return response.json();
}).then((response) => {
  const url = response.url
  console.log(url)
})

That’s it, now you have your link shortner decentralised app deployed on the Ethereum network. The generated short link can be shared with anyone, irrespective of browser. For creating a short link the Metamask plugin is required.

Code is hosted on github

Application is hosted at http://anilmaurya.github.io/link-shortner

Demo of Link Shortner

References:


https://medium.com/@nhancv/deploy-smart-contract-with-truffle-79d2bf218332

https://hackernoon.com/making-a-decentralized-url-shortener-using-ethereum-4fdfccf712a6
