Adventures with Hyperledger Sawtooth Core

From the first paragraph of the introduction to the Hyperledger Sawtooth documentation:

Hyperledger Sawtooth is an enterprise blockchain platform for building distributed ledger applications and networks. The design philosophy targets keeping ledgers distributed and making smart contracts safe, particularly for enterprise use.

It’s an ambitious open-source distributed ledger framework that aims to provide a high transaction rate (it claims a thousand transactions per second), configurable consensus mechanisms, transaction processors (smart contracts) written in any language, and an overall modular architecture.

We took a keen interest in this platform because of its immense capabilities, so, in order to find out whether this distributed ledger framework really lives up to its claims, I put Hyperledger Sawtooth through a series of tests. And of course, I’m reporting my findings here.

 

About this Adventure

  • The released version 1.0 is used for all tests documented here.
  • The Java SDK will be used extensively to build transaction processors.
  • All tests were done on a Windows 10 system (Docker Toolbox).

 

Installation Requirements

Please see the documentation for more information on the installation requirements. From experience, you just need a system with enough RAM and disk space to run Docker containers, since Docker is used to package the framework.

 

Installing Hyperledger Sawtooth

The installation process is simple: all that’s needed is to clone the Git repository and check out the version you want. In this case, we will be checking out v1.0.0.

Cloning the Repository

git clone https://github.com/hyperledger/sawtooth-core

Checking out to v1.0.0

cd sawtooth-core
git checkout tags/v1.0.0 -b sawtoothv1.0.0

Good! We now have our copy of v1.0.0 on the sawtoothv1.0.0 branch.

 

Running Hyperledger Sawtooth Core

The code contains docker-compose files for different configurations that you could use as a starting point. Here, we will start with docker/compose/sawtooth-default.yaml.

cd docker/compose
docker-compose -f sawtooth-default.yaml up

 

A Brief Introduction to the Sawtooth Architecture

An in-depth exposition of the Sawtooth architecture can be found in the documentation. We will just go through the basic components.

The Hyperledger Sawtooth system is made up of the following components:

  1. Validator – This is the principal component of the Sawtooth platform. It is responsible for block validation, block chaining, and communicating with other validators on the network. The validator also validates transactions by delegating them to its transaction processors, and it ensures that transactions and batches are properly signed.
  2. Transaction Families – These are groups of transaction types that perform related actions. A transaction processor processes transactions belonging to a transaction family. Clients create transactions from a family and send them to the validator through the REST API. The transaction processor is an independent service that communicates with the validator over a TCP (ZeroMQ) socket connection. A validator can be connected to any number of transaction processors in order to process transactions belonging to different transaction families. The validator registers these transaction processors as soon as they connect and deregisters them once they disconnect; this way, a validator knows which transaction processors are attached to it.
  3. REST Service – This exposes REST API endpoints for clients to integrate with the validator. Proper documentation of its endpoints can be found in the Sawtooth documentation. With this, we can send transactions in batches; query state, transactions, and blocks; and listen for state changes through a WebSocket endpoint.
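For instance, to query a value through the REST API’s state endpoint, a client first needs the state address under the family’s namespace. The sketch below follows intkey’s documented addressing scheme (a 6-character family prefix plus a hash of the key name); the function name is my own, not part of any SDK.

```python
import hashlib

# Intkey's documented state addressing: a 70-hex-character address formed
# from a 6-character family prefix plus a hash of the key name.
INTKEY_PREFIX = hashlib.sha512('intkey'.encode('utf-8')).hexdigest()[:6]

def make_intkey_address(name):
    # family prefix + last 64 hex chars of the SHA-512 of the key name
    return INTKEY_PREFIX + hashlib.sha512(name.encode('utf-8')).hexdigest()[-64:]
```

A client could then GET /state/{address} from the REST service to read that key’s value.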

 

On the sawtooth-default.yaml

This is a basic one-validator Sawtooth system meant to showcase the capabilities of Hyperledger Sawtooth. It comprises the following:

  1. settings-tp – A transaction processor for transactions that configure the blockchain.
  2. intkey-tp-python – An integer-key transaction processor that provides transactions useful for testing.
  3. xo-tp-python – A tic-tac-toe transaction processor. Yes, you can play tic-tac-toe by sending transactions to the blockchain, LOL.
  4. Validator – The validator (as the name implies).
  5. REST API – The REST API (as the name implies).
  6. Shell – A command-line utility for posting transactions for the various transaction families. It integrates with the REST API.

With a good internet connection, all the images are pulled quickly when docker-compose runs, and you can see the colourful output on your command line.

 

Experimenting with the Integer Key

Run the following command in the Docker terminal to enter the shell environment:

docker exec -it sawtooth-shell-default bash

We are going to create a batch and send it to the intkey transaction processor.

intkey create_batch
intkey load

 

More about IntKey

Intkey provides transactions that can be used to test deployed Sawtooth distributed ledgers. Its subcommands are as follows:

intkey set: sets a key to a specified value

intkey inc: increases the value of a key

intkey dec: decreases the value of a key

intkey show: displays the value of a specific key

intkey list: lists the values of all the keys

intkey create_batch: creates a batch of intkey transactions

intkey load: sends the created batch to the validator
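To make these verbs concrete, here is a toy model of intkey’s state semantics, with a plain dict standing in for global state. This is illustrative only, not the real transaction processor (the real one also validates things like name length and value bounds), and the default increment amount is an assumption of this sketch.

```python
# Toy model of intkey's verbs; a dict stands in for global state.
# Illustrative only -- not the real transaction processor.
class IntKeyState:
    def __init__(self):
        self._state = {}

    def set(self, name, value):
        # "set" only initializes a key; overwriting is rejected
        if name in self._state:
            raise ValueError("set only initializes a key")
        self._state[name] = value

    def inc(self, name, amount=1):
        self._state[name] += amount

    def dec(self, name, amount=1):
        self._state[name] -= amount

    def show(self, name):
        return self._state[name]

    def list(self):
        return dict(self._state)
```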

 

More about the intkey that the documentation won’t tell you

A close inspection of the intkey CLI code, written in Python, reveals how create_batch can be used. You can supply the following arguments:

-o, --output: Location of the output file (batches.intkey by default)

-c, --count: Number of batches modifying random keys (1 by default)

-B, --max-batch-size: Max transactions per batch (10 by default)

-K, --key-count: Number of keys to set initially (1 by default)

 

create_batch first generates a random word list (the number of keys to set initially, specified by -K/--key-count). It then generates one “intkey set” transaction per key, initializing each value to a random number between 9000 and 100000, and appends them to a batch. After this, it creates a number of batches that modify the random keys; the number of batches is specified by -c/--count.

Each batch created has a random size between 1 and -B/--max-batch-size. Its transactions modify a random selection of keys, randomly incrementing or decrementing their values.

Therefore, the smallest output create_batch can produce is two batches with one transaction each: one batch for the set, and another for an increment or decrement.
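The behaviour described above can be sketched as follows. This is a simplified re-implementation for illustration, not the real CLI code: plain tuples stand in for signed transactions, and sequential key names stand in for the random word list.

```python
import random

def create_batches(key_count=1, count=1, max_batch_size=10):
    # Simplified sketch of intkey create_batch. Not the real CLI:
    # tuples stand in for signed transactions.
    keys = ["key%d" % i for i in range(key_count)]  # real CLI uses random words
    # One "set" transaction per key, initializing it to a random value.
    batches = [[("set", k, random.randint(9000, 100000)) for k in keys]]
    # Then `count` batches of random size that inc/dec random keys.
    for _ in range(count):
        size = random.randint(1, max_batch_size)
        batches.append([(random.choice(["inc", "dec"]), random.choice(keys), 1)
                        for _ in range(size)])
    return batches
```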

Investigating Transaction Flow with Int Key

In order to properly investigate transaction flow, let’s cover the theoretical background.
From the Sawtooth documentation, the main component of the validator is the Journal, responsible for “maintaining and extending the BlockChain” for the validator. These responsibilities include the following:

  • Validating candidate blocks
  • Evaluating valid blocks to determine if they are the correct chain head
  • Generating new blocks to extend the chain.

 

A batch is a group of transactions that are committed atomically: if one transaction fails, the whole batch fails. Batches can be made up of transactions from the same family or from different families.

The journal consumes blocks and batches that arrive at the validator via the interconnect, either through the gossip protocol or the REST API, and routes them internally.

The flow is as follows:

  1. Blocks and batches arrive from the interconnect at the completer.
  2. The completer ensures that all dependencies of the blocks and/or batches have been satisfied and delivered downstream.
  3. The completer delivers the blocks and batches to different pipelines.
  4. Completed batches go to the BlockPublisher for validation and inclusion in a block.
  5. Completed blocks are delivered to the ChainController for validation and fork resolution.
  6. Blocks are considered formally complete by the completer once all of their predecessors have been delivered to the ChainController and their batches field contains all the batches specified in the BlockHeader’s batch_ids list. The batches field is expected to be in the same order as the batch_ids.
  7. Batches are considered complete once all of their dependent transactions exist in the current chain or have been delivered to the BlockPublisher.
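The block-completeness rule in step 6 can be sketched as a simple check. The field names below are illustrative only, not the validator’s actual API.

```python
# Sketch of the completer's block-completeness rule: the predecessor must
# already have been delivered, and the block's batches must match the
# header's batch_ids list exactly, in the same order.
def block_is_complete(header_batch_ids, batches, predecessor_delivered):
    if not predecessor_delivered:
        return False
    return [b["id"] for b in batches] == list(header_batch_ids)
```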

 

The Chain Controller

This is responsible for determining which chain the validator is currently on and coordinating any change-of-chain activities that need to happen.
The Chain Controller is designed to handle multiple block validation activities simultaneously: when forks form, the Chain Controller can process them in parallel, and the current chain can still be used while a deep fork is evaluated. Currently, however, the thread pool size is hardcoded to 1, so only one block is validated at a time.

 

Flow for updating blocks in the ChainController

  • Block enters the chain controller
  • The chain controller creates a block validator with the candidate block and the current chain head and dispatches it to a thread pool for execution. Once the block validator has completed, it calls back into the chain controller, indicating whether the block should become the chain head. This callback leads to one of three outcomes:
    • If the callback occurs after the chain head has already been updated, a new block validator is created and dispatched to redo the fork resolution.
    • If the callback occurs while the chain head has not been updated, the new block becomes the chain head.
    • If the validator determines that the block should not become the chain head, the block is discarded. This occurs if there is an invalid block in the block’s chain, or if the block belongs to a shorter or less desirable fork, as determined by consensus.
  • The chain controller synchronizes chain head updates such that only one block validator result can be processed at a time.
  • The chain controller performs a chain head update using the block store, providing it with a list of commit blocks that are in the new fork and a list of decommit blocks that are in the block store that must be removed.
  • After the block store is updated, the block publisher is notified about the new chain head.

 

Block Validation Rules

In hyperledger sawtooth, block validation includes the following:

  • Transaction permissioning – ensuring that transactions and batches are submitted by authorized entities.
  • On-chain block validation rules
  • Batch validation
  • Consensus verification
  • State hash check.
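A block is accepted only if every one of these rules passes. As a conceptual sketch (the check names and callables are made up for illustration, not Sawtooth’s API):

```python
# Illustrative only: run the validation rules above in order and stop at
# the first failure, reporting which rule rejected the block.
def validate_block(block, checks):
    for name, check in checks:
        if not check(block):
            return False, name
    return True, None
```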

 

Block Publisher

The block publisher creates candidate blocks to extend the current chain. It does all the housekeeping work around creating a block, but waits for the consensus algorithm’s instruction on when to create a block and when to publish it.
The flow in a block publisher is as follows:

  1. Add new batches to the scheduler.
  2. Wait for the consensus algorithm to create a candidate block.
  3. If a candidate block has been created, keep adding batches to it.
  4. Wait for the consensus algorithm to say it is time to publish.
  5. If it is time to publish, finalize the candidate block and publish it.

When a candidate block is created, an instance of Consensus.BlockPublisher is created that is responsible for guiding the creation of that block. A transaction scheduler is created and all of the pending batches are submitted to it.
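The loop above can be rendered as a toy state machine. The `consensus` object here is a stand-in with two hooks, not Sawtooth’s real consensus interface, and a list of batches stands in for a candidate block.

```python
# Toy rendering of the publisher flow: queue batches, let consensus decide
# when to open a candidate block and when to publish it. Illustrative only.
class ToyBlockPublisher:
    def __init__(self, consensus):
        self.consensus = consensus
        self.pending = []      # batches waiting for a candidate block
        self.candidate = None  # open candidate block (a list of batches)

    def on_batch(self, batch):
        if self.candidate is not None:
            self.candidate.append(batch)              # step 3: keep adding
            return
        self.pending.append(batch)                    # step 1: queue the batch
        if self.consensus.initialize_block():         # step 2: open candidate?
            self.candidate, self.pending = list(self.pending), []

    def try_publish(self):
        if self.candidate and self.consensus.check_publish_block():  # step 4
            block, self.candidate = self.candidate, None  # step 5: finalize
            return block
        return None
```

With a permissive stand-in consensus that always says yes, the first batch opens a candidate and try_publish returns it immediately.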

 

Results of intkey Testing

This will be posted soon enough, happy reading!
