OneLedger v0.5.0 Release Part 2

Written by OneLedger | Jul 25, 2018 3:40:00 PM

We are pleased to publish the 3rd video in our series of demonstrations for our MVP. Like the 2nd video, this one shows off some of the features from our v0.5.0 release.

In this video, Enrico demonstrates how the MVP is resilient to full nodes crashing. He starts a new OneLedger chain and then brings down first one node, then a second. The chain is composed of four nodes running PoS consensus, which requires more than ⅔ of the validators to be active in order to commit blocks. He demonstrates that this level of fault tolerance allows one node to go down, but not two.
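To make the ⅔ rule concrete, here is a minimal sketch of the quorum check a Tendermint-style BFT chain performs before committing a block. This is illustrative Go, not OneLedger's actual code, and the function name is made up:

```go
package main

import "fmt"

// hasQuorum reports whether strictly more than 2/3 of the validators
// are online, which is what a Tendermint-style BFT chain needs before
// it can commit a new block. Integer math avoids rounding issues:
// online/total > 2/3  <=>  3*online > 2*total.
func hasQuorum(online, total int) bool {
	return 3*online > 2*total
}

func main() {
	total := 4 // Alice, Bob, Carol and David each run one node
	for online := total; online >= 2; online-- {
		fmt.Printf("%d of %d nodes online -> can commit blocks: %v\n",
			online, total, hasQuorum(online, total))
	}
	// 4 of 4 nodes online -> can commit blocks: true
	// 3 of 4 nodes online -> can commit blocks: true
	// 2 of 4 nodes online -> can commit blocks: false
}
```

With four nodes, three out of four is still above the ⅔ threshold, but two out of four is not, which is exactly what the demo shows.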

The demo starts with Enrico launching a simple tmux monitor. Tmux is a terminal multiplexer that can split a terminal window into multiple panes. The monitor splits the screen into three horizontal panes: the top pane follows the consensus log file, the middle pane follows the OneLedger log file, and the bottom pane is a shell where Enrico types the commands that control the chain.
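Each of the two log panes is essentially doing a "tail -f" on its log file. As a rough illustration only (the file name is made up and this is not the actual monitor), the same behavior looks like this in Go:

```go
package main

import (
	"bufio"
	"fmt"
	"io"
	"os"
	"time"
)

// follow prints lines as they are appended to the file at path,
// much like `tail -f`, which is roughly what each log pane in the
// tmux monitor is doing.
func follow(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	// Start at the end of the file so only new entries are shown.
	if _, err := f.Seek(0, io.SeekEnd); err != nil {
		return err
	}

	r := bufio.NewReader(f)
	for {
		line, err := r.ReadString('\n')
		if len(line) > 0 {
			fmt.Print(line) // may be a partial line; the rest follows later
		}
		if err == io.EOF {
			time.Sleep(200 * time.Millisecond) // wait for more log output
			continue
		}
		if err != nil {
			return err
		}
	}
}

func main() {
	// Hypothetical file name; the demo watches the consensus log in
	// one pane and the OneLedger log in another.
	if err := follow("consensus.log"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```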

Once the OneLedger chain has been initialized and started, we can see in the top pane that the chain is producing new blocks every couple of seconds. They scroll by quickly as JSON descriptions of the underlying block data. Since we are using Tendermint consensus, the configuration produces a new block at a regular interval whether or not there are any transactions to include. Building a chain of empty blocks is somewhat resource intensive, but it proves the nodes are genuinely reaching consensus on each block, even when the underlying data is trivial.
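As a toy model of that behavior (not Tendermint's or OneLedger's implementation; the interval and types are invented), a block producer that commits on a timer whether or not the mempool is empty looks roughly like this:

```go
package main

import (
	"fmt"
	"time"
)

// Block is a stand-in for a real block: just a height and a batch
// of transactions, which may well be empty.
type Block struct {
	Height int
	Txs    []string
}

func main() {
	// Pending transactions, if any. Nothing sends here, so every
	// block in this sketch ends up empty, just like most blocks
	// in the demo.
	mempool := make(chan string, 16)
	interval := 3 * time.Second // invented block interval
	height := 0

	for range time.Tick(interval) {
		// Drain whatever happens to be in the mempool. An empty batch
		// is fine: the validators still have to agree on the empty block.
		var txs []string
	drain:
		for {
			select {
			case tx := <-mempool:
				txs = append(txs, tx)
			default:
				break drain
			}
		}

		b := Block{Height: height + 1, Txs: txs}
		height = b.Height
		fmt.Printf("committed block %d with %d transactions\n", b.Height, len(b.Txs))
	}
}
```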

After the chain has been running for a while, Enrico launches fulltest, which adds a few transactions to the chain to show that the setup can handle more than just empty blocks.

Then the demo gets interesting. When the chain was initialized, Enrico added four users to it: Alice, Bob, Carol and David, each associated with their own node. Because of this, Enrico can issue a command to stop Alice’s node, for example. As he does, block production briefly slows down, then returns to normal with only three nodes left in the chain.

When Enrico stops a second node, block production halts and the consensus pane starts printing messages about trying to find the missing peers. The two remaining nodes know there are not enough of them left to reach consensus, so they keep polling the P2P network for peers instead of producing blocks.

Once Enrico restarts one of the stopped nodes, the other two see it rejoin, recognize that there are enough nodes again, and block production goes back to normal.
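Putting the last few steps together, the halt-and-recover behavior can be modeled with a loop like the one below (again a hypothetical sketch, not OneLedger's consensus code): while fewer than ⅔ of the validators are reachable the node keeps waiting for peers, and it resumes committing blocks as soon as quorum is restored.

```go
package main

import (
	"fmt"
	"time"
)

// demoSchedule fakes the P2P layer's view of how many of the four
// validators are reachable over time, following the demo: all four,
// then one stopped, then two stopped, then one restarted.
func demoSchedule() <-chan int {
	ch := make(chan int)
	go func() {
		for _, n := range []int{4, 4, 3, 3, 2, 2, 2, 3, 3} {
			ch <- n
		}
		close(ch)
	}()
	return ch
}

func main() {
	const total = 4
	height := 0

	for online := range demoSchedule() {
		if 3*online > 2*total {
			// More than 2/3 of the validators are reachable: commit.
			height++
			fmt.Printf("%d/%d validators online -> committed block %d\n",
				online, total, height)
		} else {
			// Below quorum: keep looking for peers instead of proposing.
			fmt.Printf("%d/%d validators online -> waiting for peers\n",
				online, total)
		}
		time.Sleep(100 * time.Millisecond) // pacing only
	}
}
```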

This, of course, is one of the many basic requirements for Byzantine Fault Tolerance that all blockchains must satisfy. Because these chains are distributed across nodes all over the world, the networks and the nodes themselves will occasionally go down. The blockchain, however, must continue to function, no matter what…