While blockchain technology has many compelling applications and enables a straightforward consensus mechanism, where the data structure itself makes checksumming very simple, that simplicity can also be seen as its downfall. Why? A blockchain is exactly what it sounds like - a chain of checksummed blocks that grows indefinitely.

This simplicity, and the continuous growth of blockchain implementations, causes a number of problems:

  • Constantly growing memory consumption. For rapid lookups of blocks in the blockchain, an in-memory lookup table is traditionally required.
  • Constantly growing storage space usage. Blocks are continuously created, causing stale and no longer relevant blocks to be left behind and stored on disk forever.
  • Massive resource waste. At the time of writing (June 2021) there are roughly 80 000 Bitcoin nodes, each storing 351.19 GB of blockchain data. That compounds to:

    80 000 × 351.19 GB = 28 095 200 GB ≈ 27 437 TB

    This is for a relatively small blockchain such as Bitcoin. With heavier blockchains the problem quickly becomes more severe.
  • Limited number of transactions per second. To prevent the data stored in blockchains from growing to absurd sizes there is usually a limit to the number of blocks that can be created and their size. Transaction fees are used to limit the creation of new transactions and to prevent the chain from becoming overly congested.
The ever-growing Bitcoin blockchain, with no end in sight.
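The back-of-the-envelope storage figure above can be reproduced in a few lines (node count and per-node chain size are the June 2021 estimates quoted in the text):

```python
# Rough total storage consumed across all Bitcoin full nodes,
# using the June 2021 estimates from the text above.
nodes = 80_000            # approximate number of Bitcoin nodes
chain_size_gb = 351.19    # blockchain size stored on each node, in GB

total_gb = nodes * chain_size_gb
total_tb = total_gb / 1024  # binary GB-to-TB conversion, as used in the text

print(f"{total_gb:,.0f} GB ~ {total_tb:,.0f} TB")
```

Every node stores the full chain, so total storage scales linearly with the node count - which is exactly the duplication the rest of this article is concerned with.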

These are the main reasons that blockchains are limited in the number of transactions per second they can handle. Bitcoin can only process roughly 7 transactions per second (see the Bitcoin scalability problem). Some other blockchain projects have ramped this up to hundreds and sometimes even thousands of transactions per second. However, the majority of these projects can't live up to their expectations: when actually tested in production, they often fall short, usually because of one of the aforementioned points.

These limitations are an obvious and significant problem, and they are one of the reasons for the original development and inception of the Unigrid network.

How do you store massive amounts of data and keep track of it? Simultaneously, how do you allow the network to grow without major bottlenecks and performance problems forming as it does? The Unigrid network aims to solve this scalability problem once and for all. Not by using some magical approach with a lot of buzzwords and strange algorithmic names (HotStuff comes to mind), but by applying simple, realistic and proven design approaches that have worked for many years in other fields of the industry.

Combining decentralization with a high throughput - in terms of the number of transactions per second a network can handle - has been an ongoing problem.

The Unigrid network approaches this problem differently. The Unigrid network isn't a single ledger or blockchain. Instead, it's a collection of blockchains with data striped across them, with parity blocks introduced for fault tolerance and redundancy. The end result is a network where the number of transactions that can be handled arguably borders on the infinite. As the network grows and additional gridnodes are introduced into its hierarchy, the throughput and the number of transactions the network can handle at any given time also increase.

To help you form a mental picture of how this looks, imagine the Unigrid network as a collection of smaller networks (segments) that are independent from each other. In the middle of this, an address tree ledger is used to keep track of the network and of where to find the data.
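The article doesn't detail the address tree ledger's structure, but its routing role can be sketched as a directory that maps address prefixes to independent segments, so a lookup only ever touches the one segment holding the data (class and field names here are hypothetical):

```python
# Illustrative sketch of an "address tree" directory: addresses are routed
# to independent network segments by prefix. Not Unigrid's actual design.
class AddressTree:
    def __init__(self, prefix_bits: int = 4):
        self.prefix_bits = prefix_bits
        self.segments = {}  # prefix -> segment (modelled here as a dict)

    def _prefix(self, address: str) -> str:
        return address[:self.prefix_bits]

    def put(self, address: str, value) -> None:
        """Store a record in the segment responsible for this address."""
        self.segments.setdefault(self._prefix(address), {})[address] = value

    def get(self, address: str):
        """Look up a record, touching only the owning segment."""
        return self.segments.get(self._prefix(address), {}).get(address)
```

Because each segment operates independently, adding segments (and the nodes serving them) adds capacity, which is the intuition behind the throughput scaling described above.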

To limit resource waste, the Unigrid network, with the help of the address tree ledger, also allows blockchains to reorganize and discard stale blocks. Balances are preserved, while transaction blocks and data blocks that are no longer valid or simply too old to matter are thrown away, allowing the storage space used by the network to stop growing and even shrink.
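The pruning idea - keep the balance effect, drop the stale block - can be sketched as folding old blocks into a compact snapshot before discarding them (the block layout and cutoff rule here are assumptions for illustration, not Unigrid's actual algorithm):

```python
# Illustrative pruning sketch: blocks older than max_age are folded into
# a balance snapshot and discarded, so on-disk storage can shrink.
def prune(chain, max_age, now):
    balances = {}
    kept = []
    for block in chain:
        if now - block["timestamp"] > max_age:
            # Preserve the balance effect of the stale block, then drop it.
            for addr, delta in block["deltas"].items():
                balances[addr] = balances.get(addr, 0.0) + delta
        else:
            kept.append(block)
    return balances, kept
```

After pruning, the snapshot plus the retained recent blocks carry the same balance information as the full chain did, but the stale blocks no longer occupy storage.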

This makes Unigrid the first blockchain network that can size-adjust and segment itself in this manner. This optimizes resource use and allows the network to load-balance itself by placing nodes into segments that need to be able to handle increases in traffic.

To learn more about the Unigrid network, please visit the about section at The Unigrid Foundation website.
