How Hypernet Compares to Golem, Sonm, iExec, etc.

May 14, 2018

At first glance, Hypernet may appear similar to other blockchain projects such as Golem, Sonm, and iExec. Indeed, there are many problems that all four projects can solve equally well. But when you pop open the hood, you'll see that the engine driving Hypernet is fundamentally new, with a brand-new programming model: one that is more powerful and versatile than the traditional architectures used by other blockchain computing projects.

This programming model is what makes Hypernet different. Existing models are not well suited to solving general computing problems on distributed networks, where computers constantly pop in and out of a computation. Simply throwing blockchain at the problem doesn't remove this fundamental constraint. Hypernet has architected and implemented a new programming model beneath the blockchain layer to handle distributed computations that require interprocess communication. This is not off-the-shelf tech; we created it ourselves, from the ground up.

Hypernet is based on the principle of Distributed Average Consensus (DAC), and it is the result of years of research, testing, and optimization. DAC plus blockchain allows compute jobs to be distributed efficiently and gracefully handles computers dropping on and off the network. It also creates a secure backbone where buyers and providers of computational power can engage with confidence. The on-chain (scheduling) and off-chain (DAC) technology layers of Hypernet fit together hand in glove, and both are driven by consensus.
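To make the idea concrete, here is a minimal sketch of distributed average consensus in Python. This illustrates the general technique only, not Hypernet's actual implementation; the topology, step size, and node values are all assumptions made for the example. Each node repeatedly nudges its local value toward the values of whichever neighbors happen to be reachable that round, and the whole network converges to the global average without any fixed topology.

import random

def dac_round(x, edges, eps=0.2):
    # One synchronous DAC round: every node moves a small step toward
    # each neighbor it can currently reach. Symmetric links with a
    # common step size conserve the global sum, so the nodes converge
    # to the true network-wide average.
    delta = {i: 0.0 for i in x}
    for i, j in edges:
        delta[i] += eps * (x[j] - x[i])
        delta[j] += eps * (x[i] - x[j])
    return {i: x[i] + delta[i] for i in x}

# Five nodes, each holding one local partial result.
x = {i: v for i, v in enumerate([10.0, 2.0, 7.0, 4.0, 12.0])}
true_mean = sum(x.values()) / len(x)

for _ in range(300):
    # A fresh random set of links every round stands in for machines
    # joining and leaving: no fixed topology is ever assumed.
    nodes = list(x)
    edges = {tuple(sorted(random.sample(nodes, 2))) for _ in range(4)}
    x = dac_round(x, edges)

print(round(true_mean, 4), {i: round(v, 4) for i, v in x.items()})

Notice that a node that drops out for a round simply stops contributing for that round; nothing in the update depends on a particular machine being reachable at a particular moment.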

Now, if we examine Golem, Sonm, and iExec in more depth, we can see that they have built (or plan to build) their technical foundations on traditional computing architectures. These architectures were originally developed for use in data centers, and each project has bolted this data center architecture onto a blockchain network (albeit in different ways).

These data center architectures have two unavoidable consequences when applied to a distributed network:

1. The amount of network communication and data transfer overhead is very high.

2. These architectures do not tolerate computers randomly dropping in and out of the network.

These problems arise because in an orderly data center you know the exact topology of the network and exactly how packets of information flow from processor to processor. Data center architectures are optimized for one topology, and one topology only. That is a problem on a distributed network: you don't know the topology in advance, it's impossible to know the state and availability of every machine, and so the topology constantly changes. If you attempt to run a parallel compute job that requires any sort of back-and-forth communication between computers, the data reductions will fail and the program will fault. The data reductions are computationally fragile. They can be hardened, but hardening them increases communication overhead. Traditional data center algorithms are therefore expensive to run over a distributed network, due to the need to transport terabytes upon terabytes of data. And again, they can only handle certain types of tasks to begin with.
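To see the contrast, here is a toy sketch of the kind of fixed-topology reduction described above. The ranks and failure model are hypothetical, invented for illustration; the point is only that a reduction wired for one exact topology faults the moment any expected participant disappears, whereas the consensus-style update shown earlier simply carries on with whichever nodes remain.

def tree_reduce(values, alive):
    # Pairwise tree reduction, as a data center scheduler would plan it.
    # It assumes every rank stays reachable for the whole reduction.
    ranks = sorted(values)
    while len(ranks) > 1:
        survivors = []
        for a, b in zip(ranks[0::2], ranks[1::2]):
            if a not in alive or b not in alive:
                raise RuntimeError(f"rank pair ({a}, {b}) unreachable; reduction faults")
            values[a] += values[b]
            survivors.append(a)
        if len(ranks) % 2:
            survivors.append(ranks[-1])
        ranks = survivors
    return values[ranks[0]]

vals = {rank: float(rank) for rank in range(8)}
print(tree_reduce(dict(vals), alive=set(range(8))))        # 28.0: all ranks up
print(tree_reduce(dict(vals), alive=set(range(8)) - {5}))  # rank 5 gone -> RuntimeError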

This is perhaps why Golem, Sonm, and iExec currently seem focused on solving very specific problems. Golem is heavily focused on rendering, a great application that pairs well with its grid computing architecture. Sonm is primarily adapting existing hub-and-spoke architectures, with an emphasis on server hosting, to distributed networks. And iExec is focused on decentralized cloud computing for use in certain research applications.

Hypernet wants to do more, though. It was originally developed by researchers who did not have the computational resources to tackle some of humanity's most nagging problems. So they set out to rethink and redesign computation from the ground up. Hypernet was created specifically to solve problems that were previously unsolvable, to enable machine learning in a more efficient and sustainable manner, and to support the community of users who will help make it all happen.

As Albert Einstein famously said, "The significant problems we face cannot be solved at the same level of thinking we were at when we created them." Hypernet is reinventing problem solving through parallel distributed computation in order to create new and effective solutions to the world's greatest challenges, and we can't wait to see what problems we can solve when we all work together.

If you’re interested in finding out more, we are always available to answer questions in our Telegram early supporter community. We hope to see you there!

-The HyperTeam
Palo Alto, California