Exciting new technologies like autonomous vehicles, artificial intelligence, and innovative life-saving medical treatments have at least one critical thing in common: they require massive amounts of computational power to continue to evolve.
Satya Nadella, the CEO of Microsoft, recently declared that the world is quickly “running out of computing capacity.” Research teams working on groundbreaking innovations feel this pain every day. The more computational power they have, the faster they can run calculations and process data in order to engineer solutions to the world’s biggest problems. Despite many advances in recent years, gaining access to massive amounts of computational resources is still difficult and expensive.
Before the cloud:
In the days before cloud computing, teams working on major new technologies would have to build their own data centers to acquire the computational power they needed. This was slow and very, very expensive. The only groups that could afford to do this were huge companies, major universities, and governments.
Cloud computing changed this, making massive computational power much less expensive and easier to access, thus opening it up to a far wider range of actors. This is one of the most important computing innovations that directly impacts and improves our lives on a daily basis.
Data Centers that power the cloud are still really expensive:
But there’s a catch… The data centers powering the cloud can’t keep up with demand. Creating a single new data center costs hundreds of millions of dollars, requires extended construction time, and has an enormous environmental impact. The big companies that build these data centers aren’t able to make them quickly, cheaply, or efficiently enough to continue enabling the type of innovation we need to solve the world’s biggest problems.
Even though the cloud is much less expensive than the old stand-alone proprietary infrastructure, the costs of creating and maintaining the cloud are still passed on to the customers. The traditional data-center model for providing computational power is becoming outdated, and it’s holding innovation back. This is exactly the challenge we are tackling at Hypernet.
A More Cost-Effective, Scalable Approach:
Instead of a future filled with billions of dollars of capital investment, and years of construction, we believe the answer to the computational resource crisis is in the palm of your hand, in your kitchen, in your living room, and on your desk.
Hypernet is a distributed computational network that enables any device (from your mobile phone to your smart refrigerator) to contribute computational power to researchers who need it. We are surrounded by billions of devices that have an incredible amount of unused capacity, and Hypernet is capable of leveraging it.
In the past, attempts to build distributed networks were hampered by roadblocks like latency and network resilience. The Hypernet team has spent years developing a proprietary Distributed Average Consensus algorithm framework that has overcome these challenges.
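To give a flavor of the idea, here is a minimal sketch of the classic distributed average consensus iteration: each node repeatedly nudges its local value toward its neighbors' values until every node converges on the network-wide average, with no central coordinator. This is a textbook illustration under simplified assumptions (synchronous updates, a fixed ring topology), not Hypernet's proprietary framework; the function name and parameters are hypothetical.

```python
# Illustrative distributed average consensus (textbook version, not
# Hypernet's proprietary algorithm): every node updates its value using
# only its neighbors' values, yet all nodes converge to the global mean.

def average_consensus(values, neighbors, step=0.2, iters=200):
    """values: initial value held by each node.
    neighbors: dict mapping node index -> list of neighbor indices.
    Returns node values after `iters` synchronous update rounds."""
    x = list(values)
    for _ in range(iters):
        # Each node moves toward its neighbors; purely local communication.
        x = [
            x[i] + step * sum(x[j] - x[i] for j in neighbors[i])
            for i in range(len(x))
        ]
    return x

# Example: 4 nodes arranged in a ring, each starting with a different value.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
result = average_consensus([10.0, 0.0, 4.0, 2.0], ring)
# Every node converges toward the global mean (10 + 0 + 4 + 2) / 4 = 4.0
```

The notable property is that no node ever sees the whole network, yet the shared answer emerges anyway; real deployments must additionally handle asynchrony, churn, and latency, which is where the engineering difficulty lies.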
With no massive capital expenditures or long construction times, Hypernet is opening up a virtually unlimited reserve of low cost, infinitely expandable computational power that will fuel the next generation of innovation.
Learn more about Hypernet:
If you’re as excited as we are about changing the world, or want to ask a question about Hypernet, then we invite you to join our Telegram community for the latest information.
Until next time,
-Palo Alto, CA