Supercomputers have long loomed large in science fiction and pop culture. According to the movies, they are either humanity's greatest threat or our saving grace.
But what is a supercomputer, actually?
It's an important question that Hollywood never bothers to address. It's an incredibly hard question, nearly impossible to answer, but we'll do our best to put it simply:
A supercomputer is a big computer. Specifically, it’s a big computer that’s built from a bunch of smaller computers. That’s it.
Go to your local Best Buy, purchase all the computers in the store, and hook them together. You could build yourself a mini-supercomputer as your weekend project, and still have time to grill a nice steak Sunday evening.
Of course, real supercomputers differ from your weekend project in important ways: their components communicate faster, they're built with better protection from overheating, and they're optimized in countless other respects. But conceptually, they really are just big computers made from smaller computers.
Supercomputers are simple because they're just an extension of the basic nature of the computer. It's such a common word that we sometimes forget a 'computer' is literally an object that computes.
It does this via a processor. If one processor can compute one million problems per minute, then 100 processors can compute 100 million problems per minute.
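That scaling idea can be sketched with Python's standard multiprocessing module. This is a minimal illustration, not how any particular supercomputer works: the `solve` function here is just a stand-in for one independent "problem."

```python
from multiprocessing import Pool

def solve(problem):
    # Stand-in for one independent unit of work;
    # here we simply square a number.
    return problem * problem

if __name__ == "__main__":
    problems = range(100)
    # A pool of 4 worker processes splits the problems among
    # themselves. For independent problems like these, N workers
    # give roughly N times the throughput of one processor.
    with Pool(processes=4) as pool:
        results = pool.map(solve, problems)
    print(results[:5])  # [0, 1, 4, 9, 16]
```

The key assumption baked into that "100 processors, 100 million problems" arithmetic is that the problems don't depend on each other, so every processor can work without waiting on the rest.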
Therefore, a supercomputer is a collection of really powerful processors, hooked together really efficiently, to solve really difficult problems. These processors are usually housed in massive buildings so that maintenance and repairs can be done quickly. Those buildings and maintenance services are costly, and they impose an ever-growing environmental burden, which is a cost to us all.
But what if you didn’t have to worry about maintenance, repairs, or housing, because other people took care of the processors for you?
Enter distributed supercomputing.
A distributed supercomputer is a supercomputer whose processors are spread around the world and connected via the internet. Conceptually, it's exactly the same as a classic supercomputer, just distributed. Its processors live in the desktops and laptops that average people use every day. In a distributed supercomputer, device owners let other people use their processors while the devices sit idle, and the internet links everything together. That's pretty neat, but there's a downside: currently, distributed supercomputers can only solve certain, very specific problems.
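To see why only certain problems fit, here is a toy sketch of the idea. All of the names (`split_job`, `device_compute`, `combine`) are invented for illustration; the point is that a job must split into independent chunks before volunteer devices can each take one and a coordinator can merge the answers.

```python
def split_job(data, n_devices):
    """Divide the work into one chunk per participating device."""
    chunks = [[] for _ in range(n_devices)]
    for i, item in enumerate(data):
        chunks[i % n_devices].append(item)
    return chunks

def device_compute(chunk):
    # What one volunteer device does with its chunk: here,
    # summing squares as a stand-in for real work.
    return sum(x * x for x in chunk)

def combine(partials):
    """The coordinator merges the partial results."""
    return sum(partials)

data = list(range(10))
chunks = split_job(data, n_devices=3)
answer = combine(device_compute(c) for c in chunks)
print(answer)  # same result as one machine computing it alone: 285
```

A problem that splits this cleanly is a good fit for distributed computing; a problem whose steps each depend on the previous step's answer is not, because the devices would spend all their time waiting on one another.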
This is why the future is Hypercomputing by Hypernetwork.
Hypercomputing is a term we adapted to describe distributed supercomputing that is done much faster and in a truly parallel manner. Because it is parallel, Hypernet is not limited in its computational abilities the way other distributed supercomputers are. This means it can be used for complex parallel processing tasks, like identifying cancer cells, predicting traffic patterns, or even forecasting natural disasters!
Until next time,
The Hypernet Team