The term “supercomputer” was first used in the 1960s to distinguish computers designed for high-performance scientific and numerical computing from the growing number of machines designed for business computing. Supercomputers are the Formula 1 cars of the computing world. Unlike microcomputers, which have become commodity products, very few supercomputers are built, and only a handful of programs are written for them. Yet, as the most powerful machines of their time, supercomputers have had a profound impact on science.

Supercomputers are typically used to study problems that are too big, too small, too fast, too slow, or simply too expensive to study in the laboratory. Almost everything known about global climate change, for example, comes from simulations run on supercomputers. The automotive industry uses them extensively to simulate crashes, while aerospace companies use them to predict the performance of future aircraft. Even Hollywood relies on supercomputers to produce special effects for the films of George Lucas, Steven Spielberg, and many others.

Two different techniques are used to reach “supercomputer” levels of performance. The first is simply to use faster components. Because these are generally more expensive, microcomputer architects reserve high-performance parts for the most critical portions of their designs. Supercomputer architects, by contrast, use fast, expensive hardware throughout their machines.

The second technique is to perform several operations at the same time. Seymour Cray, the pre-eminent supercomputer architect of the 1960s and 1970s, applied this technique by organizing hardware elements into a “pipeline”. Each component performs a single operation and then passes its result to the next, so that several operations are in progress simultaneously. The effect is a kind of cascading execution, much like the division of labor on an assembly line that makes it possible to build several cars at once.
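To make the idea concrete, here is a minimal software sketch of a pipeline. It is purely illustrative: the three stage functions and the data values are hypothetical, and real pipelines in Cray-style machines are hardware units, not threads. Each stage runs in its own thread, reads items from an input queue, and passes results to the next stage, so several items are in flight at once:

```python
import threading
import queue

def stage(func, inbox, outbox):
    """Run one pipeline stage: apply func to each item and pass it on."""
    while True:
        item = inbox.get()
        if item is None:           # sentinel: shut down and tell the next stage
            outbox.put(None)
            break
        outbox.put(func(item))

# Three hypothetical stages, analogous to stations on an assembly line.
q0, q1, q2, q3 = (queue.Queue() for _ in range(4))
stages = [
    threading.Thread(target=stage, args=(lambda x: x + 1, q0, q1)),
    threading.Thread(target=stage, args=(lambda x: x * 2, q1, q2)),
    threading.Thread(target=stage, args=(lambda x: x - 3, q2, q3)),
]
for t in stages:
    t.start()

for i in range(5):                 # feed work into the front of the pipeline
    q0.put(i)
q0.put(None)

results = []
while (r := q3.get()) is not None:
    results.append(r)
for t in stages:
    t.join()
print(results)                     # [-1, 1, 3, 5, 7], i.e. (i + 1) * 2 - 3
```

With several items in flight, each stage stays busy, just as each station on the assembly line works on a different car.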

In the 1980s, another technique called “parallelism” began to gain popularity. A parallel computer contains tens, hundreds, or even thousands of microprocessors that all work together (or “in parallel”) to solve a single problem. The appeal of parallelism is that such a machine can be built from inexpensive commodity parts rather than the specialized components used in conventional supercomputers, and there is no limit, other than cost, on its size and power. Parallel computers are, however, extremely difficult to program.
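As an illustration, here is a minimal sketch of the parallel approach, under the assumption of a hypothetical workload: the same function is applied to many independent pieces of a problem at once, with each worker process taking a share, much as each microprocessor in a parallel machine works on its own piece.

```python
from concurrent.futures import ProcessPoolExecutor

def simulate_cell(cell_id):
    """Hypothetical stand-in for one independent piece of a large simulation."""
    return sum(i * i for i in range(cell_id * 1000))

if __name__ == "__main__":
    cells = range(1, 9)
    # Each worker process handles a share of the cells in parallel.
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(simulate_cell, cells))
    print(results)
```

This example is easy only because the pieces are independent; the difficulty the paragraph above alludes to arises when the pieces must communicate and coordinate.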