Retrospective: Magnitudes of FLOP computing

Depending on which estimate you take, the computing power of the whole world today is roughly comparable to a single human brain. In 2018, worldwide computing capacity was estimated at around 2*10^20 – 1.5*10^21 FLOPS (floating-point operations per second), while estimates of the human brain's equivalent processing power range from 3*10^13 to 3*10^25 FLOPS. At the same time, we are getting closer and closer to the physical limits of lithography, producing silicon-based chips on a 5 nm process. The path to cutting-edge computing technology has been long and complicated. Let's run through a very short retrospective and see what computers looked like back in the day.
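
To make these orders of magnitude easier to grasp, here is a minimal Python sketch comparing the two estimate ranges quoted above (the numbers are rough estimates, not measurements):

```python
# Illustrative only: the estimate ranges quoted in the text above.
WORLD_FLOPS_LOW, WORLD_FLOPS_HIGH = 2e20, 1.5e21   # worldwide capacity, 2018
BRAIN_FLOPS_LOW, BRAIN_FLOPS_HIGH = 3e13, 3e25     # human-brain equivalent

# Depending on which brain estimate you take, all of the world's hardware is
# either tens of millions of brains... or a small fraction of a single one.
print(f"world vs low brain estimate:  {WORLD_FLOPS_HIGH / BRAIN_FLOPS_LOW:.1e}x")
print(f"world vs high brain estimate: {WORLD_FLOPS_LOW / BRAIN_FLOPS_HIGH:.1e}x")
```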

Disclaimer: the list is not fully chronological, because of a few old-vs-new hardware comparisons.

 

1936 — 10^0 FLOPS (Just FLOPS)
The Zuse Z1 mechanical computer could perform 1 FLOPS, roughly equivalent to what a human can compute with pen and paper. The Z1 was partly programmable and was driven by an electric motor.

Z1 reconstruction in the Technikmuseum, Berlin.
At the bottom of the computer you can see the electric motor.

 

1946 — HectoFLOPS (10^2)
The IBM 602 Calculating Punch, introduced in 1946, was an electromechanical calculator capable of addition, subtraction, multiplication, and division, with a calculating capacity of about 2*10^2 FLOPS. The 602 was IBM's first machine that could divide (the IBM 601, introduced in 1931, could only multiply). Like other IBM calculators of the era, it was programmed using a control panel. Input data was read from a punched card, and the results could be punched into the same card or a trailing card.

IBM 602; the programmable control panel is visible at the front of the machine.
Photo rights: Heinz Nixdorf MuseumsForum.

 

1964 — MegaFLOPS (10^6)
The CDC 6600 was the flagship of the Control Data Corporation and is generally considered the first successful supercomputer. With performance of up to 3*10^6 FLOPS, the CDC 6600 was the world's fastest computer from 1964 to 1969, when it relinquished that status to its successor, the CDC 7600.

It quickly became a must-have system in high-end scientific and mathematical computing, with machines delivered to the Courant Institute of Mathematical Sciences, CERN, the Lawrence Radiation Laboratory, and many others. At least 100 were delivered in total.

CDC 6600. Display console shown in the foreground, main system cabinet in back, with memory/logic/wiring to the left and middle, power/cooling generation and control to the right.

 

1972 — GigaFLOPS (10^9)
The ILLIAC IV was the first massively parallel computer. The system was originally designed with 256 64-bit floating-point units (FPUs) and four central processing units (CPUs), able to process 1 billion operations per second. Due to budget constraints, only a single "quadrant" with 64 FPUs and one CPU was built. Since the FPUs all had to execute the same instruction (ADD, SUB, and so on), in modern terminology the design would be classified as single instruction, multiple data, or SIMD.
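
ILLIAC IV's lockstep FPUs are the ancestor of today's vectorized array operations. As a loose modern analogy rather than a model of the actual hardware, the NumPy sketch below applies a single ADD across 64 data lanes at once instead of looping element by element:

```python
import numpy as np

# Scalar style: one ADD at a time, 64 iterations.
a = list(range(64))
b = list(range(64))
scalar_sum = [a[i] + b[i] for i in range(64)]

# SIMD style: one vectorized ADD applied to all 64 "lanes" at once,
# loosely analogous to ILLIAC IV's 64 FPUs executing the same
# instruction in lockstep.
av = np.arange(64)
bv = np.arange(64)
vector_sum = av + bv

assert scalar_sum == vector_sum.tolist()
```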

Once ILLIAC IV entered service in 1975, it performed a lot of useful work, such as wind tunnel simulation, seismic studies, and image processing. It was decommissioned in 1982.

 

1997 — TeraFLOPS (10^12)

ASCI Red was the first computer built under the Accelerated Strategic Computing Initiative (ASCI), the United States government's supercomputing program created to help maintain the US nuclear arsenal after the 1992 moratorium on nuclear testing. Its original goal, to deliver a true teraflop machine by the end of 1996 that could run an ASCI application using all memory and nodes by September 1997, was met.

All four rows of ASCI Red (end view) inside Sandia National Laboratories.

Only 13 years later, NVIDIA would release its GTX 400 series with the same computing power as the entire ASCI Red supercomputer: the GTX 480 delivers 1.34 TFLOPS (single precision). ASCI Red cost about $67 million in today's money, whereas the NVIDIA GTX 480 launched at $499.

Zotac's GTX 480, built on NVIDIA's reference design.
The computing power once used to simulate nuclear explosions was used by gamers to run Half-Life 2, Team Fortress 2, BioShock, and other hits of the era.
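
A quick back-of-the-envelope sketch of that price-performance gap, using only the figures quoted above (and glossing over the single- vs double-precision caveat):

```python
# Figures from the text; note ASCI Red's teraflop was measured in double
# precision while the GTX 480's 1.34 TFLOPS is a single-precision figure,
# so this is an order-of-magnitude comparison, not an exact one.
ASCI_RED_FLOPS, ASCI_RED_COST = 1.34e12, 67_000_000   # ~$67M in today's money
GTX_480_FLOPS, GTX_480_COST = 1.34e12, 499

print(f"ASCI Red: {ASCI_RED_FLOPS / ASCI_RED_COST:,.0f} FLOPS per dollar")
print(f"GTX 480:  {GTX_480_FLOPS / GTX_480_COST:,.0f} FLOPS per dollar")
# The ratio between the two is about 134,000x in 13 years.
```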

 

2010-2020 — PetaFLOPS (10^15)

A petascale supercomputer is extremely powerful even today. There are various hardware solutions of this magnitude, and all of them are in active use at this very moment.

In 2017, Google commissioned its second generation of Tensor Processing Units (TPUs). This hardware is optimized for tensor (multidimensional matrix) operations, which makes it ideal for AI training. One "TPU pod" built from 64 second-generation TPUs delivers up to 11.5 petaFLOPS of machine learning acceleration.


A single TPU unit delivers up to 180 teraFLOPS to train and run machine learning models, and a full TPU pod provides up to 11.5 petaFLOPS of machine learning acceleration.
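
For a feel of what those teraFLOPS mean in practice, here is a rough sketch using the standard convention that a dense matrix multiply costs about 2*m*k*n floating-point operations (one multiply and one add per inner-product term); the matrix size and the assumption of perfect utilization are illustrative only:

```python
# Rough sketch: mapping a dense matrix multiply onto a 180-TFLOPS chip.
# The 2*m*k*n count (one multiply + one add per inner-product term) is the
# standard convention; the matrix size here is arbitrary.

def matmul_flops(m: int, k: int, n: int) -> float:
    return 2.0 * m * k * n

TPU_V2_CHIP_FLOPS = 180e12            # per-chip figure quoted above

flops = matmul_flops(8192, 8192, 8192)
print(f"FLOPs for one 8192^3 matmul: {flops:.2e}")
print(f"ideal time on one chip:      {flops / TPU_V2_CHIP_FLOPS * 1e3:.2f} ms")
# Real workloads run well below peak, so this is only a lower bound.
```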

 

The fastest supercomputer today, Fugaku (Japan), delivers up to 540 petaFLOPS. It is notable for its architecture based on ARM CPUs, which represents a dramatic shift from the hardware traditionally employed in supercomputers and is living proof that there is still room for innovation in HPC.


The Fujitsu A64FX is one of the most powerful Arm-based processors in the world, and the world's fastest supercomputer contains over 150,000 of them.

 

ExaFLOPS (10^18)

Humankind has not yet reached the point where a single supercomputer can deliver such power in double-precision mode (the way FLOPS are usually measured). However, there is a distributed network that has surpassed 1 exaFLOPS, and during the pandemic its computing power reached a tremendous 2.43 exaFLOPS. That project is Folding@home.

Folding@home (FAH or F@h) is a non-commercial distributed computing project that aims to help scientists develop new therapeutics for a variety of diseases by simulating protein dynamics. This includes the process of protein folding and the movements of proteins, and it relies on simulations run on volunteers' personal computers. Folding@home is currently based at Washington University in St. Louis and led by Greg Bowman.

The project has used CPUs, GPUs, PlayStation 3s, and even some Sony Xperia smartphones for distributed computing and scientific research, making Folding@home one of the world's fastest computing systems. With heightened interest in the project as a result of the COVID-19 pandemic, the system achieved a speed of approximately 1.22 exaFLOPS by late March 2020 and reached 2.43 exaFLOPS by April 12, 2020, making it the world's first exaFLOPS computing system. Since its launch on October 1, 2000, the Pande Lab has produced 225 scientific research papers as a direct result of Folding@home, and results from the project's simulations agree well with experiments.
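
That headline figure is simply the sum of what every volunteer device contributes at a given moment. Here is a minimal sketch of that bookkeeping, with a device mix and per-device throughputs invented purely for illustration:

```python
# Hypothetical bookkeeping: aggregate throughput of a volunteer network is
# the sum over active devices. All counts and per-device FLOPS below are
# invented for illustration and are NOT real Folding@home statistics.
volunteer_devices = {
    "cpu": (400_000, 5e10),   # (active devices, FLOPS per device)
    "gpu": (300_000, 5e12),
}

total = sum(count * flops for count, flops in volunteer_devices.values())
print(f"aggregate: {total:.2e} FLOPS ({total / 1e18:.2f} exaFLOPS)")
```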
