The current HPC has
36 compute nodes that perform the calculations, with a total of
5 560 CPU cores and
13,8 terabytes (TB) of system memory (RAM). Two new GPU servers were also recently added to the HPC. These servers are predominantly used for image processing, artificial intelligence (AI), and machine learning. Each GPU server has
four NVIDIA® Tesla V100 GPU cards installed. Each of these GPUs is capable of performing
7,8 double-precision teraFLOPS (7,8 trillion double-precision floating-point operations per second). Each GPU card thus has the equivalent performance throughput of about 18 midrange desktop computers, or roughly 72 midrange computers per GPU server.
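The desktop-equivalence figures above can be checked with a quick back-of-the-envelope calculation. The sketch below assumes a midrange desktop sustains roughly 0,43 double-precision teraFLOPS; that value is inferred from the 18-desktops-per-GPU comparison itself, not from any benchmark.

```python
# Back-of-the-envelope check of the GPU-versus-desktop comparison.
# ASSUMPTION: a midrange desktop sustains ~0.43 double-precision
# teraFLOPS (inferred from the text, not measured).
GPU_TFLOPS = 7.8        # double-precision teraFLOPS per Tesla V100
DESKTOP_TFLOPS = 0.43   # assumed midrange desktop throughput
GPUS_PER_SERVER = 4

desktops_per_gpu = round(GPU_TFLOPS / DESKTOP_TFLOPS)      # 18 desktops
desktops_per_server = GPUS_PER_SERVER * desktops_per_gpu   # 72 desktops
print(desktops_per_gpu, desktops_per_server)
```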
The current
90 TB of storage is made available to all computers in the HPC over a
fast Intel® Omni-Path network connection. The Omni-Path network is capable of
transmitting 200 Gb/s between compute nodes, at least 200 times faster than a user would experience on a normal computer network. The HPC was also the first in Africa to make use of this network technology.
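To put that fabric speed in perspective, the sketch below estimates how long transferring the HPC's full 90 TB store would take at full line rate. It assumes the quoted figure is 200 gigabits per second (consistent with the "200 times faster than a normal 1 Gb/s network" comparison) and ignores protocol overhead.

```python
# Rough transfer-time comparison: Omni-Path fabric vs. 1 Gb/s Ethernet.
# ASSUMPTIONS: 200 Gb/s fabric rate, zero protocol overhead.
STORAGE_BITS = 90e12 * 8            # the HPC's 90 TB store, in bits

fabric_seconds = STORAGE_BITS / 200e9   # over the Omni-Path fabric
office_seconds = STORAGE_BITS / 1e9     # over typical 1 Gb/s Ethernet

print(fabric_seconds / 3600)    # 1.0 hour on the fabric
print(office_seconds / 86400)   # about 8.3 days on an office network
```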
The overall calculation capacity of the system is referred to as the theoretical peak performance of the system. The
theoretical peak performance of the HPC is
137,432 teraFLOPS (double precision). A teraFLOP is the unit of measurement that indicates how many floating-point operations can be performed per second; one teraFLOP is one trillion (10 to the power of 12) floating-point operations per second. Imagine it were humanly possible to perform one mathematical calculation every second of the day, 24/7, every day of the year: if you could do that non-stop for 31 688,77 years, you would have executed the same number of calculations that a one-teraFLOP computer performs in a single second.
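The 31 688,77-year figure can be reproduced directly. The sketch below assumes the mean tropical year of about 365,2422 days, which appears to be the year length the figure is based on.

```python
# Reproduce the teraFLOP analogy: how many years of one hand calculation
# per second equal the work a 1-teraFLOP machine does in one second?
OPS = 1e12                                 # operations in one teraFLOP-second
SECONDS_PER_YEAR = 365.2422 * 24 * 3600    # assumed mean tropical year

years = OPS / SECONDS_PER_YEAR
print(f"{years:,.2f}")   # about 31,688.76 years, matching the quoted figure
```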
For more information on the HPC, contact
Albert van Eck (Director: HPC), and look out for
training opportunities offered through the DSC.