Data & Analytics

"We Can Solve This Problem in an Amount of Time That No Number of GPUs or CPUs Can Achieve"

Startup Cerebras benchmarked its pint-sized computer against 16,000 Xeon cores in the Department of Energy's Joule supercomputer on a problem of computational fluid dynamics.

Cerebras' CS-1 computer is a refrigerator-sized machine that contains the largest computer chip ever made.
Credit: Cerebras.

For certain classes of problems in high-performance computing, all supercomputers have an unavoidable and fatal bottleneck: memory bandwidth.

That is the argument made this week by one startup at SC20, the supercomputing conference that normally moves to a different city each year but is being held as a virtual event this year because of the COVID-19 pandemic.

The company making that argument is Cerebras Systems, the artificial intelligence computer maker that contends its machine can solve problems at speeds no existing system can match.

"We can solve this problem in an amount of time that no number of GPUs or CPUs can achieve," Cerebras's chief executive officer, Andrew Feldman, told ZDNet in an interview by Zoom.

"This means the CS-1 for this work is the fastest machine ever built, and it's faster than any combination of clustering of other processors," he added.

The argument comes in the form of a formal research paper titled "Fast Stencil-Code Computation on a Wafer-Scale Processor." 


The paper was written by Cerebras scientist Kamil Rocki and colleagues, in collaboration with scientists at the National Energy Technology Laboratory (NETL), one of multiple national laboratories of the US Department of Energy. Researchers at scientific research firm Leidos also participated in the work. The paper was posted last month on the arXiv preprint server.

The class of problem being solved focuses on systems of partial differential equations (PDEs). The PDE workloads crop up in many fundamental challenges in physics and other scientific disciplines. They include problems of modeling basic physical processes such as computational fluid dynamics and simulating multiple interacting bodies in astronomical models of the universe.
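Stencil codes of the kind the paper benchmarks solve such PDEs by repeatedly updating each grid point from its nearest neighbors. As a rough illustration (not the paper's actual kernel), a minimal Jacobi iteration for the 2D Laplace equation looks like this:

```python
import numpy as np

def jacobi_step(u):
    """One Jacobi iteration for the 2D Laplace equation:
    each interior point becomes the average of its four neighbors."""
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])
    return v

# Toy problem: an 8x8 grid with one hot boundary edge held at 1.0.
u = np.zeros((8, 8))
u[0, :] = 1.0
for _ in range(100):
    u = jacobi_step(u)
```

Every update touches a point's neighbors, so on a cluster each processor must exchange grid boundaries with its peers each iteration, which is exactly where the memory and communication bottleneck the article describes comes in.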

The high-performance work on fluid dynamics is an interesting departure for Cerebras, which has so far focused on machine learning problems. The work that led to the paper came together over a period of nine months and was the result of a serendipitous encounter between one of NETL's researchers and Cerebras' executive in charge of product development, Feldman said.

The PDE workloads scale poorly, Feldman noted: increasing the number of processors in a clustered or multiprocessor system yields diminishing returns once memory bandwidth and inter-processor communication, rather than raw compute, dominate the runtime.

In the research paper, Rocki and collaborators contend that the problem instead demands the large on-chip memory and low-latency communication between processing elements that the Cerebras computer offers.
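A back-of-envelope roofline estimate shows why such codes are bandwidth-bound rather than compute-bound. The numbers below are illustrative assumptions, not measurements from the paper:

```python
# Illustrative roofline sketch: a 7-point 3D stencil performs ~7 flops
# per grid point while streaming roughly 2 double-precision values per
# point (one read, one write) through main memory.
flops_per_point = 7
bytes_per_point = 2 * 8                         # two 8-byte doubles
intensity = flops_per_point / bytes_per_point   # flops per byte

mem_bw = 200e9   # assumed memory bandwidth: 200 GB/s per socket
peak = 1.5e12    # assumed peak compute: 1.5 Tflop/s per socket

# Attainable rate is capped by whichever limit binds first.
attainable = min(peak, intensity * mem_bw)
print(f"arithmetic intensity: {intensity:.4f} flop/byte")
print(f"attainable: {attainable / 1e9:.1f} Gflop/s "
      f"of {peak / 1e9:.0f} Gflop/s peak")
```

Under these assumed figures the stencil sustains well under a tenth of peak, no matter how many such processors are added, which is the shape of the argument the paper makes for keeping the whole problem in fast on-chip memory.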

The Cerebras computer, introduced a year ago, is a refrigerator-sized machine that contains the largest computer chip ever made, known as the "wafer-scale engine," or WSE. The chip is a single silicon wafer divided into the equivalent of 84 virtual chips, each with 4,539 individual computing cores, for a total of 381,276 computing cores that can perform mathematical operations in parallel. 

Read the full story here.

Find the paper here.