Meet the Cerebras Wafer Scale Engine, the largest computer chip ever created
California artificial intelligence start-up Cerebras Systems has unveiled the largest semiconductor chip ever manufactured, revealing the “world’s largest computer chip” at the Hot Chips conference in California.
Dubbed the Cerebras Wafer Scale Engine (it sounds like the sort of thing that will eventually decimate humanity), it packs an astonishing 1.2 trillion transistors. It differs from other giant chips in that it is a single monolithic chip spanning one giant 46,225mm² wafer, rather than multiple dies linked together.
To put its size into context, that’s some 56 times larger than Nvidia’s biggest chip, the Tesla V100. Nvidia’s top-end GPU has 21.1 billion transistors and measures 815mm². The Cerebras Wafer Scale Engine has 400,000 AI-optimised cores, 18GB of on-chip memory, and is capable of 9PB/s of memory bandwidth, which is bordering on insanity. That’s, er, 10,000 times the memory bandwidth of the Tesla V100.
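If you fancy checking the arithmetic behind those comparisons, it works out as a quick back-of-the-envelope calculation. This sketch uses only the figures quoted above (die areas and the V100’s roughly 900GB/s of HBM2 bandwidth, implied by the 10,000x claim), not independently measured numbers:

```python
# Sanity-checking the size and bandwidth comparisons, using the
# figures as quoted in the article.

wse_area_mm2 = 46_225          # Cerebras Wafer Scale Engine die area
v100_area_mm2 = 815            # Nvidia Tesla V100 die area

wse_bandwidth_gb_s = 9_000_000  # 9PB/s expressed in GB/s
v100_bandwidth_gb_s = 900       # Tesla V100 HBM2 bandwidth (assumed ~900GB/s)

area_ratio = wse_area_mm2 / v100_area_mm2
bandwidth_ratio = wse_bandwidth_gb_s / v100_bandwidth_gb_s

print(f"Area ratio: {area_ratio:.1f}x")            # roughly 56.7x
print(f"Bandwidth ratio: {bandwidth_ratio:,.0f}x")  # 10,000x
```

The area ratio comes out at about 56.7, which is where the “some 56 times larger” figure comes from.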
It’s a record-breaker all right, although hardly on the cutting edge in terms of process, as it uses TSMC’s 16nm fabrication. That’s by the by though, as there are some immense technical challenges in reliably manufacturing a chip of such vast size. Not least of these are yield issues, which demand a mature process to minimise defects.
“Designed from the ground up for AI work, the Cerebras WSE contains fundamental innovations that advance the state-of-the-art by solving decades-old technical challenges that limited chip size — such as cross-reticle connectivity, yield, power delivery, and packaging,” said Andrew Feldman, founder and CEO of Cerebras. He also claims the Wafer Scale Engine can deliver “hundreds or thousands of times the performance of existing solutions at a tiny fraction of the power draw and space.”
This is huge news in the AI industry, although obviously little more than a curiosity for gamers. The Cerebras Wafer Scale Engine is so far outside the realms of possibility for gaming it’s absurd, but for AI engineers such a mammoth chip could dramatically reduce the time it takes to train models.
“Cerebras has made a tremendous leap forward with its wafer-scale technology, implementing far more processing performance on a single piece of silicon than anyone thought possible,” said Linley Gwennap, principal analyst at the Linley Group.
“To accomplish this feat, the company has solved a set of vicious engineering challenges that have stymied the industry for decades, including implementing high-speed die-to-die communication, working around manufacturing defects, packaging such a large chip, and providing high-density power and cooling. By bringing together top engineers in a variety of disciplines, Cerebras created new technologies and delivered a product in just a few years, an impressive achievement.”