- 400,000 Sparse Linear Algebra Compute (SLAC) cores
- 18 GB of on-chip SRAM, all accessible within a single clock cycle, providing 9 PB/s of memory bandwidth
- 100 Pb/s interconnect bandwidth in a 2D mesh
- Manufactured by TSMC on its 16nm process technology
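To put the memory figure in perspective, a quick back-of-envelope calculation shows how much SRAM each core would get if the 18 GB were split evenly across all 400,000 cores. The even split is an assumption for illustration; the announcement does not state how the SRAM is actually partitioned.

```python
# Rough per-core SRAM estimate, assuming an even split across cores
# (an assumption -- the actual partitioning is not stated above).
CORES = 400_000
SRAM_BYTES = 18 * 1024**3  # 18 GB, treated as GiB here

per_core_kib = SRAM_BYTES / CORES / 1024
print(f"~{per_core_kib:.1f} KiB of SRAM per core")
```

That works out to roughly 47 KiB per core, i.e. each core has fast local memory on the order of a CPU's L1 cache.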
The SLAC cores are flexible, programmable, and optimized for the sparse linear algebra that underpins neural network computation. The cores are linked together by a fine-grained, all-hardware, on-chip mesh-connected communication network called Swarm.
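The topology of a 2D mesh like Swarm can be sketched in a few lines: each core at a grid position connects only to its immediate north, south, east, and west neighbors, so edge and corner cores have fewer links. This is a generic illustration of a 2D mesh, not Cerebras's actual routing API; the function name and coordinates are invented for the sketch.

```python
# Illustrative only: neighbor set for a core in a rows x cols 2D mesh.
# (Hypothetical helper -- not part of any Cerebras interface.)
def mesh_neighbors(row, col, rows, cols):
    candidates = [(row - 1, col), (row + 1, col),
                  (row, col - 1), (row, col + 1)]
    # Keep only positions that fall inside the grid.
    return [(r, c) for r, c in candidates if 0 <= r < rows and 0 <= c < cols]

print(mesh_neighbors(0, 0, 4, 4))  # a corner core has only 2 neighbors
print(mesh_neighbors(2, 2, 4, 4))  # an interior core has 4
```

Keeping all links nearest-neighbor is what lets such a fabric be implemented entirely in hardware with short, fast wires.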
“Designed from the ground up for AI work, the Cerebras WSE contains fundamental innovations that advance the state-of-the-art by solving decades-old technical challenges that limited chip size—such as cross-reticle connectivity, yield, power delivery, and packaging,” said Andrew Feldman, founder and CEO of Cerebras Systems. “Every architectural decision was made to optimize performance for AI work. The result is that the Cerebras WSE delivers, depending on workload, hundreds or thousands of times the performance of existing solutions at a tiny fraction of the power draw and space.”
The Cerebras product unveiling occurred at this week's Hot Chips Conference at Stanford University.