Better Hardware Could Turn Zeros into AI Heroes
New hardware that takes advantage of sparsity in AI models could significantly reduce their energy consumption and increase performance.

When it comes to AI models, size matters. Even though some artificial-intelligence experts warn that scaling up large language models (LLMs) is hitting diminishing performance returns, companies are still coming out with ever larger AI tools. Meta's latest Llama release had a staggering 2 trillion parameters that define the model.
As models grow in size, their capabilities increase. But so do the energy demands and the time it takes to run the models, which increases their carbon footprint.

To mitigate these issues, people have turned to smaller, less capable models and to lower-precision numbers for the model parameters wherever possible. But there is another path that may retain a staggeringly large model's high performance while reducing both its runtime and its energy footprint.
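The lower-precision idea mentioned above can be illustrated with a minimal sketch: scale 32-bit floating-point weights into 8-bit integers, trading a small rounding error for a 4x reduction in memory. The function names and values here are illustrative, not drawn from any particular model or library.

```python
import numpy as np

def quantize_int8(w):
    """Map float weights into int8 using a single per-tensor scale."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

# Hypothetical weights, for illustration only
w = np.array([0.1, -0.8, 0.45, 0.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)

print(q.nbytes, "bytes vs", w.nbytes)  # 4 bytes vs 16
# Each weight is off by at most half a quantization step
print(float(np.max(np.abs(w - w_hat))) < s)
```

The single shared scale keeps the scheme simple; real quantization schemes often use per-channel scales and calibration to limit the accuracy loss.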
This approach involves befriending the zeros inside large AI models. For many models, most of the parameters—the weights and activations—are actually zero, or so close to zero that they could be treated as such without losing accuracy.

This quality is known as sparsity. Sparsity offers a significant opportunity for computational savings: Instead of wasting time and energy adding or multiplying zeros, these calculations could simply be skipped; rather than storing lots of zeros in memory, one need only store the nonzero parameters.
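The savings described above can be sketched in a few lines: store only the nonzero weights (value plus index), then compute a dot product that never touches the zeros. This is a toy illustration of the principle, not the compressed formats any particular accelerator uses; the threshold and data are made up.

```python
import numpy as np

def compress(weights, threshold=1e-6):
    """Keep only the entries whose magnitude exceeds the threshold."""
    idx = np.flatnonzero(np.abs(weights) > threshold)
    return idx, weights[idx]

def sparse_dot(idx, vals, activations):
    """Dot product that multiplies only the stored nonzero weights."""
    return float(np.dot(vals, activations[idx]))

# A mostly-zero weight vector, typical of sparse models
weights = np.array([0.0, 0.5, 0.0, 0.0, -1.2, 0.0, 0.0, 2.0])
activations = np.arange(8, dtype=float)

idx, vals = compress(weights)
dense = float(np.dot(weights, activations))
sparse = sparse_dot(idx, vals, activations)

print(len(idx), "of", len(weights), "weights stored")  # 3 of 8
print(abs(dense - sparse) < 1e-12)  # both paths agree
```

Here the sparse path stores and multiplies 3 values instead of 8; at the scale of trillion-parameter models, skipping the zeros is where the time and energy savings come from.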
Unfortunately, today's popular hardware, such as multicore CPUs and GPUs, does not naturally take full advantage of sparsity. To fully leverage sparsity, researchers and engineers need to rethink and re-architect each piece of the design stack, including the hardware, low-level firmware, and application software.

In our research group at Stanford University, we have developed the first (to our knowledge) piece of hardware that's capable of calculating all kinds of sparse and traditional workloads efficiently. The energy savings varied widely over the workloads, but on average our chip consumed one-seventieth the energy of a CPU, and performed the computation on average eight times as fast.
To do this, we had to engineer the hardware, low-level firmware, and software from the ground up to take advantage of sparsity. We hope this is just the beginning of hardware and model development that will allow for more energy-efficient AI.

Our hardware accelerator, Onyx, can take advantage of sparsity from the ground up, whether it's structured or unstructured. Onyx is the first programmable accelerator to support both sparse and dense computation; it's capable of accelerating key operations in both domains.
The Onyx chip, built on a coarse-grained reconfigurable array (CGRA), is composed of flexible, programmable processing-element (PE) tiles and memory (MEM) tiles. The memory tiles store compressed matrices and other data formats. The processing-element tiles operate on the compressed matrices, eliminating all unnecessary and ineffectual computation.

We evaluated the efficiency gains of our hardware by looking at the product of the energy used and the time it took to compute, called the energy-delay product (EDP).
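The EDP metric rewards improvements on both axes at once, as a short calculation shows. The energy and delay figures below are made up for illustration; only the 70x energy and 8x speed ratios come from the results reported above.

```python
def edp(energy_joules, delay_seconds):
    """Energy-delay product: lower is better on both axes."""
    return energy_joules * delay_seconds

# Hypothetical absolute numbers chosen to match the reported ratios:
# the chip uses 1/70 the energy and runs 8x faster than the CPU.
cpu_edp = edp(energy_joules=70.0, delay_seconds=8.0)
chip_edp = edp(energy_joules=1.0, delay_seconds=1.0)

# The improvements multiply: 70 * 8 = 560x better EDP
print(cpu_edp / chip_edp)  # 560.0
```

Because the two factors multiply, EDP captures the combined benefit: a chip that only saved energy by running much longer would score far worse than one that improves both energy and speed.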
Source: IEEE Spectrum