January 21, 2026

Rethinking AI hardware to meet growing energy demands

A new Frontiers in Science lead article outlines brain-inspired hardware approaches designed to break today’s computing bottlenecks. Prof. Kaushik Roy led the research.
Kaushik Roy, Edward G. Tiedemann Jr. Distinguished Professor of Electrical and Computer Engineering

As AI models scale rapidly, energy efficiency has become a critical challenge. The new lead article in Frontiers in Science examines brain-inspired hardware approaches designed to break through today’s computing bottlenecks.

The article, “Breaking the memory wall: next-generation artificial intelligence hardware,” is led by Kaushik Roy, the Edward G. Tiedemann Jr. Distinguished Professor in the Elmore Family School of Electrical and Computer Engineering at Purdue University. Roy co-authored the paper with researchers from Purdue and Georgia Tech, including Purdue ECE alumnus Arijit Raychowdhury, now a professor and Steve W. Chaddick Chair in Georgia Tech’s School of Electrical and Computer Engineering.

At the center of the research is the so-called “memory wall,” a long-standing bottleneck in computing caused by the physical separation of processing and memory. Constantly shuttling data between these components consumes significant time and energy, a challenge that becomes especially severe for today’s large-scale AI models.
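To make the cost of data movement concrete, here is a minimal back-of-envelope sketch (not from the article) comparing the arithmetic energy of a matrix-vector product against the energy of fetching its weights from off-chip memory. The per-operation energy figures are illustrative assumptions; real values vary widely with technology.

```python
# Back-of-envelope illustration of the "memory wall" (illustrative numbers,
# not taken from the article): in a conventional architecture, every weight
# must be fetched from off-chip memory before it can be used, and that fetch
# can cost orders of magnitude more energy than the arithmetic itself.

ENERGY_MAC_PJ = 1.0          # assumed energy of one on-chip multiply-accumulate (pJ)
ENERGY_DRAM_READ_PJ = 200.0  # assumed energy of one operand fetch from off-chip DRAM (pJ)

def matvec_energy_pj(rows: int, cols: int) -> dict:
    """Estimate energy for y = W @ x when W must stream in from DRAM."""
    macs = rows * cols            # one MAC per weight
    weight_fetches = rows * cols  # each weight read once from DRAM
    return {
        "compute_pJ": macs * ENERGY_MAC_PJ,
        "data_movement_pJ": weight_fetches * ENERGY_DRAM_READ_PJ,
    }

if __name__ == "__main__":
    est = matvec_energy_pj(rows=4096, cols=4096)  # roughly one layer of a large model
    ratio = est["data_movement_pJ"] / est["compute_pJ"]
    print(f"compute:       {est['compute_pJ']:.3e} pJ")
    print(f"data movement: {est['data_movement_pJ']:.3e} pJ")
    print(f"movement costs ~{ratio:.0f}x the compute energy")
```

Under these assumed figures, moving the data dominates the energy budget, which is the imbalance the memory wall refers to.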

“Our current computing systems were never designed with modern AI in mind,” Roy said. “To sustainably scale artificial intelligence, we need hardware that more closely integrates memory and computation, much like the human brain, while dramatically reducing the energy cost of moving data.”

The article examines emerging hardware approaches that aim to break through the memory wall by bringing computation directly into memory. One promising strategy, known as compute-in-memory, allows calculations to occur where data are stored, significantly improving energy efficiency. The authors also explore brain-inspired designs, including neuromorphic and spiking neural network hardware, which process information in sparse, event-driven ways similar to biological systems.
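As a rough functional sketch of the compute-in-memory idea (not the authors’ design, and with hypothetical class and parameter names), the behavior of an analog crossbar can be mimicked in software: weights are stored in the array as conductances, input voltages are applied to the rows, and the column currents sum the products in place, so the matrix-vector multiply happens where the weights are stored rather than after shipping them to a separate processor.

```python
import numpy as np

class CrossbarArray:
    """Toy model of an analog compute-in-memory crossbar (illustrative sketch).

    Weights live in the array as conductances; applying input voltages to the
    rows yields column currents equal to the matrix-vector product, so the
    multiply-accumulate happens where the data are stored.
    """

    def __init__(self, weights: np.ndarray, g_max: float = 1e-4):
        # Map weights into a bounded conductance range, as a physical device would.
        w_max = float(np.abs(weights).max()) or 1.0
        self.conductance = weights / w_max * g_max   # siemens
        self.scale = w_max / g_max                   # to recover weight units

    def matvec(self, voltages: np.ndarray) -> np.ndarray:
        # Ohm's law per cell and Kirchhoff's current law per column:
        # each column current is the analog sum of (conductance * voltage).
        currents = self.conductance.T @ voltages
        return currents * self.scale                 # back to weight units

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(size=(8, 4))   # 8 inputs, 4 outputs
    x = rng.normal(size=8)
    xbar = CrossbarArray(W)
    # Same result as a conventional multiply, but computed "in memory".
    print(np.allclose(xbar.matvec(x), W.T @ x))
```

The sketch only captures the functional idea; the neuromorphic and spiking designs discussed in the article go further by making such computation sparse and event-driven.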

Rather than focusing on a single technology, the paper provides a comprehensive roadmap — spanning algorithms, hardware architectures, circuits and devices — for building future AI systems that are faster, more energy-efficient and better suited for real-world applications.

To further explore these ideas, Roy will co-host a live webinar on Feb. 12 with Raychowdhury, offering a deep dive into the article’s findings and their implications for the future of AI hardware.

In addition to the scholarly publication, Frontiers in Science has released a companion content hub featuring a video overview, expert insights and a kid-friendly version of the research, highlighting the broad impact and accessibility of the work.

Explore the full content hub, including the video and educational resources, here.