Device promises massive energy savings for AI applications

Engineers at the University of Minnesota Twin Cities have unveiled a cutting-edge hardware device set to transform the landscape of artificial intelligence (AI) computing.

Published in the prestigious journal npj Unconventional Computing, their research introduces a state-of-the-art technology capable of reducing energy consumption for AI tasks by an unprecedented factor of at least 1,000.

Energy and AI

Traditional AI processes often face significant energy demands due to frequent data transfers between logic and memory components.

Addressing this challenge, the University of Minnesota team has pioneered Computational Random-Access Memory (CRAM). Unlike conventional methods, CRAM allows data to be processed entirely within the memory array, eliminating the need for data to travel back and forth between memory and processing units.
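To make the distinction concrete, here is a minimal, purely illustrative Python sketch (a toy model, not the team's actual hardware design): it contrasts a conventional flow, where every operand is shuttled from memory to a separate processor, with an in-memory flow where the operation happens in place. The transfer counter stands in, very loosely, for the energy cost of data movement.

```python
# Toy model (not the CRAM hardware): count how many memory <-> processor
# transfers each computing style incurs for the same dot product.

def von_neumann_dot(a, b):
    """Conventional flow: every operand crosses the memory bus."""
    transfers = 0
    acc = 0
    for x, y in zip(a, b):
        transfers += 2          # fetch both operands into the processor
        acc += x * y            # compute in the separate logic unit
    transfers += 1              # write the result back to memory
    return acc, transfers

def in_memory_dot(a, b):
    """In-memory flow: the array computes in place; nothing crosses the bus."""
    acc = sum(x * y for x, y in zip(a, b))  # stands in for in-array logic
    return acc, 0               # zero memory <-> processor transfers

a, b = list(range(1000)), list(range(1000))
print("von Neumann transfers:", von_neumann_dot(a, b)[1])  # 2001
print("in-memory transfers:  ", in_memory_dot(a, b)[1])    # 0
```

In a real chip the computation in the second case would be carried out by logic built into the memory array itself; the point of the sketch is only that the bus traffic, which dominates the energy bill in conventional AI workloads, drops away.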

Yang Lv, the paper’s lead author and a postdoctoral researcher at the University of Minnesota, described CRAM as a revolutionary leap in efficiency: “This work is the first experimental demonstration of CRAM, where the data can be processed entirely within the memory array without the need to leave the grid where a computer stores information.”

Making AI more energy efficient

With global energy consumption for AI projected to double by 2026, innovations like CRAM could not come at a more crucial time. According to Lv and his colleagues, CRAM-based systems could achieve energy savings of up to 2,500 times compared to traditional methods, setting a new benchmark for energy-efficient computing.
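As a rough back-of-the-envelope illustration of what those factors mean, the snippet below applies the reported savings factors to a hypothetical baseline energy budget; the baseline figure is invented for illustration, only the 1,000x and 2,500x factors come from the researchers' reporting.

```python
# Back-of-the-envelope: apply the reported savings factors to a
# hypothetical baseline energy budget for an AI workload.
baseline_kwh = 1000.0           # hypothetical conventional energy cost
for factor in (1_000, 2_500):   # savings factors reported for CRAM
    print(f"{factor:>5}x savings -> {baseline_kwh / factor:.2f} kWh")
# 1000x savings -> 1.00 kWh
# 2500x savings -> 0.40 kWh
```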

CRAM leverages Magnetic Tunnel Junctions (MTJs), advanced nanostructures that use electron spin rather than electrical charge to store data. This spintronic approach not only enhances speed and efficiency but also ensures resilience in demanding environments.

Ulya Karpuzcu, an expert in computing architecture and co-author of the paper, highlighted CRAM’s versatility: “As an extremely energy-efficient digital based in-memory computing substrate, CRAM is very flexible in that computation can be performed in any location in the memory array.”

Looking ahead, the University of Minnesota team plans to collaborate with semiconductor industry leaders to scale up CRAM technology, paving the way for its integration into mainstream AI applications. Their efforts are supported by grants from agencies including DARPA, NIST, and NSF, as well as from Cisco Inc., underscoring the project’s national significance and potential for global impact.

The unveiling of CRAM represents not only a technological milestone but also a glimpse of a future where AI can achieve unprecedented efficiency without compromising performance.
