A Minimal Matrix-Vector Product Benchmark

Date: 04 June 2024

Revision: 1.0

Changes: Initial release

Key Points

ComputeRAM™ offloads up to 99% of model operations to memory, significantly reducing bus activity and freeing up the processor.

An Arm Cortex-M0 with ComputeRAM™ achieves up to 139x faster processing and 158x greater energy efficiency compared to one with conventional SRAM.

ComputeRAM™ enables general-purpose MCUs to handle heavy workloads in resource-limited devices, increasing their addressable market and reducing costs, complexity, and time to market for chip and device makers.

At the most basic level, ComputeRAM™ enables embedded microcontrollers to perform fast and efficient matrix-vector multiplications (MVMs). MVMs are low-level linear-algebra primitives that form the basis of numerous algorithms, from signal processing to AI and machine learning.
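For reference, the MVM primitive itself is a simple nested loop. The sketch below is a plain-C baseline of the kind a conventional Cortex-M0 would execute from SRAM; the int8 weight / int32 accumulator types and the row-major layout are illustrative assumptions, not details from the benchmark.

```c
#include <stdint.h>

/* Baseline matrix-vector multiply: y = A * x.
 * A is rows x cols, stored row-major; x has cols entries; y has rows entries.
 * int8 inputs with int32 accumulation are assumed for illustration. */
static void mvm(const int8_t *A, const int8_t *x, int32_t *y,
                int rows, int cols)
{
    for (int r = 0; r < rows; r++) {
        int32_t acc = 0;
        for (int c = 0; c < cols; c++) {
            /* Each output element costs 'cols' multiply-accumulates,
             * every one of which fetches a weight over the memory bus. */
            acc += (int32_t)A[r * cols + c] * x[c];
        }
        y[r] = acc;
    }
}
```

The inner loop makes clear why this kernel dominates bus traffic on a conventional MCU: every multiply-accumulate fetches a weight from memory, which is precisely the work an in-memory computing engine can absorb.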

By offloading up to 99% of model operations to memory, reducing bus activity, and leveraging an advanced in-memory computing engine, an Arm Cortex-M0 enhanced with ComputeRAM™ delivers up to 139x faster processing and 158x better energy efficiency than an Arm Cortex-M0 with conventional SRAM memory.

The gain in speed enables even a basic processor, such as the Arm Cortex-M0, to handle heavy workloads that would otherwise take over a hundred times longer to process. Meanwhile, the reduction in energy consumption lowers operational costs and expands the range of applications that can be deployed efficiently on resource-limited devices. Together, these gains establish ComputeRAM™ as a solution that removes the need for dedicated AI or signal-processing accelerators in edge applications.

If you would like to know more, reach out for a free copy of our MVM benchmarking application note using the contact form below.
