
Towards Implementing Neural Networks on Edge IoT Devices

30 Oct 2024

Researchers at Tokyo University of Science have developed a magnetic RAM-based computing-in-memory architecture, paired with a new training algorithm, for running neural networks on edge IoT devices

Artificial intelligence (AI) and the Internet of Things (IoT) are both advancing rapidly, but the two technologies are evolving on very different tracks. AI, with its prowess in data analysis, image recognition, and natural language processing, is already reshaping fields from academia to industry.

Combining IoT with AI

Meanwhile, IoT devices, powered by remarkable advances in miniaturization and electronics, are turning everyday objects into internet-connected tools. The challenge arises when you try to combine the two: how do you bring power-hungry AI capabilities to the limited circuits of IoT devices?

The difficulty lies in size, power, and computing capacity. AI, especially with its artificial neural networks (ANNs), needs a hefty amount of processing power. Edge devices in IoT, however, are intentionally small and efficient, designed for low-power tasks with minimal processing capability. This creates a puzzle: how can engineers make these limited devices ‘smart’ without sacrificing size or efficiency?

This is where Professor Takayuki Kawahara and his graduate student, Mr. Yuya Fujiwara from Tokyo University of Science, come in. They’re not just trying to solve this puzzle—they’re creating a new playbook for it. In a recent paper published in IEEE Access, the duo introduced a cutting-edge solution that could make AI on tiny IoT devices more feasible. Their secret weapon? A novel training algorithm for a specialized type of ANN called a binarized neural network (BNN), running on an advanced computing-in-memory (CiM) architecture suited for IoT’s constraints.

“BNNs operate with weights and activations of just -1 and +1,” explains Kawahara. “This lets them reduce computational load by using just one bit per calculation.” However, the problem is that while BNNs are efficient in operation, the learning process itself typically requires real-number calculations, which consume more memory and power than IoT devices can spare. 
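To make this concrete, a binarized layer can be sketched in a few lines of NumPy (this is a generic illustration of how BNNs compute, not the authors' implementation):

    import numpy as np

    def binarize(x):
        # Map real values to {-1, +1}; zero is treated as +1.
        return np.where(x >= 0, 1, -1).astype(np.int8)

    def bnn_layer(activations, latent_weights):
        # Forward pass of one binarized layer: both operands are +/-1, so each
        # product is a single-bit agreement check and the sum is an integer count.
        a_bin = binarize(activations)
        w_bin = binarize(latent_weights)
        return a_bin @ w_bin  # the next layer re-binarizes this integer result

In hardware, each of those one-bit products maps onto a simple logic gate rather than a full multiplier, which is where the savings in power and chip area come from.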

To tackle this, Kawahara and Fujiwara created a new algorithm called the ternarized gradient BNN (TGBNN), specifically designed for efficient training on resource-constrained devices.

The TGBNN approach incorporates three major innovations. First, it uses ternary gradients during training while keeping the network's weights and activations binary. Second, the team refined the Straight-Through Estimator (STE) to improve backpropagation efficiency, enabling the BNN to learn faster and more accurately. Finally, they introduced a probabilistic update method that leverages the behavior of MRAM cells for further efficiency gains.
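The paper's exact update rules are not reproduced in this article, but the core idea of ternarized gradients combined with a straight-through estimator can be sketched roughly as follows (the threshold and clipping values here are illustrative placeholders, not figures from the paper):

    import numpy as np

    def ternarize(grad, threshold=0.05):
        # Quantize each gradient component to {-1, 0, +1}: small values are dropped,
        # larger ones keep only their sign. The threshold is an assumed hyperparameter.
        t = np.zeros_like(grad, dtype=np.int8)
        t[grad > threshold] = 1
        t[grad < -threshold] = -1
        return t

    def ste_mask(latent_weights, clip=1.0):
        # Straight-through estimator: sign() has zero gradient almost everywhere, so the
        # incoming gradient is passed through wherever |w| <= clip and blocked elsewhere.
        return (np.abs(latent_weights) <= clip).astype(np.float32)

    def update(latent_weights, grad, lr=0.01):
        # Latent real-valued weights accumulate ternary updates; the forward pass
        # only ever sees their sign.
        return latent_weights - lr * ternarize(grad) * ste_mask(latent_weights)

Restricting gradients to three levels keeps the training-time arithmetic close to the one-bit arithmetic used at inference, which is what makes on-device learning plausible for such constrained hardware.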

Realizing neural networks on edge devices

(a) Structure of the proposed neural network, which uses three-valued gradients during backpropagation (training) rather than real numbers, thus minimizing computational complexity. (b) A novel magnetic RAM cell leveraging spintronics for implementing the proposed technique in a computing-in-memory architecture.

Image credit: Takayuki Kawahara from Tokyo University of Science

The team then tested their approach on a CiM architecture, where they performed calculations within the memory itself. Using a custom-designed XNOR logic gate and an innovative Magnetic Random Access Memory (MRAM) system, they reduced the size of the essential calculation circuit by half. This design allowed them to manipulate MRAM cells using two mechanisms—spin-orbit torque and voltage-controlled magnetic anisotropy—to optimize storage and processing.
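The equivalence that makes an XNOR gate sufficient is easy to see in software: if +1 is encoded as bit 1 and -1 as bit 0, then multiplying two binary values is the same as XNOR-ing their bits, and the accumulation becomes a population count. A minimal sketch of that identity (generic, and of course far removed from the actual spintronic circuit):

    def xnor_popcount_dot(a_bits, w_bits, n):
        # a_bits and w_bits are integers whose lowest n bits encode +/-1 values
        # (bit 1 means +1, bit 0 means -1).
        mask = (1 << n) - 1
        agree = bin(~(a_bits ^ w_bits) & mask).count("1")  # positions where the signs match
        # dot product = (#agreements) - (#disagreements) = 2*agree - n
        return 2 * agree - n

In the proposed CiM design, this logic is not executed by a processor at all: the XNOR and accumulation happen inside the MRAM array itself, which is why the supporting calculation circuitry can shrink.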

Testing the model’s performance on the popular MNIST dataset, a classic benchmark for handwritten digit recognition, the researchers achieved an impressive 88% accuracy using their Error-Correcting Output Codes (ECOC)-based learning. Better still, their design converged faster during training, matching traditional BNNs in structure while outpacing them in training speed.
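Error-Correcting Output Codes replace the usual one-hot output layer with a longer binary codeword per class; at inference, the predicted bit string is matched to the closest codeword by Hamming distance, so a few flipped output bits need not change the decision. A generic sketch of the decoding step (the codebook below is random and purely illustrative, not the one used in the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    NUM_CLASSES, CODE_LEN = 10, 15  # ten MNIST digits; the code length is an assumed example value
    codebook = rng.integers(0, 2, size=(NUM_CLASSES, CODE_LEN))  # one binary codeword per class

    def ecoc_decode(output_bits):
        # output_bits: length-CODE_LEN array of 0/1 outputs from the binarized network.
        hamming = np.sum(codebook != output_bits, axis=1)
        return int(np.argmin(hamming))  # predicted class = nearest codeword

This kind of redundant binary output pairs naturally with a BNN, whose outputs are already single bits.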

The implications of this development are wide-ranging. Wearables, for instance, could become more intelligent, autonomous, and compact, capable of analyzing data directly on the device without needing to sync constantly with the cloud. Homes could become smarter, responding to patterns and behaviors in real time, all while using less energy. In an age where sustainability is essential, these advances in power efficiency could contribute to reducing overall energy consumption.

Kawahara and Fujiwara’s work takes us closer to a future where AI doesn’t just sit on massive servers but lives directly in the objects around us. Their progress hints at a world where our devices are not only connected but smart, responsive, and adaptive—a world that feels closer with every innovation.

 

