Binarised networks — the networks that will underpin future edge applications

Neural networks are an extremely valuable tool, and it’s now hard to imagine a technologically driven world without them. But while they are extremely effective at the tasks they were designed for, they have two major issues that have so far limited their potential in the newest edge applications coming to market: they’re expensive and energy hungry.

The issues with neural networks

These issues exist because of the way neural networks work. Neural networks consist of many layers of weighted sums. Each weighted sum produces a number that indicates the likely presence or absence of some feature. For example, early layers of weighted sums might combine raw image data into simple features, and later layers recombine those features to identify an object. It’s an incredibly detailed process.
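To make this concrete, here is a minimal sketch of a single weighted sum, the basic building block described above. The function name and the use of plain 32-bit floats are assumptions for illustration, not a reference to any particular framework.

```c
#include <stddef.h>

/* One neuron's output is a weighted sum of its inputs: the larger the
 * sum, the more strongly the feature that neuron encodes is present.
 * Illustrative sketch only. */
float weighted_sum(const float *inputs, const float *weights,
                   size_t n, float bias)
{
    float acc = bias;
    for (size_t i = 0; i < n; i++) {
        acc += inputs[i] * weights[i];   /* multiply-accumulate */
    }
    return acc;   /* typically passed through an activation function */
}
```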

In seeking the highest possible accuracy, many networks apply a floating-point approach to this analysis, which requires a relatively large amount of compute power, memory, and time to execute. Whilst the cloud can support intensive compute, many applications that rely on edge processing cannot. The dual issues of cost and power make it unsustainable to run neural networks based on floating-point numbers at the edge, although significant advances have been made in quantising neural networks into smaller, fixed-point numbers for embedded applications. Today, 8-bit arithmetic is common, with little impact on the integrity of the answers generated.
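As a rough illustration of what that quantisation looks like, the sketch below maps a floating-point value to an 8-bit integer using a single scale factor. This is a minimal, symmetric scheme for illustration; production toolchains use more sophisticated calibration, and the function names here are assumptions.

```c
#include <stdint.h>
#include <math.h>

/* Symmetric 8-bit quantisation sketch: a real value is approximated as
 * q * scale, where q is an int8_t. The scale would typically be derived
 * from the largest magnitude seen in the tensor. */
int8_t quantise(float x, float scale)
{
    float q = roundf(x / scale);
    if (q >  127.0f) q =  127.0f;   /* clamp to the int8 range */
    if (q < -128.0f) q = -128.0f;
    return (int8_t)q;
}

float dequantise(int8_t q, float scale)
{
    return (float)q * scale;
}
```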

Binarised networks offer the next major step-change in implementation efficiency.

Binarised networks at the edge

While a neural network gives each ‘segment’ a fine-grained probability, a binarised network is bimodal in approach, giving a score of either –1 (if it thinks that the segment doesn’t include the feature it’s looking for) or +1 (if it does). The weighted sums now weight each feature either positively (multiply by +1) or negatively (multiply by –1), so rather than performing full multiplications we only ever need to handle +1 and –1.
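In practice this means the multiply-accumulate collapses into bitwise operations. The sketch below shows one common way to do it, assuming 32 weights and 32 activations are each packed into a single word with 1 encoding +1 and 0 encoding –1: multiplication of two ±1 values becomes an XNOR (the product is +1 when the bits match), and the sum becomes a population count. The function name and the use of a GCC/Clang builtin are assumptions for illustration.

```c
#include <stdint.h>

/* Binarised dot product over 32 packed +/-1 values. */
int binary_dot32(uint32_t activations, uint32_t weights)
{
    uint32_t matches = ~(activations ^ weights);   /* XNOR: bits where the product is +1 */
    int pos = __builtin_popcount(matches);         /* number of +1 terms */
    return 2 * pos - 32;                           /* (+1 terms) minus (-1 terms) */
}
```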

Binarised networks have immediate advantages. Compared to their floating-point counterparts, they require 32 times less storage for a single number (a single bit versus 32 bits) and hundreds of times less energy, making them far more suitable for edge applications. There is a slight sacrifice in accuracy, but this can be reclaimed by making the network slightly larger.
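The storage arithmetic is simple to check. The short sketch below compares the memory footprint of a hypothetical layer of 1,024 weights stored as 32-bit floats with the same weights binarised and packed one bit each; the layer size is an arbitrary example.

```c
#include <stdio.h>

/* Rough storage comparison for one layer of n weights:
 * 32-bit floats versus single-bit binarised weights. */
int main(void)
{
    const unsigned n = 1024;                        /* weights in the layer     */
    unsigned float_bytes  = n * sizeof(float);      /* 4096 bytes               */
    unsigned binary_bytes = (n + 7) / 8;            /* 128 bytes: 32 times less */
    printf("float: %u bytes, binarised: %u bytes\n", float_bytes, binary_bytes);
    return 0;
}
```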

The future with binarised networks

The next couple of years are going to be big for binarised networks. Their simplicity opens a host of potential commercial edge applications where efficiency really matters. For example, autonomous vehicles, which need to make instantaneous decisions based on a rapidly changing environment, can use binarised networks to identify hazards on the road.

Companies are working hard to deliver binarised technology, and the software needed to train binarised networks is maturing quickly. We’re likely to see the first real deployments of the technology very soon, enabling edge devices to contain a cost-effective, low-power engine for analysing data. When it comes to the practicalities of building edge applications, embedded single-chip solutions are the natural choice for fast time to market and affordability. For the most efficient intelligent systems, which need a single solution to support all the required processing alongside native support for binarised neural networks, devices like xcore®.ai will be the natural choice.

Watch this space: binarised networks offer a step change in implementation efficiency for embedded systems. They will enable artificial intelligence to reach into entirely new categories of products that we will encounter routinely, and use unthinkingly, in our everyday lives.
