Making the AIoT a reality – part 2

In part one of this blog series we outlined the huge potential for the AIoT to reshape the world around us. But as highlighted, making this vision a reality requires an entirely new approach to delivering on-device AI capabilities. 

The high-end CPUs and GPUs traditionally used by AI systems are prohibitively expensive – both in terms of unit cost and power demands. Moreover, the architectural demands of a CPU+GPU approach add significant complexity. End-point IoT devices are typically low-cost, low-power and small-footprint – they simply can’t support the ‘traditional’ AI infrastructure.

There are a variety of overlapping technical challenges to deal with here, but we’re already seeing rapidly growing demand for chips that can deliver high-performance processing without a debilitating price tag.

Of course, the other consideration is just what sorts of AI tasks we are talking about. While there are likely to be a number of AIoT use cases where the absolute cutting edge of performance is not required – where a few seconds of compute time can be tolerated – there is also a huge class of potential applications that will need to support more-or-less “real-time” AI.

And this is likely to be the focus of the AIoT – “end-point AI” that supports real-time sensing and context awareness to enable instant decision-making where time is very much of the essence. The fact is that today’s lower-cost systems cannot support these sorts of applications – the processing is simply too slow.

Fundamentally, if end-point devices are to enable this sort of real-time AI economically, it’s going to take a serious boost to the processing power available from today’s microcontrollers. 

Of course, that is easier said than done. When it comes to driving down the cost of compute hardware, there is a wealth of challenges to overcome – from the expense of third-party hardware and software to understanding the true nature of the processing task, and therefore the functional components it requires to operate.

At the same time there can be no compromise on performance. To ensure that the bar is met, designers need to compensate for the necessary cost-saving techniques with sophisticated algorithms and architectures that deliver the required grunt to tackle AI functions.

It’s more complicated than that

The competing priorities of cost and performance are significant enough on their own, but there is a further complication. 

The nature of the AIoT market is unlike anything that has come before in the semiconductor industry. Even if manufacturers solve the cost/performance trade-off, a chip purpose-built for one use – say, driving voice recognition – is not an AIoT solution at all.

The issue is that AIoT represents a hugely diverse market with thousands if not millions of potential use cases. The traditional semiconductor model of developing highly optimised fixed solutions for each use case no longer applies.

Instead, the solution must be flexible by design – software-configurable for a range of different applications, accommodating the huge variety of combinations of compute classes (AI, DSP, control and IO) that each one requires.

The solution also needs to support a variety of neural network ‘resolutions’ – including 1-bit, 8-bit, 16-bit and 32-bit integers. The ability to mix and match these resolutions in a hybrid network enables application developers to deliver machine learning that is optimised for a reduced memory and processing footprint.
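To make that concrete, here is a minimal C sketch – purely illustrative, not xcore.ai code – of why dropping from 32-bit floats to 8-bit integers shrinks a network’s footprint: the weights and activations are stored as int8 (a quarter of the memory), while accumulation is widened to int32 and scaled back at the end.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative 8-bit quantised dot product: weights and activations
 * are stored as int8 (one quarter of the memory of 32-bit floats),
 * and the products are accumulated in a wider int32 to avoid
 * overflow. */
static int32_t dot_q8(const int8_t *w, const int8_t *x, size_t n)
{
    int32_t acc = 0;
    for (size_t i = 0; i < n; i++) {
        acc += (int32_t)w[i] * (int32_t)x[i];
    }
    return acc;
}

/* A hypothetical per-layer scale maps the integer result back to a
 * real-valued output. */
static float layer_output(const int8_t *w, const int8_t *x,
                          size_t n, float scale)
{
    return scale * (float)dot_q8(w, x, n);
}
```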

As may now be clear, any manufacturer looking to produce a processor that meets the needs of AIoT has a precarious balancing act to perform.

Enabling an intelligence revolution

But these are precisely the challenges that our xcore®.ai platform overcomes – and in so doing, delivers the new crossover processor the AIoT requires.

It solves the performance challenge with its fast processing and neural network capabilities. It solves the versatility challenge with its blend of high-performance AI, DSP, control and IO in a single device. And it solves the cost challenge by making all of these capabilities accessible with prices starting at just $1.

Crucially, xcore.ai has also been designed with ease of use at the heart of the platform. For a device that offers such unique capabilities, one might expect proprietary tools and development techniques. Instead, xcore.ai is fully programmable using industry-standard tools (the LLVM compiler, TensorFlow) and runtime firmware (for example, FreeRTOS). We also provide a rich set of libraries, including many IO protocols that cannot normally be implemented in software. As a result, XMOS’ multi-core architecture makes designing for real-time AI applications much more manageable than it is on traditional CPU architectures.
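As a rough sketch of what that development flow can look like, the fragment below uses the standard FreeRTOS task API to run a periodic on-device inference loop. The read_sensor() and run_inference() hooks are hypothetical placeholders for the application’s own code, not part of any xcore.ai library.

```c
#include <stdint.h>
#include <stddef.h>
#include "FreeRTOS.h"
#include "task.h"

#define SAMPLE_LEN 256

/* Hypothetical application hooks -- placeholders, not xcore.ai APIs. */
extern void read_sensor(int8_t *buf, size_t len);
extern int run_inference(const int8_t *buf, size_t len);

/* FreeRTOS task: sample a sensor and run on-device inference
 * every 10 ms. */
static void inference_task(void *arg)
{
    static int8_t sample[SAMPLE_LEN];
    (void)arg;
    for (;;) {
        read_sensor(sample, SAMPLE_LEN);
        int result = run_inference(sample, SAMPLE_LEN);
        (void)result;            /* act on the classification here */
        vTaskDelay(pdMS_TO_TICKS(10));
    }
}

void start_app(void)
{
    xTaskCreate(inference_task, "infer", 1024, NULL,
                tskIDLE_PRIORITY + 1, NULL);
    vTaskStartScheduler();       /* never returns */
}
```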

In short, xcore.ai represents an entirely new generation of embedded platform – enabling data to be processed locally and actions taken on-device within nanoseconds.

No matter how far you want to take the concept of AIoT, xcore.ai delivers the most versatile, scalable, and easy-to-use processor in its price range today. It is a significant achievement, but one that will enable all of us to be the beneficiaries of an intelligence revolution.
