Following the announcement of xcore.ai earlier this year, we recently made available The Linley Group’s in-depth analysis of the platform.
The report raises a number of key themes relating to the xcore.ai platform. This is the first in a short series of blog posts going beyond the headline performance figures to address some of those themes in more detail.
First and foremost, The Linley Group’s analysis highlights the performance the xcore.ai platform delivers for the cost. Indeed, the report goes so far as to state that: “xcore.ai ranks among the least expensive 32-bit microcontrollers, giving customers a big AI performance boost essentially for free”.
Of course, it is great to have this unique combination of performance and price point validated, but the analysis also highlighted one of the other key advantages of the xcore.ai platform — ease of use.
Meeting the needs of a shifting market
When it comes to realising the benefits of xcore.ai, ease of use is absolutely crucial. This is particularly true considering how much design flexibility the xcore.ai platform offers.
xcore.ai has been designed to address the shifting reality of the semiconductor market as we increasingly move away from a model reliant on shipping high volumes to a handful of vertical applications. Driven by trends such as the IoT, these traditional large markets are fragmenting into smaller, more numerous markets that individually carry lower volumes but collectively represent an even larger opportunity.
However, if designers are to capitalise on this changing dynamic, they need tools that let them easily combine and manipulate the different classes of compute these applications require. Ultimately, the platform’s performance and cost benefits mean nothing if it is too complex to program.
Industry-standard tools for maximum flexibility
As the Linley Group report highlights, ease of use is as central to the platform’s attractiveness to designers as the performance and cost advantages.
Linley notes that xcore.ai’s flexible IO, which handles interfaces that cannot normally be implemented in software, is supported with “library code for many protocols. It also offers libraries of DSP kernels and neural network functions”.
The report also points out that xcore.ai enables users to “create applications in high-level code and link them to these library routines. XMOS [also] provides an LLVM compiler, GNU debugger, linker, software timing closure tool, and other tools for directly programming its architecture”.
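To make that workflow concrete, here is a minimal sketch of the pattern the report describes: portable, high-level application code linked against an optimised library routine. The function name fir_process_block and its signature are hypothetical placeholders rather than the actual XMOS library API, and a naive reference implementation is included only so the example stands on its own.

```cpp
// Illustrative only: fir_process_block is a hypothetical placeholder for an
// optimised routine supplied by a vendor DSP library; it is not the real
// XMOS API. A naive version is defined here so the sketch compiles and runs.
#include <cstdint>
#include <cstdio>

static void fir_process_block(const int32_t *input, int32_t *output,
                              const int32_t *coeffs, unsigned num_taps,
                              unsigned block_size) {
  for (unsigned n = 0; n < block_size; ++n) {
    int64_t acc = 0;
    for (unsigned k = 0; k < num_taps && k <= n; ++k) {
      acc += static_cast<int64_t>(coeffs[k]) * input[n - k];
    }
    output[n] = static_cast<int32_t>(acc >> 31);  // Q1.31 scaling (assumption)
  }
}

int main() {
  constexpr unsigned kTaps = 4;
  constexpr unsigned kBlock = 8;
  // Trivial pass-through filter: a single near-unity coefficient in Q1.31.
  const int32_t coeffs[kTaps] = {0x7FFFFFFF, 0, 0, 0};
  int32_t in[kBlock];
  int32_t out[kBlock];
  for (unsigned i = 0; i < kBlock; ++i) in[i] = static_cast<int32_t>(i) << 16;

  // The application stays in portable high-level code and simply links
  // against the filter kernel, exactly the split the report describes.
  fir_process_block(in, out, coeffs, kTaps, kBlock);

  for (unsigned i = 0; i < kBlock; ++i) {
    std::printf("out[%u] = %ld\n", i, static_cast<long>(out[i]));
  }
  return 0;
}
```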
In addition, xcore.ai supports TensorFlow for neural network development and run-time firmware such as the FreeRTOS real-time operating system.
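As a rough sketch of how those pieces might fit together, the fragment below runs TensorFlow Lite for Microcontrollers inference inside a FreeRTOS task. The model symbol g_model_data, the tensor arena size, the operator list and the task parameters are illustrative assumptions, not values taken from any XMOS reference design.

```cpp
// Sketch: TensorFlow Lite for Microcontrollers inference in a FreeRTOS task.
// Model data, arena size, operator set and task parameters are assumptions.
#include <cstdint>

#include "FreeRTOS.h"
#include "task.h"

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char g_model_data[];   // flatbuffer built offline (assumed)

namespace {
constexpr int kTensorArenaSize = 32 * 1024;  // sized per model (assumption)
alignas(16) uint8_t tensor_arena[kTensorArenaSize];
}

static void InferenceTask(void *pvParameters) {
  const tflite::Model *model = tflite::GetModel(g_model_data);

  // Register only the operators this (assumed) model actually uses,
  // which keeps the code and memory footprint small.
  tflite::MicroMutableOpResolver<3> resolver;
  resolver.AddConv2D();
  resolver.AddFullyConnected();
  resolver.AddSoftmax();

  tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                       kTensorArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) {
    vTaskDelete(nullptr);  // arena too small or model invalid
  }

  for (;;) {
    TfLiteTensor *input = interpreter.input(0);
    // ... fill input->data with a new frame of sensor or audio data ...

    if (interpreter.Invoke() == kTfLiteOk) {
      TfLiteTensor *output = interpreter.output(0);
      // ... act on output->data ...
      (void)output;
    }
    vTaskDelay(pdMS_TO_TICKS(10));  // pacing period chosen arbitrarily
  }
}

int main(void) {
  // Stack depth (in words) and priority are placeholder values.
  xTaskCreate(InferenceTask, "inference", 4096, nullptr,
              tskIDLE_PRIORITY + 1, nullptr);
  vTaskStartScheduler();
  return 0;
}
```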
These capabilities are crucial. For a device as distinctive as this, one might expect proprietary tools and development techniques, which would add significant complexity and limit the platform’s ability to serve an increasingly fragmented electronics market. Instead, xcore.ai is fully programmable using industry-standard tools.
Diversity of application
The result is that XMOS’s multi-core architecture makes designing real-time AI applications far more manageable and reliable than it is with traditional CPU architectures.
This design flexibility only serves to increase the value of the performance and cost advantages of xcore.ai. This combination will be crucial to meeting the needs of designers as the electronics market continues to evolve in the coming years.
As this blog makes clear, we have not designed xcore.ai with a specific purpose in mind — quite the opposite in fact. Instead, we want to give our developer community the tools to use the platform for a diversity of applications that we can only imagine today.
The Linley report is available to download via our website.