
XMOS Programming Guide

Version: B Date: April 30, 2014

Parallel tasks and communication

The most fundamental difference between xC programming and C is the integration of parallelism and task management into the language.

Parallelism and task placement

xC programs are built of tasks that run in parallel. There is no special syntax for a task: any xC function represents a task that can be run:

void task1(int x)
{
  printf("Hello world - %d\n", x);
}

Running tasks in parallel is done with the par construct (short for “run in parallel”). Here is an example par statement:

par {
  task1(5);
  task2();
}

This statement runs task1 and task2 in parallel until both complete. It waits for both tasks to finish before carrying on.

Although any function represents a task (i.e. a block of code that can be run in parallel with other tasks), tasks often have a common form of a function that does not return at all and consists of a never ending loop:

void task1(args) {
     ... initialization ...
     while (1) {
        ... main loop ...
     }
}

Although code can be run in parallel anywhere in your program, the main function is special in that it can place tasks on different hardware entities.

Task placement

Task placement involves assigning tasks to specific hardware elements (the tiles and cores of the system). The figure above shows a possible placement of a group of tasks. Note that:

  • Multiple tasks can run on the same logical core. This is possible via cooperative multitasking as described in Combinable functions.
  • Some tasks run across multiple logical cores. These distributable tasks are described in Distributable functions.

As previously mentioned, task placement only occurs in the main function and is made by using the on construct within a par. Here is an example that places several tasks onto the hardware:

#include <platform.h>

...

int main() {
  par {
    on tile[0]: task1();
    on tile[1].core[0]: task2();
    on tile[1].core[0]: task3();
  }
}

In this example, task2 and task3 have been placed on the same core. This is only valid if these tasks can participate in cooperative multitasking (i.e. they are combinable functions - see Combinable functions). If no core is specified in the placement, the task is automatically allocated a free logical core on the specified tile.

Replicated par statements

A replicated par statement can run several instances of the same task in parallel. The syntax is similar to a C for loop:

par(size_t i = 0; i < 4; i++)
  task(i);

This is equivalent to the statement:

par {
  task(0);
  task(1);
  task(2);
  task(3);
}

The iterator of the par (i in the example above) must step between compile-time constant bounds.

Communication

Tasks communicate via explicit transactions between them. Any task can communicate with any other, no matter which tiles and cores the tasks are running on. The compiler implements the transactions in the most efficient way possible using the underlying communication hardware of the device.

Interface connections

All communication is done via point-to-point connections between tasks. These connections are explicit in the program. The figure below shows an example of some connected tasks.

Example task connections

Interfaces provide the most structured and flexible method of inter-task connection. An interface defines the kind of transactions that can occur between the tasks and the data that is passed with them. For example, the following interface declaration defines two transaction types:

interface my_interface {
  void fA(int x, int y);
  void fB(float x);
};

Transaction types are defined like C functions. Interface functions can take the same arguments that any C function can. The arguments define what data is sent when the transaction between the tasks occurs. Since functions can have pass-by-reference parameters (see References) or return values, data can flow both ways during a single transaction.

An interface connection between two tasks is made up of three parts: the connection itself, the client end and the server end. The figure below shows these parts and the xC types relating to each part. In the type system of the language:

  • An interface connection is of type “interface T”
  • The client end is of type “client interface T”
  • The server end is of type “server interface T”

where T is the type of the interface.

An interface connection.

A client end of a connection can be passed into a task as a parameter. The task that has access to the client end can initiate transactions using syntax similar to a function call:

void task1(client interface my_interface i)
{
  // 'i' is the client end of the connection,
  // let's communicate with the other end.
  i.fA(5, 10);
}

The server end can be passed into a task, and that task can wait for transactions to occur using the select construct. A select waits until a transaction is initiated by the other side:

void task2(server interface my_interface i)
{
  // wait for either fA or fB over connection 'i'.
  select {
  case i.fA(int x, int y):
    printf("Received fA: %d, %d\n", x, y);
    break;
  case i.fB(float x):
    printf("Received fB: %f\n", x);
    break;
  }
}

Note how the select lets you handle several different types of transaction. Code can wait for many different types of transaction (using different interface types) from many different sources. Once one of the transactions has been initiated and the select has handled the event, the code will continue on. A select handles exactly one event.
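Because a select handles exactly one event, a server that must service repeated transactions typically wraps its select in a loop. A minimal sketch, reusing my_interface from above (the function name is illustrative):

void server_loop(server interface my_interface i)
{
  // The select handles one event per iteration, so loop to
  // keep servicing transactions.
  while (1) {
    select {
    case i.fA(int x, int y):
      printf("Received fA: %d, %d\n", x, y);
      break;
    case i.fB(float x):
      printf("Received fB: %f\n", x);
      break;
    }
  }
}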

When tasks are run, you can join them together by declaring an instance of an interface and passing it as an argument to both tasks:

int main(void)
{
  interface my_interface i;
  par {
    task1(i);
    task2(i);
  }
  return 0;
}

Only one task can use the server end of a connection and only one task can use the client end. If more than one task uses either end in a par, it will cause a compile-time error.

Tasks can be connected to multiple connections, for example:

int main(void)
{
  interface my_interface i1;
  interface my_interface i2;
  par {
    task1(i1);
    task3(i2);
    task4(i1, i2);
  }
  return 0;
}

This code corresponds to the connections shown in Example task connections. A task can wait for events from multiple connections in one select:

void task4(server interface my_interface i1,
           server interface my_interface i2) {
   while (1) {
     // wait for either fA or fB over either connection.
     select {
     case i1.fA(int x, int y):
       printf("Received fA on interface end i1: %d, %d\n", x, y);
       break;
     case i1.fB(float x):
       printf("Received fB on interface end i1: %f\n", x);
       break;
     case i2.fA(int x, int y):
       printf("Received fA on interface end i2: %d, %d\n", x, y);
       break;
     case i2.fB(float x):
       printf("Received fB on interface end i2: %f\n", x);
       break;
     }
   }
}

With interface connections, the client end initiates communication. However, sometimes the server end needs to signal information to the client end independently. Notifications provide a way for the server to contact the client independently of the client making a call. They are asynchronous and non-blocking: the server end can raise a notification and then carry on processing.

The following code declares an interface with a notification function:

interface if1 {
  void f(int x);

  [[clears_notification]]
  int get_data();

  [[notification]] slave void data_ready(void);
};

This interface has two normal functions (f and get_data). However, it also has a notification function: data_ready. Within the interface declaration, a notification function can be declared with the [[notification]] attribute. This function must be declared as slave to indicate that the direction of communication is from the server end to the client end. In other words, the server will call the function and the client will respond. Notification functions must take no arguments and have a void return type.

It may seem that specifying both slave and [[notification]] on a function is redundant. Both are required to future-proof the language against extensions where slave functions do not necessarily need to be notifications.

Once the server raises a notification, it triggers an event at the client end of the interface. However, repeatedly raising the notification has no effect until the client clears the notification. This can be done by marking one or more functions in the interface with the [[clears_notification]] attribute. The client will then clear the notification whenever it calls one of those functions.

The server end of the interface can call the notification function to notify the client end, i.e. it can execute the code:

void task(server interface if1 i) {
   ...
   i.data_ready();
}

As previously mentioned, this call is non-blocking and raises a signal to the client. The signal can only be raised once: after calling data_ready, calling it again has no effect until the client clears the notification.

The client end of the interface can make calls as normal, but can also select upon the notification from the server end of the interface. For example:

void task2(client interface if1 i)
{
   i.f(5);
   select {
   case i.data_ready():
     int x = i.get_data();
     printf("task2: Got data %d\n", x);
     break;
   }
}

Here the task calls get_data after receiving the notification. As well as performing a transaction, this call also clears the notification (get_data is marked [[clears_notification]]), so the server can re-notify at a later time.
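Putting both ends together, here is a hedged sketch of a server for if1 that raises the notification when new data arrives (the function name and the choice of f as the trigger are illustrative):

void data_server(server interface if1 i)
{
  int value = 0;
  while (1) {
    select {
    case i.f(int x):
      value = x;
      i.data_ready();    // non-blocking: raise the notification
      break;
    case i.get_data() -> int ret:
      ret = value;       // calling this also clears the notification
      break;
    }
  }
}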

Passing data via interface function calls

An interface function call passes data from the client end to the server end via its arguments. It is also possible to have return values. For example, the following interface declaration contains a function that returns an integer:

interface my_interface {
  int get_value(void);
};

The client end of the interface can use the result of that interface function call which has been passed back from the server:

void task1(client interface my_interface i) {
  int x;
  x = i.get_value();
  printintln(x);
}

When handling the function at the server end, you can declare a variable to hold the return value in the select case. This can be assigned in the body of the case and at the end of the case the value is returned back to the client:

void task2(server interface my_interface i) {
  int data = 33;
  select {
  case i.get_value() -> int return_val:
    // Set the return value
    return_val = data;
    break;
  }
}

Data can also pass both ways via pass-by-reference arguments (see References) and array arguments:

interface my_interface {
  void f(int a[]);
};

The client end can pass an array into this function:

void task1(client interface my_interface i)
{
  int a[5] = {0,1,2,3,4};
  i.f(a);
}

When passing an array, it is a reference to the array that is passed. This is a handle that allows the server to access the elements of that array within the select case that handles that transaction:

...
select {
case i.f(int a[]):
  x = a[2];
  a[3] = 7;
  break;
}

Note that the server can both read and write from the array. This works even if the interface is connected across tiles - in this case the array accesses are converted to efficient operations over the hardware’s communication infrastructure.

Array arguments can also be accessed with memcpy. For example, an interface may contain a function to fill up a buffer:

interface my_interface {
  ...
  void fill_buffer(int buf[n], unsigned n);
};

At the server end of the interface, the memcpy function from string.h can be used to copy local data to the remote array. This will be converted into an efficient inter-task copy:

int data[5];
...
select {
case i.fill_buffer(int a[n], unsigned n):
  // Copy data from the local array to the remote
  memcpy(a, data, n*sizeof(int));
  break;
}

Interface arrays

It is useful to be able to connect one task to many others (this situation is shown in the figure below).

One task connecting to multiple other tasks

A task can connect to many others using an array of interfaces. One task can handle the ends of the entire array whilst the individual elements of the array can be passed to other tasks. For example the following code connects task3 to both task1 and task2:

int main() {
  interface if1 a[2];
  par {
    task1(a[0]);
    task2(a[1]);
    task3(a, 2);
  }
  return 0;
}

task1 and task2 are given an element of the array and can use the interface end as usual:

void task1(client interface if1 i)
{
  i.f(5);
}

task3 has the server ends of the entire array. The select construct can wait for a transaction over any of the connections. This is done using a pattern variable in the select case. The syntax is to declare the variable inside the array index in the select case:

case a[int i].msg(int x):
   // handle the case
   ...
   break;

Here, the variable i is declared as a subscript to the array a, which means that the case will select over the entire array and wait for a transaction event from one of the elements.

When a transaction occurs, i is set to the index of the array element that the transaction occurs on. Here is a complete example of a task that handles an interface array:

void task3(server interface if1 a[n], unsigned n)
{
  while (1) {
    select {
    case a[int i].f(int x):
      printf("Received value %d from connection %d\n", x, i);
      break;
    }
  }
}

Extending functionality on a client interface end

An interface can provide an API to a component of a system. Client interface extensions provide a way to extend this API with extra functionality layered on top of the basic interface. As an example, consider the following interface for a UART component:

interface uart_tx_if {
   void output_char(uint8_t data);
};

To extend a client interface a new function can be declared that acts like a new interface function. The syntax is:

extends client interface T : { function-declarations }

The following example adds a new function to the uart_tx_if interface:

extends client interface uart_tx_if : {
   void output_string(client interface uart_tx_if self,
                      uint8_t data[n], unsigned n) {
     for (size_t i = 0; i < n; i++) {
       self.output_char(data[i]);
     }
   }
}

Here output_string extends the client interface uart_tx_if. Its first argument must be of that client interface type (in this example it follows the convention of being named self, but any variable name can be used). Within the function it can use this first argument to participate in transactions with the other end of the interface. The only restriction on the function definition is that it cannot access global variables.

The extension can be used in the same way as an interface function by the task that owns the client end of the interface:

void f(client interface uart_tx_if i) {
   uint8_t data[8];
   ...
   i.output_string(data, 8);
}

Here i is implicitly passed as the first argument of the output_string function.

Channels

Channels provide a primitive method of communication between tasks. They connect tasks together and provide blocking communication but do not define any types of transaction. You connect two tasks together with a channel using a chan declaration:

chan c;
par {
    task1(c);
    task2(c);
}

With channels, the special operators <: and :> are used to send and receive data respectively. For example, the following code sends the value 5 over the channel:

void task1(chanend c) {
  c <: 5;
}

The other end can receive the data in a select:

void task2(chanend c) {
  select {
  case c :> int i:
    printintln(i);
    break;
  }
}

You can also receive data by just using the input operator outside of a select:

void task1(chanend c) {
   int x;
   ...
   // Input a value from the channel into x
   c :> x;
}

By default, channel I/O is synchronous. This means that for every byte/word sent over the channel the task performing the output is blocked until the input task at the other end of the channel has received the data. The time taken to perform the synchronization along with any time spent blocked can result in reduced performance. Streaming channels provide a solution to this issue. They establish a permanent route between two tasks over which data can be efficiently communicated without synchronization.

Each end of a streaming channel can then be passed to a task on a different logical core, opening a permanent route between the two cores:

streaming chan c;
par {
    f1(c);
    f2(c);
}
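A hedged sketch of the two ends: each task takes a streaming chanend parameter and uses the usual <: and :> operators, but without per-word synchronization (the loop bound is illustrative):

void f1(streaming chanend c) {
  // Output values without waiting for the receiver to take each one.
  for (int i = 0; i < 100; i++)
    c <: i;
}

void f2(streaming chanend c) {
  int x;
  // Input values as they arrive over the permanent route.
  for (int i = 0; i < 100; i++)
    c :> x;
}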

Creating tasks for flexible placement

xC programs are built up from several tasks running in parallel. These tasks can be of several different types that are used in different ways:

  • Normal: the task runs on a logical core of its own, independently of other tasks. It has predictable running time and can respond very efficiently to external events.
  • Combinable: several combinable tasks can be combined to run on the same logical core. The core swaps context between them via compiler-driven cooperative multitasking.
  • Distributable: the task can run over several cores, running only when required by the tasks connected to it.

Using these different task types you can maximize the resource usage of the device depending on the form and timing requirements of your tasks.

Combinable functions

If a task ends in a never-ending loop containing a select statement, it represents a task that continually reacts to events:

void task1(args) {
  ... initialization ...
  while (1) {
    select {
      case ... :
        break;
      case ... :
        break;
      ...
    }
  }
}

These kinds of tasks can be marked as combinable by adding the [[combinable]] attribute:

[[combinable]]
void counter_task(const char *taskId) {
  int count = 0;
  timer tmr;
  unsigned time;
  tmr :> time;
  // This task performs a timed count, printing a message on every tick
  while (1) {
    select {
    case tmr when timerafter(time) :> int now:
      printf("Counter tick at time %x on task %s\n", now, taskId);
      count++;
      time += 1000;
      break;
    }
  }
}

This function uses timer events which are described later in Timing.

A combinable function must obey the following restrictions:

  • The function must have void return type.
  • The last statement of the function must be a while(1) statement containing a single select statement.

Several combinable functions can run on one logical core. The effect of this is to “combine” the functions as shown in the figure below.

Combining several tasks

When tasks are combined, the compiler creates code that first runs the initial sequence from each function (in an unspecified order) and then enters a main loop. This loop enables the cases from the main selects of each task and waits for one of the events to occur. When the event occurs, a function is called to implement the body of that case from the task in question before returning to the main loop.

Within main, combinable functions can be run on the same logical core by using the on construct to place them:

int main() {
  par {
    on tile[0].core[0]: counter_task("task1");
    on tile[0].core[0]: counter_task("task2");
  }
  return 0;
}

The compiler will report an error if non-combinable functions are placed on the same core. Alternatively, a par statement can be marked to combine tasks anywhere in the program:

void f() {
  [[combine]]
  par {
    counter_task("task1");
    counter_task("task2");
  }
}

Tasks running on the same logical core can communicate with each other with one restriction: channels cannot be used between combined tasks. Interface connections must be used.

Combinable functions can be built up from smaller combinable functions. For example, the following code builds up the combinable function combined_task from the two smaller functions task1 and task2:

[[combinable]]
void task1(server interface ping_if i);

[[combinable]]
void task2(server interface pong_if i_pong,
           client interface ping_if i_ping);

[[combinable]]
void combined_task(server interface pong_if i_pong)
{
  interface ping_if i_ping;
  [[combine]]
  par {
    task1(i_ping);
    task2(i_pong, i_ping);
  }
}

Note that task1 and task2 are connected to each other within combined_task.

Distributable functions

Sometimes tasks contain state and provide services to other tasks, but do not need to react to any external events on their own. These kinds of tasks only run code when communicating with other tasks. As such they do not need a core of their own but can share the logical cores of the tasks they communicate with (as shown in the figure below).

A task distributed between other tasks.

More formally, a task can be marked as distributable if:

  • It satisfies the conditions to be combinable (i.e. ends in a never-ending loop containing a select)
  • The cases within that select only respond to interface transactions

The following example shows a distributable task that responds to transactions over the interface connection i to access the port p:

[[distributable]]
void port_wiggler(server interface wiggle_if i, port p)
{
  // This task waits for a transaction on the interface i and
  // wiggles the port p the required number of times.
  while (1) {
    select {
    case i.wiggle(int n):
      printstrln("Wiggling port.");
      for (int j = 0; j < n; j++) {
        p <: 1;
        p <: 0;
      }
      break;
    case i.finish():
      return;
    }
  }
}

A distributable task can be implemented very efficiently if all the tasks it connects to are on the same tile. In this case the compiler will not allocate it a logical core of its own. For example, suppose the port_wiggler task was used in the following manner:

int main() {
  interface wiggle_if i;
  par {
    on tile[0]: task1(i);
    on tile[0]: port_wiggler(i, p);
  }
  return 0;
}

In this case task1 would be allocated a core but port_wiggler would not. When task1 initiates a transaction with port_wiggler, the context on its core is swapped to carry out the case in port_wiggler; after it completes, context is swapped back to task1. The figure below shows the progression of such a transaction.

A transaction with a distributed task

This implementation requires the core of the client task to have direct access to the state of the distributed task so only works when both are on the same tile. If the tasks are connected across tiles then the distributed task will act as a normal task (though it is still a combinable function so could share a core with other tasks).

If a distributed task is connected to several tasks, those clients could otherwise change its state concurrently. In this case the compiler implicitly uses a lock to protect the state of the task.
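As a hedged sketch of that situation (the count_if interface and the client task names are hypothetical), two clients on the same tile share one distributable server; the compiler inserts the lock around the shared state automatically:

// Hypothetical counter service shared by two client tasks.
interface count_if {
  int next(void);
};

[[distributable]]
void counter(server interface count_if a[n], unsigned n)
{
  int count = 0;   // shared state, implicitly protected by a lock
  while (1) {
    select {
    case a[int j].next() -> int ret:
      ret = count++;
      break;
    }
  }
}

int main() {
  interface count_if c[2];
  par {
    on tile[0]: clientA(c[0]);   // hypothetical client tasks
    on tile[0]: clientB(c[1]);
    on tile[0]: counter(c, 2);   // runs on the clients' cores
  }
  return 0;
}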