So far, we have discussed tasks coordinated by Signals. Signals do not convey data in the strict sense—the meaning of the communication is implicit.
MESSAGE PASSING
A value read from a sensor, a checksum, a command with its payload – all of these are examples of Messages. Message Passing provides the mechanisms to exchange data and also to synchronise (after all, if Task A sends a message to Task B, B can only receive the message at some point in time after A has sent it).
Shared Memory and Message Queues
On higher-end embedded devices and on General Purpose Operating Systems that use Virtual Memory, processes are isolated (at the lower end, we can still achieve some isolation using Memory Protection Units). Message Queues are a mechanism to cross this isolation by asking the kernel to convey data from one address space to another. On these systems, Shared Memory, when used, is a pre-defined memory area to which processes attach so they can read and write without relying on the kernel.

Fig 1. (a) Message Passing (b) Shared Memory (Silberschatz)

On embedded operating systems for constrained devices, all tasks often share the same address space (therefore, all memory is shared memory). In this context, message passing provides a way for tasks to exchange messages in a controlled manner.
A Message Queue (generically, a Mailbox) is a public structure for message passing. Queues are often constructed as circular buffers or as linked lists of buffers. The queue discipline is typically FIFO, but LIFO and message-priority orderings are also possible.

Fig 2. Two modules communicating through a mailbox (Bertolotti)

A communication link, or channel, is said to exist between producers and consumers. At the very least, the channel supports two primitives: send and receive. This link has the following properties:
(a) Buffering
Buffering defines how many messages can wait on the channel. If we decide that no message can ever be waiting, we logically have zero buffering: a successful transmission is a successful reception. With a bounded buffer, one or more messages can be stored; a successful send does not imply a successful receive – it means the message has been deposited, not that it has been extracted.
(b) Naming
Naming can be direct or indirect. With indirect naming, the communicating tasks use the channel’s name to send and receive. On a direct channel, send() takes a task ID as an argument.
When a receiver does not need to name a sender task or a mailbox, the naming scheme is said to be asymmetric – otherwise, it is symmetric. (A send() always needs a destination point – a task or an object.)

Fig 3. Naming schemes

(c) Blocking or non-blocking
For blocking operations, a producer blocks on a full buffer, and a consumer blocks on an empty buffer.
Non-blocking operations will always return, whether successful or not.

Fig 4. On top, a consumer sleeps waiting for a message. Below, a producer sleeps waiting for a successful send.

Unbuffered Message-Passing
When the channel is logically unbuffered, the sender remains blocked until it receives an acknowledgement indicating that the message was received: the send operation was successful, and the consumer got the message. (Note that in Fig. 4 the sender sleeps until the message is sent; a strictly unbuffered communication would also depict a message/signal from B to A.)
This pattern is inherent to a synchronous command-response: the sender blocks waiting for a message conveying the result of its request. If no response is expected, the acknowledgement can be a dummy message or a signal.
It applies when the communication link is unreliable, when the message is too important to be lost, or whenever we need tasks to run in lockstep.
MESSAGE-PASSING ON RK0
RK0 provides a comprehensive set of message-passing mechanisms, chosen with the following common scenarios in mind:
- Some messages are consumed by tasks that cannot do anything before processing the information – thus, these messages also act as signals. For example, a server blocks waiting for a command to process, and/or a client blocks waiting for an answer.
- A particular case of the above scenario is fully synchronous: client and server run in lockstep.
- Two tasks with different rates need to communicate and cannot run in lockstep. A faster producer might use a buffer to accommodate a relatively small burst of generated data, or a faster consumer will drop repeated received data.
- Other times, we need to correlate data with time for processing, so a queue gives us a notion of data motion; e.g., calculating the mean value of a transducer over a given period.
- For real-time tasks such as servo-control loops, past data is useless: consumers need the most recent data for processing. Examples are a drive-by-wire system, or a robot avoiding obstacles. In these cases, message passing must be lock-free while guaranteeing data integrity.
All Message-Passing mechanisms in RK0 can be classified as indirect (messages are always passed to a kernel object) and symmetric (both sender and receiver need to identify the object). They can assume buffered or unbuffered, and blocking or non-blocking, behaviour.
An exception is the Signal mechanism, which can be used as a direct channel for messages.
An important distinction between the mechanisms is whether the message is passed by copy or by reference. Copying incurs overhead but simplifies handling the data scope. RK0 offers both mechanisms.
Mailbox
In RK0, a Mailbox holds a single pointer-sized message – a generic pointer (VOID*). An empty mailbox holds a NULL pointer; a full mailbox points to valid data.
A successful post to a Mailbox switches its status to full; a successful pend switches it to empty. Thus, reading is normally destructive – unless the special peek operation is used.
This means its typical usage is as a mechanism that provides mutual exclusion for reading/writing from/to a memory location while also notifying – a reader sleeps until the mailbox is full, and a writer until it is empty.
This simplicity is powerful whenever a signal and a payload are needed: fully-synchronous channels (lock-step client-server), ISR-to-task messages, uni/bilateral task synchronisation with payload, etc.
Different from the other message-passing mechanisms, a mailbox can be initialised either as empty or full, allowing it to be used as a public binary semaphore, if ever needed.
Message Queues
Message Queues are message-passing mechanisms that buffer several items, allowing for greater decoupling between producer and consumer than a Mailbox. In RK0 there are two types of Queues: Mail Queues and Stream Queues – the latter pass messages by deep copy, the former by reference.
Mail Queue
In RK0, a Mail Queue, or simply a Queue, can be seen as a ring of Mailboxes. Mail Queues are especially useful when composed with a Memory Allocator. The mechanism is intended for ‘zero-copy‘ message passing, where several messages can be kept in a memory pool and the address of each block is transmitted. Another use case is ‘one-copy‘, where messages are copied once to an application buffer and the address is transmitted. Passing messages by reference is a typical “embedded thing” – it is cheap, fast and DMA-friendly. A single-slot Mail Queue behaves as a Mailbox (both are offered as distinct mechanisms because Mailboxes are ~3x smaller than Queues; if only Mailboxes are needed, Queues can be kept disabled).
Stream Queue
Stream Queues, or simply Streams, resemble classic UNIX Pipes in the sense that they transmit bursts of bytes. The difference is that Pipes can transmit and receive an arbitrary number of bytes on every operation, whereas Stream Queues use fixed-size blocks – a design choice that improves efficiency and determinism. Importantly, Stream Queues perform a deep copy from the sender’s local storage to the queue buffer, and from the queue buffer to the receiver’s local storage.
Mailboxes, Queues and Streams have bounded blocking time, and their sleeping queues follow a priority discipline. Message-passing objects are primitives in their own right: they do not build on other synchronisation primitives.
Most-Recent Message Protocol (MRM Buffers)
Control loops reacting to unpredictable time events – like a robot scanning an environment or a drive-by-wire system – require a different message-passing approach: readers cannot “look at the past” and cannot block. The most recent data must be delivered lock-free, with guaranteed integrity.
An MRM buffer works as a one-to-many asynchronous Mailbox – a lock-free specialisation that enables several readers to get the most recent deposited message without integrity issues. Whenever a reader reads an MRM buffer, it finds the most recent data transmitted (which also implies always finding data). A writer always has a buffer in which to deposit a message.
The core idea is that readers can only access the buffer classified as ‘the most recent buffer‘. After a writer publish()es a message, that is the only message readers can get() – any former message still being processed by a reader was grabbed before the new publish() and, from then on, can only be unget().
OWNERSHIP
Except for MRM objects, which are a one-to-many mechanism, every other message-passing object in RK0 can be assigned an owner task. Only the owner task can receive messages from that object, which resembles a Port; still, the channel is indirect. This feature lets message passing benefit from priority propagation, diminishing the effect of priority inversion when a higher-priority sender blocks on a full mailbox/queue waiting for a lower-priority receiver.
The RK0 Docbook contains implementation details, special operations, usage cases, and patterns for these mechanisms.
