RK0

Embedded Real-Time Kernel '0'

Real-Time Model and Service Map in RK0

This post describes the principles that drive RK0's design and its service map (version 0.19).

Usage examples and design internals can be found on the Docbook.

Design Rationale

If concurrency is already difficult, bounding it to real-time responsiveness will eventually make you ask whether we have the right tools for the job. The crude answer is that we don't: response time emerges from implementation. To increase our chances of getting it right, we need a simple mental model.

Predictability is paramount in real-time systems, and service semantics affect predictability. This is an established fact, not an opinion.

Real-time system software – or cyber-physical systems – is highly domain-specific and hardware-dependent; yet we keep pursuing commonality at the hardware-software interface, despite the fact that an interface can express syntax and only some semantics. We have seen enough cases to conclude that correctness by interface-as-contract is dangerously limited.

RK0 starts from something that has been overlooked: the commonality among cyber-physical systems is their concurrency model, which is characterised by urgency and precedence conditions. Because of that, regardless of the domain or hardware, the application layer repeatedly deals with coordination problems that are neither exhaustive nor unknown. They can be expressed as services.

Still, many real-time kernels follow a general-purpose habit of over-generalisation: overloaded primitives whose meaning is derived from usage.

The absence of semantics can be desired neutrality, but often turns out to be displacement. Meaning is not removed. It is pushed into the application, usually without framing, where it degenerates easily.

Modelling Events, Signals and Messages

A system has state variables that define how it behaves. A change in a state variable is caused by an event. The periodic hardware interrupt (SysTick) that increments the kernel runtime count is an example.

The notion of execution progress on a digital computer arises from observable changes in state, whatever that state represents. Therefore, given two logical observation instants, if the observed state differs, at least one event must have occurred in (real) time between them. Note that if there is no difference, we cannot state that no event has happened.

In this sense, an event is a logical construct derived from observed reality. Time runs on a continuum; the computer samples reality with varying granularity. Computation, therefore, always lags behind. Aware of that, a real-time system’s goal is reacting to external stimuli so that a result is delivered to the environment while it is still useful.

On a real-time kernel, execution progress follows the urgency of tasks and their precedence conditions. We design concurrent units (Tasks) and use kernel services to coordinate their execution so that it is ordered, producing a final response that is bounded in time.

This coordination is achieved by Inter-task Communication. In RK0, tasks send and receive information in the form of Signals or Messages.

A Signal (or a Signal token) signifies an occurrence. When a task checks for a signal, it is sufficient for the signal to be present for the task to progress. The operation of signalling another task does not affect the sender task. A signal is a notification, never a ‘request’.

A Message conveys structured, variable information. Progress emerges from how the sender and receiver handle the information in messages, as well as from the mechanism itself, as described on this page.


RK0 Services Map

The up-to-date Service Map is on RK0’s wiki page.

2 responses to “Real-Time Model and Service Map in RK0”

  1. Interesting! Even if I needed help (from Duck.ai) to understand “The absence of semantics can be desired neutrality, but often turns out to be displacement. Meaning is not removed. It is pushed into the application, usually without framing, where it degenerates easily.”

    Since I am so used to “channels” (and did not much dig into the others) I have some comments:

    Your channel is always synchronous, unbuffered. Like occam and XC. Client(s) block until it or one of them is “taken” by the server. Is it possible to establish “fairness” here, so that no “fast” client (in theory) can reuse the server indefinitely, leaving no time for the others? This is not “priority”, but “fairness”. Often fairness is solved by the server task once the concept is available. This is not really possible in golang (and the designers argue that it is not necessary), but in my experience on small HW I have needed it some times.

    I assume that each client will have its own data and unique pointers to the data it needs to send off. Only filled by the client, it's ready and unchanged for the server once it's running. After the kChannelDone() this data space may be reused. When all parts agree on the struct's semantics, sending variable-length data is also possible.

    I assume that the priority adjustment is needed in a system that relates to priorities. Could there be situations where the priority of each task may be static? Like with the server always higher than all of the clients? Since clients block until kChannelDone(), is the priority inheritance scheme really needed? Maybe the purpose is to get the server run "as fast as" the client? But one "as fast as", is it better than any other "as fast as"? A server may again use a channel to get work from another server (thus becoming client); is the priority inheritance for this also needed to avoid priority inversion, since the OS would then in that case need to handle it? Will the scheme cause the priority inversion never to kick in?

    Have the concepts been formally verified by some tool?

    There is no one-to-any or any-to-any channel, like in https://www.cs.kent.ac.uk/projects/ofa/jcsp/jcsp-1.1-rc4/jcsp-doc/ I assume? Not that I have missed them ever.

    A comparison with XC's select or [[ordered]] select and occam's ALT or PRI ALT would of course (to me) be interesting. But then, I assume that none of us would have the time to do this, albeit interesting. And then, the purpose could be discussed.

    I have once upon a time tried to write something about fairness in https://www.teigfam.net/oyvind/home/technology/049-nondeterminism/


  2. Hey Øyvind, thank you for taking your time to read this and raise these smart questions.

    I feel I see where you're coming from, and I will try to answer in CSPish idiom.

    RK0's Channel is closer to a bounded synchronous procedure-call path. It is neither unbuffered nor a pure rendezvous channel; see:

    A) It may rendezvous if the server is already blocked waiting when the client calls. Otherwise, the request is enqueued. Each Channel is associated with a server-owned message queue and a bounded pool of request buffers. Those buffers carry the metadata the server uses to execute on behalf of the client.

    B) Fairness is not the concern. Urgency is.
    RK0's execution model is based on progress for a reason that is expressed in the application: what, when and why. If a client monopolises the server, that is not something to be hidden – the application model or the server's service discipline is wrong.

    C) The server does not inherit priority in the mutex sense.
    It executes the call at the caller's urgency. If the server's nominal priority is higher, it is effectively demoted; if lower, effectively promoted. The purpose is not fairness, but preserving the urgency of the call path.

    D) If fairness is required, it should be expressed on the application.

    P.S.:

    Note that a client's dominant blocking time is waiting for a reply, not waiting to deposit the message. The server does not block to wait for a client acknowledgement after replying.

    An unbuffered 1-1 scheme, as I understand it, is here: https://antoniogiacomelli.github.io/RK0/#usage_example_unbuffered_message_passing_extended_rendezvous

    A similar approach is used in QNX, with a few differences.

