We explore here how Keykos domains might serve some stream patterns.
A simple pattern is a stream of data flowing thru several processes, one per domain.
Here “node” refers to such a domain executing code performing some phase of stream processing.
The flow is thru nodes of increasing indexes.
Node j holds a start key to node j+1.
A process might or might not need to keep state between the successive batches that it processes.
We will assume that it does, for that is the more general case.
This looks slightly like the tail-call pattern, but I do not assume that the messages that flow via these start keys carry return keys.
A domain (node) is often available.
When it receives a batch from the stream it combines its stored state with the new data and provides some stream for the next process.
It modifies its internal state as well.
It then “RETURNs” to the next node using the start key that it holds.
That return may block but that is no concern to the logic of this node.
Hopefully the message will soon be delivered.
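The pipeline above can be sketched in plain Python. Each `Node` object stands in for a Keykos domain: it keeps private state between batches, and its `send` method stands in for invoking the start key to node j+1. The class, the `(state, batch)` transform shape, and the example transforms are illustrative assumptions, not anything Keykos defines; note that data only flows forward and no return keys accompany the batches.

```python
# A minimal sketch of the pipeline pattern, assuming a simple
# (state, batch) -> (new_state, output_batch) shape for each node's phase.

class Node:
    def __init__(self, transform, next_node=None):
        self.transform = transform   # this node's phase of stream processing
        self.state = 0               # state kept between successive batches
        self.next_node = next_node   # stands in for the start key to node j+1
        self.output = []             # a sink node collects results here

    def send(self, batch):
        self.state, out = self.transform(self.state, batch)
        if self.next_node is not None:
            self.next_node.send(out)  # "RETURN" to the next node
        else:
            self.output.append(out)

# Example: a running-sum node feeding a doubling node.
sink = Node(lambda s, b: (s, [x * 2 for x in b]))
head = Node(lambda s, b: (s + sum(b), [s + sum(b)]), sink)

head.send([1, 2, 3])   # running sum becomes 6
head.send([4])         # running sum becomes 10
```

Permuting such nodes to solve diverse problems then amounts to rewiring which `next_node` each one holds.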
A vague plan is to make these node processors permutable to solve diverse problems.
What Could Possibly Go Wrong?
There is a limit on the size of the data in a message.
The stream data might occupy memory segments, keys to which could be passed.
That would be a design decision between pairs of adjacent nodes.
That is a fascinating diversion I will not follow just now.
When a node finds its output won’t fit in a message it might call the next node with the initial part of the stream data and a mark indicating “more to follow”.
The next node soon returns for more.
Then return keys appear at a slightly lower abstraction level.
There may be several nodes in this overflow mode; I see no problems in this direction.
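The overflow mode might be sketched as follows. `MSG_LIMIT` and the `(chunk, more)` message shape are assumptions for illustration only: a node whose output won't fit in one message sends an initial chunk with a "more to follow" mark, and the downstream node keeps returning for more until the mark is clear.

```python
# A sketch of overflow mode, assuming a pretend per-message size limit.

MSG_LIMIT = 4  # assumed message size limit, counted in items

def send_in_chunks(data):
    """Yield (chunk, more_to_follow) messages that each fit in one message."""
    for i in range(0, len(data), MSG_LIMIT):
        chunk = data[i:i + MSG_LIMIT]
        yield chunk, (i + MSG_LIMIT) < len(data)

def receive_all(messages):
    """The downstream node 'returns for more' until more_to_follow is False."""
    out = []
    for chunk, more in messages:
        out.extend(chunk)
        if not more:
            break
    return out

received = receive_all(send_in_chunks(list(range(10))))
```

In the real pattern the "return for more" would be an invocation of the return key snatched from the incomplete message; the generator here merely models that lower-level exchange.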
Sometimes a node will decide that the recent batch of input produces no output.
One solution is to pass on a zero length batch of stream to the next node.
In my notion of stream processing a node given the empty string, produces the empty stream.
We are ignoring a higher level of control here; this string of nodes is not your normal program component.
It is not a function or an object, which are the normal system components this century.
We continue to ignore that level just now.
When several CPUs are available to the kernel, there may be several nodes running at once in one stream.
In this regard this pattern transcends the original pushy scheme.
It relies instead on the mutex aspect of the domain to avoid collisions in processing.
If the processing efficiency is sporadic it may be strategic to add buffering in the multi-CPU case.
Can a node buffer? It can.
We have already postulated that some nodes will be sensitive to input stream batches too big for one message.
When they get such an incomplete message they will snatch the return key and invoke it when they want more.
Buffers may not change the stream semantics, but they may change whether the system ever produces any useful output.
I can imagine a buffer absorbing an infinite amount and starving something else.
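One defense against that starvation hazard is to bound the buffer, so that a full buffer pushes the blocking back upstream. This sketch is an illustrative assumption about policy, not a Keykos mechanism; a refused `offer` models the upstream call blocking until there is room.

```python
# A sketch of a buffering node with a capacity bound, so it cannot
# absorb an infinite amount of input and starve something else.
from collections import deque

class BoundedBufferNode:
    def __init__(self, capacity):
        self.buf = deque()
        self.capacity = capacity

    def offer(self, batch):
        """Accept a batch if there is room; otherwise refuse (caller blocks)."""
        if len(self.buf) >= self.capacity:
            return False
        self.buf.append(batch)
        return True

    def take(self):
        """Downstream drains in arrival order."""
        return self.buf.popleft() if self.buf else None

node = BoundedBufferNode(capacity=2)
accepted = [node.offer(b) for b in ([1], [2], [3])]  # third offer refused
```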
A node may take input from two streams and produce a resulting stream, as in multiplexing.
Sometimes the input rate is controlled by upstream events, sometimes it is determined by process logic.
There are two start keys to this node with different databytes (badges) so that the code can see which input stream is producing.
This node is perhaps composed of two domains, one to receive each input stream.
The node is responsible for blocking a stream so as to wait for the other stream to catch up.
It can do this by making the reluctant input domain available while the other domain is waiting.
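The two-input node might be sketched like this. Here the badge (databyte) is modeled as a plain tag on the message, so one body of code can see which input stream is producing, and "emit a pair only when both streams have caught up" is one illustrative blocking policy among many, not the only choice.

```python
# A sketch of a two-input (fan-in) node: two start keys with different
# badges feed one body of code, which holds back the stream that is ahead.
from collections import deque

class MergeNode:
    def __init__(self):
        self.pending = {0: deque(), 1: deque()}  # one queue per badge
        self.output = []

    def send(self, badge, batch):
        self.pending[badge].append(batch)
        # Emit only when both streams have caught up; the stream that is
        # ahead simply waits (its batches queue) for the other.
        while self.pending[0] and self.pending[1]:
            a = self.pending[0].popleft()
            b = self.pending[1].popleft()
            self.output.append(a + b)

m = MergeNode()
m.send(0, [1])
m.send(0, [2])
m.send(1, [10])   # stream 1 catches up; one pair is emitted
```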
For fan-out the same dichotomy applies as it does for fan-in.
There may be fungible servers downstream and we need to load balance — ergo Exogenous.
A demultiplexor is charged with sending stuff to the right place — ergo Endogenous.
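The two fan-out styles contrast as follows. Both policies here are illustrative assumptions: the round-robin balancer (exogenous) picks a destination without looking at the data, while the demultiplexor (endogenous) routes each batch where its own content says it belongs.

```python
# A sketch contrasting exogenous and endogenous fan-out.
import itertools

def round_robin(batches, n_servers):
    """Exogenous: spread batches over fungible downstream servers."""
    servers = [[] for _ in range(n_servers)]
    for server, batch in zip(itertools.cycle(servers), batches):
        server.append(batch)
    return servers

def demux(batches, key):
    """Endogenous: send each batch to the right place, chosen by content."""
    routes = {}
    for batch in batches:
        routes.setdefault(key(batch), []).append(batch)
    return routes

spread = round_robin([[1], [2], [3]], 2)
routed = demux([[1], [2], [3]], key=lambda b: b[0] % 2)
```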
The original pushy scheme includes continuations in all the messages.
There are useful modifications to the simple “pass it on” plan.
In the plan on this page there are generally no continuations or return keys.
This saves key copying and the necessary annihilation of many copies of the return key when it expires.
These styles may be mixed, but that is confusing (an anti-pattern).
This may tempt some to put logic such as this into hardware.
There are smarts within the abstractions that must clearly be in user mode:
The order in which ready nodes run is not specified here.
Indeed those decisions must sometimes rely on insights we do not have regarding cache affinity at several levels.
I have no general notions on such questions.