Managing the link

The ideas below are predicated on low-level, error-controlled blocks limited to 64 KB. I am setting that assumption aside for now; the text below raises some of the problems that explain why.

Here is a list of the sorts of signals that cross links. A transmission block is the unit of error control, whether by retransmission or by some form of forward error correction. I imagine the following pattern uses the links efficiently. There is a pool of buffers, each big enough to hold a standard packet with a 64 KB payload. Reservations are made against the pool as a whole and do not name particular buffers. A transmission block occupies a portion of a buffer and the rest of that buffer goes unused; transmission time is not wasted, but portions of the buffer are. Packets do not span buffers or transmission blocks. Perhaps buffers are made somewhat larger to hold error-control information such as redundancy.

At 10 Gb/s a 64 KB block passes a point in about 52 μs and stretches about 10 km along the fiber. I will presume that duration is short enough not to inconvenience applications that require low latency. For shorter, lower-bandwidth links where smaller latencies may be feasible, it might be necessary to interrupt the transmission of a large packet with shorter packets. That raises difficult problems if we try to guarantee that packets arrive in order; I ignore those problems for now, but priority schemes come to the rescue, I think.
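The timing claims above can be checked with a little arithmetic (assuming 64 KB means 2^16 bytes, 10 Gb/s means 10^10 bits/s, and light propagates in fiber at roughly 2×10^8 m/s, about two-thirds of c):

```python
# Worked check of the block-duration and fiber-length figures in the text.
BLOCK_BYTES = 64 * 1024        # 64 KB payload
LINK_BPS = 10e9                # 10 Gb/s
FIBER_SPEED_M_PER_S = 2e8      # ~2/3 of c in silica fiber (assumed)

block_bits = BLOCK_BYTES * 8
duration_s = block_bits / LINK_BPS            # time for the block to pass a point
length_m = duration_s * FIBER_SPEED_M_PER_S   # spatial extent of the block in fiber

print(f"{duration_s * 1e6:.1f} us")   # ~52.4 us
print(f"{length_m / 1e3:.1f} km")     # ~10.5 km
```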

With the above assumptions, I imagine that when a packet is found that must exit on link j, we check whether a buffer is currently allocated to accumulate packets for link j. There is always zero or one such buffer per outgoing link. If the node supports more than one transmission priority, there may be one accumulation buffer per priority. If either:

we launch the partially filled buffer for link j, or queue the buffer if the outgoing link is busy. I presume one queue per priority, and most likely two, or at most three, priorities.
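A minimal sketch of the accumulation scheme above. The names here (Node, enqueue_packet) are illustrative, not from the text, and since the text leaves the exact launch conditions unstated, this sketch assumes one plausible condition, "buffer would overflow," as the trigger:

```python
from collections import deque

BUFFER_BYTES = 64 * 1024  # one standard 64 KB packet payload

class Node:
    """Sketch: at most one accumulation buffer per (link, priority)."""

    def __init__(self, links, priorities=2):
        # (link, prio) -> current accumulation buffer, or absent if none
        self.accum = {}
        # one transmission queue per priority for each outgoing link
        self.queues = {link: [deque() for _ in range(priorities)]
                       for link in links}

    def enqueue_packet(self, link, prio, packet):
        key = (link, prio)
        buf = self.accum.setdefault(key, {"packets": [], "fill": 0})
        if buf["fill"] + len(packet) > BUFFER_BYTES:
            # Assumed launch condition: the packet will not fit, so launch
            # the partially filled buffer (here, queue it for the link) and
            # start a fresh one. Packets never span buffers.
            self.queues[link][prio].append(buf["packets"])
            buf = {"packets": [], "fill": 0}
            self.accum[key] = buf
        buf["packets"].append(packet)
        buf["fill"] += len(packet)

node = Node(["j"])
node.enqueue_packet("j", 0, b"x" * 40000)
node.enqueue_packet("j", 0, b"x" * 40000)  # 80000 > 65536: first buffer launches
```

The per-priority queues model the "one queue for each priority" assumption; a real node would also launch a buffer when its link goes idle or on a timeout, whichever conditions the elided "either" list named.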

Link transmission services