I think that the hardware capability systems I have known did not support selective revocation of primitive access to data, that is, of loads and stores.
This might justify some of the claims that capability systems cannot support some military security requirements.
The Keykos kernel controls access to memory by manipulating the memory map, which IBM calls DAT (Dynamic Address Translation).
The kernel supports selective revocation of primitive data access.
The access is via capabilities and the authority to revoke is also via capabilities.
Primitive data access is via the hardware that was designed to provide access—I can imagine no other feasible scheme.
In this note I follow the events that occur when access is revoked.
I also follow the events that lead up to such revocation, or at least describe the internal state of the kernel that allows it to revoke access with no overhead proportional to system size.
Revocation by a program outside the kernel is an act, and acts require a capability, as described in this pattern.
We will make a number of slightly optimistic yet realistic assumptions.
I assume the 370 in order to be concrete.
D owns data whose sensitivity varies depending on events whose timing cannot be anticipated.
D thus needs to veil the data from some of its clients for periods of time.
Stale data are supplied to such clients for such periods.
B is a program; b, an instance of B, is a client of D.
D keeps the current, intermittently sensitive data in segment s, to which there is a read-only key S.
S is disseminated to those with continuous access.
To supply limited access to the data in s, D buys a new node n and deposits S in slot 0 of n.
Node n is formatted as a red segment node, and a segment key T to n serves for read access to s while S is in slot 0 of n.
D retains the node key N to n and uses it to change slot 0 of n when the current state of s is about to become sensitive.
On these events a stale copy ss of s is formed and a key to ss is stored in slot 0 of n for the period.
When s is no longer sensitive, S is put back in slot 0 of n.
T is disseminated to those with intermittent access to s.
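The veiling protocol above can be sketched as a toy model. This is not kernel code; the classes and the `read_via_T` helper are illustrative names for the structures the note describes (node n with 16 slots, keys S and T, the stale copy ss):

```python
# Toy model of the veiling protocol. All names are illustrative,
# not KeyKOS kernel structures or API.

class Segment:
    def __init__(self, data):
        self.data = data

class Node:
    """A node with 16 slots; here used as a red segment node."""
    def __init__(self):
        self.slots = [None] * 16

def read_via_T(n):
    # A segment key T to n reads through whatever key sits in slot 0.
    return n.slots[0].data

s = Segment("current data")        # segment s
n = Node()                         # node n, bought by D
n.slots[0] = s                     # S, a read-only key to s, in slot 0
assert read_via_T(n) == "current data"

# s is about to become sensitive: D, holding node key N,
# swaps a key to a stale copy ss into slot 0.
ss = Segment("stale data")
n.slots[0] = ss
assert read_via_T(n) == "stale data"   # holders of T now see stale data

# When s is no longer sensitive, S goes back in slot 0.
n.slots[0] = s
```

The point of the indirection is that D never has to track who holds T; changing slot 0 changes what every holder of T sees.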
T is available to the program that builds b’s memory tree.
Addresses 40000 thru 4FFFF are allocated to access the censored data.
T is thus placed in slot 4 of a node in b’s memory tree.
As b runs and touches pages with virtual addresses between 40000 and 4FFFF, memory faults will cause the kernel to fill in a hardware page table devoted to T.
If other domains share T, they will also share this table, and the kernel work of populating the page table.
Domains with access to S will run with another page table devoted to S.
Page tables for S and T point to the same page frames in RAM.
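The choice of slot 4 follows from the nibble fan-out: each 4-bit nibble of the virtual address selects a slot in successive segment nodes. A small sketch checks the arithmetic; the 5-nibble (20-bit) address width here is an assumption chosen only to match the 40000 thru 4FFFF example, not a claim about 370 address formats:

```python
# Illustrative nibble arithmetic for 16-way fan-out in the memory tree.
# The 5-nibble width is an assumption matching the 40000..4FFFF example.

def slot_path(addr, nibbles=5):
    """Return the slot offsets, root first, selected by each nibble."""
    return [(addr >> (4 * i)) & 0xF for i in range(nibbles - 1, -1, -1)]

# Every address in 40000..4FFFF selects slot 4 at the top node,
# which is where key T was placed.
assert slot_path(0x40000) == [4, 0, 0, 0, 0]
assert slot_path(0x4FFFF) == [4, 15, 15, 15, 15]
```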
When the hardware presents a page fault to the kernel it identifies the page table whose invalid entry was ‘at fault’.
That page table’s header identifies its producer which in this case is node n.
The kernel examines node n in the light of external kernel semantics, and presumably finds a page key in the structure that defines segment s.
It does disk I/O, if necessary, and puts a read-only page table entry in the page table that caused the fault.
The entry is read-only since the header of the page table indicates that this table is for users without write access.
That indication was made when the page table was allocated, in response to determining the meaning of key T.
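The fault path just described can be summarized in a short sketch. The data structures are simplifications, not the 370 kernel's; `find_page_frame` and `do_disk_io` are hypothetical stand-ins for the kernel's segment walk and paging machinery:

```python
# Sketch of the fault path: the table header names its producer node
# and records write access; the handler installs a matching entry.
# All names here are illustrative, not 370 kernel structures.

class PageTable:
    def __init__(self, producer, writable):
        self.producer = producer   # the node whose key produced this table
        self.writable = writable   # recorded when the table was allocated
        self.entries = {}          # page index -> (frame, writable)

def handle_fault(table, page_index, find_page_frame, do_disk_io):
    node = table.producer               # header identifies producer node n
    frame = find_page_frame(node, page_index)
    if frame is None:                   # not in RAM: do disk I/O
        frame = do_disk_io(node, page_index)
    # Read-only because the table's header says no write access.
    table.entries[page_index] = (frame, table.writable)
    return frame

t = PageTable(producer="node n", writable=False)
frame = handle_fault(t, 0, lambda n, p: None, lambda n, p: "frame-17")
assert t.entries[0] == ("frame-17", False)
```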
Access is Revoked
D gets a signal that requires putting sensitive information in s and thus revoking access that was provided via T.
It invokes n via its node key, putting a segment key to a constant segment into slot 0 of n.
This kernel invocation involves a few internal steps, which we examine below; at the end of the invocation there is no access to s via T, and any domain issuing loads or stores to addresses defined via T will see the constant segment.
How the Kernel Responds
The kernel sees an authorized request to replace the contents of slot 0 of node n.
It is authorized since the invoker holds a node key to node n.
The kernel must invalidate any mapping table entries whose creation required examination of slot 0.
The 370 kernel was able to find each such entry thru one of two mechanisms; each table entry would be found by one or the other.
There were two mechanisms for efficiency: one was space efficient and covered the common cases, and the other handled the uncommon cases and took more space.
In the case at hand the stored depend relation will lead the kernel to entry 4 in domain b’s hardware segment table.
This table might be shared with other domains using the same map.
The kernel makes that entry invalid and purges the TLB.
The kernel pays no attention to the new memory key placed in slot 0 of n.
If b uses those addresses while s is still veiled, the kernel will intervene via a fault and most likely find a page table left over from the last time s was veiled.
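The invalidation step itself is small; a self-contained sketch, with illustrative names rather than 370 structures, shows the whole effect of the invocation on the mapping state:

```python
# Sketch of the invalidation step: the depend relation names entry 4 of
# the hardware segment table; the kernel marks it invalid and purges the
# TLB. Names and representations are illustrative.

segment_table = {4: "page table produced via T"}   # entry 4, b's map
tlb = {0x40000: "cached translation"}              # hardware TLB

def invalidate_segment_entry(index):
    segment_table[index] = None   # entry is now invalid; next use faults
    tlb.clear()                   # purge the TLB

invalidate_segment_entry(4)
assert segment_table[4] is None
assert not tlb
```

Note that nothing here examines the new key in slot 0; the kernel only tears down state justified by the old contents, and later faults rebuild mappings from whatever slot 0 then holds.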
- When a portion of the memory tree conformed to a certain common form of 16-way fan-out (4-bit nibbles of the address serving as slot offsets within successive segment nodes), it was possible to reconstruct the sequence of slot examinations backwards by following the doubly linked prepared keys up the tree towards the memory tree root.
Memory keys remain prepared, and linked, while they justify memory map entries.
- Depend logic is a conventional hash table relating slots to table entries, organized to make access by slot ID fast.
False positives were allowed.
When a slot is modified all table entries associated with that slot by the depend relation are sacrificed.
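The depend mechanism can be sketched as a small hash table. The bucket structure and names are illustrative, not the 370 kernel's; the sketch shows the one property the note insists on, that false positives are safe because sacrificing an entry costs only a refault, never stale access:

```python
# Sketch of the depend relation: a hash table from slot IDs to mapping
# table entries, allowing false positives. Structure is illustrative.

class Depend:
    def __init__(self, buckets=64):
        self.buckets = [[] for _ in range(buckets)]

    def _bucket(self, slot_id):
        return self.buckets[hash(slot_id) % len(self.buckets)]

    def record(self, slot_id, entry):
        # Called when examining slot_id justified a mapping table entry.
        self._bucket(slot_id).append(entry)

    def sacrifice(self, slot_id, invalidate):
        # On slot modification, invalidate every entry in the bucket.
        # Entries for other slots that hash here go too: a false
        # positive costs only a refault, never stale access.
        bucket = self._bucket(slot_id)
        for entry in bucket:
            invalidate(entry)
        bucket.clear()

d = Depend()
invalidated = []
d.record(("node n", 0), "segment-table entry 4")
d.sacrifice(("node n", 0), invalidated.append)
assert invalidated == ["segment-table entry 4"]
```

Because entries are only ever sacrificed, never consulted to grant access, the table can be lossy in this one direction without compromising revocation.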