We have learned to trust programs to be methodical and to follow the rules set out for them in their code stream. People, on the other hand, are not so methodical. Few even trust themselves to remember all that they have done; ‘me today’ is not quite the same person as ‘me yesterday’. This suggests that a log of the actions taken by a user might suffice. While that might often be a better solution, we explore here techniques to examine an entire ‘system’. We must also consider the case where the examiner is not privy to all of the state of the entire system. Can we imagine a program, unconstrained by capability discipline, with access to the entire system, that might be trusted to suitably reveal the legitimate security concerns of some person? Indeed, what are those concerns? Such a program might run within the capability system wielding a closely held ‘Xray’ capability that violates the normal capability limitations. Such a capability was considered, but not implemented, for Keykos.
I can imagine, in Keykos at least, a complex program that could answer the following question: “If I were to write a secret into that segment (named with a key), would anyone else on this system be able to read it?” Such a program would probably need a theory of some segment keepers, and certainly of the role of space banks. Giving powerful keys to complex programs is a bad idea; at least the Xray key is non-destructive. Perhaps the powerful parts can be divorced from the complex parts. (Is there a way to hide crypto keys from Xray keys?) If the key to the segment had been held so that all operations on it were logged (what would that mean, exactly?), there might be hope.
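The core of that question is a reachability analysis over the system’s capability graph. As a minimal sketch, assuming a toy model in which each principal holds a set of keys and each object may contain further keys that it grants readably — the names `holds`, `grants_read`, and `can_others_read` are invented for illustration, and this deliberately ignores the segment keepers and space banks a real answer would have to model:

```python
from collections import deque

def can_others_read(segment, holds, grants_read, me):
    """Return the set of principals, other than `me`, that can reach
    `segment` with read authority by following keys transitively.

    holds:       principal -> set of objects whose keys that principal holds
    grants_read: object -> set of further objects readable via keys stored
                 inside it (a node or segment containing more keys)
    """
    readers = set()
    for principal, roots in holds.items():
        if principal == me:
            continue
        # Breadth-first search from every key this principal holds.
        seen, queue = set(roots), deque(roots)
        while queue:
            obj = queue.popleft()
            if obj == segment:
                readers.add(principal)
                break
            for nxt in grants_read.get(obj, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return readers

# Hypothetical example: alice holds a node whose contents include a key
# to the segment, so the secret would not be safe from her.
holds = {"me": {"seg"}, "alice": {"node1"}, "bob": {"node2"}}
grants_read = {"node1": {"seg"}, "node2": {"other"}}
print(can_others_read("seg", holds, grants_read, "me"))  # -> {'alice'}
```

Even this toy version shows why the program needs the whole graph: alice’s access is indirect, visible only by tracing keys stored inside objects she holds, which is exactly what an Xray capability would have to reveal.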
This is a tar-pit, but not hopeless.