Most machines have a few privileged registers that control how problem mode code is constrained, and access to those registers is limited to privileged code. Commands to set privileged registers are privileged, and on machines that are easily virtualized the commands to read them are privileged as well.
The first program to run after a system reset finds itself in privileged state. In privileged state a program may anoint another program with that status, or it may keep its privileged state and host untrusted programs that run in problem state. Together with some sort of memory map or memory protection, the operating system can protect itself from applications and applications from each other. With a memory map the privileged code can reserve a portion of real memory for itself and its record of system state. With additional ‘timer’ hardware that assures transition back to the privileged code after some definite period of time, the privileged code can allow the hardware to obey untrusted, even hostile programs in problem state. Such programs have access to some of the real memory for some limited period of time.
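The combination above can be sketched as a toy simulation. Everything here is invented for illustration (the names, the step-budget "timer", the window-based "memory map"); no real hardware works at this level of abstraction:

```python
# Toy model of the two state scheme: privileged code reserves part of
# "real memory" for itself, then obeys a hostile problem-state program
# under a memory window and a timer.

MEMORY = [0] * 64          # all of "real memory"

class Trap(Exception):
    """Raised when problem-state code exceeds its limits."""

def run_problem_state(program, base, limit, timer):
    """Obey an untrusted program, one (address, value) store per step.
    The memory map is modeled by the [base, limit) window and the
    timer hardware by a simple step budget."""
    for steps, (addr, value) in enumerate(program):
        if steps >= timer:
            raise Trap("timer expired: back to privileged code")
        if not (base <= addr < limit):
            raise Trap(f"address {addr} outside [{base}, {limit})")
        MEMORY[addr] = value

# Privileged code keeps memory below 16 for itself and lets a hostile
# program run briefly in the rest. The third store is an attack.
hostile = [(20, 1), (21, 2), (5, 99)]
try:
    run_problem_state(hostile, base=16, limit=64, timer=3)
except Trap:
    pass                    # privileged code regains control

assert MEMORY[5] == 0       # the reserved memory is untouched
```

The point of the sketch is only that the untrusted program needs no goodwill: the window and the timer together bound what it can do.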
Safe-Tcl lets Tcl code run other Tcl code while critical operations are overridden. I don’t know enough details of the scheme but it seems evident that they should be able to mimic the successful two state hardware scheme to provide security.
I think that this scheme is sound and flexible but it seems awkward because writing privileged code is an uncommon talent even if the rules are simple. I think of capability architectures as more natural and some hardware systems provide capability functions directly. I don’t know whether software provision of capabilities built on two state machines is optimal but I like it. Keykos is just such a system.
With either a two state machine and a minimal capability kernel, or capability hardware and an even smaller capability kernel, diverse applications can each bring their own security models to the same capability platform and share that platform and be secure from each other. The security of the application depends only on the application implementation and the platform. (Most such capability platforms include code outside the capability kernel.)

The CDC 6600 designed by Seymour Cray provided security by controlling the addresses that a program could access. Access to special addresses provided this control. The controlling program merely began initially with access to those special addresses. The big CPU had no privileged instructions. Only the “Peripheral CPUs” could do I/O. They only ran trusted code and communicated thru main memory.
Starting from the two state architecture there remains the job of agreeing upon and establishing mutually trusted mechanisms to ensure property rights between the applications. Otherwise the integrity and security of one application remain subject to bugs in the other.
Cal TSS was a system designed to run on a CDC 6400, a machine much like the 6600. It was not finished but it introduced many novel ideas on how to use hardware that provided coarse grained protection to in fact provide fine grained protection. The idea that I remember best is that in order to provide read-only access to variable data, one copied the data to user memory upon beginning to execute the code requiring such access. To provide read-write access one also copied the data back at the end of the time slice. The 6400 had inordinately high copy performance. This provided much of the logical access control that Multics provided thru the memory map. Butler Lampson’s paper describes this well. The scheme does not handle situations where such data is too large to be copied, but Seymour didn’t intend his machine to be used that way. In his view the application program was required to partition the data into hunks that would fit in real memory.
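The copy trick can be rendered as a few lines of pseudocode. The names and the dictionary standing in for system data are invented; the real system did this with fast block copies on CDC hardware:

```python
# Toy rendering of the Cal TSS idea: access control by copying.
# 'ro': copy in at slice start, discard the copy at the end.
# 'rw': copy in at slice start, copy back at the end.

system_data = {"quota": 10}        # lives in system memory

def time_slice(user_code, mode):
    """Give user_code access to system_data for one time slice."""
    user_copy = dict(system_data)          # copy to user memory
    user_code(user_copy)                   # user may scribble freely
    if mode == "rw":
        system_data.update(user_copy)      # copy back at slice end

time_slice(lambda d: d.update(quota=0), "ro")
assert system_data["quota"] == 10          # read-only: change discarded

time_slice(lambda d: d.update(quota=7), "rw")
assert system_data["quota"] == 7           # read-write: change persists
```

Notice that the hardware need only keep user memory and system memory apart; the finer distinction between read-only and read-write access falls out of whether the copy-back happens.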
Another interesting footnote is a scheme that IBM channels employ to provide some system protection. There are a few commands that a channel will obey that cause the channel to renounce one of a few categorical privileges. One such authority is to move data to main storage, another is to move data to the device, and yet another is to order the device to change which data within the device is being accessed. The strategy is that the privileged program constructs the beginning of the channel program, includes such commands to renounce dangerous authority, and then causes the channel program to “branch” into the untrusted portion of the channel program provided by the untrusted application program. This is a bit like the two state scheme, but there is no way for a channel program to reacquire authority except for the CPU, executing privileged code, to restart the channel after it has become idle. The storage protection from the 360 completes this by including a 4 bit protection value in the channel, which must match the storage key of the main storage accessed by the channel.
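The one-way nature of the renunciation is the whole trick, and a loose model makes it plain. The command names and the interpreter below are invented; real channel programs are chains of CCWs with a quite different encoding:

```python
# Loose model of the channel scheme: authority can be renounced but
# never reacquired until privileged code restarts the channel.

def run_channel_program(ccws):
    """Obey a channel program, a list of (command, argument) pairs."""
    authority = {"read": True, "write": True, "seek": True}
    obeyed = []
    for op, arg in ccws:
        if op == "renounce":
            authority[arg] = False          # one-way transition
        elif authority.get(op):
            obeyed.append((op, arg))        # command obeyed
        else:
            break                           # violation: channel stops
    return obeyed

# Privileged prefix renounces seeking, then "branches" into the
# untrusted suffix supplied by the application.
trusted_prefix = [("renounce", "seek")]
untrusted_suffix = [("read", "block 4"), ("seek", "block 0"), ("write", "x")]
log = run_channel_program(trusted_prefix + untrusted_suffix)

assert ("read", "block 4") in log           # permitted work proceeds
assert all(op != "seek" for op, _ in log)   # renounced authority holds
```

Nothing in the untrusted suffix can issue a "renounce" in reverse; only restarting the interpreter, which stands in for the CPU restarting an idle channel, restores the full authority set.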
Hank Levy’s Capability-Based Computer Systems (Digital Press, 1984) describes alternatives to two state hardware security.
We describe here how Keykos uses such features.
The cookie jar anecdote tells of another hardware design security pitfall.
See tamper resistance also.
The security architecture of the Cell
Intel has recently introduced the SGX feature in their x86 architecture. ARM introduced TrustZone in response. I think that both of these are due to the fact that commercial kernels have failed to use the conventional memory map to protect themselves, and thus a new dichotomy has seemed necessary. You will find neither described very clearly in the above links. I think neither was well designed.