This note was provoked by some controversy on an e-mail list.
Earl Boebert wrote a paper in the early 80s indicating that capability systems could not confine; that is, you had to rely on all code that saw your secrets not to reveal them. That was about the time we were specifying and implementing the factory, which was designed to confine. A later paper by Boebert made a similar claim about “unmodified capability systems”. I can find no reference to “modified capability systems” that suggests what they are. I presumed at the time that the modification Boebert was referring to was the addition of factories. This note touches on a few features that some take to constitute a modified capability system.

Some took references to modified capability systems to be pejorative, implying that capability discipline had to be made more complex, or somehow compromised, in order to confine. I want to argue here against such perspectives.

Classically, computer systems with any pretense to security achieve that security through fixed code: a collection of modules, the security kernel, that protects itself and imposes access restrictions on all other code in the system. Usually this collection runs in privileged mode, to the exclusion of all other software. In Unix, some code within the security kernel also runs in user mode, with “root privilege”.

From this perspective, modification to the security kernel must be resisted: change leads to bugs, and a change in semantics would generally mean that data that had been safe might now be vulnerable to compromise under the new rules. Either design or implementation blunders could compromise extant security arrangements.

The factory was designed and implemented with no new hooks in the underlying security kernel. (I lie a bit, but only a bit.) The factory is just like any other application: it does not violate the old rules, for it lacks the ability, the capabilities, to do so.

The strength of capability designs is that such new security function can be added with no impact on the “security kernel”. Old security arrangements remain intact. Code designers who are unaware of the new facility can still safely write code to the unmodified rules of the original security kernel. The new kid on the block is merely an application that can provide a new form of protection; you are concerned with its design and implementation only if you wish to employ it. Indeed, unlike the security kernel, which unilaterally imposes universal discipline, the factory is user-replaceable code affecting only those who choose to be bound by its new rules. It is up to the mutually suspicious parties to agree on using the factory. Of course, both rely on the correctness of whatever factory they select.
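To make the shape of this arrangement concrete, here is a toy sketch in Python. None of it is KeyKOS code; the names Capability, Factory, spawn, and proprietary_program are invented for illustration, and Python itself enforces no capability discipline (real code could reach modules, globals, and the network). Take it as notation for the pattern: the spawned instance’s whole world is the set of capabilities the requestor grants, so the vendor’s code can compute over the client’s secret while holding no channel back to the vendor.

    # Toy model of factory-style confinement. All names are invented
    # for illustration; Python does not enforce this discipline.

    class Capability:
        """In a capability system an object reference is the only authority."""
        def __init__(self, name, invoke):
            self.name = name
            self.invoke = invoke

    def proprietary_program(caps):
        """The vendor's secret algorithm. Its entire world is `caps`; it
        was given no reference through which to report back to the vendor."""
        data = caps["input"].invoke()
        return "processed(" + data + ")"

    class Factory:
        """Ordinary application code, not a kernel hook. A client who
        trusts the factory, rather than the vendor, can run the vendor's
        code over a secret, confident the instance holds no outward
        channel beyond what the client granted."""
        def __init__(self, program):
            self._program = program  # sealed in at creation

        def spawn(self, granted_caps):
            # Confinement is the absence of any other reference: the
            # instance starts with granted_caps and nothing else.
            return self._program(dict(granted_caps))

    # The client wraps its secret as a capability and chooses to grant it.
    secret = Capability("input", lambda: "client data")
    factory = Factory(proprietary_program)
    print(factory.spawn({"input": secret}))  # vendor code ran; the secret never left

Note that the mutually suspicious parties depend only on the factory’s small body of code, which either party may audit or replace; neither must trust the other’s code.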

It may seem to some that a single monolithic security module is preferable, so that there is just one thing to trust. The history of operating systems, however, has been to remove function from some sort of kernel and add it back as an application. (That is, until Microsoft discovered a business strategy to the contrary.) Capabilities extend this thrust: users unconcerned with the service of some external security module need not learn about it to know whether its presence on the machine they use affects them. This is in contrast to Unix, where the addition of new features for file access might make files available to an expanded set of users.
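The contrast can be put in the same toy notation. ACL, ambient_open, and cap_open below are invented names, not real system calls; the sketch shows only the shape of the two models.

    # Ambient-authority model: access is decided by consulting global
    # state (who you are). A new feature that edits this table, or that
    # runs as root, can widen who reaches the file, affecting everyone.
    ACL = {"/secret.txt": {"alice"}}

    def ambient_open(user, path):
        if user not in ACL[path]:
            raise PermissionError(path)
        return "contents of " + path

    # Capability model: reachability is the access check. A newly
    # installed service holds no file capability unless someone passes
    # it one, so its mere presence cannot widen access to anything.
    def cap_open(file_cap):
        return file_cap()

    print(ambient_open("alice", "/secret.txt"))
    print(cap_open(lambda: "contents of /secret.txt"))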

Some may regard this feature as a quirk, as if there were surely no important security issues to be raised beyond confinement. Well, you might take a look at the number of novel security problems that can be solved in this way, such as making sure that code you install will continue to function into the future under terms enforced by agents working for you. Capabilities provide for custom security.