Capabilities in the large/small

There are two thrusts toward applied capabilities, which can be described as “Capabilities in the large” and “Capabilities in the small”. Keykos is in the former camp. Both promise great benefits; indeed they complement each other. Yet most detailed designs are exclusively one or the other, and merely wave a hand at the other school if they mention it at all. Keykos is guilty of this.

It is not clear which came first historically, except that mathematical language theorists pioneered the ideas that underpin Capabilities in the small more than a century ago. Those who first gave precise definitions of what it meant to be a mathematical proposition found themselves dealing with these issues, though under names we might not now recognize. Lexical scoping, along with the logic of parameter passing, has buried within it most or all of the insights of capabilities.

Capabilities in the Small

Among computer languages Algol was the first(?) to explicitly require each variable to be declared; the declaration determined exactly where in the program that variable might be named and thus accessed. A call site might mention that variable, and the called program’s parameter would then temporarily refer to the same value as the variable. C, for all of its problems, is faithful to this notion. Academic languages (with the possible exception of LISP’s dynamic scoping) adhere to this plan. The idea was clearly present in Church’s lambda calculus (from the 1930s), and there were less explicit precursors.
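The scoping discipline above can be sketched in a few lines. This is an illustrative example of my own, not from any system named here: a lexically scoped variable is reachable only through code declared in its scope, so a closure over it acts, in effect, as a capability to it.

```python
# A lexically scoped variable and the closure that is its sole handle.

def make_counter():
    count = 0                  # visible only inside make_counter

    def increment():
        nonlocal count         # closes over `count` lexically
        count += 1
        return count

    return increment           # the only way anyone can touch `count`

bump = make_counter()
print(bump())   # 1
print(bump())   # 2
# No other code can read or reset `count`; holding `bump` is the sole
# means of affecting it, just as holding a capability is the sole
# means of invoking its object.
```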

Early Burroughs machines tried to be faithful to this plan in their instruction set design. The CHERI project seeks hardware support for capabilities so that not even a malicious compiler can violate capability discipline. Other platforms rely on conventional hardware and trusted compilers, and I see no reason why this cannot work, but I need to understand their claims in better detail. For a secure platform those claims must be melded with what loaders and other system components do to arrange for the execution of the compiled programs; I have not seen an integrated plan for such systems. I think that IBM has evolved the System/38 into such a system, but I do not know what claims IBM makes about its current offerings.

If you can trust the compiler and all your software is written in the language of that compiler then you may be able to build a complete capability platform without novel hardware support. Perhaps it suffices to trust a compiler back-end into which other compilers for other languages feed their code.
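Here is a hedged sketch of why an unforgeable-reference guarantee suffices. The classes are invented for illustration: once the compiler ensures that references cannot be forged, attenuating authority is just a matter of wrapping one object in another.

```python
# Attenuation in a language with unforgeable references (hypothetical
# example; Python itself does not enforce the guarantee).

class File:
    def __init__(self):
        self.data = ""
    def read(self):
        return self.data
    def write(self, text):
        self.data += text

class ReadOnly:
    """Passes along read(), withholds write(): a weaker capability."""
    def __init__(self, file):
        self._file = file      # the stronger capability, kept private
    def read(self):
        return self._file.read()

f = File()
f.write("hello")
ro = ReadOnly(f)
print(ro.read())               # hello
# A holder of `ro` alone cannot write -- provided the language
# prevents reaching `ro._file` or conjuring a reference to `f` out of
# thin air, which is exactly what a trusted compiler would guarantee.
```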

I am enthusiastic about all of these efforts; they complement Capabilities in the Large.

Capabilities in the Large

These systems harness the hardware protection features that have come with most non-trivial computers since the middle 1960s. Some claim that experience with today’s operating systems demonstrates that this is not good enough; but just because you can write an insecure kernel based on such features does not mean that you cannot build a secure kernel based on them. Keykos, EROS, CapROS, Coyotos, and evidently seL4 follow this plan. Not only does this strategy preserve legacy hardware, it preserves most legacy software. A new layer, or two, is inserted between the hardware and the application software, replacing the legacy OS. Legacy application code finds enough familiar legacy-like function to achieve its ends; new capability-savvy application code finds an environment that is mostly easier to use.

The benefits take the form of a set of firewalls that vastly limit the damage that can be done when some component, such as a browser, falls prey to maliciously crafted data from the internet and, despite its design, abuses its authority to the detriment of the user. Chrome and other browsers take great pains to avoid this but still advise the customer not to click on suspicious links.
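The firewall idea can be sketched as follows. The names here are invented for illustration, and in a real capability platform the bound is enforced by the system rather than by convention: a component is handed only the capability it needs, so even a fully compromised component can abuse at most that.

```python
# Least authority as a firewall (illustrative sketch; Python itself
# does not enforce the bound -- the capability platform would).

audit_log = []

def limited_fetch(url):        # the single authority granted
    audit_log.append(url)      # the grantor can interpose and audit
    return "<html>page body</html>"

def render(fetch, url):        # e.g. a browser rendering component
    return fetch(url).upper()  # can use only what it was handed

result = render(limited_fetch, "http://example.com")
print(result)
# render() was handed no file, socket, or clipboard capability; a
# maliciously crafted page could misuse `fetch`, but nothing beyond it,
# and every use leaves a trace in audit_log.
```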