This is an examination of a certain sort of completeness: a computer is complete, in this sense, when it provides the control mechanisms by which one program controls what another can do. These might be characterized as meta tools for a program whose job is other programs. The privileged (or supervisor) mode of modern processors provides these meta tools. The kernel of an OS is a meta program; a debugger is another.

The Scheme reports require that certain invalid Scheme programs report errors upon execution of fragments such as (car 3). The nature of the report and its recipient are left unspecified. Since about 1960, computer architectures have specified both the report and its recipient. MzScheme provides a parameter mechanism to specify the recipient. The report itself is specified too, but perhaps less specifically to the error than it might be. (I argue elsewhere that MzScheme has made the wrong generalization and that a simpler one would be more powerful.)
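MzScheme's mechanism can be sketched as follows. `error-display-handler` is the MzScheme parameter naming the procedure that receives error reports, and `parameterize` rebinds it dynamically; the recipient procedure shown here is hypothetical.

```scheme
;; Sketch: directing the report from (car 3) to a chosen recipient.
;; error-display-handler is MzScheme's parameter for the procedure
;; that receives error reports; notify-supervisor is hypothetical.
(parameterize ((error-display-handler
                (lambda (msg . ignored)    ; the recipient of the report
                  (notify-supervisor msg))))
  (car 3))   ; the report of this error is delivered to the handler above
```

Note that the recipient is determined dynamically: whatever program runs within the extent of the `parameterize` reports to the same recipient, which is the assumption questioned below.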

The meta tools that come with modern processors allow the meta program to:

  1. limit resources such as space and time that other programs can use;
  2. limit the communications of signals to and from other programs;
  3. report details of program errors in a way that helps diagnose problems;
  4. intervene in program execution so as to examine and report the progress of a computation.

Furthermore, the hardware meta tools are such that the meta program can apply these strategies differently to different clients. I do not advocate replicating the hardware solution, for Scheme inherently does an exemplary job at task 2 and does it at a higher level of abstraction than hardware meta tools typically do. It would be well, however, to be as complete as the hardware.
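Task 1 can be approximated within MzScheme itself. The following is a minimal sketch of a time limit built from MzScheme's thread primitives (`thread`, `sleep`, and `kill-thread`); the procedure name and the policy of silently revoking the client are my assumptions, not an established API.

```scheme
;; Sketch: a meta program limiting the time another program may use.
;; The client runs in its own thread; after the allotted time the
;; meta program revokes the processor resource by killing the thread.
;; (kill-thread on an already-finished thread is harmless.)
(define (run-with-time-limit seconds thunk)
  (let ((client (thread thunk)))   ; start the client program
    (sleep seconds)                ; let it run for the allotted time
    (kill-thread client)))         ; revoke the resource
```

A space limit is harder to sketch at this level; it is the kind of meta tool the hardware supplies that Scheme implementations mostly do not.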

Conventional debuggers pay little attention to security mechanisms except as they themselves are thereby constrained. For instance, when gdb observes a client program calling the kernel, it treats the invocation as an atomic operation. This seems correct to me: kernel integrity and the secrets of others are thus preserved. When Scheme's abstraction mechanisms are used to protect secrets and invariants of some large portion of an application, debuggers should be similarly constrained, just as gdb is constrained from intervening in the operation of the kernel. When these abstraction mechanisms are used to separate code of mutually suspicious users, this protection may be critical. (While MzScheme's parameter mechanism allows specification of the recipient of error reports, it assumes that the recipient for the errors of a program is the same as the recipient for the errors of its caller.) Scheme's lexical scoping is the heart of the separation that supports mutually suspicious applications. When bugs or mysteries are found in code, it is strategic to apply a debugger to that code in such a way that the protection (abstraction) remains in place.
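The separation that lexical scoping provides can be seen in a small sketch: the free variable below is reachable only through the returned procedure, and a debugger that respects the abstraction must not display it.

```scheme
;; Sketch: lexical scoping as a protection boundary.  secret is
;; visible only within the closure; clients (and a suitably
;; constrained debugger) see only the checking procedure.
(define (make-guard secret)
  (lambda (guess)
    (if (equal? guess secret) 'admitted 'denied)))

(define guard (make-guard 'swordfish))
(guard 'password)   ; => denied, without revealing the secret
```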

Suppose that the recipient of an error report is statically determined, as the environment determines the meanings of program identifiers. In one scheme, certain symbols would be reserved for retrieving values, presumably procedures, from the environment, and the error report would be delivered to those values. These values can be changed by anyone with access to the environment. (Puzzle: this is at variance with Keykos, where a program may not have the authority to redirect the errors that it commits.)

As conventional hardware reports program errors, it includes some sense of the location within the program where the error occurred. Indeed it includes whatever continuation (in the sense of Scheme) is in place at the error. This report does not abstract the continuation, however; the debugger is free to display the continuation as well as to invoke it. A naïve Scheme interpreter has an expression evaluation stack from whose state information can be extracted about where in a program an error occurred. This information is more than is necessary merely to invoke the continuation.
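The statically determined recipient proposed above might look like this sketch. The reserved name `error-recipient` and the delivery convention are hypothetical; no existing Scheme report reserves such a symbol.

```scheme
;; Hypothetical: the reserved name error-recipient is looked up in
;; the lexical environment of the erring expression, not in the
;; dynamic extent of the caller.
(define (make-sandbox log)
  (let ((error-recipient            ; statically visible recipient
         (lambda (condition)
           (log condition)
           'sandbox-fault)))
    (lambda (untrusted-thunk)
      ;; an error within untrusted-thunk would be delivered to the
      ;; error-recipient bound just above, regardless of who calls
      ;; the sandbox
      (untrusted-thunk))))
```

Under this convention the caller of the sandbox cannot capture the sandbox's error reports, which is exactly the property the dynamic parameter mechanism lacks.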

See this about debugging in Keykos while respecting protection boundaries.