When entering a function, register 1 points to the argument (type=val), register 2 to the continuation (type=cp), and register 3 locates the environ (type=env*).
A continuation is a place to begin obeying code which expects a ‘result value’ in register 1.
The environ (static chain) in this story would be found in the code stream.
An environ is null or a pair of words, the first of which is the value and the second is null or the remaining environ.
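Such a pair of words can be sketched in C; the names `Env`, `Val`, and `env_lookup` are mine for the sketch, not part of the design:

```c
#include <stddef.h>

typedef void *Val;          /* a value word; its real type is decided elsewhere */

/* An environ: null, or a pair of words. */
typedef struct Env {
    Val v;                  /* first word: the value */
    struct Env *next;       /* second word: null or the remaining environ */
} Env;

/* Walk n links down an environ; returns NULL if the chain is too short. */
static Val env_lookup(const Env *e, unsigned n) {
    while (e && n--)
        e = e->next;
    return e ? e->v : NULL;
}
```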
The function λx.x is coded with the instruction RET, which goes to the continuation in registers 2 and 3 (type=cont).
The two paragraphs above are naïve.
Program Logic of REPL
We consider here the program logic of the Read Eval Print Loop which is to be expressed in our machine.
REPL, some C code, repeatedly performs the following steps:
- Read and compile code with the op READ. (cp -> reg 1)
- Create the null environ in register 3. (op NULL)
- Put a cp to the print routine in reg 2.
- Begin evaluation of the expression:
- code: cp in reg 1
- continuation in reg 2
- environ in reg 3.
- READ
  Reads some <Expression> from standard input, produces the resulting ‘compiled’ code somewhere, and returns a type cp value for the new code in register 1.
This is primitive because I expect to use normal recursion with the C stack to perform this operation.
The machine has no implicit stack.
Perhaps the code is bound to a new name, provided in the input, in a name space that we have not yet defined.
- NULL r
  Create a null pointer in register r.
The type of a continuation must be more like that of a stack.
Perhaps there is a different stack frame type for each ‘call’ site, each of which would include another continuation pointer.
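A sketch of that idea, with frame shapes invented purely for illustration: each call site declares its own frame struct, and every frame begins with a tag saying which shape it is plus a pointer to the rest of the continuation:

```c
#include <stddef.h>

/* One tag per 'call' site; two hypothetical sites shown. */
enum FrameKind { FRAME_APPLY_FN, FRAME_APPLY_ARG };

/* Common head of every frame: its shape, and the continuation beyond it. */
typedef struct Frame {
    enum FrameKind kind;
    struct Frame *next;      /* another continuation pointer */
} Frame;

/* Frame for a site that has evaluated the function of an application
 * and is waiting for the argument. */
typedef struct {
    Frame base;
    void *fn;                /* the value already in hand at this site */
} ApplyArgFrame;
```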
In the 6600 LISP interpreter each call to the space allocation function included a ‘calling sequence’ argument describing which registers held pointers to live data, just in case there was no storage and a garbage collection was necessary.
At this point I see that I am confusing the machine interpreter with the language interpreter.
Does the notion of a mostly linear sequence of commands have a role to play in ‘evaluating a λ expression’?
Here is an older version of this page with ideas for computing programs which seem dodgy now.
The following conjecture seems pertinent:
The function of any execution hardware that is strategic to this idea can be done in code that fits in cache.
If the λ program’s data do not fit in cache then cache misses will dominate execution time and the interpreter will be as fast as special hardware.
Beware the above argument; it proves too much!
While the interpreter might be as fast as the hardware, its production cost should be much less; but this point is relevant only if CPUs cost something.