A meta program may be produced for any general-purpose machine, as Turing already foresaw. It works by simulating the subject program, instruction by instruction. The performance penalty did not matter for Turing’s theoretical computability arguments.
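Instruction-by-instruction simulation can be sketched as a fetch-decode-execute loop. This is an illustrative toy, not any particular machine: the two-register instruction set and the function name are invented. Each subject instruction costs many host instructions, which is the performance penalty referred to above.

```python
def simulate(program):
    """A meta program: interpret a subject program one instruction
    at a time on a hypothetical two-register machine."""
    regs = {"a": 0, "b": 0}
    pc = 0
    while pc < len(program):
        # Fetch and decode the next subject instruction.
        op, *args = program[pc]
        if op == "load":          # load register <- immediate
            regs[args[0]] = args[1]
        elif op == "add":         # add register <- register
            regs[args[0]] += regs[args[1]]
        pc += 1
    return regs

print(simulate([("load", "a", 2), ("load", "b", 3), ("add", "a", "b")]))
```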
Simple hardware extensions overcome the simulation penalty. In my description of the Two State model I describe what may be the simplest hardware function that supports dynamic meta programs. It is certainly the dominant scheme. Classic operating systems (and Keykos too) transform the real hardware and attempt to provide a more civilized and convenient interface to application programs. At the same time they usually attempt to host concurrent applications while putting minimal burdens on those applications to cooperate.
In such a scheme the program running in privileged state is in a position to “virtualize” aspects of the real machine.
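The two-state scheme can be sketched as trap-and-emulate: ordinary instructions run directly, while privileged ones trap to a supervisor that substitutes a virtualized result. This is a hedged toy model, not Keykos or any real hardware; the instruction names and the `supervisor` function are invented for illustration.

```python
def run_user(program, supervisor):
    """Execute a program in user state. Ordinary instructions run
    directly; privileged ones trap to the supervisor."""
    results = []
    for op, arg in program:
        if op == "add":
            # Unprivileged instruction: executes at full speed.
            results.append(arg + 1)
        elif op == "io":
            # Privileged instruction: control passes to the supervisor.
            results.append(supervisor(op, arg))
    return results

def supervisor(op, arg):
    # The program running in privileged state is in a position to
    # "virtualize" the real device: here it fabricates an I/O result.
    return f"virtual-{op}:{arg}"

print(run_user([("add", 1), ("io", "disk0"), ("add", 41)], supervisor))
```

Only the trapped instructions pay the cost of the supervisor; the rest run unimpeded, which is what distinguishes this scheme from full simulation.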
The virtual machines provided by VM/370 and VMware take a very different approach. Each supports concurrency but presents to each client an illusion of a duplicate of the bare hardware. Simulators of other machines provide the illusion of another processor and system, but seldom (never?) support concurrency.
Two-state hardware systems clearly provide for virtualizing various parts of the system, but practice (before Keykos) required an all-or-nothing approach: to virtualize a portion of memory with some algorithm, that algorithm must virtualize all of memory, and indeed the rest of the machine or operating system. The program that runs as a Keykos segment keeper is in a position to virtualize some portion of the memory of other programs without taking on those other burdens. Virtualization is thus piecemeal and modular.
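The piecemeal arrangement can be sketched as a keeper attached to just one region of an address space: accesses within that region invoke the keeper, and all other accesses go directly to ordinary memory. This is a minimal sketch under invented names (`Memory`, `demand_keeper`, `keeper_range`), not the actual Keykos segment-keeper interface.

```python
class Memory:
    """An address space in which a keeper virtualizes one region."""

    def __init__(self, size, keeper=None, keeper_range=None):
        self.cells = [0] * size
        self.keeper = keeper                  # fault handler for one region
        self.keeper_range = keeper_range or range(0)

    def read(self, addr):
        if addr in self.keeper_range:
            # Only this region is virtualized; the keeper decides
            # what the client sees here.
            return self.keeper(addr)
        # The rest of memory is untouched: no keeper, no overhead.
        return self.cells[addr]

def demand_keeper(addr):
    # A hypothetical keeper that materializes contents on demand;
    # here it simply supplies a computed value.
    return addr * 10

mem = Memory(16, keeper=demand_keeper, keeper_range=range(8, 12))
mem.cells[2] = 7
print(mem.read(2))   # ordinary memory, read directly
print(mem.read(9))   # handled by the keeper
```

The keeper virtualizes its region without knowing or caring about the rest of the machine, which is the modularity claimed above.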