Von Neumann saw that storing instructions and data in the same memory solved a class of programming problems by the stratagem of the program modifying itself. I am sure that Turing planned on such generality, but von Neumann’s insight was in the context of building a real machine, where recent precedent, such as the ‘Harvard Architecture’, was to separate the two sorts of memory. Von Neumann thus launched the ‘Princeton Architecture’ as he created and gathered many of the ideas behind the IAS computer.

I thought I would demonstrate the simple art of self-modifying programs circa 1955. The machine I am familiar with is the IBM 701. The 701 had no ‘general registers’ or ‘index registers’, only an accumulator and an MQ register, each 36 bits. The task I illustrate is trivial, yet sufficient to describe the main technique that enabled computations on arrays in RAM.

Suppose that one must find the sum of 100 numbers stored in consecutive locations beginning at 1000. The program is located at 200.

200 LOAD 300 // 300 holds 0.
201 STORE 270 // Initialize sum.
202 LOAD 301 // Holds ADD 1000
203 STORE 205 // Modify the code
204 LOAD 270 // Get running sum
205 0 // There will be an ADD instruction here
206 STORE 270 // Replace sum
207 LOAD 205 // Get ADD
208 ADD 303 // Holds 1
209 STORE 205 // Modified ADD
210 SUB 302 // Holds ADD 1100
211 TNZ 204 // Repeat unless done.

300 0 // Constant zero
301 ADD 1000 // Prototype ADD addressing the first number
302 ADD 1100 // ADD addressing just past the last number
303 1 // Constant one, used to bump the address field
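
For those who would rather run the idea than read it, here is a minimal sketch in C that simulates a small word-addressed memory and executes the listing above. The instruction encoding (opcode times 10000 plus address, so that adding 1 bumps the address field), the opcode names, and the final HALT that stops the simulator are my own inventions for illustration; they are not the 701’s actual formats.

/* sum701.c -- simulate the self-modifying summation loop above.
 * The encoding op*10000 + address merely mimics the address field
 * sitting in the low-order bits of an instruction word, so that
 * adding 1 to the instruction bumps the address it refers to. */
#include <stdio.h>

enum { NOP = 0, LOAD, STORE, ADD, SUB, TNZ, HALT };

#define MEMSIZE 1200
static long mem[MEMSIZE];                 /* one "word" per cell */

#define INSTR(op, a) ((long)(op) * 10000 + (a))

static void run(int pc)
{
    long acc = 0;                         /* the accumulator */
    for (;;) {
        long w = mem[pc++];               /* fetch whatever is there NOW */
        int op = (int)(w / 10000);
        int a  = (int)(w % 10000);
        switch (op) {
        case LOAD:  acc = mem[a];         break;
        case STORE: mem[a] = acc;         break;
        case ADD:   acc += mem[a];        break;
        case SUB:   acc -= mem[a];        break;
        case TNZ:   if (acc) pc = a;      break;
        default:    return;               /* HALT, or an uninitialized word */
        }
    }
}

int main(void)
{
    /* 100 numbers in locations 1000..1099; here simply 1..100 */
    for (int i = 0; i < 100; i++)
        mem[1000 + i] = i + 1;

    /* the program, transcribed from the listing */
    mem[200] = INSTR(LOAD, 300);
    mem[201] = INSTR(STORE, 270);
    mem[202] = INSTR(LOAD, 301);
    mem[203] = INSTR(STORE, 205);
    mem[204] = INSTR(LOAD, 270);
    mem[205] = 0;                         /* patched at run time */
    mem[206] = INSTR(STORE, 270);
    mem[207] = INSTR(LOAD, 205);
    mem[208] = INSTR(ADD, 303);
    mem[209] = INSTR(STORE, 205);
    mem[210] = INSTR(SUB, 302);
    mem[211] = INSTR(TNZ, 204);
    mem[212] = INSTR(HALT, 0);            /* not in the original; stops the simulator */

    /* the constants */
    mem[300] = 0;
    mem[301] = INSTR(ADD, 1000);
    mem[302] = INSTR(ADD, 1100);
    mem[303] = 1;

    run(200);
    printf("sum = %ld\n", mem[270]);      /* prints 5050 */
    return 0;
}

The point to notice is that run() fetches the word at 205 afresh on every pass, so the STORE into 205 really does change the instruction that executes a moment later.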

The modern programmer will appreciate that writing such programs was tedious. There was a crude assembler, and assigning addresses was automated, but here I show the results of some such allocation.

The real 701 had two instructions per word, so I lie just a bit. Each of these instructions took two 12 μs memory cycles, except the TNZ, which required just one.
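
At those speeds a back-of-the-envelope estimate is easy, assuming exactly the cycle counts above and nothing overlapped: the loop body is seven two-cycle instructions plus the one-cycle TNZ, or 15 cycles of 12 μs, about 180 μs per number. Summing the 100 numbers thus takes roughly 18 ms, and the four instructions of setup add about another 0.1 ms.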