Early in the development of the IBM ACS system, T.J. Watson visited the Sand Hill Road lab to hear a presentation of the plan. At one point he exclaimed, ‘But where is the unit record equipment?’ Such equipment read and punched cards and also produced ‘print outs’ on paper. All previous IBM machines could be seen as centered on these data portals.
Later John Cocke observed that 20 trains, each traveling 60 mph and full of punched cards, would provide a card data rate commensurate with the machine’s memory bus. The state of the art for printing then was about 600 lines per minute. Scaling printing capacity to match the CPU performance would require about 400 line printers occupying a very large room. Conventional buffering schemes for such a pool of printers would require main memory costing far more than the whole machine. Memory was still expensive.
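The arithmetic above can be sketched out directly. This is a back-of-envelope illustration only; the 600 lines-per-minute and 400-printer figures come from the text, while the 132-character line width is my assumption, typical of printers of the era.

```python
# Back-of-envelope scaling from the figures in the text (illustrative only).
PRINTER_LPM = 600          # state-of-the-art chain printer, lines per minute
PRINTERS_NEEDED = 400      # printers required to match the CPU, per the text
CHARS_PER_LINE = 132       # assumed print line width, not from the text

total_lpm = PRINTER_LPM * PRINTERS_NEEDED
lines_per_sec = total_lpm / 60
bytes_per_sec = lines_per_sec * CHARS_PER_LINE

print(f"{total_lpm:,} lines/min = {lines_per_sec:,.0f} lines/s "
      f"= {bytes_per_sec / 1e3:.0f} KB/s of print data")
```

That sustained rate, times the buffering needed to keep 400 mechanical devices busy, is what made conventional main-memory buffering prohibitively expensive.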
Subsequently I spent a day or two imagining a system to spool 300 or 400 chain printers from the machine. This would provide a scaled capacity in line with conventional large computer installations. Because our small, fast main memory was insufficient for simple buffering, we conceived a scheme to produce a single multiplexed stream for many printers, perhaps all of them. The multiplexed stream would be produced by a multi-merge pass, which required some sort of prediction of how fast a chain printer could plow thru a particular print stream; the nature of chain printers makes this very variable. Otherwise the merge requires very little memory and time. It increased latency, but not by much.
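A minimal sketch of such a multi-merge, in modern terms: a priority queue keyed by each printer's predicted completion time for its next line. The `predict_line_time` model is hypothetical, standing in for the per-line timing prediction described above; the source does not specify how the pass was actually organized.

```python
import heapq

def multiplex(streams, predict_line_time):
    """Merge per-printer print streams into one stream of (printer, line)
    pairs, ordered by each printer's predicted time to finish its next line.
    `streams` is a list of line sequences, one per printer;
    `predict_line_time(line)` is a hypothetical chain-printer timing model."""
    heap = []
    iters = [iter(s) for s in streams]
    for pid, it in enumerate(iters):
        line = next(it, None)
        if line is not None:
            # Each printer's clock starts at 0; the key is its predicted
            # finish time for the line at the head of its stream.
            heapq.heappush(heap, (predict_line_time(line), pid, line))
    while heap:
        t, pid, line = heapq.heappop(heap)
        yield pid, line
        nxt = next(iters[pid], None)
        if nxt is not None:
            heapq.heappush(heap, (t + predict_line_time(nxt), pid, nxt))
```

The heap holds only one pending line per printer, which is why the merge itself needs so little memory: the bulk of each print stream stays on secondary storage until its turn comes.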
Chain printers make headway at a variable but predictable rate. Even for the ACS, however, such prediction was probably not cost-effective. Some print streams would slow down the rest. This was a problem.
An amusing and frightening prospect was the paper jam. One could not afford to stop the multiplexed stream, for that would idle all the print capacity; yet if the stream were not stopped, the scheduled work of the jammed printer would be lost. The solution seemed to be to redirect the stricken printer’s output to a file to be fed thru the multi-merge again. There were other, more complex solutions as well.
This exercise was all a mock effort, just to see how hard it would be to fit the ACS into the then-dominant unit record equipment (URE) paradigm. Timesharing and interactive access to data stored online were already seen as the answer to these problems, but those solutions also seemed to require more storage than had been planned. Fixed-head drums were attractive in theory: queued access could deliver thousands of independent accesses per second, serving short bursts of compute to people seeking and modifying data stored on DASD. This pattern was already panning out to some degree at Livermore.
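To see why queued access to a fixed-head drum could plausibly reach thousands of accesses per second: with no seek time, a request queue sorted by angular position can serve roughly one request per sector slot passing under a head. The drum geometry below is my assumption for illustration, not a figure from the text.

```python
# Illustrative arithmetic only; the geometry is assumed, not from the text.
RPM = 3600                # assumed drum rotation speed
SECTORS_PER_TRACK = 64    # assumed angular sectors per track

revs_per_sec = RPM / 60
# With a deep, position-sorted queue, nearly every sector slot that passes
# under a head can serve a request, so this is the access-rate ceiling.
slots_per_sec = revs_per_sec * SECTORS_PER_TRACK

print(f"~{slots_per_sec:,.0f} independent accesses per second")
```

Even modest assumed geometries land in the low thousands of accesses per second, consistent with the claim above.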
The lack of a memory larger than the planned “main storage” and faster than DASD (moving-head disk) was beginning to be worrisome. The access time of available core technology was too slow, but core could provide good bandwidth. Transistor memory was too far off.
The hardware people did propose some sort of multi-megabyte memory with an access time of microseconds and very high bandwidth.
More ACS lore