Back in the 1950s and 1960s, when memories were tiny, how did people write large, sophisticated programs? There must have been ways, since ambitious AI projects were conceived on those tiny machines, along with graphics, weather simulation, nuclear bomb computations, and many other programs. Let's recall the two main uses of memory: to hold programs (instructions), and to hold the data those programs act on. Early machines accommodated huge datasets by writing some of the values out to disk files; whenever a computation needed the data, it was the programmer's responsibility to explicitly read it back into memory. This increased both the complexity of the program and its running time, but it still made it possible to crunch large datasets.
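
To make the idea concrete, here is a minimal sketch in C of that explicit read-when-needed style. The file name `dataset.bin`, the buffer size, and the summing computation are all illustrative assumptions, not details from any particular historical program: the point is simply that the dataset on disk can be far larger than the buffer held in memory.

```c
/* A minimal sketch of "out-of-core" processing: the dataset (a
   hypothetical file of raw doubles) is larger than the memory we
   allow ourselves, so we stream it through a small fixed buffer. */
#include <stdio.h>
#include <stdlib.h>

#define BUFFER_ELEMS 1024   /* our "tiny memory": room for 1024 values */

int main(void)
{
    FILE *f = fopen("dataset.bin", "rb");   /* assumed data file */
    if (!f) {
        perror("dataset.bin");
        return EXIT_FAILURE;
    }

    double buffer[BUFFER_ELEMS];
    double sum = 0.0;
    size_t n;

    /* Explicitly read each chunk back into memory only when the
       computation needs it: the programmer's responsibility. */
    while ((n = fread(buffer, sizeof(double), BUFFER_ELEMS, f)) > 0) {
        for (size_t i = 0; i < n; i++)
            sum += buffer[i];
    }

    fclose(f);
    printf("sum = %f\n", sum);
    return EXIT_SUCCESS;
}
```

Only `BUFFER_ELEMS` values ever occupy memory at once, no matter how large `dataset.bin` grows; the cost, as the text notes, is extra program complexity and the running time spent on disk I/O.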