I am pondering a space of ideas which, in Keykos terms, compares selling an old page and buying a new one with merely rescinding some access to an old page. I have sometimes failed to relate these two sorts of activity. How does the notion of a memory address fit in here? I consider the work of allocating storage to values that must be stored somewhere, but mostly avoid the question of who does that work: application, library, kernel, or hardware. Even harder is knowing when a value is no longer needed and its storage can be used for something else. What have these ideas in common, and how does all this relate to Haskell ideas?
The art of progress is to preserve order amid change and to preserve change amid order.
Alfred North Whitehead
Keykos provides hierarchical storage and can fairly efficiently rescind access to a page or part of a segment by exposing much of the conventional memory-map architecture. To wit: when access to a segment is rescinded and that segment corresponds to a particular page table, access to that whole page table is withdrawn at once, and overhead proportional to the number of pages is avoided. On some hardware we pay the cost of rebuilding TLB content.
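The economy described above can be sketched with a toy two-level map. All names here are hypothetical, and a real kernel would also have to invalidate TLB entries; the point is only that dropping one segment pointer rescinds every page beneath it in one step, where a flat map must touch each page entry:

```c
#include <assert.h>
#include <stddef.h>

#define PAGES_PER_SEG 1024

/* Hypothetical two-level map: each segment slot points at a page table. */
typedef struct { void *page[PAGES_PER_SEG]; } PageTable;
typedef struct { PageTable *seg[8]; } SegmentMap;

/* Rescind access page by page: cost proportional to the number of pages. */
static void rescind_per_page(SegmentMap *m, int s) {
    for (int i = 0; i < PAGES_PER_SEG; i++)
        m->seg[s]->page[i] = NULL;
}

/* Rescind the whole segment by dropping its page-table pointer: one step.
   (Real hardware would additionally require a TLB flush here.) */
static void rescind_segment(SegmentMap *m, int s) {
    m->seg[s] = NULL;
}
```

The second routine is the trick the text describes: withdrawing access to the page table withdraws access to everything it maps.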
All machine languages and almost all programming languages require program logic to deal with the notion of “a place to keep data”. Almost all computer languages have assignment statements by some name, and the expression to the left of the assignment operator somehow nominates such a place. This place goes under several names: variable, lvalue, location, cell.
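A minimal C sketch of the idea, assuming nothing beyond the language itself: the left side of an assignment names a place rather than a value, and the same place can be nominated by more than one name:

```c
#include <assert.h>

/* Sketch: an assignment's left-hand side nominates a place to keep data. */
int store_and_read(void) {
    int cell = 1;        /* `cell` names a storage location (an lvalue) */
    int *place = &cell;  /* the same location can be nominated indirectly */
    *place = 2;          /* a write through the second name...            */
    return cell;         /* ...is visible through the first              */
}
```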
All current and proposed operating systems that I know still present mutable byte addressed storage to their guests. I adhere to that.
I think that continuing the tradition of mutable storage is justified in most programming languages because that is where today’s hardware bottoms out, and I see no credible advantage for hardware designers even if all programmers were to adopt Haskell tomorrow. Often it is just good engineering to avoid the cost of abstraction levels. Some programming tasks having nothing to do with computer hardware just go better in languages that expose mutable storage. There are large and growing segments of software ideology that forswear mutable storage shared between multiple CPUs. I wish them well. I think that the platforms they run on will include low-level programs (kernels) written to deal with storage writable by multiple CPUs.
Part of the impetus for this note was the suggestion that instead of rescinding access to storage that you had disseminated, you simply use new storage. Real storage is expensive, but virtual storage not so much. Depending on how access to storage is disseminated (an address, in some sense), we can even run out of addresses. Note that I have been cagey about whether I am talking about real or virtual storage; such confusion is often productive of ideas. When I get around to answers instead of questions I must be clearer.
Real storage is expensive, and the program must sometimes take steps to determine that some values, and the storage they occupy, will no longer be needed, and perhaps act on this knowledge. Today the allocation of values to storage is fairly automated, but some mechanism, software or hardware, is still involved. This separation has improved programming but is sometimes less efficient. Garbage collection has offloaded this error-prone task in much of today’s software, but not all.
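One such software mechanism is reference counting, a middle ground between manual reclamation and full garbage collection. This is a minimal sketch; the `Cell` type and function names are my own invention, not from any particular system:

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch: reference counting, one way software learns that a value's
   storage is no longer needed. (Hypothetical names throughout.) */
typedef struct { int refs; int value; } Cell;

Cell *cell_new(int v) {
    Cell *c = malloc(sizeof *c);
    c->refs = 1;
    c->value = v;
    return c;
}

/* Another holder gains access to the same storage. */
void cell_retain(Cell *c) { c->refs++; }

/* One holder gives up access; returns 1 if the storage was reclaimed. */
int cell_release(Cell *c) {
    if (--c->refs == 0) { free(c); return 1; }
    return 0;
}
```

The last holder to call `cell_release` is the one who learns, mechanically, that the storage can be used for something else; that is the knowledge the opening paragraph calls hard to come by.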
Sometimes storage must be reclaimed regardless of the existence of access thereto. Perhaps another concurrent computation has found the answer to the question that was the charter of the holder of the remaining access. Perhaps someone merely ceased to pay the bill for the storage. (McCarthy’s ambiguous function and Edison.)