
5. Probabilistic cash flows

As noted in the discussion of accounting overhead in the dividend algorithm, the incentive structure of an algorithm (in the absence of risk aversion) is determined by its average expected payoffs, which can deviate from its actual payoffs on any given occasion. This principle has general applicability.

5.1. Processor accounting

The overhead of the escalator algorithm may be acceptable at the scale of, say, tasks in the Mach operating system [IV], but not at the finer-grained level of Mach threads, Actor tasks [11], or FCP processes [V]. Scheduling of light-weight processes like these might best be handled by a simple round-robin scheduler, which itself buys time through an auction house. How might these light-weight processes be charged so as to subject them to price incentives and compensate the round-robin process for the time it buys, all at low overhead? One approach is to use probabilistic charging: at random times forming a Poisson process with mean interarrival time T, note which light-weight process is currently running and charge its sponsoring account T times the current price of processor time. On the average, the round-robin process receives the market price for time; on the average, each light-weight process pays in proportion to the time it uses. And yet on a typical occasion, a light-weight process will run without being charged, and hence without accounting overhead.
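The charging scheme can be sketched as a Monte Carlo simulation. The process names, usage shares, price, and sampling interval below are illustrative assumptions, not part of the design; the point is that each process's expected charge converges on its actual share of processor time at the market price:

```python
import random

random.seed(1)

T = 0.01          # mean interarrival time of charging events
price = 2.0       # current price per unit of processor time
horizon = 1000.0  # total simulated time
# Assume each light-weight process occupies a fixed share of the processor.
shares = {"a": 0.5, "b": 0.3, "c": 0.2}

charges = {p: 0.0 for p in shares}
t = 0.0
while True:
    t += random.expovariate(1.0 / T)  # Poisson process: exponential gaps
    if t > horizon:
        break
    # Note which process is "currently running" (drawn by its share here)
    running = random.choices(list(shares), weights=shares.values())[0]
    charges[running] += T * price     # charge T times the current price

for p in shares:
    expected = shares[p] * horizon * price
    print(p, round(charges[p], 1), "vs expected", round(expected, 1))
```

Each charge is T * price regardless of how long the process actually ran since the last event, so a typical run between events incurs no accounting work at all; only the expectation matches the metered amount.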

5.2. Gambling

A different kind of probabilistic cash flow is gambling: wagering money on a chance event. This too has its place.

Consider an object which has just received an alert message asking for more money than it can pay or raise through retainer-fee requests. Sending an alert message may be expensive, in terms of direct communication costs and costs imposed on clients. It is an elementary result of decision analysis [12] that when X% more money has over X% more utility, for some value of X (which requires that the utility-vs.-money curve somewhere be concave upwards), there exists a fair bet (or one with a small "house percentage") that is rationally worth taking. This can be the case both in alert processing and elsewhere in an agoric system.

To illustrate the principle (albeit with absurd numbers), assume that an object has a balance of $50 and receives an alert message demanding $100. Assume further that the object has 10 clients, and that transmitting an alert costs $1 per message. If the object simply alerts its clients and then pays its bill, it will pay a total of $110. If, however, the object gambles the $50 in a fair bet on a double-or-nothing basis, its expected net payment will be half the net payment that will result if the gamble is won (1/2 * $50, since the winnings cover the bill and no alerts are sent) plus half the net payment that will result if the gamble is lost (1/2 * ($50 + $100 + $10), since the stake is gone and the clients must be alerted after all). This equals $105, for an expected savings of $5. Similar bets remain profitable so long as the house percentage amounts to less than $5. Thus, gambling might profitably be made part of a market strategy for alert processing.
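The arithmetic of this worked example can be checked directly; the variable names are ours, introduced only to label the quantities in the text:

```python
# Worked example from the text: expected cost of gambling vs. alerting.
balance = 50.0      # object's cash on hand
bill = 100.0        # amount demanded by the incoming alert message
clients = 10
alert_cost = 1.0    # cost of transmitting one alert message

# Strategy 1: alert all clients, then pay the bill.
pay_by_alerting = bill + clients * alert_cost               # $110

# Strategy 2: stake the balance on a fair double-or-nothing bet.
win_net = balance                                           # $50: winnings cover the bill
lose_net = balance + bill + clients * alert_cost            # $160: stake lost, alert anyway
expected_gamble = 0.5 * win_net + 0.5 * lose_net            # $105

savings = pay_by_alerting - expected_gamble                 # $5
print(pay_by_alerting, expected_gamble, savings)            # 110.0 105.0 5.0
```

Any house percentage costing less than the $5 of expected savings still leaves the gamble worth taking.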

One can predict that market forces will favor the emergence of rational gambling in agoric systems. One expects to see lottery objects with substantial cash reserves emerge to provide gambling services. These will accept payments of X units of currency with a request for a greater sum Y, and return Y with a probability slightly less than X/Y.
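A lottery object of this kind can be sketched in a few lines. The class, its reserve, and the house percentage are hypothetical illustrations of the behavior described above, not an interface from the original text:

```python
import random

class Lottery:
    """Hypothetical lottery object: accepts X, pays out Y > X with
    probability slightly less than X/Y; the gap is the house percentage."""

    def __init__(self, reserve, house_pct=0.01):
        self.reserve = reserve      # substantial cash reserve backing payouts
        self.house_pct = house_pct

    def wager(self, x, y):
        """Pay in x, requesting the greater sum y; win with p = (1 - house_pct) * x / y."""
        assert y > x > 0
        self.reserve += x
        if random.random() < (1.0 - self.house_pct) * x / y:
            self.reserve -= y
            return y
        return 0.0

random.seed(0)
lot = Lottery(reserve=10_000.0)
# 1000 objects each stake $50 hoping for $100 (the alert-processing bet above).
wins = sum(lot.wager(50.0, 100.0) for _ in range(1000))
print(wins, lot.reserve)
```

Over many wagers the lottery takes in slightly more than it pays out, so its reserve tends to grow by the house percentage of the money staked.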

5.3. Insurance

Another (and more respectable) form of gambling is insurance, or risk pooling. This can be based on a form of trust that will arise naturally in an agoric system.

A set of objects sharing a single program (code, script, class) is like a set of organisms sharing a single genome. It is an elementary result of evolutionary theory [13] that the genes of such organisms (in, say, a colony) will be selected for complete altruism among "individuals". And indeed, colonial polyps often share digestive tracts, and thus all their food.

Objects sharing a script can likewise (with full trust) offer to share cash reserves, in effect insuring one another against temporary shortages and expensive alert processing. In insurance terms, the shared incentives of these objects eliminate the problem of "moral hazard", that is, of insured entities taking uneconomic risks because "the insurance company will pay for any losses". Here, objects care as much about the "insurance company" as about themselves (more accurately, "evolutionary pressures will favor those objects which behave in a manner that can be regarded as `caring' in this way"). Objects of types which abuse this mechanism to prevent proper garbage collection will in general have higher costs and lose in price competition. This is a case in which Hofstadter's "superrationality" [14] and Genesereth's "common behavior assumption" [15] will apply.
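A minimal sketch of such risk pooling, assuming full mutual trust among instances of one script; the class and method names here are illustrative assumptions, not part of the original design:

```python
class SharedReserve:
    """A cash pool held in common by objects sharing a single script."""

    def __init__(self):
        self.reserve = 0.0

    def deposit(self, amount):
        """A member object contributes spare cash to the pool."""
        self.reserve += amount

    def pay(self, amount):
        """Any member may draw on the whole pool to meet a demand.
        Returns False when the pool itself is short, in which case the
        member must fall back to expensive alert processing."""
        if amount <= self.reserve:
            self.reserve -= amount
            return True
        return False

pool = SharedReserve()
for income in (30.0, 20.0, 40.0):   # three instances pool their reserves
    pool.deposit(income)
covered = pool.pay(60.0)    # a $60 demand no single member could meet alone
refused = pool.pay(50.0)    # only $30 remains, so this draw is refused
print(covered, refused, pool.reserve)
```

Because every member's incentives ride on the same pool, a member gains nothing by draining it — the "moral hazard" of ordinary insurance does not arise.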

