### Divide Semantics

Rounding the result of signed fixed-point divide operations is vexing to both machine and language designers. I suggest here some principles that persuade some that rounding down (towards minus infinity) is a good choice. I adhere to the common dictum for the modulus operation that a%b = a − (a/b)*b, where “%” is C’s remainder operation.

I have had much use for expressions of the form A[a%16] where A is declared as “zot A[16];”. If 0≤a%b&lt;b whenever b>0 then this expression is valid whether or not a&lt;0. If (−5)/16 is −1, as I propose, then (−5)%16 = (−5) − ((−5)/16)*16 = (−5) − (−1*16) = 11, and thus A[a%16] is valid and what the logic of the program expects.

Here are some identities that work only with rounding down:

• (a+b)/b = (a/b)+1 (Assuming that a+b doesn’t overflow)
• (a+b)%b = a%b
• a/(1<<n) = a>>n   (a/2 and a/4 are common expressions that a compiler is thus able to optimize as a shift.)
• a%(1<<n) = a & ((1<<n)−1) (supports a common optimization when n is known at compile time as in A[a%16].)
These are a small set of advantages of rounding down that support both the end user of the language and the implementer. I have no similar set arguing for the situation of negative b, for either a/b or a%b. I propose to round down here too, as the simplest way of expressing the general rule. Thus 3/−2 = −2 and 3%−2 = 3 − (3/(−2))*(−2) = 3 − (−2)*(−2) = −1. For b&lt;0 we have b&lt;a%b≤0. This does not impact the argument above, for “int A[−16];” is already an invalid declaration.

One rule that is lost by rounding down is that a/−b = (−a)/b = −(a/b). I have not found that rule nearly as useful as the above mechanisms.

I have done much systems and physics programming and have found that signed integers, like signed floats, are nearly unneeded in systems programming, but both are needed greatly in physics and the geometry that frequently accompanies physics. Computer graphics also uses signed numbers, and probably signed integers, heavily.

The proposal can be summarized merely as “round the quotient down (towards minus infinity) and define a%b = a − (a/b)*b”.

#### Two other subjects pertaining to divide

If I were designing a language, I as a systems programmer would like a primitive divide operation that throws an exception when the remainder is not zero. Classic hardware supports this efficiently. If I think that I have an odd number N and need to divide it by 2, I can write (N−1)//2 and get confirmation that N was indeed odd.

For applications that need both a/b and a%b, most C compilers will emit the division twice. Here is a hack that modifies the language “transparently” to the user and solves the problem. Define a new type “typedef struct{int q; int r;} IQ;”. Define a new built-in operation “IQ divide(int num, int den);”. Define “a/b” as “divide(a,b).q” and “a%b” as “divide(a,b).r”. Now the end user’s (programmer’s) expression “func(a/b, a%b)” expands to “func(divide(a,b).q, divide(a,b).r)”. The latter expression has a repeated subexpression which is found by normal common-subexpression elimination, and only one divide is done. The user can make use of this type and built-in function as well, if it is spelled so that he can use it. I now notice that ISO C has very much this facility in the functions

```c
#include <stdlib.h>

typedef ... div_t;
typedef ... ldiv_t;
div_t div(int, int);
ldiv_t ldiv(long, long);
```

The same trick works for pairs of functions, such as sin(x) and cos(x), which are frequently invoked in pairs and can be computed efficiently together.