"Unums 2.0"

I don't get it.

For this example I will express unums in binary so that the precision equals the number of digits, e.g. 101.1b means 5.5 with 4 digits of precision.

Let's subtract the two inexact unums (110b, 111b) - (100b, 101b) (i.e. 3 bits of precision); in decimal that's (6, 7) - (4, 5) = (1, 3). It seems to me that the result should be (01b, 11b). This remains only one ULP wide, since a leading bit cancels and the precision is reduced, yet the width of the bounds has doubled (from 1 to 2).
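
For concreteness, here is that subtraction as ordinary interval arithmetic (a minimal Python sketch of plain intervals; I'm not claiming this is how a unum implementation works internally):

    # Plain interval subtraction:
    # [a_lo, a_hi] - [b_lo, b_hi] = [a_lo - b_hi, a_hi - b_lo]
    def interval_sub(a, b):
        return (a[0] - b[1], a[1] - b[0])

    # (110b, 111b) - (100b, 101b), i.e. (6, 7) - (4, 5)
    print(interval_sub((6, 7), (4, 5)))  # -> (1, 3), i.e. (01b, 11b)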

None of the public presentations about unums address this problem. For example, the unums 2.0 presentation says that the task "Let x = [2, 4]. Repeat several times: x ← x / x" immediately gives (0.625, 1.6) as the stable result, but only if the 'Compiler sets the hardware mode to "dependent" so all table look-ups are reflexive'. This makes no sense to me, because (1) if the compiler knows it is dividing x by x, why isn't the result exactly 1 (unless x can contain 0)? (2) real workloads don't compute x/x, so we should look at a problem where the numerator and denominator are different; and (3) if the numerator and denominator are different, how could the compiler know that it should use "reflexive" mode?
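
To make point (1) concrete: with naive interval arithmetic and no dependency tracking (a Python sketch of my assumption about what happens without "dependent" mode), iterating x ← x / x diverges instead of stabilizing:

    # Naive interval division; assumes the divisor interval does not
    # contain 0 (true here, since x stays positive throughout).
    def interval_div(a, b):
        c = [a[0] / b[0], a[0] / b[1], a[1] / b[0], a[1] / b[1]]
        return (min(c), max(c))

    x = (2.0, 4.0)
    for _ in range(4):
        x = interval_div(x, x)
        print(x)
    # (0.5, 2.0), (0.25, 4.0), (0.0625, 16.0), (0.00390625, 256.0):
    # the bounds blow up, whereas actually tracking the dependency
    # between numerator and denominator would give exactly 1.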

The 'x ← x - x' example has the same problem.
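
Same sketch for subtraction: naive intervals produce a widening band around 0 instead of exactly 0:

    def interval_sub(a, b):
        return (a[0] - b[1], a[1] - b[0])

    x = (2.0, 4.0)
    for _ in range(3):
        x = interval_sub(x, x)
        print(x)  # (-2.0, 2.0), then (-4.0, 4.0), then (-8.0, 8.0)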

P.S. While it's tempting to just buy the book in the hope of getting answers (a fundraising strategy on the author's part?), the book doesn't cover unums 2.0. For speed, an unum 2.0 of size N bits seems to need at least two lookup tables of size O(2^(2N-2)), which becomes impractical at about 16 bits (quick back-of-envelope below). What is to be done for those rare individuals who need answers with more than 2 digits of precision ;)?
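
Back-of-envelope for that claim (my reading: two N-bit operands index a 2^N x 2^N table, with some factor-of-4 saving from symmetry, hence 2^(2N-2) entries per table; entry size ignored):

    # Entries per lookup table for an N-bit unum 2.0,
    # assuming O(2^(2N-2)) as above.
    for n in (8, 12, 16, 20):
        print(n, 2 ** (2 * n - 2))
    # 16 bits -> 2^30, about 10^9 entries per table, which is
    # roughly where it stops being practical.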
