*Compellingly Beautiful Software*


*"NaN? Isn't that the bread in an Indian restaurant?"*

Just how wonderfully ideal classical mathematics is becomes especially apparent when one starts working with computers. Suddenly, all kinds of things that should work, don't. All kinds of things that should be equal, aren't. And you don't even need bugs in your programs to see this.

One immediate difference is the notation for numbers. Since most computers calculate in Base 2 instead of Base 10, programming languages usually give you some fairly invisible machinery for converting numbers to and from these bases. This works just fine for integers, of course, but fractional numbers pose difficult problems. For example, the ideal fraction 1/3 is written as 0.33333... in decimal notation, but it has a different non-terminating representation in binary notation. Each base has its own set of fractions that can't be represented exactly, and these sets only partially overlap: 1/10 terminates in decimal but is non-terminating in binary.
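You can watch this conversion error happen in any language that uses IEEE-754 doubles; here is a quick Python sketch:

```python
# The literal 0.1 cannot be stored exactly in binary, so the machine
# keeps the nearest representable double instead.
x = 0.1
print(f"{x:.20f}")            # prints more digits than "0.1" -- not exact

# Ten of those near-misses don't quite add up to 1.
print(sum([0.1] * 10) == 1.0)   # False
print(sum([0.1] * 10))          # very close to 1, but not equal to it
```

The error is tiny, on the order of one part in 10^16, but it is there, and equality tests against it will fail.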

Then there are other problems that no one has ever figured out how to avoid.
Not only do we often have to make arbitrary
decisions about how much space to allocate inside a computer
to represent a given number,
but we also sometimes have to make arbitrary decisions about how much *time*
to devote to a calculation that could give us a more accurate result.

The problems described here have names. They're called
*conversion* errors, *rounding* errors, and *truncation*
errors.

Just because these are called errors doesn't mean that anyone did anything
wrong. They're called errors because the resulting numbers differ from
what's expected in ideal mathematics, simply because a computer is a
*finite* machine.
There is a whole branch of mathematics that is concerned with these problems:
Numerical Analysis. One might say this is where reality intrudes upon
mathematics, but it's better to think of it the other way around:
no matter how hard you work at it, computer arithmetic is, to some extent,
a fantasy. But if you know what you're doing and you're careful,
you can still get some useful results from it.

Here's a simple example of computing errors that you can see on most handheld calculators. (Incidentally, handheld calculators are some of the few commonly-available computers that prefer to calculate in Base 10.)

Use your calculator to find the square root of 2. Write down this result,
but don't clear it from the machine. Now square the result that's in your
display. Depending on the brand and model of calculator, you may or may not
see 2 in your display at this point. Whatever the result, write this down also.
Now, *key in* the number it told you was the square root of 2,
and square that. Did you get the same result as your previous squaring?

Here's the reason this matters: whatever number your calculator told you
was the square root of 2, wasn't. Since it is a finite machine, it gives
you a close approximation to that number, but it must still throw away
an infinite number of digits from the mathematically ideal result.
There is no way around this. But calculator designers have some choices
to make about how to handle this *unavoidable* error. Oddly enough,
the preferable way to handle this square-the-square-root problem is not
to give you back a nice round 2, but to give you back whatever you got
when you squared that long number that you keyed in,
which your calculator told you was the square root.
This squared result probably wasn't 2, but it was close,
and the errors were *predictable*.
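You can run the same experiment on IEEE-754 doubles, sketched here in Python:

```python
import math

r = math.sqrt(2)    # the double closest to sqrt(2), not sqrt(2) itself
print(r)            # about seventeen digits of the true value
print(r * r)        # close to 2, but not exactly 2
print(r * r == 2.0) # False
print(abs(r * r - 2.0))   # the error: tiny, and predictable
```

Just as with the calculator, the squared result misses 2 by a small, predictable amount, because the infinite tail of digits was thrown away when the square root was stored.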

IEEE-754 binary arithmetic is a carefully designed environment for working in this crazy fantasy that is computer arithmetic. Many of its features are a surprise to people who only know the usual classroom mathematics. This does not imply that the classroom mathematics is wrong. IEEE-754 is an attempt to deal with the unavoidable differences between ideal mathematics and computer arithmetic.

If you are a math teacher below the college level,
you're probably accustomed to telling your students that
division by zero is "undefined".
What about tangent(90)? "Undefined."
This, of course, is baloney.
We know perfectly well what these results are: infinitely large.
We also know that infinity is a rather slippery beast,
but it can be tamed and has its uses.
Nevertheless, until IEEE-754 arithmetic arrived,
most computers would abort any program that had the nerve to ask
for 1/0 or tangent(90).
There are some noteworthy problems with this approach, however.
What if the computer in question is the entity that is actually flying
an airliner (the usual case these days)?
Or controlling a nuclear power plant (the usual case these days)?
Or managing your father's pacemaker (the usual case these days)?
*Aborting these programs is not an option.*

IEEE-754 provides for arithmetically-usable infinities. You can create them, calculate with them, and get something resembling sensible results.
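A sketch in Python: the language itself raises an exception for the literal expression `1.0 / 0.0`, but the IEEE-754 infinities are fully available through `math.inf` and through overflow:

```python
import math

inf = math.inf
print(inf + 1)          # inf: infinity absorbs any finite value
print(1.0 / inf)        # 0.0: a sensible, usable result
print(-inf < 0 < inf)   # True: the infinities compare and order correctly
print(1e308 * 10)       # inf: overflow yields infinity instead of a crash
```

The program keeps running, and the infinities behave in the arithmetically reasonable way: they absorb finite values, they order correctly, and dividing by them gives zero.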

We know what 0/1 and 1/0 are,
and by convention (one supported by limits such as x^x approaching 1), 0^0 is taken to be 1.
But there are other things that we don't know what to do with at all.
For example, we still haven't figured out what 0/0 is.
But this is another thing that computers are sometimes asked to calculate,
and, once again, *aborting is not an option*.

For these *really* strange results, IEEE-754 provides NaN,
which stands for "Not-a-Number". It also provides rules for the
propagation of NaNs through a calculation in ways that allow a program
to continue running and make some sense of it all.
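NaN's peculiar behavior is easy to see in Python, where an indeterminate form like infinity minus infinity produces one:

```python
import math

nan = math.inf - math.inf   # an indeterminate form yields NaN
print(nan)                  # nan
print(nan == nan)           # False: NaN is unequal even to itself
print(math.isnan(nan))      # True: the correct way to test for NaN
print(nan + 1, nan * 0)     # NaN propagates through further arithmetic
print(0.0 ** 0.0)           # 1.0: the 0^0 convention in action
```

Because NaN propagates, a single indeterminate result flows through the rest of the calculation and can be detected at the end, rather than killing the program in the middle.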

Most modern processors with floating-point instructions implement IEEE-754 arithmetic, and some programming languages and compilers at least acknowledge its existence. The IEEE-754 standard defines several configurable behaviors, among them the rounding direction and the handling of exceptional conditions, that your language or compiler may or may not let you control.

This brief article will not pretend to be a tutorial on IEEE-754. The intent here is to introduce you to some of the more visible aspects of this very useful arithmetic standard. To learn more, google "IEEE-754".