Wednesday, May 21, 2008

Mechanics' Floating Point revealed

The eight bits in the pattern are used as follows:

S EEE MMMM

S is the "sign bit": 1 means negative, 0 means positive.

EEE are the "exponent": in "excess four" notation, meaning that the usual unsigned binary reading of these bits is four more than the actual power of two. The exponent pattern 000 is special: it marks the entire number as zero, whatever the other bits contain (more on this below). 001 means an exponent of -3 (i.e. a multiplier of 1/8), 010 means -2, and so on, up to 111, which means an exponent of 3 (i.e. a multiplier of 8).

MMMM are the "mantissa": the fractional value that is multiplied by two raised to the exponent. There is an assumed binary point and a "1" bit to the left of MMMM. So the pattern 0000 represents the fraction .10000 (i.e. 1/2). Similarly, the pattern 0001 represents the fraction .10001 (i.e. 17/32), and so on up to 1111, which represents the fraction .11111 (i.e. 31/32).
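
Putting the three rules together, here is a small decoding sketch (in Python; the choice of language is mine, the scheme itself is language-neutral) that turns any 8-bit pattern into the number it represents:

    def mfp_decode(byte):
        """Decode an 8-bit Mechanics' Floating Point pattern (0-255)."""
        s = (byte >> 7) & 0x1        # S: sign bit
        e = (byte >> 4) & 0x7        # EEE: exponent, excess four
        m = byte & 0xF               # MMMM: mantissa bits

        if e == 0:                   # exponent 000 means zero,
            return 0.0               # whatever S and MMMM contain

        fraction = (16 + m) / 32.0   # .1MMMM with the assumed leading 1 bit
        value = fraction * 2.0 ** (e - 4)
        return -value if s else value

    print(mfp_decode(0b0_100_0000))  # .10000 * 2**0 -> 0.5
    print(mfp_decode(0b0_001_0000))  # .10000 * 2**-3 -> 0.0625
    print(mfp_decode(0b0_111_1111))  # .11111 * 2**3 -> 7.75
    print(mfp_decode(0b1_111_1111))  # sign bit set -> -7.75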

In the table below, the exponent bit patterns label the columns. There are only 7 of them because the exponent pattern 000 means that the entire number is zero. This wastes some of the bit patterns: all 32 patterns with 000 in the EEE position represent zero, whatever bits are present in the S and MMMM positions.

The mantissa bit patterns label each of the rows.

Each cell of the table shows the real number represented by the overall 8-bit pattern. Not shown are the negative numbers, which use the same bit patterns except for the sign bit.

MMMM       001          010         011        100       101      110     111
0000    0.0625       0.125       0.25       0.5       1        2       4
0001    0.06640625   0.1328125   0.265625   0.53125   1.0625   2.125   4.25
0010    0.0703125    0.140625    0.28125    0.5625    1.125    2.25    4.5
0011    0.07421875   0.1484375   0.296875   0.59375   1.1875   2.375   4.75
0100    0.078125     0.15625     0.3125     0.625     1.25     2.5     5
0101    0.08203125   0.1640625   0.328125   0.65625   1.3125   2.625   5.25
0110    0.0859375    0.171875    0.34375    0.6875    1.375    2.75    5.5
0111    0.08984375   0.1796875   0.359375   0.71875   1.4375   2.875   5.75
1000    0.09375      0.1875      0.375      0.75      1.5      3       6
1001    0.09765625   0.1953125   0.390625   0.78125   1.5625   3.125   6.25
1010    0.1015625    0.203125    0.40625    0.8125    1.625    3.25    6.5
1011    0.10546875   0.2109375   0.421875   0.84375   1.6875   3.375   6.75
1100    0.109375     0.21875     0.4375     0.875     1.75     3.5     7
1101    0.11328125   0.2265625   0.453125   0.90625   1.8125   3.625   7.25
1110    0.1171875    0.234375    0.46875    0.9375    1.875    3.75    7.5
1111    0.12109375   0.2421875   0.484375   0.96875   1.9375   3.875   7.75

This table can also be found at http://sanbachs.net/mfp/mfp.html.
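
The table can also be regenerated from the rules rather than transcribed. A short sketch (again Python, assuming nothing beyond the rules above):

    # Columns: exponent patterns 001..111; rows: mantissa patterns 0000..1111.
    header = "MMMM".ljust(6)
    header += "".join(format(e, "03b").rjust(12) for e in range(1, 8))
    print(header)

    for m in range(16):
        fraction = (16 + m) / 32.0            # .1MMMM with the leading 1 bit
        row = format(m, "04b").ljust(6)
        row += "".join(format(fraction * 2.0 ** (e - 4), "12.10g")
                       for e in range(1, 8))
        print(row)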

Tuesday, May 20, 2008

Mechanics' Floating Point

Several years ago (Fall 1993) I gave a presentation at Novell as part of their informal lecture series named "Food For Thought." Somewhere in a dusty archive there may still be a VHS video tape of the presentation, which was entitled "My Computer Can't Add." I presented part of this again during a job interview at UVSC in 2001.

The idea of a computer not being able to add may seem odd. But a computer doesn't actually deal with numbers, only with representations of numbers. Except for some very simple and small numbers, the representations are not completely accurate. Part of the problem is round-off error; the rest comes from the fact that the representations are far from a complete set, as only some of the numbers are represented at all.
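
A few lines of Python make the point (assuming the usual IEEE 754 double-precision arithmetic):

    print(0.1 + 0.2)             # 0.30000000000000004, not 0.3
    print(0.1 + 0.2 == 0.3)      # False: neither 0.1, 0.2, nor 0.3 is
                                 # exactly representable in binary
    print(sum(0.1 for _ in range(10)) == 1.0)   # False: the errors add up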

Until recently, personal computers only represented integers between -2147483648 and 2147483647. Granted, those aren't small numbers (as in "pick a number between 1 and 100"), but they aren't big enough to handle the national debt. Newer computers are using 64 bit processors, enabling them to represent integers between -9223372036854775808 and 9223372036854775807.
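
Those bounds are just -2**31 .. 2**31 - 1 and -2**63 .. 2**63 - 1 in disguise. Python's own integers are unbounded, but its struct module can mimic a fixed 32-bit slot (a sketch of the limit, not of how those machines actually fail; most silently wrap around instead):

    import struct

    print(-2**31, 2**31 - 1)   # -2147483648 2147483647
    print(-2**63, 2**63 - 1)   # -9223372036854775808 9223372036854775807

    struct.pack("<i", 2**31 - 1)       # the largest value that fits
    try:
        struct.pack("<i", 2**31)       # one past the top
    except struct.error as exc:
        print("does not fit in 32 bits:", exc)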

However, money values and numbers used in scientific computations are generally handled by the "floating point" representation. Here one can represent approximations to real numbers over a larger range than the integer representations provide, at the cost of precision. The floating point representation generally uses either 32 or 64 bits, but uses those bits quite differently than an integer representation does.
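
In the usual 64-bit ("double precision") IEEE 754 layout, the split is 1 sign bit, 11 exponent bits in excess-1023 notation, and 52 mantissa bits with a hidden leading 1. A sketch that slices a Python float apart:

    import struct

    def double_fields(x):
        # Reinterpret the 64 bits of the double as an integer, then
        # slice out the three fields.
        (bits,) = struct.unpack("<Q", struct.pack("<d", x))
        sign     = bits >> 63                # 1 bit
        exponent = (bits >> 52) & 0x7FF      # 11 bits, excess-1023
        mantissa = bits & ((1 << 52) - 1)    # 52 bits, hidden leading 1
        return sign, exponent, mantissa

    print(double_fields(0.5))     # (0, 1022, 0): +1.0 * 2**(1022 - 1023)
    print(double_fields(-7.75))   # sign 1: -1.9375 * 2**(1025 - 1023)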

To make this clear, I invented "Mechanics' Floating Point" for the purpose of my presentation. It uses a floating-point-style representation inside just 8 bits to represent numbers between 1 sixteenth and 8 (well, almost--the largest number representable in the scheme is actually 7 and 3 quarters). Choosing 8 bits allows one to enumerate all of the possible representations, since there are just 256 of them (see "Powers of two").
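
As a sketch of that enumeration (using the decoding rules spelled out in the May 21 post above): the 7 usable exponents times 16 mantissas give 112 positive values, running from one sixteenth up to 7 and 3 quarters. With their negatives and zero, that is 225 distinct numbers for the 256 patterns, since the 32 patterns whose exponent bits are 000 all collapse to zero.

    # All positive magnitudes: 7 usable exponents x 16 mantissas.
    values = sorted(set((16 + m) / 32.0 * 2.0 ** (e - 4)
                        for e in range(1, 8) for m in range(16)))
    print(len(values))              # 112 distinct positive values
    print(values[0], values[-1])    # 0.0625 (one sixteenth) and 7.75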

In a subsequent post, I will try to find a way to display the entire representation of Mechanics' Floating Point.