## FLOATING POINT

### Floating point

In computing, **floating point** describes a method of representing an approximation of a real number in a way that can support a wide range of values. The numbers are, in general, represented approximately to a fixed number of significant digits and scaled using an exponent. The base for the scaling is normally 2, 10 or 16. The typical number that can be represented exactly is of the form: significand × base^exponent, where the significand, base and exponent are integers.

*The above text is a snippet from Wikipedia: Floating point*

and as such is available under the Creative Commons Attribution/Share-Alike License.
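The significand × base^exponent form above can be illustrated in a few lines of Python; this is a minimal sketch using the standard-library `math.frexp`, which decomposes a float into a base-2 significand and exponent (the function name `decompose` is chosen here for illustration):

```python
import math

def decompose(x: float) -> tuple[float, int]:
    """Return (m, e) such that x == m * 2**e, with 0.5 <= abs(m) < 1."""
    # math.frexp yields the normalized base-2 significand and exponent.
    m, e = math.frexp(x)
    return m, e

m, e = decompose(6.0)
# 6.0 is represented exactly as 0.75 * 2**3
assert m * 2**e == 6.0
print(m, e)
```

Because the base is 2, values such as 6.0 are represented exactly, while a value like 0.1 is only approximated, as the text notes.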


### floating point

#### Adjective

- of a number, written in two parts as a mantissa (the value of the digits) and a characteristic (the power of a number base), e.g. 0.314159 × 10^1
- of the internal representation of such a number as a pair of integers

*The above text is a snippet from Wiktionary: floating point*

and as such is available under the Creative Commons Attribution/Share-Alike License.
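The second sense above, the internal representation as a pair of integers, can be sketched by unpacking the bit fields of an IEEE 754 double; this is an illustrative example (the helper name `fields` is an assumption, not a standard API):

```python
import struct

def fields(x: float) -> tuple[int, int, int]:
    """Split an IEEE 754 double into (sign, biased exponent, fraction) integers."""
    # Reinterpret the 8-byte double as a 64-bit unsigned integer.
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF       # 11-bit biased exponent
    fraction = bits & ((1 << 52) - 1)     # 52-bit fraction (mantissa bits)
    return sign, exponent, fraction

s, e, f = fields(1.5)
# 1.5 = +1.1 (binary) * 2**0: sign 0, biased exponent 1023, top fraction bit set
print(s, e, hex(f))
```

Here the "pair of integers" of the definition corresponds to the exponent and fraction fields, with a separate sign bit.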
