Floating point numbers are represented by the **float** type in Python. Internally, they are stored as base-two (binary) fractions. As a result, some decimal fractions cannot be represented exactly and are stored only as approximations. For more details, see the Python documentation on floating point arithmetic.

In [1]:

```
1.1 + 1.1
```

Out[1]:

```
2.2
```

In [2]:

```
1.1+1.2
```

Out[2]:

```
2.3000000000000003
```

In [3]:

```
.1+.1
```

Out[3]:

```
0.2
```

In [1]:

```
.1+.2
```

Out[1]:

```
0.30000000000000004
```

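Those surprising sums come from the stored values themselves. Formatting 0.1 to more digits than *repr* shows — a quick check using only the standard library — reveals the approximation:

```python
# 0.1 cannot be stored exactly in binary; printing more digits
# than repr() shows reveals the true stored value.
print(format(0.1, '.20f'))   # 0.10000000000000000555
print(0.1 + 0.2 == 0.3)      # False: the approximations don't sum to 0.3
```

This is why equality comparisons between floats that "should" be equal can fail.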
The **decimal** module performs decimal arithmetic exactly, without the limitations of base-2 fractions. You can use the **decimal** module like the following:

In [4]:

```
from decimal import *
Decimal('.1') + Decimal('.2')
```

Out[4]:

```
Decimal('0.3')
```

First we import everything from the **decimal** module. We give *Decimal* a string representation of the number and use arithmetic operators as if each *Decimal* were a number. This yields a *Decimal* with the correct result.
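Passing a *string* matters here: constructing a *Decimal* from a float copies the float's binary approximation into the *Decimal*. A short sketch of the difference:

```python
from decimal import Decimal

# Built from a string: exactly 0.1.
exact = Decimal('0.1')

# Built from a float: inherits the binary approximation of 0.1.
inexact = Decimal(0.1)

print(exact + Decimal('0.2'))  # 0.3
print(inexact == exact)        # False: the float carried its error along
```

So always construct *Decimal* values from strings (or integers) when you want exact decimal values.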

The **decimal** module will calculate the result to a defined precision (28 significant digits by default) *and* round. Look at the examples below. Ordinary float division of 2/3 yields 0.6666 repeating, and the final digit is simply cut off rather than rounded when the repeating decimal ends. The **decimal** module will automatically round the last digit without you having to do so explicitly.

In [7]:

```
2/3
```

Out[7]:

```
0.6666666666666666
```

In [5]:

```
Decimal("2") / Decimal("3")
```

Out[5]:

```
Decimal('0.6666666666666666666666666667')
```

Don't want 28 digits of precision? You can change that like so:

In [7]:

```
getcontext().prec = 4
Decimal("2") / Decimal("3")
```

Out[7]:

```
Decimal('0.6667')
```

These operations return a *Decimal* object. If you want to cleanly output this for the user, or use it as a number in a different kind of calculation or conditional, you can convert it like so:

In [8]:

```
float(Decimal("2") / Decimal("3"))
```

Out[8]:

```
0.6667
```

In [9]:

```
str(Decimal("2") / Decimal("3"))
```

Out[9]:

```
'0.6667'
```

In [10]:

```
int(Decimal("2") / Decimal("3"))
```

Out[10]:

```
0
```
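One caveat: setting `getcontext().prec` changes the precision for every subsequent *Decimal* operation in the session. If you only want the lower precision temporarily, the **decimal** module also provides *localcontext*, sketched below (assuming a fresh session with the default 28-digit precision):

```python
from decimal import Decimal, localcontext

# Precision is changed only inside the with-block.
with localcontext() as ctx:
    ctx.prec = 4
    scoped = Decimal("2") / Decimal("3")

# Outside the block, the surrounding context (28 digits by default) applies.
default = Decimal("2") / Decimal("3")

print(scoped)   # 0.6667
print(default)  # 0.6666666666666666666666666667
```

This keeps a one-off low-precision calculation from silently affecting the rest of your program.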