I know that SQL Server has a number of data types that look like they could be used for storing amounts of money. Developers go looking for data types that have a decimal point in them, and there are a few:

**Float** – this one seems to get picked by people from a Java background, and it's pretty much never the right choice. It doesn't store values exactly, because it's an approximate (binary floating-point) type. Just as there are numbers we can't write exactly in decimal (1/3 as a decimal is 0.3333 and so on), there are numbers that can't be stored exactly in binary. And that includes numbers like 0.1 (yes, ten cents). So using this data type for money is **usually a newbie mistake**, and the same applies to the even less precise **real**.

**Developers who use float for this are the ones whose columns of numbers don't quite add up, and who get to enjoy other fun rounding-related issues.**
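The drift is easy to reproduce outside SQL Server. A minimal sketch using Python, whose floats are the same IEEE 754 doubles as SQL Server's float(53):

```python
from decimal import Decimal

# float: 0.1 has no exact binary representation, so repeated
# addition drifts away from the true total.
total = sum(0.1 for _ in range(100))
print(total == 10.0)   # False: one hundred lots of ten cents is not $10.00

# An exact decimal type stores the value exactly, so the sum is exact too.
exact = sum(Decimal("0.10") for _ in range(100))
print(exact == 10)     # True
```

The amounts look harmless individually; it's the accumulation across a column that produces the "doesn't quite add up" reports.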

**Money** – I can't blame people for picking this data type when they're looking to store money. However, it's an old Sybase data type that we're stuck with in SQL Server for backwards compatibility. It has two potential upsides:

- It's smaller (storage-wise) than exact decimal data types of similar range, so I occasionally see it used when size is critical in extremely large data warehouses.
- It can handle things like dollar signs in strings when it's the target of a conversion.

However, it's a fixed four-decimal-place data type, and that's the real problem (no pun intended). When you're working with amounts of money, you need to decide how many decimal places to store, and store exactly that many. You also need to think about how rounding will work in calculations involving those amounts.
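That rounding decision is worth making explicitly rather than leaving to defaults, because equally defensible rules give different answers. A sketch with Python's decimal module (its default is banker's rounding; half-up is what most invoices expect):

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

amount = Decimal("7.125")  # an intermediate result with one decimal place too many

# Two defensible ways to round to cents, with different answers:
half_up = amount.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
half_even = amount.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)
print(half_up)    # 7.13
print(half_even)  # 7.12 (banker's rounding: ties go to the even digit)
```

Whichever rule you pick, pick it deliberately and apply it consistently across your calculations.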

### So what should you use?

The data type that lets you do that is **decimal**. It stores values exactly (not approximately), and you choose both the precision (the total number of digits) and the scale (the number of decimal places). Note that **numeric** is functionally equivalent to decimal.
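The exact-versus-approximate distinction is easy to see in any exact-decimal implementation. A sketch using Python's decimal module standing in for decimal(p,s):

```python
from decimal import Decimal

# Constructing a Decimal from a float exposes what float actually stored:
print(Decimal(0.1))    # 0.1000000000000000055511151231257827021181583404541015625
# Constructing it from a string stores exactly what you wrote:
print(Decimal("0.1"))  # 0.1

# Exact arithmetic at the scale you chose, decimal(9,2)-style:
line_items = [Decimal("19.99"), Decimal("0.10"), Decimal("5.01")]
total = sum(line_items)
print(total)           # 25.10, exactly
```

The value you write is the value stored, and sums of cents come out to the cent.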