When I'm looking at a database schema for the first time, there are a number of tell-tale signs that give me the hint that the developers really haven't done much work with SQL Server before. They've made a "newbie" mistake.
One of those is the extensive use of the float data type.
Most times that I see this, the developers have come from a C or Java background, and they assume that anything that needs a decimal point must be a float. There are some situations where float makes sense, but 99% of the time what they should have used was decimal.
float is used to store approximate values, not exact values. Its precision is specified as a mantissa of 1 to 53 bits (not decimal digits), with float(53) being the default.
real is similar but is a 4-byte single-precision IEEE floating point value, equivalent to float(24).
Neither should be used for storing monetary values. Because the values cannot be stored precisely, people who use float end up with values that don't match, columns of values that don't quite add up, or totals that are a cent out, etc. They spend their lives trying to round values to fix the issue, and usually don't get it right.
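You can see the mismatch with a simple comparison. In T-SQL, a literal like 0.1e0 is a float, while 0.1 is a decimal (the column aliases here are just illustrative):

```sql
-- 0.1 + 0.2 stored as float does not exactly equal 0.3
SELECT CASE
           WHEN 0.1e0 + 0.2e0 = 0.3e0 THEN 'equal'
           ELSE 'not equal'
       END AS FloatComparison;   -- returns 'not equal'

-- the same comparison with decimal literals is exact
SELECT CASE
           WHEN 0.1 + 0.2 = 0.3 THEN 'equal'
           ELSE 'not equal'
       END AS DecimalComparison; -- returns 'equal'
```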
Here's an example. What values should this code print?
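The example used a loop along these lines (a reconstruction of the code; the variable name is assumed):

```sql
DECLARE @Value float = 0;

WHILE @Value <> 10.0
BEGIN
    PRINT @Value;
    SET @Value = @Value + 0.1;
END;
```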
You'd expect the values 0.0, 0.1, 0.2 and so on up to 10.0. But that's not what you get. The query just keeps running until the maximum value for the data type is exceeded (a long time). If you stop the query, you'll see odd values in the output. Worse, the stop value of 10 actually appears in the output, yet the loop didn't stop at it.
The problem is that while the value 10 can be stored exactly as a float, the value 0.1 can't be. Decimal notation has the same kind of issue with recurring fractions: 1/3 is 0.3333 recurring, and we can't write it precisely with a finite number of decimal digits. In binary, 0.1 is one of those recurring fractions, so it can't be stored precisely either.
So even though we had a test of WHILE @Value <> 10.0, the value never exactly equalled 10.0. So why does it show 10 in the Messages tab? That's because SQL Server Management Studio (SSMS) rounds the values that it prints. It's not showing us the actual value.
We could work around this by subtracting @Value from 10, taking the absolute value of the result, and comparing it to some small tolerance. But who wants to write code like that?
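A workaround along those lines might look like this (the tolerance of 0.05, half the step size, is an arbitrary choice):

```sql
DECLARE @Value float = 0;

-- stop when we're "close enough" to 10, since exact equality never occurs
WHILE ABS(10.0 - @Value) > 0.05
BEGIN
    PRINT @Value;
    SET @Value = @Value + 0.1;
END;
```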
In SQL Server, decimal, numeric, money, and smallmoney are the data types with decimal places that store values precisely. numeric is basically a synonym for decimal. money and smallmoney are old Sybase data types that have a fixed scale, and have a funky relationship with currency symbols when converting from strings. I generally don't use those. (There are some arguments for them in gigantic data warehouses, where their smaller storage size might help, but with row compression the reasons for that are quickly disappearing.) It's one of the problems with backwards compatibility in SQL Server.
You can't blame people for using a data type called money for storing amounts of money. But it's generally not the right answer.
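As an example of that funky string handling, money will happily accept a currency symbol and thousands separators that decimal rejects (exact behaviour depends on language settings, so treat this as a sketch):

```sql
SELECT CAST('$1,234.56' AS money);             -- succeeds: 1234.56

-- SELECT CAST('$1,234.56' AS decimal(18, 2)); -- fails: error converting
--                                             -- varchar to numeric
```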
Let's now look at the query from before if we change to decimal:
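Here's the same loop with the variable declared as decimal (the precision and scale shown are my choice; one decimal place is all this example needs):

```sql
DECLARE @Value decimal(18, 1) = 0;

WHILE @Value <> 10.0
BEGIN
    PRINT @Value;
    SET @Value = @Value + 0.1;
END;
```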
When executed, it stops exactly as expected, because 0.1 is stored precisely and the accumulated value eventually equals 10.0 exactly.
decimal (and numeric) require a precision and a scale. These should be chosen appropriately for the values that you need to store, and you need to keep rounding in mind when you calculate decimal values.
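For example, decimal(18, 2) can hold up to 18 significant digits, with 2 of them after the decimal point, and values are rounded to that scale on assignment:

```sql
DECLARE @Amount decimal(18, 2) = 123.456;  -- rounded on assignment
PRINT @Amount;                             -- prints 123.46
```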
As I mentioned earlier, there are places where float and/or real make sense, but they are typically scientific calculations, not business calculations.