When I'm looking at a database schema for the first time, there are a number of tell-tale signs that the developers really haven't done much work with SQL Server before. They've made a "newbie" mistake.
One of those is the extensive use of the float data type.
Most times that I see this, the developers have come from a C or Java background, and they assume that anything that needs a decimal point in it needs to be a float. There are some situations where float makes sense, but 99% of the time what they should have used was decimal.
float is used to store approximate values, not exact values. It has a precision from 1 to 53, but note that's the number of bits used to store the mantissa, not decimal digits; even float(53) holds at most about 15 significant decimal digits.
real is the single-precision version of the same IEEE 754 floating point format, equivalent to float(24).
Neither should be used for storing monetary values. Because the values cannot be stored precisely, people who use float end up with values that don't match, columns of values that don't quite add up, or totals that are a cent out, etc. They spend their lives trying to round values to fix the issue, and usually don't get it right.

Here's an example. What values should this code print?
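It went something like this (a minimal sketch; the variable name and formatting are illustrative, but the test matches the WHILE @Value <> 10.0 discussed below):

DECLARE @Value float = 0;

WHILE @Value <> 10.0
BEGIN
    PRINT @Value;
    SET @Value = @Value + 0.1;    -- step up by one tenth each time
END;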
You'd expect the values 0.0, 0.1, 0.2 and so on up to 10.0. But that's not what you get. The query would continue until the maximum value for the data type was exceeded (a long time). If you stop the query, you'll see values that are slightly off from the neat 0.1 steps you expected.
Worse, the stop value of 10 is actually shown in the output, yet the loop didn't stop.
The problem is that while the value 10 can be stored accurately in float, the value 0.1 can't be. In decimal, we have recurring fractions: 1/3 is 0.33333 recurring, and we can't write it precisely in decimal. In binary, 0.1 has the same issue: it's 0.000110011001100… with the 0011 group recurring, so any fixed number of bits stores only an approximation of it.
So even though we had a test of WHILE @Value <> 10.0, the value never exactly equalled 10.0. So why does it show 10 in the Messages tab? That's because SQL Server Management Studio (SSMS) rounds the values that it prints. It's not showing us the actual value.
We could fix this by subtracting @Value from 10, taking the absolute value of the result, and comparing that to a small tolerance. But who wants to write code like that?
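If you're wondering, that workaround would look something like this (a sketch; the 0.01 tolerance is an arbitrary choice):

DECLARE @Value float = 0;

WHILE ABS(10.0 - @Value) > 0.01   -- "close enough to 10" counts as done
BEGIN
    PRINT @Value;
    SET @Value = @Value + 0.1;
END;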
In SQL Server, decimal, numeric, money, and smallmoney are the data types with decimal places that store values precisely. numeric is basically a synonym for decimal. money and smallmoney are old Sybase data types that have a fixed scale, and have a funky relationship with currency symbols when converting strings. I generally don't use those. (There are some arguments for them in gigantic data warehouses, where their smaller storage size might help, but with row compression the reasons for that are quickly disappearing.) Keeping them around is one of the prices of backwards compatibility in SQL Server.
You can't blame people for using a data type called money for storing amounts of money. But it's generally not the right answer.
Let's now look at the query from before if we change to decimal:
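Here's a sketch of that change; decimal(10,1) is just one workable choice of precision and scale:

DECLARE @Value decimal(10,1) = 0;

WHILE @Value <> 10.0
BEGIN
    PRINT @Value;
    SET @Value = @Value + 0.1;    -- 0.1 is stored exactly in decimal
END;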
When executed, it stops exactly as expected.
decimal (and numeric) take a precision and a scale, and these should be chosen appropriately for the values that you need to store. You also need to keep rounding in mind when you calculate decimal values.
As I mentioned earlier, there are places where float and/or real make sense, but they are typically scientific calculations, not business calculations.
FLOATs are surely appropriate for exchange rates (used to convert an amount from one currency to another), because the exchange rate is an approximation. Your article implies they are never appropriate for business calculations.
Hi Michael,
No actually. In most financial organizations that I work in, exchange rates are calculated and stored to a particular number of decimal places, and there are rounding rules that need to be applied when performing calculations on them. The problem with float is that it can't store even simple values accurately. If I say that an exchange rate is 0.1, I want it to be exactly 0.1, not a value that's merely very close to 0.1. Here's a simple example of the issue with float:
DECLARE @Value float = 0;
DECLARE @ExchangeRate float = 0.1;

WHILE @Value != 10
BEGIN
    SET @Value = @Value + @ExchangeRate;
    PRINT @Value;
END;
Ask yourself how many values that would print, then try it.
The point is that if you want an exchange rate to be 0.1, you actually want 0.1, not a number that's approximately 0.1.
When would you use FLOAT in a real-life scenario?
Hi Edgar, typically when storing scientific values, rather than business values.
Well done in explaining the difference between these data types. I've read different articles on this and this is the clearest of all!
Glad that it helped.
DECLARE @CONVERSION float;
SET @CONVERSION = 2.20462442018377;
SELECT (@CONVERSION * 10.25);

DECLARE @CONVERSION1 decimal;
SET @CONVERSION1 = 2.20462442018377;
SELECT (@CONVERSION1 * 10.25);
See the difference between the output values.
I am confused about which one I need to choose.
Whenever you work with decimal values, you need to decide what the appropriate precision is, rather than just storing it as an approximate value. For example, see the difference if you used decimal(38,20) instead of just decimal.
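For instance, here's the code from the comment above with just the declaration changed:

DECLARE @CONVERSION1 decimal(38,20);
SET @CONVERSION1 = 2.20462442018377;
SELECT (@CONVERSION1 * 10.25);

A plain decimal defaults to decimal(18,0), so the assignment rounds the value to 2; with an explicit scale, the whole value is kept and the multiplication behaves as you'd expect.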
Total Legend. You are a rock star
You are most welcome
Hi Greg,
I understand what the benefit of using columns of type decimal could be (mainly the possibility of indexing them), but I think you did not choose your examples objectively. See the following examples (which are not objective either).
I also remember that we chose to go from DECIMAL to FLOAT many years ago, precisely because some of our customers complained that the sum of periodized costs per month did not always match the whole cost (per year) with DECIMAL, while it did with FLOAT…
If you add the fact that, when using your database with Microsoft Entity Framework, you need to cast all your decimal fields to double (which is the standard type of float variables in most programming languages) to be able to do proper calculations, use the 'M' suffix to initialize them, and so on, I am not quite sure it is worth it.
DECLARE @CONVERSION1 decimal;
SET @CONVERSION1 = 1.0;
SELECT (@CONVERSION1 / 3) * 3;

DECLARE @CONVERSION float;
SET @CONVERSION = 1.0;
SELECT (@CONVERSION / 3) * 3;
And for your first example with the counter, try running the following, and see which one works…
DECLARE @Value decimal(10,2) = 0.9;
WHILE @Value / 3 * 3 <> 1.0
BEGIN
    PRINT @Value;
    SET @Value += 0.1;
END;

DECLARE @Value float = 0.9;
WHILE @Value / 3 * 3 <> 1.0
BEGIN
    PRINT @Value;
    SET @Value += 0.1;
END;
Regards,
Leto
Hi Leto,
While there are examples where taking a value, and dividing by a proportion is going to finally total closer to the original amount, that's not an argument for storing values as approximate values. As I said, you need to store values appropriately and manage rounding. For example, if I need to pay someone $100 quarterly, and send them 1/3 of that each month, I can't actually send them $33.33333333333333333333333333 each month, even though it would total to close to the right value at the end. I need to send them $33.33 (rounded to the nearest cent) for each of the first two months, and $33.34 for the final month. All that takes is knowing what the final amount should be, and deducting the rounded amounts already deducted. Each monetary value is then still precise.
Storing approximate values is not the answer when dealing with money.
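To make that concrete, here's a minimal sketch of the approach just described (the names and types are illustrative):

DECLARE @Total decimal(18,2) = 100.00;   -- owed per quarter
DECLARE @PaidSoFar decimal(18,2) = 0;
DECLARE @Payment decimal(18,2);
DECLARE @Month int = 1;

WHILE @Month <= 3
BEGIN
    IF @Month < 3
        SET @Payment = ROUND(@Total / 3, 2);   -- 33.33 for the first two months
    ELSE
        SET @Payment = @Total - @PaidSoFar;    -- 33.34: exactly what remains
    SET @PaidSoFar += @Payment;
    PRINT @Payment;
    SET @Month += 1;
END;

Every amount is an exact number of cents, and the three payments total exactly 100.00.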
As for Entity Framework, it has so many limitations that I don't believe it should be used in serious applications, at least not at scale. In my consulting work, I see an amazing number of issues caused by people using it, and even an amazing number of problems that people have in using it in the first place, once they get past the trivial applications of it. I see a lot of people who finally realise this and remove it (painfully) from their code bases. There are many decisions that its designers have taken for you under the covers; many of which are not sound.
Regards,
Greg
To be precise, the n in float(n) is the number of bits that are used to store the mantissa. It has nothing in common with what you wrote. The WHILE loop trick is also not honest. If you are storing a value as decimal(18,2), that says the scale is 2, whereas in the case of float it might be 18 or higher. So in this case my float value will be much more precise compared to your decimal. To stop the infinite loop, just add a CONVERT statement, because you are comparing different data types. This article is not applicable to any business area.
I hear what you are saying but I completely disagree.
I don't find this example dishonest. The point is that float is bad for money, which has exactly 2 decimal places in all data I've dealt with. When working with currencies that have more or less, they don't maybe have 2 and maybe have 18; they have some exact number. I've worked with high volume options data, where the number is specific to 6 decimal places even for USD, so we use (18,6). Even this needs to be accurately rounded to 2 decimal places when the time comes to actually pay up, because I don't have any 1/10 pennies to pay with. If your values have maybe 2 digits after the decimal and maybe 18, I'm willing to bet you aren't dealing with money.
Precisely, Austin! (Pun intended)
Float/Double vs Decimal
I agree that the float/double types are more useful for scientific uses. But there is a more important distinction:
If you need to convert/cast a decimal to a float/double frequently because of an external library/package, don't pick decimal (even if it is a business use) or double (even if it is a scientific use) on principle; just design the column as the data type it needs to be converted to.
For example, Google OR-Tools requires the double data type, so anything decimal has to be converted during the Google library function calls, which makes the run time longer for a huge number of rows.
Hi Mustafa, it would depend upon how it's going to be used. I'm usually more interested in how the data is stored in my system as that's where most of the usage actually happens. The data tends to get used in the systems way more than it's passed to/from APIs.
No Mustafa. You don't make the data fit the tools. The data are your core asset. Everything in software is about accurately processing the data. If the tool can't accurately process your data, it is not the correct tool. If my bank told me they'd overcharged me several thousand pounds on my mortgage due to rounding issues, it wouldn't make me feel any better if they told me it had happened because they made a bad DB design decision just to make it easier to use some third-party API.
Great explanation of the float issue! However, I'm missing an explanation as to why SELECT CAST(.1 AS FLOAT) * CAST(80.0 AS FLOAT) gives me 8.0 (in SQL Server), while SELECT CAST(.1 AS FLOAT)+CAST(.1 AS FLOAT)+ … (80 times) gives me 7.999999999999? More generally, most examples I've seen of when floats become a problem are when adding, but it seems that some kind of black magic happens when multiplying? Or am I mistaken?
Hi Magnus, glad it was helpful. The problem is that you weren't really getting 8.0 (most likely). It's just that whatever was showing you the value had rounded it as part of displaying it.
I tested it in SQL Server Management Studio on a SQL Server database (version 10.50.1600.1). It could be as you say, that it is rounding/formatting the results for whatever reason, but then shouldn't the same happen when adding? Or could it be interpreting the multiplication in some "clever" way (for example doing 1.0*8.0 instead of 0.1*80.0)?
No, it's a problem all the time. I doubt it's doing that. Where did you see the 8.0 though? In the results pane? As the output of PRINT?
Yes, in the results pane. Multiplication always seem to give me correct results, while addition produces float-rounding errors.
Also, if you declare a float variable, assign CAST(.1 AS FLOAT)+CAST(.1 AS FLOAT)+ … (80 times) to it and print it, you get "8". But if you just run the SELECT statement you get 7.99999999999999.
But the results pane is also doing its own rounding. You're not seeing the actual value.
And yes, I commonly see issues with float in business apps where people have columns of values that don't add up properly. They often have the "total is one cent out" types of issues.
Hi,
I am facing the same issue for only one transaction: when SUM() is applied, the values are incorrect. The strange thing is that for 10+ years this issue didn't occur and everything was working fine.
Could you please help me?
Hi Farhin,
Not sure I quite follow the issue, but the fact that something has worked for many years doesn't mean that it's correct. With rounding, it can be the luck of the draw as to what values you're working with.
Hi,
Thanks a lot. Just let me describe it to you. For example, for id = 1 there are 2 positions and we are taking sum(position):
position = 63407.00000
position = 72731.00000
Now,
select id, sum(position) as position
is giving the output below:
id position
1 4020447649 (for 63407.00000)
1 5145766756 (for 72731.00000)
and for every other successful record it is giving sum(position) as its position.
Hi Farhin, can't tell from what you've posted. You might need to post some create table and insert statements, plus a sample query, so we have any chance of helping.
It's probably because each time we add, we round; whereas a multiplication is just one operation, with one rounding at the end.
Hi Greg,
Many thanks for the explanation, definitely one of the best I've found on the 'net.
My goal is always to be as accurate as possible when storing data and performing arithmetic functions, so 99% of the time I use the decimal data type. However, this often leads to problems with decimal overflow resulting in truncation to 6 decimal places, and therefore less overall precision (just FYI, I'm currently using SQL Server). Although double-precision floating point numbers are approximate, they often give me a result closer to the original numbers, due to the number of decimal places they store.
Decimal:
SELECT CAST(51343.10388663151356498761 AS decimal(38,20)) / CAST(4.10388663151356498761 AS decimal(38,20))
Result: 12510.848494
Float:
SELECT CAST(51343.10388663151356498761 AS float(53)) / CAST(4.10388663151356498761 AS float(53))
Result: 12510.848494783
One solution is obviously to reduce scale (i.e. decimal(38,10) vs. decimal(38,20)). However, if the column contains numbers which typically have a scale of 15 and you reduce that to 8 (for example), then you are already truncating data and reducing overall accuracy. When I do this over more than one record, differences start to creep in versus whatever I'm comparing against (usually source data).
I appreciate there probably isn't a silver bullet solution for this but I would at least like to find a good intermediary solution. What would you suggest in these instances?
Hi Arthur, yes, the rules for precision calculations are tricky. They are documented here: https://docs.microsoft.com/en-us/sql/t-sql/data-types/precision-scale-and-length-transact-sql?WT.mc_id=DP-MVP-7914
If you're doing large divisions like that, you might have to use float to aim for higher precision. And as you say, there's no silver bullet on this one.
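One approach that can sometimes help, sketched below with the numbers from the comment above (check it against the rules in that link for your actual value ranges), is to declare the operands no wider than the data really needs, so that less scale has to be thrown away from the result:

SELECT CAST(51343.10388663151356498761 AS decimal(18,10))
     / CAST(4.10388663151356498761 AS decimal(18,10));
-- with decimal(18,10) operands, the result type retains more decimal places
-- than the decimal(38,20) version above does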
Hi Greg,
I thought this might be the case but wanted to make sure I wasn't (actually) losing my sanity. Many thanks for the reply & link and I wish you a Happy New Year – let's hope 2021 is a little brighter!
I do wish the high precision calculations worked a bit differently, but it is what it is.
Yes, hope 2021 will be better for all thanks.
Hi sir, is there any difference between numeric and decimal? Which is best to use? Please clarify.
Hi Arjun, there used to be subtle differences but nowadays decimal and numeric are synonyms and functionally equivalent as described here. We just always consistently use decimal.
I have 10,000 records in a reference table. There is a column called DIST_FACT that shows the percentage weighting of each record. SUM(DIST_FACT) = 1, or 100%.
The intent is to multiply any given value by DIST_FACT to create a fractional distribution.
In certain cases the "given value" is very small, and after multiplying by DIST_FACT it becomes even smaller.
e.g. given value = 0.7, DIST_FACT for the 10th record is 0.0000178, RESULT = 0.00001246
Say SUM(RESULT) over 4 million records might be 0.378.
What would be a good data type (SQL Server) to store this value (RESULT) and retrieve it accurately?
I would store all of those as decimal data type. A standard precision of say 18 would work fine. But for each column that you store, you need to decide how many decimal places you should allow for. Then, whenever you make a calculation that will create values to be stored, you decide what the appropriate rounding needs to be.
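A hypothetical shape for that, with illustrative names and scales only:

DECLARE @GivenValue decimal(18,8) = 0.7;
DECLARE @DistFact decimal(18,8) = 0.0000178;     -- one row's distribution factor

SELECT CAST(@GivenValue * @DistFact AS decimal(18,12)) AS Result;   -- 0.000012460000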
Hi Greg, Really appreciate your timely response.
Thanks,
Aji
You are most welcome
With 53 digits, you really mean 53 bits for the significand? And a maximum of 17 digits?
The main problem with IEEE754 is that it is implemented differently on each CPU family. Some have a "nearest under", some have "nearest over" and some have "closest value".
See https://www.sqltopia.com/internal-structures/ieee-754/
To test this I created a table with 1 million rows containing a single FLOAT column, set the database to read-only, and restored a backup to a different server. Just as I suspected, the sum of that column was different, with a delta of 10.2. I then ran the SUM query four times on the same table and got four different results!
How? Float suffers just as badly as DECIMAL does (see https://www.sqltopia.com/mathematics/order-of-operations/) but it's less visible due to the SSMS rounding.
Yep, don't want to use float in almost any app. For most business purposes, I want a value stored exactly, and I will determine the way the rounding works.
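As an aside, the likely reason for the four different results is that float addition isn't associative, so a parallel SUM that groups the rows differently can produce a different total. A tiny illustration (nothing here is specific to that million-row table):

DECLARE @a float = 0.1, @b float = 0.2, @c float = 0.3;

SELECT (@a + @b) + @c AS LeftFirst,
       @a + (@b + @c) AS RightFirst;
-- the two stored results differ by one unit in the last place,
-- even though the inputs are identical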
Great explanation. Thank you for making it easy to understand. I am taking a bootcamp on SQL and there are a lot of things that are not really explained. I will definitely be reading your books!
Glad to help.
Once you get done with the bootcamp, consider our Advanced T-SQL Training: https://sqldownunder.com/training It works through issues in/around all the data types, plus so much more.
Rather late than never, but this is a great explanation. Especially when you have already burned your fingers once. Some tools just change your export/import from numeric to real, and then you see by how many real Dollars (or ZA Rands) you are out all of a sudden. And if the originals are gone… ouch…
Glad to hear it helped Koos.