SQL: Am I really going to run out of bigint values?

I was in another discussion recently where someone was worried about running out of bigint values. I come across this concern regularly. A few months back, another customer was in the middle of changing all their bigint columns to GUIDs because they were worried about running out of bigints.

People don't seem to understand just how large a bigint really is. It's certainly hard to imagine.

How big is a bigint?

I remember reading an article back in 1992. They said that if you got an 8-bit computer, put zero in a register, and made it loop on an instruction that incremented the register, you'd have 256 operations before it overflowed. That would happen in the blink of an eye.

For a 16-bit computer, it's 65,536 operations; again, the blink of an eye.

For a 32-bit computer, it's 4,294,967,296 operations; on the machines of the time, that took around 2 to 3 minutes.

But for a 64-bit value, even on the fastest machine on the planet at the time, it was 350 years.

And for a 128-bit value (like a GUID), the numbers involved are beyond our ability to comprehend in any meaningful way. (That's why that size was chosen.)

Current Day

Computers have become quite a bit faster since then, but based on the fastest MIPS value listed on Wikipedia's page on MIPS today, counting through a 64-bit value would still take around 2 years. Yep, that's two years of doing nothing but adding one, on a staggeringly fast machine.
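
If you want to check that figure, here's a quick back-of-the-envelope calculation in T-SQL. The 300,000 MIPS rate is purely an assumed, illustrative figure, not a quoted benchmark:

    -- Rough time to increment through every 64-bit value,
    -- at an assumed rate of around 300,000 MIPS (3 x 10^11 increments per second)
    DECLARE @TotalValues float = POWER(CAST(2 AS float), 64);      -- 2^64, about 1.84 x 10^19
    DECLARE @IncrementsPerSecond float = 300000000000;             -- assumed rate, for illustration only

    SELECT @TotalValues / @IncrementsPerSecond / 60 / 60 / 24 / 365.25 AS ApproxYears;
    -- returns roughly 1.9, i.e. around two years of nothing but incrementing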

Many years back, I remember someone from the product group telling me about the first customer who'd exceeded bigint for an identity column. No surprise: crazy allocation of values was involved.

Business users are never going to run out of bigints for sane processes, unless they do something dumb like using huge seed or increment values.
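
To make that concrete, here's a minimal sketch; the table and column names are just placeholders. A plain IDENTITY(1, 1) on a bigint column leaves the whole positive range available, while a huge seed or increment throws most of it away:

    -- A sensible bigint identity: seed 1, increment 1, which leaves
    -- over 9.2 quintillion positive values available
    CREATE TABLE dbo.Orders
    (
        OrderID bigint IDENTITY(1, 1) PRIMARY KEY,
        OrderDate datetime2 NOT NULL
    );

    -- By contrast, something like IDENTITY(1, 1000000) burns through
    -- the same range a million times faster, for no real benefit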

4 thoughts on “SQL: Am I really going to run out of bigint values?”

  1. Then there's the actual storage space for all bigint values (positive and negative):

    KB: 144,115,188,075,855,872
    MB: 140,737,488,355,328
    GB: 137,438,953,472
    TB: 134,217,728
    PB: 131,072
    EB: 128

    That doesn't include the storage overhead in SQL Server either, which would be at least 30% more. Nor any other columns.

    And if the GUID conversation comes up, search for "ZFS boil the ocean" and have them read that. 🙂

  2. The maximum signed value of a bigint is: 9,223,372,036,854,775,807

    Even if you're inserting into a table and deleting 'old' rows from that same table, so that storage space isn't the issue… you could insert an average of 10 rows per second (and delete 10 'old' rows per second) for 29,247,120,867 years.

    I think that should be enough for most people 🙂
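
For anyone who wants to verify the figures in the two comments above, here's a quick T-SQL sketch (the 365-day year matches the arithmetic in the second comment):

    -- Storage for every bigint value: 2^64 values at 8 bytes each, expressed in exabytes
    SELECT POWER(CAST(2 AS float), 64) * 8 / POWER(CAST(1024 AS float), 6) AS StorageEB;   -- 128

    -- Years to work through the positive bigint range at 10 inserts per second
    -- (365-day years, matching the second comment's figure)
    SELECT CAST(9223372036854775807 AS float) / 10 / 60 / 60 / 24 / 365 AS ApproxYears;    -- about 29,247,120,867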
