Why is there (practically) no 6-byte integer in common usage?

orokusaki

In Postgres, it used to be quite common to use a 4-byte integer auto field for primary keys, until it became somewhat common to run into the 2147483647 limit of 4-byte integers. Now, it's become somewhat common to start with 8-byte integer primary keys, just to be safe, because the small additional cost helps avoid potentially major issues down the road. But who will ever have a database with 9223372036854775807 (9 billion billion) rows?

This made me wonder why 6-byte integers aren't really a thing. A signed 6-byte integer could hold something like 140 trillion positive values (right?), and would be highly practical for uses like this (database primary keys).
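As a quick sanity check on those figures (my own back-of-the-envelope arithmetic, not part of the original question): the largest positive value of a signed n-bit integer is 2^(n-1) - 1, which for 48 bits is roughly 140.7 trillion.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* Largest positive value of a signed 32-, 48-, and 64-bit integer. */
        printf("32-bit: %lld\n", (1LL << 31) - 1);      /* 2147483647 */
        printf("48-bit: %lld\n", (1LL << 47) - 1);      /* 140737488355327, ~140.7 trillion */
        printf("64-bit: %lld\n", (long long)INT64_MAX); /* 9223372036854775807 */
        return 0;
    }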

I found this old post, dating back to 2004, asking about 6-byte integers in C. I also found a number of questions on StackOverflow about the technical specifics of 6-byte integers, but I couldn't find anything well documented or standardized that either A) suggested that 6-byte integers are a common thing, B) suggested they should or might become a thing, or C) explained why they can't, for some technical reason, become a thing.

Is there any reason why it wouldn't be a good idea, from a performance, practicality, etc. standpoint, to have 6-byte integers? Feel free to think about this question in the limited context of C and/or C++, if that helps avoid being too abstract.
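To make that concrete (this is my own illustration, not something taken from a standard): compilers such as GCC and Clang already let you approximate a 6-byte integer with a 48-bit bit-field or by byte-packing, but neither gives you a genuinely 6-byte, natively supported type.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* A 48-bit bit-field (implementation-defined for int64_t, but accepted
       by GCC/Clang). sizeof(struct int48) is typically padded to 8 bytes
       anyway, which hints at why a native 6-byte type never caught on. */
    struct int48 { int64_t v : 48; };

    /* Loading a byte-packed 48-bit value into a 64-bit register with sign
       extension; assumes a little-endian host. */
    static int64_t load_i48(const unsigned char *p) {
        uint64_t u = 0;
        memcpy(&u, p, 6);
        return (int64_t)(u << 16) >> 16;  /* sign-extend bit 47 downward */
    }

    int main(void) {
        struct int48 a = { .v = 140737488355327LL };  /* 2^47 - 1 */
        unsigned char packed[6];
        memcpy(packed, &a, 6);  /* relies on little-endian bit-field layout */
        printf("%lld %lld %zu\n",
               (long long)a.v, (long long)load_i48(packed), sizeof(struct int48));
        return 0;
    }

Both tricks trade extra shifting/copying and alignment awkwardness for the 2 bytes saved, which is roughly the practicality question I'm asking about.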

My gut tells me that widespread adoption of 6-byte integers would yield a big performance boost / RAM savings for a lot of projects, especially in the ML/AI space, as well as some small performance / RAM / space savings in the database world.
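For scale (again my own arithmetic, not from the original question): a 6-byte key is a flat 25% smaller than an 8-byte one, so a billion raw key values would take about 6 GB instead of 8 GB, before per-row and index overhead dilutes the saving.

    #include <stdio.h>

    int main(void) {
        /* Rough storage cost of one billion keys at 8 vs. 6 bytes each. */
        const double n = 1e9;
        printf("64-bit keys: %.1f GB\n", n * 8 / 1e9);  /* 8.0 GB */
        printf("48-bit keys: %.1f GB\n", n * 6 / 1e9);  /* 6.0 GB */
        return 0;
    }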
