This answer builds on top of Ralf Kleberhoff's answer, and tries to address C specifically.
First, read about Arbitrary-precision arithmetic on Wikipedia. To put it simply, it is possible:
- For a reusable software library to provide fixed-size bignums (such as 64-bit, 128-bit, or 256-bit integers) as well as arbitrary-size bignums, where the amount of memory needed to store a value is allocated dynamically, based on how large the value produced by a calculation or a copy turns out to be. (A sketch using one such library follows this list.)
- For a programming language to provide the same, integrated into the language syntax itself, so that the "+", "-", "*", and "/" operators work with bignums as well.
- For a programming language to provide bignums through a standardized "standard library" that comes with the language. This standard library is sometimes called the "runtime library" of the programming language.
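As a concrete example of the first option, here is a minimal sketch using GMP (the GNU Multiple Precision Arithmetic library), a widely used third-party bignum library for C. GMP is not part of the C standard; this assumes GMP is installed and the program is linked with `-lgmp`.

```c
#include <stdio.h>
#include <gmp.h>   /* third-party library; link with -lgmp */

int main(void)
{
    mpz_t a, b, result;

    mpz_init_set_ui(a, 2);      /* a = 2 */
    mpz_init(b);
    mpz_init(result);

    mpz_pow_ui(b, a, 200);      /* b = 2^200, far beyond any fixed-width C integer */
    mpz_add_ui(result, b, 1);   /* result = b + 1 */

    gmp_printf("2^200 + 1 = %Zd\n", result);

    mpz_clear(a);
    mpz_clear(b);
    mpz_clear(result);
    return 0;
}
```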
For many modern programming languages, the typical choice is the third one: to provide bignum facilities in the standard library that comes with the language.
Some other (not-so-modern, but highly advanced and ahead-of-their-time) programming languages implemented the second choice, in order to eliminate integer overflow as a source of unexpected errors or exceptions.
However, for C, two mindsets work against the second and third choice:
- The C language tries to keep the language itself to a minimum.
- The C language also tries to keep its standard library to a minimum.
Thus, the committee that controls C has declined to ratify (standardize) a bignum library for the C programming language. Users of C must therefore look for a third-party bignum library, or else implement their own.
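If you roll your own, a fixed-size bignum can be as simple as a struct of machine-word "limbs". The sketch below is a minimal, hypothetical 128-bit unsigned addition; the names `u128` and `u128_add` are invented for illustration, not taken from any standard.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical fixed-size 128-bit unsigned integer built from two 64-bit limbs. */
typedef struct {
    uint64_t lo;
    uint64_t hi;
} u128;

/* Add two 128-bit values; wraps around on overflow, like unsigned arithmetic in C. */
static u128 u128_add(u128 a, u128 b)
{
    u128 r;
    r.lo = a.lo + b.lo;
    r.hi = a.hi + b.hi + (r.lo < a.lo);   /* carry out of the low limb */
    return r;
}

int main(void)
{
    u128 x = { UINT64_MAX, 0 };           /* 2^64 - 1 */
    u128 y = { 1, 0 };
    u128 z = u128_add(x, y);              /* 2^64: no longer fits in 64 bits */
    printf("hi=%llu lo=%llu\n", (unsigned long long)z.hi, (unsigned long long)z.lo);
    return 0;
}
```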
The C language leaves out a lot of things. For example, most programming languages ship a "Deflate" (compression/decompression) implementation or an "MD5" checksum implementation in their standard libraries; C doesn't.
Why is there a distinction between "fixed size bignums" versus "dynamically sized bignums"?
This is because fixed-size bignums (where the size is chosen by the programmer and written into the source code) allow the memory size to be determined at compile time. The compiled machine code is both simpler and faster: it takes fewer instructions to perform each operation. Dynamically sized bignums require more machine instructions for each operation.
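To make the contrast concrete, here is a hypothetical dynamically sized addition over `n` limbs (the name `bignum_add_inplace` is invented for this sketch). Unlike the fixed-size version above, the limb count is only known at run time, so every addition pays for a loop, and a real library would also have to reallocate when the result needs an extra limb.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Adds b into a (both n limbs long, least significant limb first);
 * returns the final carry out of the most significant limb. */
static uint64_t bignum_add_inplace(uint64_t *a, const uint64_t *b, size_t n)
{
    uint64_t carry = 0;
    for (size_t i = 0; i < n; i++) {
        uint64_t sum = a[i] + carry;
        carry = (sum < carry);            /* carry from adding the previous carry */
        sum += b[i];
        carry += (sum < b[i]);            /* carry from adding b[i] */
        a[i] = sum;
    }
    return carry;
}

int main(void)
{
    /* 192-bit value: three 64-bit limbs, least significant first. a = 2^128 - 1, b = 1. */
    uint64_t a[3] = { UINT64_MAX, UINT64_MAX, 0 };
    uint64_t b[3] = { 1, 0, 0 };
    uint64_t carry = bignum_add_inplace(a, b, 3);
    printf("limbs: %llu %llu %llu (carry %llu)\n",
           (unsigned long long)a[0], (unsigned long long)a[1],
           (unsigned long long)a[2], (unsigned long long)carry);
    return 0;
}
```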
`int` has a minimum required maximum value of 2^15 - 1 (that is, INT_MAX must be at least 32767), and it might fit into as little as one solitary memory slot, also known as a byte. Some implementations have higher limits and/or different-sized bytes.
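You can ask your own implementation what it actually provides, using the macros from `<limits.h>`:

```c
#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* These values vary by implementation; the standard only guarantees
     * INT_MAX >= 32767 and CHAR_BIT >= 8. */
    printf("CHAR_BIT    = %d\n", CHAR_BIT);
    printf("sizeof(int) = %zu byte(s)\n", sizeof(int));
    printf("INT_MAX     = %d\n", INT_MAX);
    return 0;
}
```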