Can `unsigned long int` hold a ten-digit number (1,000,000,000 - 9,999,999,999) on a 32-bit computer?

Additionally, what are the ranges of `unsigned long int`, `long int`, `unsigned int`, `short int`, `short unsigned int`, and `int`?

The *minimum* ranges you can rely on are:

- `short int` and `int`: -32,767 to 32,767
- `unsigned short int` and `unsigned int`: 0 to 65,535
- `long int`: -2,147,483,647 to 2,147,483,647
- `unsigned long int`: 0 to 4,294,967,295

This means that no, `long int` **cannot** be relied upon to store any 10-digit number. However, a larger type, `long long int`, was introduced to C in C99 and to C++ in C++11 (this type is also often supported as an extension by compilers built for older standards that did not include it). The minimum range for this type, if your compiler supports it, is:

- `long long int`: -9,223,372,036,854,775,807 to 9,223,372,036,854,775,807
- `unsigned long long int`: 0 to 18,446,744,073,709,551,615

So that type will be big enough (again, *if* you have it available).

A note for those who believe I've made a mistake with these lower bounds - I haven't. The C requirements for the ranges are written to allow for ones' complement or sign-magnitude integer representations, where the lowest representable value and the highest representable value differ only in sign. It is also allowed to have a two's complement representation where the value with sign bit 1 and all value bits 0 is a *trap representation* rather than a legal value. In other words, `int` is *not* required to be able to represent the value -32,768.

The size of the numerical types is not defined in the C++ standard, although the minimum sizes are. The way to tell what size they are on your platform is to use `std::numeric_limits`.

For example, the maximum value for an `int` can be found by:

```
#include <limits>  // for std::numeric_limits

std::numeric_limits<int>::max();
```

Computers don't work in base 10, which means that the maximum value will be of the form 2^{n} - 1 because of how numbers are represented in memory. Take, for example, eight bits (1 byte):

```
0100 1000
```

The rightmost bit, when set to 1, represents 2^{0}; the next bit 2^{1}, then 2^{2}, and so on, until we reach the leftmost bit, which, if the number is unsigned, represents 2^{7}.

So the number represents 2^{6} + 2^{3} = 64 + 8 = 72, because the 4th bit from the right and the 7th bit from the right are set.

If we set all values to 1:

```
11111111
```

The number is now (assuming **unsigned**):

128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = 255 = 2^{8} - 1

And as we can see, that is the largest possible value that can be represented with 8 bits.

On my machine, an `int` and a `long` are the same size, each able to hold values from -2^{31} to 2^{31} - 1. In my experience, this is the most common arrangement on modern 32-bit desktop machines.

Licensed under: CC-BY-SA with attribution

Not affiliated with: Stack Overflow