The reason is largely that C is portable to a much wider variety of platforms. There are many reasons why the different data types have turned out to be the various sizes they are on various platforms, but at least historically, `int` has been adapted to be the platform's native word size. On the PDP-11 it was 16 bits (and `long` was originally invented for 32-bit numbers), while some embedded platform compilers even have 8-bit `int`s. When 32-bit platforms came around and started having 32-bit `int`s, `short` was invented to represent 16-bit numbers.
Nowadays, most 64-bit architectures use 32-bit `int`s simply to be compatible with the large base of C programs that were originally written for 32-bit platforms, but there have been 64-bit C compilers with 64-bit `int`s as well, not least among them some early Cray platforms.
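If you want to see which choices your own compiler made, a quick `sizeof` dump is enough. A minimal sketch, assuming a C99-or-later hosted compiler (needed for the `%zu` format):

```c
#include <stdio.h>

/* Ask the compiler what it actually chose for each integer type.
 * A 64-bit Linux/x86-64 build typically prints 2/4/8/8, but the
 * standard only guarantees minimum ranges, not exact sizes. */
int main(void)
{
    printf("sizeof(short)     = %zu\n", sizeof(short));
    printf("sizeof(int)       = %zu\n", sizeof(int));
    printf("sizeof(long)      = %zu\n", sizeof(long));
    printf("sizeof(long long) = %zu\n", sizeof(long long));
    return 0;
}
```

When a program genuinely needs an exact width, the C99 `<stdint.h>` types such as `int32_t` are the portable way to ask for one, rather than assuming anything about `int`.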
Also, in the earlier days of computing, floating-point formats and sizes were generally far less standardized (IEEE 754 didn't come around until 1985), which is why `float`s and `double`s are even less well-defined than the integer data types. The language doesn't even presume the presence of such peculiarities as infinities, NaNs or signed zeroes.
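That said, a C99-or-later implementation does tell you what it actually provides: `__STDC_IEC_559__` is predefined when the implementation claims IEC 60559 (IEEE 754) arithmetic, and `<float.h>` describes the format parameters. A minimal sketch:

```c
#include <stdio.h>
#include <float.h>

/* Inspect the floating-point format the implementation actually uses.
 * FLT_MANT_DIG/DBL_MANT_DIG come out as 24/53 on IEEE 754 systems,
 * but the standard only mandates minimum precision and range. */
int main(void)
{
#ifdef __STDC_IEC_559__
    puts("Implementation claims IEC 60559 (IEEE 754) arithmetic");
#else
    puts("No IEC 60559 conformance claimed");
#endif
    printf("FLT_RADIX    = %d\n", FLT_RADIX);
    printf("FLT_MANT_DIG = %d\n", FLT_MANT_DIG);
    printf("DBL_MANT_DIG = %d\n", DBL_MANT_DIG);
    printf("DBL_MAX_EXP  = %d\n", DBL_MAX_EXP);
    return 0;
}
```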
Furthermore, it should perhaps be said that a `char` is not defined to be an 8-bit byte, but rather to be whatever `sizeof` returns 1 for, which is not necessarily 8 bits. (For completeness, it should perhaps be added here, also, that "byte" as a term is not universally defined to be 8 bits; there have been many historical definitions of it, and in the context of the ANSI C standard, a "byte" is actually defined to be the smallest unit of storage that can store a `char`, whatever the nature of `char`.)
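The bits-per-byte count is exposed as `CHAR_BIT` in `<limits.h>` (required to be at least 8), so a program can check it rather than assume 8. A small sketch, again assuming a C99 hosted compiler for `%zu`:

```c
#include <stdio.h>
#include <limits.h>

/* sizeof(char) is 1 by definition; CHAR_BIT says how many bits that
 * "byte" actually contains.  It is 8 on mainstream hardware, but the
 * standard only requires at least 8 (some DSPs use 16 or 32). */
int main(void)
{
    printf("sizeof(char) = %zu\n", sizeof(char));  /* always prints 1 */
    printf("CHAR_BIT     = %d\n", CHAR_BIT);
    return 0;
}
```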
There are also architectures such as the 36-bit PDP-10 and the 18-bit PDP-7 that have run C programs. They may be quite rare these days, but they do help explain why C data types are not defined in terms of 8-bit units.
Whether this, in the end, really makes the language "more portable" than languages like Java can perhaps be debated, but it would sure be suboptimal to run Java programs on 16-bit processors, and quite weird indeed on 36-bit processors. It is perhaps fair to say that it makes the language more portable, but programs written in it less portable.
EDIT: In reply to some of the comments, I just want to append, as an opinion piece, that C as a language is unlike languages like Java/Haskell/Ada that are more-or-less "owned" by a corporation or standards body. There is ANSI C, sure, but C is more than ANSI C; it's a living community, and there are many implementations that aren't ANSI-compatible but are "C" nevertheless. Arguing whether implementations that use 8-bit `int`s are C is similar to arguing whether Scots is English: it's mostly pointless. They use 8-bit `int`s for good reasons, no one who knows C well enough would be unable to reason about programs written for such compilers, and anyone who writes C programs for such architectures would want their `int`s to be 8 bits.