I always considered sizeof(char) = one byte = 8 bits.
However, reading the C99 standard (N1256.pdf), and especially the C99
rationale (C99RationaleV5.10.pdf), I see that the intent is to allow
for platforms where one byte != 8 bits.
"(Thus, for instance, on a machine with 36-bit words, a byte can be
defined to consist of 9, 12, 18, or 36 bits, these numbers being all
the exact divisors of 36 which are not less than 8.)"
So I read several sections of the C99 standard and the rationale, and if
you combine the standard with the rationale, the only way to satisfy all
the rules is to have one byte = 8 bits. Why, then, all these careful,
generic formulations to avoid defining one byte == 8 bits, when in fact
you can't have an implementation where one byte != 8 bits that conforms
to the standard and the rationale?
Section 3.7.1 says: "character single-byte character 〈C〉 bit
representation that fits in a byte", which is further strengthened by
the C99 Rationale V5.10: "A char, whether signed or unsigned, occupies
exactly one byte."
Thus there is no doubt that one character = one byte.
Section 3.6 defines byte: "NOTE 2 A byte is composed of a contiguous
sequence of bits, the number of which is implementation-defined. The
least significant bit is called the low-order bit; the most significant
bit is called the high-order bit."
Section 7.18.1.1 defines int8_t: "Thus, int8_t denotes a signed integer
type with a width of exactly 8 bits."
This quote from the C99 Rationale V5.10, "(Thus, for instance, on a
machine with 36-bit words, a byte can be defined to consist of 9, 12,
18, or 36 bits, these numbers being all the exact divisors of 36 which
are not less than 8.)", shows that the intent was to allow a definition
of byte that doesn't necessarily have 8 bits.
However, according to this quote, "These strictures codify the
widespread presumption that any object can be treated as an array of
characters, the size of which is given by the sizeof operator with that
object's type as its operand.", I should be able to treat any object
(thus including objects of type int8_t) as an array of characters.
This implies that there exists an integer N >= 1 such that
number_of_bits(char) * N = number_of_bits(int8_t). Given what we know
about char and int8_t, this means there exists an N >= 1 such that
number_of_bits(byte) * N = 8, which implies number_of_bits(byte) <= 8.
Now, according to the C99 Rationale V5.10, "All objects in C must be
representable as a contiguous sequence of bytes, each of which is at
least 8 bits wide.", so number_of_bits(byte) >= 8.
Thus number_of_bits(byte) = 8.
Am I right, or am I wrong?