The international standard
IEC 60027-2, chapter 3.8.2, states that a byte is an octet of bits. However, the unit
byte has historically been
platform-dependent and has represented various storage sizes in the
history of computing. Due to the influence of several major
computer architectures and product lines, the byte became overwhelmingly associated with eight bits. This meaning of
byte is codified in such standards as
ISO/IEC 80000-13. While
byte and
octet are often used synonymously, those working with certain
legacy systems are careful to avoid ambiguity. Octets can be represented using number systems of varying bases such as the
hexadecimal,
decimal, or
octal number systems. The binary value of all eight bits set (or activated) is 11111111₂, equal to the hexadecimal value FF₁₆, the decimal value 255₁₀, and the octal value 377₈. One octet can be used to represent decimal values ranging from 0 to 255. The term
octet (symbol: o) is often used when the use of
byte might be ambiguous. It is frequently used in the
Request for Comments (RFC) publications of the
Internet Engineering Task Force to describe storage sizes of
network protocol parameters. The earliest such usage in the RFC series dates from 1974. In 2000,
Bob Bemer claimed to have proposed the term octet for "8-bit bytes" earlier, when he headed software operations for
Cie. Bull in France from 1965 to 1966. In
France,
French Canada and
Romania,
octet is used in common language instead of
byte when the eight-bit sense is required; for example, a megabyte (MB) is termed a megaoctet (Mo). A variable-length sequence of octets, as in
Abstract Syntax Notation One (ASN.1), is referred to as an octet string.
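The base representations and the octet-string notion above can be illustrated in Python (an illustrative aside only; Python's integer literals and built-in bytes type stand in for the notations discussed, and the sample octet values are arbitrary):

```python
# An octet with all eight bits set, written in the bases discussed above.
all_set = 0b11111111      # binary
assert all_set == 0xFF    # hexadecimal
assert all_set == 255     # decimal
assert all_set == 0o377   # octal

# One octet represents decimal values ranging from 0 to 255.
assert 2 ** 8 - 1 == 255

# A variable-length sequence of octets: Python's bytes type is a close
# analogue of an ASN.1 octet string. The values here spell "Octet".
octet_string = bytes([0x4F, 0x63, 0x74, 0x65, 0x74])
assert octet_string == b"Octet"
assert all(0 <= o <= 255 for o in octet_string)  # each element is one octet
```

Each element of a bytes object is constrained to the 0–255 range, mirroring the definition of an octet.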
==Octad==
Historically, in
Western Europe, the term
octad (or
octade) was used to specifically denote eight bits, a usage no longer common. Early examples of usage exist in British, Dutch and German sources of the 1960s and 1970s, and throughout the documentation of
Philips mainframe computers. Similar terms are
triad for a grouping of three bits and
decade for ten bits.

==Unit multiples==