# Bit

**Multiples of bits**

| Decimal prefixes (SI) | | | Binary prefixes (IEC 60027-2) | | |
|---|---|---|---|---|---|
| **Name** | **Symbol** | **Multiple** | **Name** | **Symbol** | **Multiple** |
| kilobit | kbit | 10³ | kibibit | Kibit | 2¹⁰ |
| megabit | Mbit | 10⁶ | mebibit | Mibit | 2²⁰ |
| gigabit | Gbit | 10⁹ | gibibit | Gibit | 2³⁰ |
| terabit | Tbit | 10¹² | tebibit | Tibit | 2⁴⁰ |
| petabit | Pbit | 10¹⁵ | pebibit | Pibit | 2⁵⁰ |
| exabit | Ebit | 10¹⁸ | exbibit | Eibit | 2⁶⁰ |
| zettabit | Zbit | 10²¹ | zebibit | Zibit | 2⁷⁰ |
| yottabit | Ybit | 10²⁴ | yobibit | Yibit | 2⁸⁰ |

A bit is a digit in the binary numeral system (base 2). For example, the number 1001011 is 7 bits long. Binary digits are almost always used as the basic unit of information storage and communication in digital computing and digital information theory. Information theory also often uses the natural unit of information, called either a nit or a nat. Quantum computing uses qubits: quantum bits that can exist in a superposition of the two values 0 and 1.
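The binary representation mentioned above can be checked directly in Python, whose built-in `bin` function writes an integer in base 2:

```python
# The decimal number 75 written in binary is 1001011: seven binary digits (bits).
n = 75
bits = bin(n)[2:]      # strip Python's "0b" prefix
print(bits)            # -> 1001011
print(len(bits))       # -> 7, i.e. the number is 7 bits long
```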

The bit is also a unit of measurement, the information capacity of one binary digit. It has the symbol bit, and less formally b (see discussion below). The unit is also known as the shannon, with symbol Sh.

## Binary digit

Claude E. Shannon first used the word bit in a 1948 paper. Shannon's bit is a portmanteau word for binary digit (or possibly binary unit). He attributed its origin to John W. Tukey.

A bit is like a light switch; it can be either on or off. A single bit is a one or a zero, a true or a false, a "flag" which is "on" or "off", or in general, the quantity of information required to distinguish two mutually exclusive states from each other.

The bit is the smallest unit of storage currently used in computing.
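The light-switch metaphor above can be sketched in Python; the variable names here are illustrative, not from the text, and a bool simply stands in for the two states of a bit:

```python
# A single bit has exactly two mutually exclusive states, modelled here
# as a Python bool.
switch_on = True            # the "on" state
switch_on = not switch_on   # flipping the bit gives the "off" state
print(switch_on)            # -> False

# n bits can distinguish 2**n states; a single bit distinguishes two.
print(2 ** 1)               # -> 2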

## Unit

The bit, as a unit of information, is the amount of information carried by a choice between two equally likely outcomes. It is the capacity of one binary digit. One bit corresponds to about 0.693 nats (ln(2)), or 0.301 hartleys (log10(2)).
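The conversions stated above follow from the change-of-base rule for logarithms; a short sketch using the standard library:

```python
import math

# One bit of information equals ln(2) nats and log10(2) hartleys.
nats_per_bit = math.log(2)        # natural log
hartleys_per_bit = math.log10(2)  # base-10 log
print(round(nats_per_bit, 3))     # -> 0.693
print(round(hartleys_per_bit, 3)) # -> 0.301
```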

The name bit is mostly used when discussing data capacity, emphasising the storage of data as individual binary digits. The name shannon, referring to the same unit, is mostly used when discussing information content, emphasising aggregate information quantity.

There is a problem with unit symbols that affects the bit. It is common in computing to use the symbol b for bit and B for byte. However, both of these symbols already have other meanings: b for barn and B for bel. Furthermore, b is occasionally also used for byte. The IEC recommends using only bit for the bit and B for the byte. Since the bel is almost never used by itself (only as a decibel, dB), the chances of conflict are small.

## More than one bit

A byte is a collection of bits, originally variable in size but now almost always eight bits. Eight-bit bytes, also known as octets, can represent 256 values (2⁸ values, 0–255). A four-bit quantity is known as a nibble, and can represent 16 values (2⁴ values, 0–15).
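A minimal sketch of these value counts, together with a hypothetical example of splitting one byte into its two nibbles using shift and mask operations:

```python
# An 8-bit byte holds 2**8 = 256 values; a 4-bit nibble holds 2**4 = 16.
print(2 ** 8)        # -> 256
print(2 ** 4)        # -> 16

# Splitting a byte into high and low nibbles (illustrative value):
value = 0xAB         # 171, fits in one byte
high = value >> 4    # upper 4 bits -> 0xA (10)
low = value & 0x0F   # lower 4 bits -> 0xB (11)
print(high, low)     # -> 10 11
```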

"Word" is a term for a slightly larger group of bits, but it has no standard size. In the IA-32 architecture, 16 bits are called a "word" (with 32 bits being a "double word" or dword), but other architectures have word sizes of 32, 64 or others.

Terms for large quantities of bits can be formed using the standard range of prefixes, e.g., kilobit (kbit), megabit (Mbit) and gigabit (Gbit). Note that much confusion exists regarding these units and their abbreviations; see binary prefixes. As with the byte, the IEC recommends the symbol bit for the bit and B for the byte for maximum disambiguation.
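The difference between the decimal (SI) and binary (IEC) prefixes in the table above grows with each step; a quick sketch of the first two steps:

```python
# A kilobit is 10**3 bits, while a kibibit is 2**10 bits.
kilobit = 10 ** 3
kibibit = 2 ** 10
print(kilobit, kibibit)        # -> 1000 1024

# One step up, the gap is already much larger:
megabit = 10 ** 6
mebibit = 2 ** 20
print(mebibit - megabit)       # -> 48576 bits
```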

Certain bitwise computer processor instructions (such as xor) operate at the level of manipulating bits rather than manipulating data interpreted as an aggregate of bits.
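Python exposes the same kind of bitwise operations as operators on integers; each output bit depends only on the corresponding bits of the operands:

```python
a = 0b1100
b = 0b1010
print(bin(a ^ b))   # xor -> 0b110  (bits differ)
print(bin(a & b))   # and -> 0b1000 (bits both set)
print(bin(a | b))   # or  -> 0b1110 (either bit set)
```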

Telecommunications or computer network transfer rates are usually described in terms of bits per second (not to be confused with baud).
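A small worked example of a bit rate, using illustrative figures not taken from the text; note the factor of 8 when converting a size in bytes to bits:

```python
# Time to transfer a file at a given bit rate (hypothetical numbers).
rate_bits_per_s = 1_000_000         # a 1 Mbit/s link
file_bytes = 1_250_000              # file size in bytes
file_bits = file_bytes * 8          # -> 10,000,000 bits
print(file_bits / rate_bits_per_s)  # -> 10.0 seconds
```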