In the area of computers and computer programming, everything comes down to bits and bytes. But what does that mean? What is a bit anyway? Read on to find out more.
When people first thought about constructing machines that could perform mathematical calculations, most early attempts used the same numeric system that humans use for counting and calculating: the decimal system.
With ten different values (from zero to nine) for each position of a number, and a simple position-based value calculation, the decimal system is easy for most people to grasp. The values are easy to count with the help of both hands.
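To make that position-based calculation concrete, here is a minimal sketch in Python (the variable names are chosen purely for illustration) that rebuilds the decimal number 345 from its individual digits:

```python
# Rebuild the decimal value 345 from its digits using positional weights.
digits = [3, 4, 5]               # hundreds, tens, ones

value = 0
for digit in digits:
    value = value * 10 + digit   # shift earlier digits one position left, then add the next digit

print(value)                     # 345 = 3*100 + 4*10 + 5*1
```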
However, a technical implementation of this numeric system and of the necessary operations for adding, subtracting, multiplying and dividing was (and still is) difficult. The technical limitations of the time only added to the problem.
Thankfully, other people came up with a solution: instead of using the decimal system, they started using the binary system. The binary numeric system knows only two possible values for each position within a number: zero and one. Binary values can be translated into decimal values and vice versa.
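As a rough sketch of that translation, the following Python snippet (the function names are just illustrative; the work is done by the built-in int() and format() helpers) converts a binary number into its decimal value and back:

```python
# Convert between strings of binary digits and decimal integers.

def binary_to_decimal(bits: str) -> int:
    """Interpret a string of ones and zeros as a binary number."""
    return int(bits, 2)

def decimal_to_binary(value: int) -> str:
    """Render a non-negative integer as a string of ones and zeros."""
    return format(value, "b")

print(binary_to_decimal("1011"))   # 11, because 1*8 + 0*4 + 1*2 + 1*1
print(decimal_to_binary(11))       # '1011'
```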
In addition, a technical computing system only needs to be able to distinguish between two different states in order to read and compute binary numbers.
With the rise of electricity and electric components, binary values became easy to implement as part of a machine. Over time, a standard emerged: when a single electric line (or electric storage such as a capacitor) did not carry an electric current (with a voltage of zero), its binary value was interpreted as zero. When the same line did carry a current, with a voltage above zero, its binary value was interpreted as one.
In modern computers and electronic circuits, the voltage for the value of one is typically 5 volts (desktop PCs etc.) or 3.3 volts (notebooks etc.). On some occasions, different voltages are used (especially in low-power mobile equipment), but even then the difference between the voltages for zero and one is large enough to be usable by the electronic circuits.
An interesting side effect of this kind of technical implementation is that bit values can be used to control the behaviour of a part of an electronic circuit (switching something on or off) in addition to being used for information storage (in the form of binary numbers).
Scientists and engineers are also working on different technical implementations of the same basic binary principles. Some of them are working on components that use light and optics for the circuitry instead of electrical current and electronic components. Others are looking into creating similar functionality by using specially built molecules as logic components.
However, each of these endeavors keeps using the binary system, and therefore bits.
This finally brings us to the bit. The bit is a single binary value of one or zero which is stored in an electronic memory chip or a similar electric circuit.
By using a successive chain of bits, any numerical value can be stored or computed.
A group of eight bits is called a byte.
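To make this concrete, here is a small illustrative sketch in Python (the hardware does this in circuitry, of course) showing how a chain of eight bits forms one byte and which values such a byte can hold:

```python
# Eight bits form one byte; each position carries a power of two.
bits = [1, 0, 1, 1, 0, 0, 1, 0]   # most significant bit first

value = 0
for bit in bits:
    value = (value << 1) | bit    # shift left by one position, then append the next bit

print(value)          # 178
print(2 ** 8 - 1)     # 255, the largest value a single byte can hold
```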
When computers based on the binary numeric system were developed in the 20th century, together with the first integrated circuits, 8-bit CPUs became the first standard. In an 8-bit CPU, the central processing unit could read and write 8 bits with their binary values at once. Inside the CPU, the register memory and the computing circuitry could usually handle more than 8 bits, because computations such as adding or multiplying large values can produce results too large to be stored in 8 bits.
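A small Python sketch can illustrate why the internal circuitry needs more room than 8 bits (the masking at the end only stands in for what a real 8-bit register would do):

```python
# Two values that each fit into 8 bits (0 to 255) ...
a, b = 200, 180

total = a + b          # 380: no longer fits into 8 bits
product = a * b        # 36000: needs even more bits

print(total > 255, product > 255)   # True True

# An 8-bit register could keep only the lowest 8 bits of the sum.
print(total & 0xFF)                 # 124, the truncated 8-bit value
```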
Today, most CPUs can read and write 32- or 64-bit values at once, and this will likely be expanded in future CPU types. Even so, CPUs still have functionality to read, write and otherwise manipulate single bits inside a byte. And even huge values measured in mega-, giga- or terabytes are made up of long chains of bits.
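The same kind of single-bit access can be sketched in software; the following Python example (function names chosen purely for illustration) reads, sets and clears individual bits inside one byte using shifts and masks:

```python
# Manipulate single bits inside one byte using shifts and bit masks.
byte = 0b01001010                # start value: 74

def get_bit(value: int, position: int) -> int:
    """Return the bit (0 or 1) at the given position, counted from the right."""
    return (value >> position) & 1

def set_bit(value: int, position: int) -> int:
    """Return a copy of value with the bit at the given position switched on."""
    return value | (1 << position)

def clear_bit(value: int, position: int) -> int:
    """Return a copy of value with the bit at the given position switched off."""
    return value & ~(1 << position)

print(get_bit(byte, 1))                   # 1
print(format(set_bit(byte, 0), "08b"))    # '01001011'
print(format(clear_bit(byte, 3), "08b"))  # '01000010'
```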
So the bit is the most basic, atomic unit of information storage in technical systems.