Modern computers use the binary system, which represents information as sequences of 0s and 1s. Binary is based on powers of 2, whereas our familiar decimal system is based on powers of 10: in binary, a new number place is added each time another power of 2 is reached (2, 4, 8, and so on), while in decimal a new place is added at each power of 10 (10, 100, 1000, and so on).
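To make the place-value idea concrete, here is a minimal sketch in Python that expands the binary numeral 1101 digit by digit, weighting each digit by a power of 2 just as decimal digits are weighted by powers of 10. The numeral 1101 is only an illustrative example, not one drawn from the text above.

```python
# Expand the binary numeral 1101 by place value.
# Each digit is weighted by a power of 2 (1, 2, 4, 8, ...),
# just as decimal digits are weighted by powers of 10.
digits = [1, 1, 0, 1]  # binary 1101, most significant digit first

# Reverse so the rightmost digit pairs with 2**0, the next with 2**1, etc.
value = sum(d * 2**p for p, d in enumerate(reversed(digits)))

print(value)  # 8 + 4 + 0 + 1 = 13
```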
Computers use this simple number system primarily because binary information is easy to store. A computer's CPU (Central Processing Unit) and memory are made up of millions of tiny "switches" that are either off or on; the symbols 0 and 1 stand for those two states, respectively, and the computer's calculations and programs are built out of them. Two digits are also simple for the hardware to work with mathematically. When a person enters a calculation in decimal form, the computer converts it to binary, solves it, and then translates the answer back to decimal form. The correspondence between the two systems is easy to see in the following table:
| Decimal | Binary |
|---------|--------|
| 0       | 0      |
| 1       | 1      |
| 2       | 10     |
| 3       | 11     |
| 4       | 100    |
| 5       | 101    |
| 6       | 110    |
| 7       | 111    |
| 8       | 1000   |
| 9       | 1001   |
| 10      | 1010   |
| 11      | 1011   |
| 12      | 1100   |
| 13      | 1101   |
| 14      | 1110   |
| 15      | 1111   |
| 16      | 10000  |
| 17      | 10001  |
| 18      | 10010  |
| 19      | 10011  |
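The decimal-to-binary conversion described above can be sketched in a few lines of Python. The repeated-division routine below is a standard textbook method rather than anything specified in the text; the second function reverses it, and the loop at the end reproduces the table for 0 through 19 as a check.

```python
def to_binary(n: int) -> str:
    """Convert a non-negative decimal integer to a binary string
    by repeatedly dividing by 2 and collecting the remainders."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # remainder is the next binary digit
        n //= 2
    return "".join(reversed(bits))


def to_decimal(b: str) -> int:
    """Convert a binary string back to a decimal integer,
    accumulating one place value at a time."""
    value = 0
    for digit in b:
        value = value * 2 + int(digit)
    return value


# Reproduce the table above for 0 through 19 and confirm the round trip.
for n in range(20):
    assert to_decimal(to_binary(n)) == n
    print(f"{n:7d} | {to_binary(n)}")
```

Running the loop prints the same decimal-binary pairs as the table, which is one way to see that the two systems carry identical information in different notations.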