Binary for Assembly Language

Assembly Language is one layer above the raw machine code, which is in binary.

When learning Assembly, we will frequently run into situations where we need to visualize and perform basic calculations in binary. It can be very difficult to follow Assembly code without a decent practical understanding of binary.

In addition, one really helpful skill when learning Assembly is being able to quickly convert between binary and hexadecimal without looking at a cheat sheet! There’s actually an amazing game to help with this, called Flippy Bit and the Attack of the Hexadecimals from Base 16. We’ll see this in more detail in the next tutorial.

In this tutorial, we’ll cover the basics of the binary number system and what we need to know to start learning Assembly Language.

What is Binary?

Binary refers to a number that is represented in the base-2 numeral system. This means that a binary number is composed of only 0’s and 1’s. Each digit in a binary number is a ‘bit’, which is short for ‘binary digit’.

The following examples show the relationship between binary numbers and bits:

10 => 2 bits
101 => 3 bits
1010 => 4 bits

As a binary number gets longer, the number of bits increases.
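As a quick illustration (using Python here, since it makes binary easy to inspect), the built-in bin() function shows a number’s binary form and int.bit_length() counts the bits needed to represent it:

```python
# bin() shows the binary form of a number; int.bit_length()
# counts the bits needed to represent it.
for n in [2, 5, 10]:
    print(bin(n), n.bit_length())
# -> 0b10 2
# -> 0b101 3
# -> 0b1010 4
```

These match the three examples above: 10, 101, and 1010 take 2, 3, and 4 bits respectively.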

Counting in Binary

We are used to the decimal number system, which is base-10. This means that the numbers that we use for each digit are 0-9. In contrast, binary only uses 1’s and 0’s:

Decimal Digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
Binary Digits: 0, 1

Counting fundamentally works the same in binary as in decimal, but it can take some getting used to. When we count in base-10, we add a new digit to count over 9:

9 + 1 = 10

The number ’10’ has two digits (not just one). In decimal, we need to add digits so that we can count to numbers greater than 9.

In binary, we only have two possible values for each digit: 0 and 1. This means that we need to add a second digit in order to count over 1:

1 + 1 = 10

Note the parallel between these two examples: in decimal, 9 + 1 = 10 but in binary 1 + 1 = 10. Remember that this second number is in binary; it’s equal to ‘two’, not ‘ten’.

Since this can get confusing, binary numbers are sometimes prefixed by ‘0b’. This allows us to differentiate between decimal and binary numbers:

10 = ten
0b10 = two

In general, we’ll use the ‘0b’ prefix only when it would otherwise be confusing.
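Many programming languages, including Python, accept this same ‘0b’ prefix for binary literals, so we can check the equivalence directly:

```python
# The 0b prefix marks a binary literal; without it, the same
# digits are read as decimal.
a = 10      # decimal ten
b = 0b10    # binary two
print(a, b)  # -> 10 2
```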

Practice Counting in Binary

The first number in binary is 0 (equal to ‘0’ in decimal):

0

The second number in binary is 1 (equal to ‘1’ in decimal):

1

To get to the third number in binary (equal to ‘2’ in decimal), we add a new digit and reset the lower digit to zero:

10

We can increment this by adding another 1 (equal to ‘3’ in decimal):

11

When we want to increment this again (to ‘4’ in decimal), we need to add another digit and zero out the less-significant bits:

100

It’s a good idea to practice counting up and down in binary if you’ve never done so before. The following table shows the decimal and binary values up to decimal ’15’, which corresponds to 0b1111. The binary values are shown using four bits because this is a very common format that is helpful to get used to when learning Assembly Language.

Decimal   Binary
0         0000
1         0001
2         0010
3         0011
4         0100
5         0101
6         0110
7         0111
8         1000
9         1001
10        1010
11        1011
12        1100
13        1101
14        1110
15        1111

Make sure that you understand how this works before proceeding.
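If you’d like to reproduce this table yourself, a short Python snippet can format each value from 0 through 15 as a zero-padded 4-bit string:

```python
# Print decimal values 0..15 alongside their zero-padded
# 4-bit binary representations.
for n in range(16):
    print(f"{n:2d}  {n:04b}")
```

The ‘04b’ format specifier means “binary, padded with zeros to four digits”, which produces exactly the 4-bit column shown above.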

Bits vs. Bytes

In computing, one byte is equal to eight bits. This means that one byte can hold a value ranging from 0b00000000 = 0 to 0b11111111 = 255.
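The relationship between bit count and value range can be checked directly: n bits give 2**n distinct values, from 0 up to 2**n - 1. A quick Python sketch:

```python
# n bits can represent 2**n distinct values: 0 through 2**n - 1.
for bits in (4, 8):
    print(f"{bits} bits: {2 ** bits} values, max {2 ** bits - 1}")
# -> 4 bits: 16 values, max 15
# -> 8 bits: 256 values, max 255
```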

In the above table, we saw binary numbers represented in groups of four bits, i.e. ‘0000’ to ‘1111’. This is a helpful representation because each 4-bit group can be written as a single hexadecimal digit (hex digits range from a decimal value of ‘0’ to ’15’). Hexadecimal is used because two hex digits can cleanly represent a single byte.
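To see how a byte splits into two 4-bit groups, we can mask and shift in Python (the value 0b10101111 here is just an arbitrary example):

```python
# Split a byte into its upper and lower 4-bit groups (nibbles);
# each nibble maps to exactly one hex digit.
value = 0b10101111             # 0xAF in hex
high = (value >> 4) & 0b1111   # upper nibble: 0b1010 = 0xA
low = value & 0b1111           # lower nibble: 0b1111 = 0xF
print(hex(value), hex(high), hex(low))  # -> 0xaf 0xa 0xf
```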

Don’t worry if this is confusing at this point, as we will see more on this in the hexadecimal tutorial.

Why Computers Use Binary

The reason that computers use binary dates back well over a century. Boolean logic is a method of performing logic using only the values ‘true’ and ‘false’; it was formalized in the mid-19th century by the mathematician George Boole and refined by others in the decades that followed.

The first computers were relay-based, and Boolean logic was a natural fit for them. A relay can be in either the ‘on’ or ‘off’ position, which means that relay-based computers could use Boolean logic to perform complex mathematical and logical operations. Vacuum tubes replaced relays in computers, and eventually transistors (specifically MOSFETs) replaced tubes.

Modern computers use billions or even trillions of transistors, found in the CPU, GPU, and memory. Like relays, transistors act as a type of switch, so transistor-based computers can take advantage of the robustness of Boolean logic and build on the binary computers that came before.

A Closer Look at Why Computers Use Binary

Note: This section is absolutely not necessary to learn binary. It was included in this article as a point of interest and to make this tutorial as complete as possible.

In theory, we could build many types of computers at this point, since we’re no longer limited to relays. A transistor can pass a wide range of voltage levels, so we could in principle design schemes for decimal or even hexadecimal computers. There are many reasons why we’ve stuck with binary. Historical momentum, technological adoption, and the difficulty of transitioning to a new type of computer all play a role; the entire IT industry would need to adapt to entirely new skill sets. But one of the main reasons binary has stuck around is that it’s extremely robust.

If we think about what a transistor has to do, this becomes clear. In a binary computer, a transistor only has to be in a state that can easily be interpreted as ‘on’ or ‘off’. Modern logic chips run on just a few volts (typically 3.3V or 5V).

Consider a transistor running at 3.3V. In a perfect world, we might expect a ‘low’ signal (logical ‘0’) to be 0V and a ‘high’ signal (logical ‘1’) to be 3.3V. However, this isn’t achievable in practice. Every circuit component, including every transistor, is slightly different. Thermal (temperature) and capacitance effects also change the electrical properties of each component, and these changes aren’t felt equally by every component, either.

This means we can’t expect a digital ‘low’ (‘0’) to be exactly 0V, or a digital ‘high’ (‘1’) to be exactly 3.3V. To make digital systems work, we need tolerances: one voltage range that represents ‘low’, another that represents ‘high’, and a buffer range between them, a ‘no-man’s land’ of logic levels.

In a 3.3V CMOS chip, digital low or ‘0’ is typically represented by a voltage between 0.0V and 0.8V, while digital high or ‘1’ is between 2V and 3.3V.

Imagine trying to build a decimal-based computer out of the transistors we use today. We would need to tighten those ranges by a full order of magnitude, cramming values for all ten digits (0 through 9), plus buffer zones between them, into the range currently occupied by one low state, one high state, and the buffer between them. That just isn’t feasible with the manufacturing techniques we currently use, and it would be prone to large and frequent errors. Past efforts to do this have failed or proved too difficult to produce at scale. That’s why we’re stuck with binary, at least for the time being.