What's the difference between 8 bit and 16 bit?

Well, let’s get the most obvious out of the way:

An 8-bit number is represented by 8 bits, whereas a 16-bit number is represented by 16 bits.

What this means, of course, can vary depending on what actual notation those values are taking, but the most common would be:

  • An unsigned 8-bit number has a range of 0–255, whereas an unsigned 16-bit number has a range of 0–65,535.
  • Likewise, when signed, the range of possible numbers grows with the number of bits: the 8-bit range is -128 to 127 and the 16-bit range is -32,768 to 32,767. This assumes two's complement notation, which is by far the most common way to represent signed numbers. There are others, but you'd really be hard-pressed to find systems that actually implement them, at least for general data storage and calculation. (The short sketch below illustrates these ranges.)
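A minimal Python sketch of those ranges (the helper name ranges is just for illustration):

    def ranges(n_bits):
        # Unsigned: 0 .. 2^n - 1; signed (two's complement): -2^(n-1) .. 2^(n-1) - 1
        unsigned = (0, 2**n_bits - 1)
        signed = (-(2**(n_bits - 1)), 2**(n_bits - 1) - 1)
        return unsigned, signed

    print(ranges(8))   # ((0, 255), (-128, 127))
    print(ranges(16))  # ((0, 65535), (-32768, 32767))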

Floating-point numbers usually don’t manifest until you reach 32-bit numbers, but you can also find 8- and 16-bit binary coded decimal numbers in the wild. Just remember, for every bit you add to the number, the possible number of values doubles.

A computer can only process binary numbers, which are made of 1s and 0s. One binary digit is called a bit. So if I have some data, say an integer with the value 10, it is 1010 in binary, and 1010 consists of 4 binary digits, or 4 bits.

Computers generally store data in bytes, i.e. 8 bits at a time. So if you want to store the integer value 10, you actually store 0000 1010, which is 8 bits.

Let's say you want to store the integer value 256. 256 in binary is 1 0000 0000, which is 9 bits, so your computer can no longer store the data in 1 byte. Since the computer stores data in bytes, you can store it using 2 bytes, or 16 bits, and your data will look like this: 0000 0001 0000 0000.
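A quick Python sketch of the same idea, using Python's built-in binary formatting and int.to_bytes purely for illustration:

    value = 10
    print(f"{value:08b}")            # 00001010 -> fits in one byte
    value = 256
    print(f"{value:016b}")           # 0000000100000000 -> needs two bytes
    print(value.to_bytes(2, "big"))  # b'\x01\x00'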

Some data is stored in different ways, like floating-point numbers, signed integers, characters, or even pictures. But all of it is stored in binary form inside the computer's memory or storage.

In terms of processing, we can say, in a simplified way, that an 8-bit processor can process 8 bits of data in one go, and a 16-bit processor can process 16 bits in one go, so generally a 16-bit processor should be faster than an 8-bit one.

8 bit refers to any number in binary from 0 to 255, written as 0000 0000 to 1111 1111 (those are 8 “bits”, or binary digits). 16 bit refers to numbers from 0 to 65,535.

When talking about sound, a sound is saved, generated, or sampled using numbers; in 8 bit those numbers range from 0 to 255. This leaves a lot of soft sounds as just noise or silence. CDs use 16-bit numbers to encode music. It's pretty good.

In the image below I have made a picture of a waveform, one in low resolution (like 8 bit) and the other in high resolution.

The numbers represent how far the speaker is pushed into or out of the cabinet during the sound. It is pretty clear that the higher-resolution version more accurately approximates a sound wave. Once you hear the difference you can't unhear it.
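A rough Python sketch of what quantizing at different bit depths does to a waveform (the quantize helper is just illustrative; real audio formats use signed samples and dithering):

    import math

    def quantize(sample, bits):
        # Map a sample in [-1, 1] onto one of 2**bits integer levels
        levels = 2**bits
        return round((sample + 1) / 2 * (levels - 1))

    for i in range(5):
        s = math.sin(2 * math.pi * i / 16)       # a few points on a sine wave
        print(f"{s:+.4f} -> 8-bit {quantize(s, 8):5d}, 16-bit {quantize(s, 16):6d}")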

In photography, 16 bit grayscale images can have more subtlety in what values can be encoded.

Consider an image like this one with 8 bits per pixel.

The gradient strip started at 256 pixels wide, so there should be one pixel for every value of gray. If we used 4-bit grayscale, we would have the strip below with 16 levels of gray.

It looks bizarre because I disabled the “dithering” that scatters pixels of adjacent colors to simulate smoother transitions. With the dithering, it looks like this:

There are many different values of gray. If we turn it into a 1-bit image (only black or white), Photoshop and other software can turn it into a spray of black and white pixels, called dithering, but the subtlety is gone.
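A small Python sketch of the quantization step (without dithering); quantize_gray is a hypothetical helper, not anything from Photoshop:

    def quantize_gray(value, bits):
        # Snap an 8-bit gray value (0-255) onto one of 2**bits levels, rescaled to 0-255
        levels = 2**bits
        index = value * levels // 256
        return index * 255 // (levels - 1)

    samples = list(range(0, 256, 32))               # a few pixels from the strip
    print([quantize_gray(v, 4) for v in samples])   # 16 possible gray levels
    print([quantize_gray(v, 1) for v in samples])   # only pure black or pure white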

Quora does some funny things with bitmaps so here is a detail

Now imagine an image with twice as many bits per pixel as the first egg image. The first image has 256 levels of gray; a 16-bit image has 65,536 possible levels of gray.

I can transform the 8 bit egg image to 16 bit but there isn’t much point. It only makes a difference if I can capture 16 bits of data. Almost none of the tools that I rely on to modify images in Photoshop work in 16 bit mode.

It really might only be visible to professionals in a very well crafted print. To make good use of 16 bit images you need a lot of knowledge, and a good process of experimentation to see whether the changes you are making are changing your final image for the better.

You can start with a raw image and convert it to 16 bit in Photoshop. If you start with a JPEG you are going to add file size but not subtlety.

Think of it this way: say you start with a 3-bit image, so you only have 8 values, 0 to 7. If you convert a gradation from white to black to 8 bits, you still only show 8 distinct values, because the 8-bit conversion only changes the representation of the values, not the source data. If you looked at a graph of your converted image, where the x-axis shows the value and the y-axis shows the number of pixels, you would have eight spikes with nothing in between: the first spike at 0, the next around 36, and so on, all the way up to 255.

Increasing the bit depth like this is called upsampling, and decreasing it is called downsampling. There are many ways to reduce the information loss when upsampling and downsampling, but there is no way to create (meaningful) data that was not there originally.
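A tiny Python sketch of that histogram effect, assuming the 3-bit values are simply rescaled onto the 0–255 range:

    # A 3-bit image has only 8 possible values; rescaling to 8 bits leaves 8 isolated spikes
    low_bit_values = range(8)
    upconverted = [v * 255 // 7 for v in low_bit_values]
    print(upconverted)   # [0, 36, 72, 109, 145, 182, 218, 255]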

In the early days of video games, there was so little power available in the graphics hardware that they had to use tricks to get things to move and be colored and respond to user interaction. This is a great video on the subject

One of the tricks is to use a lookup table that stores a few carefully mixed colors, so you can have very specific colors, just not lots of them. Photoshop can do this using the “indexed color” mode. This example is only 16 colors, but they are chosen very carefully.
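A minimal Python sketch of the indexed-color idea (the 4-entry palette is hypothetical, just to show the mechanism):

    # Each pixel stores a small palette index instead of a full RGB value
    palette = [(0, 0, 0), (255, 0, 0), (0, 255, 0), (0, 0, 255)]
    pixels = [0, 1, 1, 3, 2]                    # 2-bit indices into the palette
    image_rgb = [palette[i] for i in pixels]    # expanded to full color only for display
    print(image_rgb)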

An image with 8-bit resolution for each of the primary colors has about 16.7 million possible combinations (256 × 256 × 256), and we cannot see any more color resolution than that.

This image is from the web. The quality is questionable. The color resolution at each step limits the color of the output.

Monitors and phone screens have limited color resolution as well.

As I expressed in my answer What makes a computer 32-bit or 64-bit?, the answer is the same for an 8- versus 16-bit processor: it is the size of the internal functional units in the CPU itself. The larger the basic ‘word’ size, the more complex, expensive, and power-hungry the unit is to create, and the longer each operation eventually takes to propagate through the function (how to mitigate these effects is taught to hardware designers in their logic classes, and I will not touch on it here).

As I mentioned in the other answer, the barrel shifter is the best way to determine the ‘size’ of the processor, since it is the largest functional unit in the processor and it scales in size with the word width: the number of multiplexers required for an n-bit word is n × log2(n).

  • 8-bit — 8 × log2(8) = 8 × 3 = 24
  • 16-bit — 16 × log2(16) = 16 × 4 = 64
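A quick Python check of that scaling, using the same n × log2(n) mux count:

    import math

    # Multiplexer count for an n-bit mux-based barrel shifter: n * log2(n)
    for n in (8, 16, 32, 64):
        print(f"{n}-bit: {n * int(math.log2(n))} multiplexers")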

This is a picture from Stack Overflow of a mux-based 8-bit barrel shifter; note the 24 mux units.

This is a picture from the Wikipedia Barrel Shifter page of a 4-bit crossbar shifter (which is how it is typically implemented on-die). Note the crosspoint structure, which grows roughly as n² and gives you an idea of the complexity of the physical structure needed. The crossbar is preferred because an n-bit shift can be performed in one ‘tick’, whereas in the mux-based solution the shift must propagate through all of the levels of multiplexers (note the 3 levels in an 8-bit mux-style shifter): this is a hardware example of space vs. speed. The regular structure, but large amount of real estate, needed for a large crossbar-based shifter such as you would find in a 32- or 64-bit processor means that it can usually be picked out when looking at a picture of the die of the chip.

It comes down to binary. 8 bit refers to any number in binary from 00000000 to 11111111, where 16 bit is any number from 0000000000000000 to 1111111111111111. Binary place values count from right to left as 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, and so on. With only the first 8 of those (1, 2, 4, 8, 16, 32, 64, 128) you have a limit on how high you can go, but with 16 bits you can reach a much higher maximum. Likewise, that's why computers now use 64 bits: not only number-wise, it also allows for bigger hard drive recognition, addressing, and RAM sizes as well.

I assume you’re referring to CPU architectures, otherwise the difference is 8 bits, or I assume it’s a trick question. :)

Interestingly, there is confusion and inconsistent use of the terms. Even the lowliest old-school general-purpose CPUs, such as the 8080/8085/Z80 and 6502 class CPUs, had 16-bit address buses. They could only do operations on single-byte 8-bit quantities, and were called 8-bit CPUs. Then along came CPUs like the 68000 and the 8086, which could operate on 16-bit operands but also had wider address buses (they could address more memory). They were called 16-bit CPUs.

Then, through an evolutionary period, CPUs increased their memory address spaces, and the class associated with a CPU became more a reflection of the memory address bus width than of the size of the data operands it could use. Today, the address and operand sizes are both 64 bits on modern CPUs, so I'm not sure which aspect is being referenced when the term is used; I'm completely certain there is no consensus, and equally certain that there are people who routinely use the term without knowing anything about what it actually means.

An 8-bit operation handles eight bits at a time; a 16-bit operation handles 16 bits. Let's look at addition on x86.

8 bit

ADD AL, BL

16 bit

ADD AX, BX

The latter adds two 16-bit registers. You can simulate it with 8-bit operations:

ADD AL, BL

ADC AH, BH

ADC is add-with-carry: it adds the possible carry from the first operation into the result. The two-instruction version takes twice as long, since a 16-bit CPU can add 16 bits just as fast as 8.
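A small Python sketch of what that ADD/ADC pair computes (the function name is just illustrative):

    def add16_via_8bit(a, b):
        lo = (a & 0xFF) + (b & 0xFF)        # ADD AL, BL: add the low bytes
        carry = lo >> 8                     # carry out of the low byte, if any
        hi = (a >> 8) + (b >> 8) + carry    # ADC AH, BH: add the high bytes plus carry
        return ((hi & 0xFF) << 8) | (lo & 0xFF)

    print(hex(add16_via_8bit(0x12F0, 0x0020)))  # 0x1310
    print(hex(add16_via_8bit(0xFFFF, 0x0001)))  # 0x0 (wraps around, like 16-bit hardware)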

It's the size of the information the operations work on.

So let’s try a common operation … OR

1111 0101

0101 0111

See how the operands are 8 bits? If I OR these I get 1111 0111.

If we were to do this as 16 bit …

1111 0101 1111 0101

0101 0111 0101 0111

I would get

1111 0111 1111 0111

So the operation is now acting on 16 bits of information. What the “bits” refer to is how many 1s and 0s an operation works on at once.
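The same OR, done as a short Python sketch:

    a8, b8 = 0b11110101, 0b01010111
    print(f"{a8 | b8:08b}")        # 11110111

    a16, b16 = 0b1111010111110101, 0b0101011101010111
    print(f"{a16 | b16:016b}")     # 1111011111110111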

For a processor, as an example, an 8-bit processor works with a byte (8 bits) at a time, while a 16-bit processor works with a word (16 bits) at a time.

Now there are many other cases where 8 bit vs 16 bit can be used, but it's describing what the base element size is.

Does this make sense?

2 main differences:

  1. 16 bits can represent a larger range of integers than 8 bits.
  2. 16-bit words require more memory.

As usual, a vague question.

Of course the difference between 8 and 16 is ( 16 -8 = 8 ).

2^8 = 256 while 2^16 = 65536 so here the difference is 65280.

I guess, though, that you wonder about the difference between 8-bit processors and 16-bit processors. I have a surprise for you: there isn't much difference. Oh, of course there is a big difference in that modern processors are built with a different architecture than older ones, but one COULD build an 8-bit processor the same way as a 16-bit one. There were a couple of 16-bit processors in the 8-bit era, most notably (in my eyes) the 6809 and the TMS9900…

I don’t know that that’s a meaningful question, if you intend to use the calculator as a calculator with its typical built-in functionality.

The HP-48 line of calculators, for example, was built around a 4-bit CPU with 64-bit registers, but you wouldn’t know it unless you tried to program it in assembly language. In fact, multiple lines of calculator were built around that CPU.

Once HP retired that CPU, they migrated the software stack to a software emulated Saturn core running on a 32-bit ARM920T core.

(What I rocked in college almost 30 years ago.)

If you have a fancy graphing calculator, you can probably try to find its hardware specs by researching the calculator online.

Many TI graphing calculators since 1990 or so were based on an 8-bit Z80-derived core. Some fancier ones used a 16-bit (32-bit?) MC68EC000*. Current TI graphing calculators are based around 32-bit ARM9 cores. Summary table: Comparison of Texas Instruments graphing calculators - Wikipedia

Long before the graphing calculator era, TI had much simpler calculators based around a very focused 4-bit CPU.

But to your original question: There’s no way to tell outwardly unless the environment chooses to expose that information. You’ll just have to research the model to see if someone’s documented the technical specifications.


*Whether you consider the Motorola MC68000 16-bit or 32-bit is a matter of debate, IMO. Motorola themselves call the original MC68000 16-bit, whereas they describe the MC68EC000 as having “an internal 32-bit architecture that is supported by a statically selectable external 8- or 16-bit data bus.”

The summary page I linked above says the TI calculators use a 68000; however, Ferenc Valenta has pointed out they actually use MC68EC000.

The m68k family is my poster child for “CPU bitness is not a well defined concept.”