Wednesday, December 6

When is a 10-bit A/D an 8-bit A/D?

Marketing guys love bigger numbers. Bigger is better, right? After all, Subway called it a “footlong” not an 11-incher. So when it comes to analog to digital (A/D) conversion, more bits are better, right? Well, that depends. It is easy to understand that an A/D will have a low and high measurement and the low will be zero counts and the high will result in the maximum count for the number of bits. That is, an 8-bit device will top out at 255, a 10-bit at 1023, and so on.

The question is: are those bits meaningful? The answer depends on a few factors. Like most components we deal with, our ideal model isn’t reality, but maybe it is close enough.

Fundamentals

The concept of an A/D converter is pretty simple. Take an analog signal in some range of voltages and convert it to a digital number. Assuming we are talking about a linear conversion, which is usually, but not always, the case, then the range of the number will correspond to the range of the voltage.

In other words, if the A/D can produce a count of 0 to 100 and the voltage ranges from 0 to 1 V, then a count of 100 is 1 V, 50 is 0.50 V, and 12 is 0.12 V. In real life, though, most converters use a count that is a power of two to maximize resolution. That is, an 8-bit A/D will range from 0 to 255, a 10-bit from 0 to 1023, and so on.

The voltage range can — in theory — be anything. Sometimes it will be 0 V to 5 V or maybe -2.5 V to 2.5 V. Sometimes the reference will be a voltage that divides exactly by the count like 4.096 V, for example.
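As a rough sketch of that scaling (assuming an ideal, unipolar converter and using the 10-bit, 4.096 V combination that shows up in the next example), turning a raw count back into volts is just a multiply and a divide:

```python
# Minimal sketch: count-to-voltage scaling for an ideal, unipolar A/D.
# The reference voltage and bit width match the 10-bit / 4.096 V example
# discussed next; swap in your own converter's numbers.
def counts_to_volts(count: int, vref: float = 4.096, bits: int = 10) -> float:
    return count * vref / 2**bits      # one count is worth vref / 2^bits volts

print(counts_to_volts(1))       # 0.004  -> each count is 4 mV
print(counts_to_volts(1023))    # 4.092  -> one count shy of the reference
```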

Let’s suppose you have a 10-bit A/D with a range of 0 V to 4.096 V. Each bit will be worth 4 mV — in theory, at least. However, suppose your circuit is subject to +/-24 mV of noise. Then even if the A/D were perfect, you really can’t trust the last few bits. That seems obvious, so keep your system noise low. The converter can only convert what it sees; it isn’t a mind reader. If you AC couple a scope and zoom in on a nice clean square wave, you’ll see plenty of up and down. In fact, the image to the right shows a blue square wave and the same square wave’s top (in yellow) amplified. The A/D converter will dutifully record these little irregularities. The question becomes: where is this noise coming from? Obviously, noise that is in your system is out of the control of the A/D converter, but that’s only part of the story.

The Blame Game

The problem is that your A/D isn’t perfect to start with. It should be plain that even if it were perfect, the A/D can’t split that least significant bit. That is, given the above example, measuring 1.000 V, 1.001 V, and 1.002 V isn’t likely to give you a different value even if there is no inherent system noise. This is quantization noise, and in this case it appears as a +/-2 mV noise on the input, since the quantization level is 4 mV. You can see this graphically, to the left.

I’ll skip the math, but you can work out the quantization noise and use it to arrive at an ideal signal to noise ratio (SNR) for a given word length. For a sinewave that exercises the converter’s full range, that formula works out to 6.02 times B plus 1.76 dB (where B is the number of bits). Remember, that’s ideal — you won’t even get that good. Our 10-bit A/D, then, can’t do better than about 62 dB. If you need a refresher on dB, by the way, we did that already.
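If you want to convince yourself of that rule of thumb numerically, here is a quick sketch (plain Python with NumPy; the sample count and frequency are made up) that quantizes an ideal full-scale sinewave and measures the SNR of the result:

```python
# Quantize a full-scale sinewave with an otherwise perfect 10-bit converter
# and measure the SNR of the quantization error alone. The result should land
# close to 6.02*B + 1.76 dB.
import numpy as np

bits = 10
n = 100_000
t = np.arange(n)
signal = np.sin(2 * np.pi * 1000 * t / n)        # full-scale sine, amplitude 1

lsb = 2.0 / 2**bits                              # step size over a -1..+1 range
quantized = np.round(signal / lsb) * lsb         # ideal quantizer
noise = quantized - signal                       # quantization error only

snr = 10 * np.log10(np.mean(signal**2) / np.mean(noise**2))
print(f"measured SNR:  {snr:.2f} dB")            # roughly 62 dB
print(f"6.02*B + 1.76: {6.02*bits + 1.76:.2f} dB")
```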

Consider this: if the SNR of an ideal A/D is 6.02 times B plus 1.76 dB, then it stands to reason that if we have a real A/D with a measured SNR, we could figure out how many “ideal” bits it has by rearranging the math. This is the ENOB, or “effective number of bits,” you may see on a data sheet. To get the ENOB, you subtract 1.76 from the measured SNR and then divide by 6.02. By the way, you usually hear ENOB pronounced “E-Knob.”

Let’s say our 10-bit A/D has a measured SNR of 51.76 dB. Then (51.76 - 1.76)/6.02 = 50/6.02 ≈ 8.3. Our 10-bit device is performing only a little better than an ideal A/D with 8 bits of resolution.
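Here is that rearrangement as a tiny helper (the function name is just for illustration), checked against the numbers above:

```python
# Effective number of bits from a measured SNR, assuming a full-scale sinewave.
def enob(snr_db: float) -> float:
    return (snr_db - 1.76) / 6.02

print(enob(51.76))   # ~8.3  -> the "10-bit" part is acting like ~8.3 ideal bits
print(enob(61.96))   # ~10.0 -> a perfect 10-bit converter keeps all its bits
```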

Keep in mind that this noise is only due to the steps in the A/D. There are other sources of noise in a real A/D, including noise from components like resistors and other sources.

There are other even more subtle sources of errors. Nonlinearity in the conversion is one example. Essentially, the voltage change that corresponds to one bit may differ across the measured range. While the nominal value of a count in our example is 4 mV, it might be 3.95 mV on one end and 4.02 mV on the other end.
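Data sheets usually put a number on that step-size variation as differential nonlinearity (DNL): how far each real step deviates from the ideal step, expressed in LSBs. A rough sketch, using made-up step measurements that match the numbers above:

```python
# Rough sketch: differential nonlinearity (DNL) in LSBs, given measured step
# widths in volts. The step values are made up to match the 3.95 mV / 4.02 mV
# figures mentioned in the text.
ideal_lsb = 0.004                              # 4 mV per count, as in the running example
measured_steps = [0.00395, 0.00400, 0.00402]   # hypothetical measured step widths

dnl = [step / ideal_lsb - 1 for step in measured_steps]
print(dnl)   # roughly [-0.0125, 0.0, 0.005] -> each step's error in LSBs
```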

Another issue is clock jitter. You always sample at some rate. That is, you might take 10,000 samples per second or 1 million, but there is always some discrete time step. At low input frequencies, that sample clock can be pretty sloppy. As frequency goes up, though, any jitter in the clock can cause an error higher than the quantization noise.

Clock jitter becomes even more of a problem as the number of bits goes up, since the resolution is better, and thus the quantization noise is lower. For example, for an 8-bit converter, a 1 kHz sinewave won’t pick up any extra error unless the jitter is more than 1.24 microseconds. A 12-bit converter needs to control jitter to less than 77.7 nanoseconds for the same input. Of course, as the signal frequency rises, the maximum allowable jitter goes down. That 8-bit converter would need to keep jitter to less than 12.4 picoseconds at 100 MHz, or 0.78 picoseconds for the 12-bit device.
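Those figures come from a simple worst-case bound: a full-scale sinewave slews fastest as it crosses zero, at 2*pi*f times its amplitude, so keeping the error at the sampling instant under one LSB means keeping jitter under 1/(pi * f * 2^B). A quick sketch that reproduces the numbers above:

```python
# Worst-case jitter budget for under 1 LSB of error on a full-scale sinewave:
# t_max = 1 / (pi * f * 2^bits).
from math import pi

def max_jitter(bits: int, freq_hz: float) -> float:
    return 1 / (pi * freq_hz * 2**bits)

print(max_jitter(8, 1e3))     # ~1.24e-06 s  (1.24 microseconds)
print(max_jitter(12, 1e3))    # ~7.77e-08 s  (77.7 nanoseconds)
print(max_jitter(8, 100e6))   # ~1.24e-11 s  (12.4 picoseconds)
print(max_jitter(12, 100e6))  # ~7.77e-13 s  (0.78 picoseconds)
```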

Although the video below talks about jitter in PLL systems, the background information on what jitter looks like is generally useful. If you’ve ever wondered about the difference between jitter and wander, you’ll want to check out this video.

Dithering

Noise is bad, right? Not always. It turns out that when processing audio or images, there is an unfortunate side effect of quantization. Consider a simple example. Suppose you have a converter where each count is 0.1 V. You are measuring a repeated signal that has values of 0.09 V, 0.10 V, 0.11 V, and so on. You are only going to read 0.1 V or 0.2 V in this range, and exactly how the converter rounds determines which readings get classified as 0.1 and which as 0.2. If the signal is audio or even an image and you reconstruct it, your brain will pick out the pattern. For example, in an image, you might see a stripe of one color that isn’t in the original.

Audio engineers go through the same process when they reduce sample sizes, and for the same reason. The video below covers that, and the same ideas apply. The “rounding” in our case isn’t in reducing the sample size, but in sampling one point and letting it represent a range of the signal.

An answer to this is to inject white noise of plus or minus half a bit’s worth into the input stream. In the above example, you’d inject +/-0.05 V. This has the effect of randomly causing some values to round up and some to round down with no discernible pattern. Averaging over these values can actually increase resolution.
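Here is a toy sketch of that effect (made-up numbers, reusing the 0.1 V-per-count example): a single quantized reading of 0.13 V can only come back as 0.1 or 0.2, but dithering with half a count of noise and then averaging converges on the real value:

```python
# Toy dithering demo for the 0.1 V-per-count example. Without dither, 0.13 V
# always reads 0.1 V. With +/- half an LSB of uniform noise and lots of
# averaging, the mean reading converges on ~0.13 V.
import random

lsb = 0.1
true_voltage = 0.13

def read(dither: bool) -> float:
    v = true_voltage + (random.uniform(-lsb / 2, lsb / 2) if dither else 0.0)
    return round(v / lsb) * lsb          # quantize to the nearest count

print(read(dither=False))                                        # 0.1 every time
print(sum(read(dither=True) for _ in range(100_000)) / 100_000)  # close to 0.13
```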

Often the noise can appear outside the frequency range the rest of the system is looking for, so it is easy to filter out. You can read more about that technique and also many other details about noise sources in A/Ds in this good and short article from Analog. If you want to hear more about ENOB, TI has the video below.

Take What You Know…

All these bit calculations are interesting, but an even more interesting topic for another day is how these converters work (and the reverse, too, of course). My old math teacher used to say “Take what you know and use it to find out what you don’t know.” I always think of that saying when I’m dealing with any sort of converter. Our computers are good at counting and counting time. They are bad at measuring voltages, currents, temperatures, and other real-world quantities. So most converters somehow convert those quantities into either counts or time. For example, a successive approximation converter will convert a count to a voltage and compare it to the unknown voltage. An unknown resistance might form a time delay with a capacitor and the computer can measure that time.
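As a sketch of that idea in code (purely a software stand-in, with a plain comparison playing the part of the comparator and DAC), a successive approximation conversion is just a binary search on the output count:

```python
# Minimal sketch of a successive approximation (SAR) conversion: guess one bit
# at a time, most significant first, and keep the bit only if the internal
# DAC's guess is still at or below the unknown input.
def sar_convert(vin: float, vref: float = 4.096, bits: int = 10) -> int:
    result = 0
    for bit in reversed(range(bits)):          # MSB first
        trial = result | (1 << bit)            # tentatively set this bit
        dac_voltage = trial * vref / 2**bits   # what the internal DAC would produce
        if dac_voltage <= vin:                 # "comparator" keeps or drops the bit
            result = trial
    return result

print(sar_convert(1.000))   # 250 -> 250 counts * 4 mV = 1.000 V
```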

I’d talk more about analog to digital converter architectures, but [Bil Herd] already covered that nicely. If you are interested, it doesn’t cover every possible type of converter, but it does explain the ones you are most likely to see.

Just like you track significant digits in calculations or take into account real-world component inaccuracies in other designs, building — or even using — an effective analog converter requires you to understand the math well enough to not trust or even convert bits that don’t mean anything.

 

Photo credits:

Quantization error sine wave – [Hyacinth] CC BY SA 3.0.


Filed under: Hackaday Columns, hardware, Microcontrollers
