When your only tool is a hammer, everything starts to look like a nail. That’s an old saying and perhaps somewhat obvious, but our tools do color our solutions, sometimes in very subtle ways. For example, using a computer pushes our solutions into a certain shape, especially where numbers are concerned. A digital computer deals with numbers as integers; anything that isn’t an integer is really just some representation with limits. Sure, an IEEE floating point number has a wide range, but there’s still a discrete step between one value and the next nearest that you can’t reduce. Even if you treat numbers as arbitrary text strings or fractions, the digital nature of computers will color your solution. But there are other ways to do computing, and they affect your outcome differently. That’s why [Bill Schweber’s] analog computation series caught our eye.
One great example of analog vs digital methods is reading an arbitrary analog quantity, say a voltage, a temperature, or a shaft position. In the digital domain, there’s some converter with a certain number of bits. You can push that bit count to something ridiculous, of course, but it isn’t easy. The fewer bits, the less precisely you can resolve the real-world quantity.
For example, you could consider a single comparator to be a one-bit analog to digital converter, but all it can tell you is whether the voltage is above or below a certain value. A two-bit converter would let you break a 0-3V signal into 1V steps. But a cheap and simple potentiometer can divide a 0-3V signal into a virtually infinite number of smaller voltages. Sure, there’s some physical limit to the pot, and we suppose at some level many physical values are quantized due to the physics, but those quanta are infinitesimal compared to a dozen or so bits of a converter. On top of that, sampled signals are measured at discrete time points, which discards everything that happens between samples and leads to effects like aliasing.
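If you want a feel for how coarse a few bits really is, here’s a minimal Python sketch, using the article’s convention of 1V steps for two bits across 0-3V; the 1.37V test input is just an arbitrary example value.

```python
# Minimal sketch of how bit depth limits resolution, using the
# article's convention (two bits = 1 V steps across 0-3 V).
VREF = 3.0

def quantize(volts, bits):
    """Round an input voltage to the nearest representable level."""
    step = VREF / (2 ** bits - 1)
    return round(volts / step) * step

for bits in (1, 2, 12):
    print(f"{bits:2d} bits: 1.37 V reads as {quantize(1.37, bits):.4f} V")
# 1 bit can only say "closer to 0 V or 3 V"; 12 bits lands within a millivolt.
```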
There was a time when it wasn’t clear that analog computers wouldn’t dominate. But although they lost the computing architecture war, they are still around. As [Bill] points out, analog processing still shows up where you need cheap, fast, or continuous computation.
We’ve only seen part one of the series so far, but it is a great read, laying out the basics and why analog computing is still important. For example, an AC power meter might use a few op-amps to replace a pretty significant digital computing capability.
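To appreciate what those op-amps are replacing, here’s a rough sketch of the digital version of the job; the 60Hz line, the sample rate, and the load numbers are all made up for illustration. A digital meter has to sample voltage and current quickly, multiply sample by sample, and average over whole cycles; an analog multiplier and low-pass filter do the same thing continuously.

```python
import math

# Rough sketch (made-up 60 Hz example) of the work a digital AC power
# meter does: sample v(t) and i(t), multiply sample by sample, and
# average over one full line cycle to get real power.
F_LINE = 60.0                # Hz
F_SAMPLE = 6000.0            # Hz, fast enough to avoid aliasing here
N = int(F_SAMPLE / F_LINE)   # samples in exactly one line cycle

v_peak, i_peak, phase = 170.0, 2.0, math.radians(30)  # assumed load

power = 0.0
for n in range(N):
    t = n / F_SAMPLE
    v = v_peak * math.sin(2 * math.pi * F_LINE * t)
    i = i_peak * math.sin(2 * math.pi * F_LINE * t - phase)
    power += v * i
power /= N  # mean of instantaneous power = real power

print(power)                                  # ~147.2 W
print(v_peak * i_peak / 2 * math.cos(phase))  # closed-form sanity check
```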
If you want to be pedantic, yes, physical quantities are sometimes quantized; you can’t have a fraction of an electron’s charge, for example. It is also true that analog computing may introduce some small delay as the circuitry settles. But, in general, pushing a digital system to be more precise than the physical quantization, or to work faster than an analog computer, is very difficult. A few resistors and op-amps are cheap and simple. No real-time operating system required. No special techniques for dealing with discrete-time measurements.
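For a sense of how much math a handful of parts buys you, consider the classic inverting summer. This is just an idealized model, and the resistor values are arbitrary illustrations.

```python
# Sketch of the math an ideal inverting op-amp summer performs for free:
# Vout = -Rf * (V1/R1 + V2/R2 + ...). Addition and scaling cost a few
# passive parts: no clock, no sampling, no code.
def inverting_summer(v_inputs, r_inputs, r_feedback):
    """Ideal inverting summing amplifier output voltage."""
    return -r_feedback * sum(v / r for v, r in zip(v_inputs, r_inputs))

# Equal 10k resistors give a plain (negated) sum: -(1.0 + 0.5) = -1.5 V
print(inverting_summer([1.0, 0.5], [10e3, 10e3], 10e3))
```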
[Bill] also points out that even a slide rule is an analog computer, of sorts. There was even a time when you could get an analog laptop computer. Sort of.
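The slide rule trick, for the record, is adding lengths proportional to logarithms, since log(a) + log(b) = log(ab):

```python
import math

# A slide rule multiplies by adding lengths proportional to logs:
# sliding one log scale along another adds log(a) + log(b) = log(a*b).
a, b = 2.0, 3.0
print(math.exp(math.log(a) + math.log(b)))  # 6.0, within float error
```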