Tuesday, March 8

Thanks for the Memories: Touring the Awesome Random Access of Old

I was buying a new laptop the other day and had to choose between 4 GB of memory and 8. I can remember how big a deal it was when a TRS-80 went from 4K (that's .000004 GB, if you are counting) to 48K. Today just about all RAM (at least in PCs) is dynamic: it relies on tiny capacitors to hold a charge. The downside is that the RAM is sometimes unavailable while the capacitors get refreshed. The upside is that you can inexpensively pack lots of bits into a small area. All of the common memory modules you plug into a PC motherboard (DDR, DDR2, SDRAM, RDRAM, and so on) are types of dynamic memory.

The other kind of common RAM you see is static. This is more or less an array of flip flops. They don’t require refreshing, but a static RAM cell is much larger than an equivalent bit of dynamic memory, so static memory is much less dense than dynamic. Static RAM lives in your PC, too, as cache memory where speed is important.

For now, at least, these two types of RAM technology dominate the market for fast random access read/write memory. Sure, there are a few new technologies that could gain wider usage. There are also things like flash memory that are useful but can't displace regular RAM because of speed, durability, or complicated write cycles. However, computers didn't always use static and dynamic RAM. In fact, both are relative newcomers to the scene. What did early computers use for fast read/write storage?

Drums

Surprisingly, drum memory, a technology similar to a modern hard drive, first appeared in 1932 for use with punched card machines. Although later computers used drums as secondary storage (like a modern hard drive), some early machines used them as their main storage.

Like a hard drive, a drum memory was a rotating surface of ferromagnetic material. Where a hard drive uses a platter, the drum used a metal cylinder. A typical drum had a number of fixed heads (one for each track) and simply waited until the desired bit passed under the head to perform a read or write operation. A few drums had heads that could move across a few tracks, a precursor to the modern disk drive, which typically has one head per surface.

The original IBM 650 had an 8.5 kB drum memory. The Atanasoff-Berry computer used a device similar to a drum memory, but it didn’t use ferromagnetic material. Instead, like a modern dynamic RAM, it used capacitors.
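
For a feel for the access times involved, here's a quick back-of-the-envelope calculation in Python. The 12,500 RPM figure is the speed usually quoted for the IBM 650's drum; treat the exact numbers as illustrative.

    # Rough latency for a drum that waits for a bit to rotate under a fixed head.
    rpm = 12_500                          # commonly quoted speed for the IBM 650 drum
    revolution_ms = 60_000 / rpm          # one full turn, in milliseconds
    average_wait_ms = revolution_ms / 2   # on average, you wait half a turn
    print(f"full revolution: {revolution_ms:.1f} ms")   # ~4.8 ms
    print(f"average access:  {average_wait_ms:.1f} ms") # ~2.4 ms

No seeking, but plenty of waiting: milliseconds per access, versus nanoseconds for the RAM in that new laptop.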

Drum storage remained useful as mass storage for a number of years. If you have ever used BSD Unix, you may have noticed that /dev/drum is the default swap device, an echo of the time when the paging store on your PDP-11 might well have been a drum. You can see an example of a drum storage unit at the Computer History Museum in the video below.

Another design clearly influenced by drum memory was the homemade computer from the 1968 book "Build Your Own Working Digital Computer." Its main program storage was an oatmeal container covered in foil and paper, with the instructions punched out of the paper.

The Williams Tube

One of the first electronic mechanisms for storing information was the Williams (or Williams-Kilburn) tube, dating back to 1946. The device was essentially a cathode ray tube (CRT) with a metal plate covering the screen. Although many Williams tube memories used off-the-shelf CRTs with phosphor on the screen, the phosphor wasn't necessary, and some tubes omitted it.

Creating a dot at a certain X and Y position on the CRT causes that area of the screen to develop a slightly positive charge due to secondary emission, while the surrounding area becomes slightly negatively charged. Placing a dot right next to an existing spot erases that spot's positive charge. If there was no positive charge there to start with, the attempted erase simply creates a new area of charge.

By monitoring the plate while writing these probing dots, you can determine whether there was previously a charge on the screen at that position. Although you normally think of a CRT as sweeping left to right and top to bottom, there's no reason it has to, so the Williams tube could perform random access.

There are two problems, of course. One is that, like dynamic memory, the charge on the CRT eventually fades away, so it needs a periodic refresh. The other is that reading the Williams tube destroys the information in that bit, so every read has to be followed by a write to put the data back.
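
If it helps to see that read-then-rewrite dance as code, here's a minimal toy model in Python. The class and method names are mine, and the physics is reduced to one bit per spot, so take it as a sketch of the idea rather than a simulation.

    # Toy Williams tube: reading a spot destroys it, so every read is followed
    # by a write-back, and every spot needs a periodic refresh before it fades.
    class WilliamsTube:
        def __init__(self, rows=32, cols=32):
            self.spots = [[0] * cols for _ in range(rows)]

        def write(self, x, y, bit):
            self.spots[y][x] = bit

        def read(self, x, y):
            bit = self.spots[y][x]    # sense the charge at (x, y)...
            self.spots[y][x] = 0      # ...which wipes it out...
            self.write(x, y, bit)     # ...so immediately put it back
            return bit

        def refresh(self):
            # Rewrite every spot before its charge fades away.
            # (A no-op in this toy model, but essential in the real tube.)
            for y, row in enumerate(self.spots):
                for x, bit in enumerate(row):
                    self.write(x, y, bit)

    tube = WilliamsTube()
    tube.write(3, 7, 1)
    print(tube.read(3, 7), tube.read(3, 7))   # 1 1 -- the write-back saves the data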

A typical tube could hold between 1K and 2K bits. Interestingly, the Manchester SSEM computer was built just to test the reliability of the Williams tube. One nice feature for debugging was that you could connect a normal CRT in parallel with the storage tube and watch the memory contents in real time.

Although the Williams tube found use in several commercial computers, it tended to age and required frequent hand tuning to get everything working. Can you imagine if every time you booted your computer you had to manually calibrate your memory? Google produced an excellent video about the SSEM and the Williams tube that you can find below.

RCA worked hard on producing a more practical version of the Williams tube known as the Selectron tube. The goal was to make something faster and more reliable than a Williams tube. In 1946, RCA planned to make 200 units of a tube that could store 4096 bits. Production was more difficult than anticipated, however, and the only customer for the tube went with a Williams tube instead to avoid further delays. Later, RCA did produce a 256-bit version (for $500 each, so five tubes would have bought a new 1954 Corvette). They were only used in the RAND Corporation's JOHNNIAC (although, in all fairness, the machine used over 1,000 of the tubes, the cost of 200 new Corvettes).

Mercury Delay Lines

Another common form of memory in old computers was the mercury delay line. I almost didn't include it here because it really isn't random access. However, many old computer systems used it, including the UNIVAC I and some machines that also had a Williams tube.

The 1953 patent for this memory actually doesn’t limit the delay medium to mercury. The key was to have some kind of element that would delay a signal by some amount of time. This could be done through other means as well (including a proposal to use rotating glass disks).

Mercury was expensive, heavy, and toxic. However, it is a good acoustic match for quartz piezoelectric crystals, especially when kept heated. That matters because the delay line works acoustically: bits are represented by pulses launched into one end of a long column of mercury and received at the other end. The time it takes a pulse to arrive depends on the speed of sound in mercury and the length of the column.

Obviously, reading data at the end of the tube removes it, so it is necessary to route bits back to the other end of the tube so the whole process can repeat. If no new data is being written, you can imagine all the bits travelling through the mercury in single file, the first bit at the output followed by each subsequent bit, all moving at the same speed (about 1,450 meters per second, depending on temperature). Depending on the length of the bit pulse and the length of the column, a practical tube could store 500 or 1,000 bits.
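
If you want to put numbers on that, the capacity is just transit time multiplied by pulse rate. The column length and pulse rate below are assumptions (not the specs of any particular machine), but they land right in the 500-to-1,000-bit ballpark the real tubes hit.

    # bits in flight = (column length / speed of sound) * pulse rate
    speed_of_sound = 1450          # m/s in mercury (varies with temperature)
    column_length = 1.0            # meters -- an assumed, plausible tube length
    pulse_rate = 1_000_000         # pulses per second -- also an assumption
    transit_time = column_length / speed_of_sound    # ~690 microseconds
    bits_in_flight = transit_time * pulse_rate       # ~690 bits
    print(f"{transit_time * 1e6:.0f} us transit, roughly {bits_in_flight:.0f} bits stored")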

To read the data, a quartz crystal on the output side converted the pulses back to electrical energy. To write data, the computer could insert a new bit into the stream instead of recirculating an old one. EDSAC used 32 delay lines to hold 512 35-bit words (the mercury tubes actually held more data, but some of it was used for housekeeping like marking the start of the data; a later project doubled the computer's memory). UNIVAC I held 120 bits per line and used many mercury columns to get 1,000 words of storage. You can see the UNIVAC's memory in the video below.
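
One way to picture the recirculation is as a fixed-length queue: whatever pops out at the crystal goes straight back in unless the computer substitutes a new bit. A little sketch (the names and the eight-bit length are just for illustration):

    from collections import deque

    # Toy recirculating delay line: each "tick", one bit emerges from the mercury
    # and is either fed back in unchanged or replaced by a newly written bit.
    class DelayLine:
        def __init__(self, length_bits):
            self.column = deque([0] * length_bits)   # bits currently in flight

        def tick(self, write_bit=None):
            out = self.column.popleft()              # bit arriving at the output crystal
            self.column.append(out if write_bit is None else write_bit)
            return out                               # what the computer gets to read

    line = DelayLine(8)
    line.tick(write_bit=1)    # splice a 1 into the stream
    for _ in range(7):
        line.tick()           # let the rest of the loop drift past
    print(line.tick())        # one full trip (8 ticks) later, the 1 comes back: prints 1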

Dekatron

The dekatron is popular today among Nixie tube clock builders who want to move on to something different. As the name implies, the tube has 10 cathodes and is filled with a gas (usually neon, although sometimes hydrogen). Charge can be moved from cathode to cathode by sending pulses to the device. The tube can act as a decimal counter, but it can also store decimal digits.

There were actually two kinds of dekatrons. A counter dekatron had only a single connection to all of its cathodes. That made it work like a divider, and the number of cathodes (which didn't have to be 10) determined the division ratio. Counter/selector tubes had separate cathode pins, so you could use them as memory or as programmable dividers.
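
In digital terms, a counter dekatron behaves like a modulo-N divider: one carry pulse out for every N pulses in. Here's the idea reduced to a few lines (the function is hypothetical, but the arithmetic is the whole trick):

    # A counter dekatron as a divide-by-N stage.
    def dekatron_stage(pulses_in, n_cathodes=10):
        position = pulses_in % n_cathodes       # which cathode the glow sits on
        carries = pulses_in // n_cathodes       # pulses passed to the next stage
        return position, carries

    print(dekatron_stage(137))   # (7, 13): glow on cathode 7, 13 carries sent onward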

Like a Williams tube, the dekatron was memory you could literally see. The Harwell computer, a relay-based machine from the 1950s, uses dekatrons as storage. The National Museum of Computing in the UK restored this machine and uses the visual nature of its memory to demonstrate computing concepts to visitors. You can see the machine in action in the video below.

Core Memory

The most successful early memory was undoubtedly core memory. Each bit of memory consisted of a little ferrite donut with wires threaded through it. By controlling the current through the wires, the donut (or core) could be magnetized with a clockwise field or a counterclockwise field. A sense wire allowed the memory controller to determine which direction a specific core held. However, reading the field also changed it, so (much like the Williams tube) every read had to be followed by a write to restore the data. You can learn more about exactly how it works in the 1961 US Army training film below.
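
In rough terms, a core read amounts to "drive the core toward zero and see whether the sense wire pulses." Here's a one-core sketch of that read-restore cycle; the names are mine, and real core planes, of course, did this with coincident currents across a whole grid of cores.

    # Toy model of a single ferrite core: a read forces it to 0 and watches the
    # sense wire -- a pulse means it *was* a 1, which then has to be rewritten.
    class Core:
        def __init__(self):
            self.state = 0                    # 0 or 1, i.e. the field direction

        def write(self, bit):
            self.state = bit

        def read(self):
            sensed_one = (self.state == 1)    # flipping 1 -> 0 induces a sense pulse
            self.state = 0                    # the read leaves the core cleared
            if sensed_one:
                self.write(1)                 # restore what the read destroyed
            return 1 if sensed_one else 0

    c = Core()
    c.write(1)
    print(c.read(), c.read())   # 1 1 -- the restore step keeps the data intact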

One nice feature of core memory was that it was nonvolatile. When you turned the power back on, the state of the memory was just as you had left it. It was also very tolerant of radiation, which is why the Space Shuttle computers used core memory until the 1990s.

Core memory came at a high price. Initially, costs were about $1 per bit. Eventually, the industry drove the price down to about $0.01 per bit. To put that in perspective, I did a quick search on Newegg. Without looking for the lowest price, I randomly picked a pair of 8GB DDR3 1600 MHz memory sticks that were available for just under $69. That works out to about $0.54 per gigabit. Even at $0.01 per bit, a gigabit of core memory would cost ten million dollars (not counting the massive room to put it all in).
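
The arithmetic, for anyone who wants to check the math (prices as quoted above):

    # Modern DRAM versus core memory at a penny per bit.
    ddr3_price = 69.0             # dollars for a pair of 8 GB sticks (16 GB total)
    gigabits = 16 * 8             # 16 gigabytes is 128 gigabits
    print(ddr3_price / gigabits)  # ~0.54 dollars per gigabit

    core_per_bit = 0.01           # dollars per bit, the late-era core price
    print(core_per_bit * 1e9)     # 10,000,000 dollars for a gigabit of core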

One of the main reasons for the high cost of core memory was the manual manufacturing process. Despite several efforts to automate production, most of the work of assembling core memory was done by hand. You would assume people got pretty good at it, but it is still a difficult task. Don't believe it? Check out [Magnus Karlsson]'s video of making his own core memory board, below.

A variation on core memory was plated wire memory. It was similar in operation, except that it replaced the magnetic toroids with plated wire that held the magnetic state information. Another variation, twistor, used magnetic tape wrapped around the wire instead of plating. So-called thin-film memory (used in the UNIVAC 1107) used tiny dots of magnetic material on a glass substrate and also worked like core. The advantage of all of these was that automated production was feasible. However, inexpensive semiconductor memory made core, plated wire, and twistor obsolete.

You can see how twistor memory was made in the AT&T video below, which is almost like a 1976 edition of "How It's Made."

Bubble

During the development of twistor memory, researchers at Bell Labs noticed that they could move the magnetic fields around on a piece of tape. Investigating this effect led to the discovery of magnetic bubbles. Placing these bubbles on a garnet substrate allowed the creation of nonvolatile memory that was almost a microscopic version of a delay line, using magnetic bubbles instead of sound waves in mercury.

When bubble memory became available, it seemed clear that it was going to take over the computer industry. Nonvolatile memory that was fast and dense could serve as both main memory and mass storage. Many big players went all in, including Texas Instruments and Intel.

As you could guess, it didn’t last. Hard drives and semiconductor memory got cheaper and denser and faster. Bubble memory wound up as a choice for companies that wanted high-reliability mass storage or worked in environments where disk storage wasn’t practical.

In 1979, though, Bell Labs declared the start of “The Bubble Generation” as you can see in the video below.

What’s Next?

I’m sure at one time, core memory seemed to be the ultimate in memory technology. Then something else came and totally displaced it. It is hard to imagine what is going to displace dynamic RAM, but I don’t doubt something will.

One of the things we are already starting to see is F-RAM, which is almost like core memory on a chip. Will it (or other upstarts like phase-change RAM) displace current technology? Or will they all go the way of the bubble memory chip? If history has taught me anything, it is that only time will tell.

Credits

Williams-tube by Sk2k52 Licensed under GFDL via Wikimedia Commons.

Mercury Delay Line CC BY-SA 3.0, http://ift.tt/1R3uA2M

Dekatron Image by Dieter Waechter – http://ift.tt/1UPTj0w, Attribution, http://ift.tt/1R3uCYk

Ferrite core memory by Orion 8 – combined from Magnetic core memory card.jpg and Magnetic core.jpg. Licensed under CC BY 2.5 via Wikimedia Commons.


Filed under: Hackaday Columns, Retrotechtacular
