Tuesday, December 31

Foam Board, Old Electronics, and Imagination Make Movie Magic

When it comes to building sets and props for movies and TV, it’s so easy to get science fiction wrong – particularly with low-budget productions. It must be tempting for the set department to fall back on the “get a bunch of stuff and paint it silver” model, which can make for a tedious experience for the technically savvy in the audience.

But low-budget does not necessarily mean low production values if the right people are involved. Take [Joel Hartlaub]’s recent work building sets for a crowdfunded sci-fi film called Infinitus. It’s a post-apocalyptic story that needed an underground bunker with a Fallout vibe to it, and [Joel] jumped at the chance to hack the sets together. Using mainly vintage electronic gear and foam insulation boards CNC-routed into convincing panels, he built nicely detailed control consoles for the bunker. A voice communicator was built from an old tube-type table radio case with some seven-segment displays, and the chassis of an old LCD projector made a convincing portable computer terminal. The nicest hack was for the control panel of the airlock door. That used an old TDD, or telecommunications device for the deaf. With a keyboard and a VFD display, it fit right into the feel of the set. But [Joel] went the extra mile to make it a practical piece, by recording the modulated tones from the acoustic coupler and playing them back, to make it look as if a message was coming in. The airlock door looks great too.

Like many hacks, it’s pretty impressive what you can accomplish with a deep junk pile and a little imagination. But if you’ve got a bigger budget and you need some computer displays created, we know just the person for the job.

[Matt] tipped us off to this one. Thanks!

A Soft Robotic Insect That Survives the Fly Swatter

Swarms of robotic insects that can’t simply be swatted away may no longer be the stuff of science fiction and Black Mirror episodes. A team from EPFL’s School of Engineering has developed a robotic insect, dubbed the DEAnsect, that propels itself along at 3 cm/s.

What makes this robot unique is its exceptional robustness. Two versions were initially developed: one tethered by ultra-thin wires, which can be squashed with a shoe without impairing its function, and the other fully wireless and autonomous. The robot weighs less than 1 gram and is equipped with a microcontroller and photodiodes to recognize black and white patterns.

The insect is named for its dielectric elastomer actuators (DEAs), an artificial muscle that propels it with vibrations and enables it to move lightly and quickly.

The DEAs are made of an elastomer membrane wedged between soft electrodes that are attracted to each other when a voltage is applied, compressing the membrane. The membrane returns to its original shape when the voltage is turned off. Movement is generated by switching the voltage on and off over 400 times per second. The team reduced the thickness of the membranes and developed soft, highly conductive electrodes only several molecules thick using nanofabrication techniques. They plan on fitting even more sensors and emitters to allow the insects to communicate directly with one another for greater swarm-like activity.

Fail of the Week: Ambitious Vector Network Analyzer Fails To Deliver

If you’re going to fail, you might as well fail ambitiously. A complex project with a lot of subsystems has a greater chance of at least partial success, as well as providing valuable lessons in what not to do next time. At least that’s the lemonade [Josh Johnson] made from his lemon of a low-cost vector network analyzer.

For the uninitiated, a VNA is a versatile test instrument for RF work that allows you to measure both the amplitude and the phase of a signal, and it can be used for everything from antenna and filter design to characterizing transmission lines. [Josh] decided to port a lot of functionality for his low-cost VNA to a host computer and concentrate on the various RF stages of the design. Unfortunately, [Josh] found the performance of the completed VNA to be wanting, especially in the phase measurement department. He has a complete analysis of the failure modes in his thesis, but the short story is poor filtering of harmonics from the local oscillator, unexpected behavior by the AD8302 chip at the heart of his design, and calibration issues. Confounding these issues was the time constraint; [Josh] might well have gotten the issues sorted out had the clock not run out on the school year.

After reading through [Josh]’s description of his project, which was a final-year project and part of his thesis, we feel like his rating of the build as a failure is a bit harsh. Ambitious, perhaps, but with a spate of low-cost VNAs coming on the market, we can see where he got the inspiration. We understand [Josh]’s disappointment, but there were a lot of wins here, from the excellent build quality to the top-notch documentation.

The 2010s: Decade of the exoplanet

Artist's conception of Kepler-186f, the first Earth-size exoplanet found in a star's "habitable zone." (credit: NASA/Ames/SETI Institute/JPL-Caltech)

The last ten years will arguably be seen as the "decade of the exoplanet." That might seem like an obvious thing to say, given that the discovery of the first exoplanet was honored with a Nobel Prize this year. But that discovery happened back in 1995—so what made the 2010s so pivotal?

One key event: 2009's launch of the Kepler planet-hunting probe. Kepler spawned a completely new scientific discipline, one that has moved from basic discovery—there are exoplanets!—to inferring exoplanetary composition, figuring out exoplanetary atmosphere, and pondering what exoplanets might tell us about prospects for life outside our Solar System.

To get a sense of how this happened, we talked to someone who was in the field when the decade started: Andrew Szentgyorgyi, currently at the Harvard-Smithsonian Center for Astrophysics, where he's the principal investigator on the Giant Magellan Telescope's Large Earth Finder instrument. In addition to being famous for having taught your author his "intro to physics" course, Szentgyorgyi was working on a similar instrument when the first exoplanet was discovered.


Hurricanes, climate change, and the decline of the Maya


The year is 150 CE. It’s a humid summer day in Muyil, a coastal Mayan settlement nestled in a lush wetland on the Yucatan Peninsula. A salty breeze blows in from the gulf, rippling the turquoise surface of a nearby lagoon. Soon, the sky darkens. Rain churns the water, turning it dark and murky with stirred-up sediment. When the hurricane hits, it strips leaves off the mangroves lining the lagoon’s sandy banks. Beneath the tumultuous waves, some of those leaves drift gently downward into the belly of the sinkhole at the lagoon’s center.

Nearly two millennia later, a team of paleoclimatologists has used sediment cores taken from Laguna Muyil’s sinkhole to reconstruct a 2,000-year record of hurricanes that have passed within 30 kilometers of the site. Richard Sullivan of Texas A&M presented the team's preliminary findings this month at AGU’s Fall Meeting. The reconstruction shows a clear link between warmer periods and an increased frequency of intense hurricanes.

This long-term record can help us better understand how hurricanes affected the civilization that occupied the Yucatan Peninsula for thousands of years. It also provides important information to researchers hoping to understand how hurricanes react to long-term climate trends in light of today’s changing climate.


Reverse Engineer PCBs with SprintLayout

[Bwack] had some scanned pictures of an old Commodore card and wanted to recreate PC boards from it. It’s true that he could have just manually redrawn everything in a CAD package, but that’s tedious. Instead, he used SprintLayout 6.0 which allows you to import pictures and use them as a guide for recreating a PCB layout.

You can see the entire process including straightening the original scans. There are tools that make it very easy to place new structures over the original scanned images.

One might think the process could be more automated, but it looks as though every piece needs to be touched at least once. Even so, it is still easier than trying to eyeball everything together.

Most of the video is sped up, which makes it look as though he’s really fast. Your speed will be less, but it is still fairly quick to go from a scan to a reasonable layout.

The software is not free, but you can do something somewhat similar in KiCAD. The trick is to get the image scaled perfectly and convert it to a component on a user layer. Then you can add the new component and enable the user layer to see the image as you work. There’s even a repository of old boards recreated in KiCAD.

There are probably an infinite number of ways to attack this. An older version of SprintLayout helped with the Re-Amiga 1200.

Your TS80 – Music Player

By now most readers will be familiar with the Miniware TS100 and TS80 soldering irons, compact and lightweight temperature-controlled soldering tools that have set a new standard at the lower-priced end of the decent soldering iron market. We know they have an STM32 processor, a USB interface, and an OLED display, and that there have been a variety of alternative firmwares produced for them.

Take a close look at the TS80, and you’ll find the element connector is rather familiar. It’s a 3.5 mm jack plug, something we’re more used to as an audio connector. Surely audio from a soldering iron would be crazy? Not if you are [Joric], who has created a music player firmware for the little USB-C iron. It’s hardly a tour de force of musical entertainment and it won’t pull away the audiophiles from their reference DACs, but it does at least produce a recognisable We Wish You A Merry Christmas as you’ll see from the video below the break.

Since the TS100 arrived a couple of years ago we’ve seen a variety of inventive firmware for it. You may remember [Joric]’s previous triumph of a Tetris game for the iron, but our favourite is probably the TS100 oscilloscope.

Thanks [cahbtexhuk] for the tip.

36C3: Build Your Own Quantum Computer At Home

In any normal situation, if you read an article about building your own quantum computer, a fully understandable and natural reaction would be to call it clickbaity poppycock. But an event like the Chaos Communication Congress is anything but a normal situation, and you never know who will show up and what background they will come from. A case in point: security veteran [Yann Allain], who is in fact building his own quantum computer in his garage.

Starting with an introduction to quantum computing itself and what makes it so powerful, also in the context of security, [Yann] goes on to tell the story of building a quantum computer on his own. His goal was a stable machine he could “easily” build by himself in his garage, one that works at room temperature using trapped-ion technology. After a few iterations, he eventually created a prototype with KiCad that he cut into an empty ceramic chip carrier with a hobbyist CNC router, and that will survive being placed in a vacuum chamber. While he is still working on a DIY laser system, he feels confident that he is on the right track, and he estimates that his prototype will achieve 10-15 qubits with a single ion trap, with the aim of chaining several ion traps together later on.

As quantum computing is often depicted as cryptography’s doomsday device, it is of course a concern that someone might just build one in their garage. But improving future cryptographic systems also requires a full understanding of quantum computing itself, on a practical level as much as a theoretical one. Whether you want to replicate one yourself, at a rough cost of “below 15k Euro so far”, is of course a different story, but who knows, maybe [Yann] might become the Josef Prusa of quantum computers one day.

Cat Diner Now Under New Management

Most of these stories start with a cat standing on someone’s chest, begging for food at some obscene hour of the morning. But not this one. Chaz the cat is diabetic, and he needs to get his insulin with breakfast. The problem is that Chaz likes to eat overnight, which ruins his breakfast appetite and his chances at properly metabolizing the insulin. [Becky] tried putting the bowl away before bed, but let’s face it — it’s more fun to solve a problem once than to solve the same problem every night.

[Becky]’s solution was to design and print a bowl holder with a lid, and to cover the bowl when the cat diner is closed using a small servo and a NodeMCU. It looks good, and it gets the job done with few components. Chaz gets his insulin, [Becky] gets peace of mind, and everybody’s happy. This isn’t going to work for all cats, because security is pretty lax. But Chaz is a senior kitty and therefore disinterested in pawing at the lid to see what happens. Claw your way past the break to see [Becky]’s build/demo video featuring plenty of cat tax coverage.
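The firmware itself isn’t shown in the post, but the logic is about as simple as connected-gadget code gets. Here’s a rough, purely illustrative sketch of the idea in MicroPython for an ESP8266-based NodeMCU; the pin, servo duty values, and feeding schedule are all invented:

import time
import ntptime
from machine import Pin, PWM

servo = PWM(Pin(14), freq=50)       # hobby servos want a ~50 Hz signal

def set_lid(open_lid):
    # ~2 ms pulse for one end of travel, ~1 ms for the other
    # (duty is out of 1023 at 50 Hz, so 1 ms is roughly 51 and 2 ms is 102)
    servo.duty(102 if open_lid else 51)

ntptime.settime()                   # set the clock; assumes WiFi is already up

while True:
    hour = time.localtime()[3]
    # Diner open 7 AM to 9 PM, closed overnight so the breakfast appetite
    # (and the insulin dose that rides on it) stays on schedule.
    set_lid(7 <= hour < 21)
    time.sleep(60)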

We’ve seen a lot of cat feeding apparatus around here, but few that solve a specific problem like this one. If it’s overengineering and cat metrics you want, come and get it.

36C3: SIM Card Technology From A to Z

SIM cards are all around us, and with the continuing growth of the Internet of Things spawning technologies like NB-IoT, that may soon be true almost literally. But what do we really know about them, their internal structure, and their communication protocols? And by extension, their security? To shine some light on these questions, open source and mobile device titan [LaForge] gave an introductory talk about SIM card technologies at the 36C3 in Leipzig, Germany.

Starting with a brief history lesson on the early days of cellular networks based on the German C-Netz, and the origin of the SIM card itself, [LaForge] goes through the main specification and technology parts of each following generation from 2G to 5G. Covering the physical basics, I/O interfaces, communication protocols, and the file system located on the SIM card, you’ll get the answer to “what on Earth is PIN2 for?” along the way.

Of course, a talk like this at a CCC event wouldn’t be complete without a deep and critical look at the security side as well. Considering how common over-the-air updates have become, both for the software and, thanks to SIMs mostly running Java nowadays, for the feature set, there certainly is something to look at.

Injecting the flu vaccine into a tumor gets the immune system to attack it


A number of years back, there was a great deal of excitement about using viruses to target cancer. Many viruses explode the cells that they've infected in order to spread to new ones. Engineering those viruses so that they could only grow in cancer cells would seem to provide a way of selectively killing these cells. And some preliminary tests were promising, showing massive tumors nearly disappearing.

But the results were inconsistent, and there were complications. The immune system would respond to the virus, limiting our ability to use it more than once. And some of the tumor killing seemed to be the result of the immune system, rather than the virus.

Now, some researchers have focused on the immune response, inducing it at the site of the tumor. And they do so by a remarkably simple method: injecting the tumor with the flu vaccine. As a bonus, the mice it was tested on were successfully immunized against the flu, too.


Monday, December 30

Focus Stacking For Tiny Subjects

Focus stacking is a photographic technique in which multiple exposures are taken of a subject, with the focus distance set to different lengths. These images are then composited together to create a final image with a greater depth of field than is possible with a single exposure. [Peter Lin] built a rig for accurate focus stacking with very small subjects.

The heart of the rig is a motion platform consisting of a tiny stepper motor fitted with a linear slide screw. This is connected to an Arduino or PIC with a basic stepper driver board. While the motor does not respond well to microstepping or other advanced techniques, simply driving it properly can give a resolution of 15 μm per step.

The motor/slide combination is not particularly powerful, and thus cannot readily be used to move the camera or optics. Instead, the rig is designed for photography of very small objects, in which case the rail moves the subject itself rather than the camera.
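The stacking itself happens later in software. [Peter Lin]’s post is about the motion rig rather than the processing, but the usual approach is simple enough to sketch: for every pixel, keep the value from whichever frame was sharpest at that spot. A minimal, illustrative Python version with OpenCV (not his code, and it assumes the frames are already aligned):

import glob
import cv2
import numpy as np

# Load the aligned exposures (alignment is assumed; real stacks often need
# a registration pass first, e.g. with cv2.findTransformECC).
frames = [cv2.imread(p) for p in sorted(glob.glob("stack/*.jpg"))]

# Estimate the local sharpness of each frame with a Laplacian filter.
sharpness = []
for img in frames:
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F, ksize=5))
    sharpness.append(cv2.GaussianBlur(lap, (0, 0), 3))  # smooth the sharpness map

# For every pixel, keep the value from whichever frame was sharpest there.
best = np.argmax(np.stack(sharpness), axis=0)
result = np.zeros_like(frames[0])
for i, img in enumerate(frames):
    result[best == i] = img[best == i]

cv2.imwrite("stacked.jpg", result)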

It’s a tidy build that would serve well for anyone regularly doing macro focus stack photography. If you’ve been trying to better photograph your insect collection, this one is for you. It’s a valuable technique and one that applies to microscopy too. Video after the break.

Court backs Comcast, puts Maine’s à la carte cable law on hold

A Comcast van in Sunnyvale, California, in November 2018. (credit: Getty Images | Andrei Stanescu)

A whole slew of cable companies are notching up a victory in a lawsuit against the state of Maine that seeks to block a recent law that would require à la carte cable offerings.

Comcast spearheaded the coalition of companies, which filed the suit in September. The Maine law, the first of its kind in the nation, was invalid for two reasons, the suit argued: first, because it pre-empts federal communications law, and second, because it violates companies' First Amendment rights. Comcast was joined by more than a dozen other plaintiffs, including its own NBCUniversal subsidiary, CBS, Viacom (which had not yet completed its merger with CBS), Disney, Fox, A&E, Discovery, and Hearst.

As is common in such suits, the plaintiffs first sought an injunction that would block the state from enforcing the law while the rest of the legal process gets sorted out. District Judge Nancy Torresen granted the injunction in a ruling (PDF) issued just before Christmas.


NIST digitized the bullets that killed JFK


It has been more than 50 years since the assassination of President John F. Kennedy shocked the nation, but the case still generates considerable public interest—particularly fragments from the bullets that killed the president, which have been preserved in a temperature and humidity-controlled vault at the National Archives and Records Administration in Washington, DC, for decades. Scientists at the National Institute of Standards and Technology (NIST) recently teamed up with forensic experts at the National Archives to digitize the bullets, the better to preserve their features for the conspiracy theorists of tomorrow. All the data should be available in the National Archives' online catalog sometime in early 2020.

There are two fragments of the bullets that killed JFK—one that hit him in the neck and another that hit him in the back of the head—as well as the so-called "stretcher bullet." That's the bullet that struck the president and also Texas Governor John Connally, found lying near the latter's stretcher at the hospital. Also in the archives: two bullets used in a test firing of the assassin's rifle for forensic matching purposes.

The curators from the National Archives were on site while all the analysis was being done, locking up the precious artifacts in a safe every night to ensure their safety. The biggest challenge, according to NIST's Thomas Brian Renegar, was figuring out how to make measurements in sufficient detail to create the kind of 3D models they needed. For instance, "How do we hold the artifacts safely and securely?" he told Ars. "We don't want them moving while we're doing the scans, but we need to hold them carefully so as not to damage the artifacts." The fragments in particular are also badly warped and twisted, making surface scanning difficult.


Jeremy Cook is Living His Strandbeest Dream

The first thing Jeremy Cook thought when he saw a video of Theo Jansen’s Strandbeest walking across the beach was how incredible the machine looked. His second thought was that there was no way he’d ever be able to build something like that himself. It’s a feeling that most of us have had at one time or another, especially when starting down a path we’ve never been on before.

But those doubts didn’t keep him from researching how the Strandbeest worked, or stop him from taking the first tentative steps towards building his own version. It certainly didn’t happen overnight. It didn’t happen over a month or even a year, either.

ClearCrawler at the 2019 Hackaday Superconference

For those keeping score, his talk at the 2019 Hackaday Superconference, “Strandbeests: From Impossible Build to Dominating My Garage”, is the culmination of over six years of experimentation and iteration.

His first builds could barely move, and when they did, it wasn’t for long. But the latest version, which he demonstrated live in front of a packed audience at the LA College of Music, trotted across the stage with an almost otherworldly smoothness. To say that he’s gotten good at building these machines would be something of an understatement.

Jeremy’s talk is primarily focused on his Strandbeest creations, but it’s also a fascinating look at how a person can gradually move from inspiration to mastery through incremental improvements. He could have stopped after the first, second, or even third failure. But instead he persisted to the point he’s an expert at something he once believed was out of his reach.

Learning to Crawl

There’s a well known Chinese proverb that, roughly translated, states “a journey of a thousand miles begins with a single step.” It’s something to keep in mind any time you take on a challenge, but it rings especially true for anyone looking to build a Strandbeest. Rather than trying to tackle the entire machine at once, Jeremy thought a reasonable enough approach to constructing a multi-legged walking robot was to first build a single leg and understand how it operates.

One of the early wooden designs.

With one leg built and working, the next step was of course to build more of them. When he had four assembled, it was time to design a base to mount them on, and then outfit it with electric motors to get things moving.

Unfortunately this first Strandbeest, made of wood and roughly the size of a golf cart, never worked very well. Jeremy attributes its failure to a number of issues which he would eventually learn to solve, such as imprecision in the linkages, excessive friction, and undersized motors. That first build may have failed as a walker, but it served as a fantastic learning experience.

For the second Strandbeest Jeremy switched over to cutting the parts out of MDF, but the contraption was still too heavy. This version was even less successful than the first, and it soon fell apart. It seemed clear at this point that the way forward was to scale the design down to a more manageable size.

The Next Generation

Once he shrunk the walker’s design down to tabletop scale, Jeremy really started seeing some progress. It still took a few iterations to get something that could move around without jamming up or rattling itself to pieces, but with parts that could be accurately cut on a CNC router and the addition of a new geared drive system, these smaller Strandbeests really started to come alive.

Smaller, gear-driven walkers proved successful

Not only did they perform much better, but the eventual switch to clear acrylic gave them a very distinctive look. Around this time, Jeremy also started to add some anthropomorphic features, like articulated “heads” that housed cameras or LED “eyes”. These features not only gave his bots the ability to emote, but also marked a clear separation between his creations and those of Theo Jansen, whose designs were only getting larger and more alien as time went on.

The latest and greatest of these acrylic Strandbeests is the ClearCrawler, which takes into account all the lessons Jeremy has learned over the years. Powered by an Arduino Nano and controllable via a custom handheld remote using nRF24L01 modules, this walker is easily expandable and provides an excellent platform for further research and exploration.

Staying Humble

Despite the leaps and bounds that Jeremy has made with his Strandbeests, he still remains in awe of the original wind-powered walkers that Theo Jansen unveiled all those years ago. If anything, he says he has more respect for those creations now than when he first saw them. Looking at it with no knowledge of how it works, you’ll of course be impressed. But once you understand the mechanisms involved, and just how hard it is to build and operate these creations, you realize what a monumental accomplishment it really was.

Which is perhaps the real lesson to be learned after watching Jeremy’s talk: there’s always more to learn and be impressed by. Even if you’ve been working on a particular project for years and now are at the point where you’re giving presentations on it at a hardware hacking conference, don’t be surprised if you still find yourself scratching your head from time to time. Rather than being discouraged, use the experience as inspiration to keep pushing forward and learning more.

Linux Fu: Leaning Down with exec

Shell scripting is handy and with a shell like bash it is very capable, too. However, shell scripting isn’t always very efficient. Think about it. If you run grep or tr or sort to do some operation in a shell script, you are spawning a whole new process. That takes time and resources. But there are some answers to reducing — but not eliminating — the problem.

Have you ever written a program like this (in any language, but I’ll use C):

int foo(void)
{
  ...
  bar();

}

You hope the compiler doesn’t write assembly code like this:

_foo: 
....

      call _bar
      ret

Most optimizers should pick up on the fact that you can convert a call like this to a jump and let the ret statement in _bar return to foo’s caller. However, shell scripts are not that smart. If you have a shell script called MungeData and it calls another program or shell script called PostProcess on its last line, then you will have at one time three processes in play: your original shell, the shell running MungeData, and either the PostProcess program or a shell running the script. That’s not to mention the processes spawned to do things inside PostProcess itself. So what do you do?

Enter Exec

There are a few possible answers to this, but in the particular case where one shell script calls another program or script at the end, the answer is easy. Use exec:

#!/bin/bash
# Do stuff here
...
# Almost done
exec PostProcess

This tells the shell to reuse the current process for PostProcess. Nothing that appears after the exec will run because the current process is wiped out. When PostProcess completes, the original process that called our script will resume. This is pretty much the same as the call/ret to jump optimization in C.

Built Ins

If you look at the bash manual, some things are built in and some are not. Using built ins ought to be faster than spawning a new program. For example, consider a line like:


if [ $a == $b ]

Some shells use a program named “test” to handle the square brackets. This causes a new program to launch. Modern bash provides this as a built in to help speed script execution and make it more efficient. However, you can disable it if you want to benchmark the difference. In general, you can disable a built in using “enable -n XXX” where XXX is the built in you want to disable. Use no options to enable it. Just entering the command with no arguments at all will give you a list of built in commands or use the -p option, if you prefer.

However, there’s more to it than that. If you have some common operation that takes a lot of overhead, you can write the code in a language such as C and ask the shell to load it as a shared object and then treat it as a built in. The technique is a little involved, but it shows the versatility of the shell. You can find an example that adds a few built in commands to bash in this article. For example, the code posted makes things like cat and tee part of the shell, as well as creating new commands.

Exotic Solutions

We’ll admit, that last solution is a bit exotic. However, there are other things you can do. You might create a persistent server and communicate with it using a named pipe to avoid repeatedly spawning new processes. When disks were slow, you could experiment with keeping frequently used programs on a RAM disk. Today, caching ought to do that almost automatically, but perhaps not in every scenario.

Sometimes just cleaning up your code can help. Imagine this:

cat "$1" | grep "$target"

This spawns two processes, one for cat and one for grep. Why not just say:

grep "$target" "$1"

Of course, the ultimate is to simply not use a shell script. Almost any programming language will have a richer set of things it can do without launching an external program. A compiled or semi-compiled language is likely to be faster and will even help you optimize.
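For instance, the cat-and-grep pipeline above becomes a single Python process with no external programs spawned at all; a quick sketch:

#!/usr/bin/env python3
# Same job as: grep "$target" "$1" -- but no extra processes are spawned.
import re
import sys

pattern = re.compile(sys.argv[2])    # what $target held
with open(sys.argv[1]) as f:         # what $1 held
    for line in f:
        if pattern.search(line):
            print(line, end="")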

Shell scripts are useful to a point. It is fun, too, to see just how far you can stretch them. However, if you are really that worried about efficiency or speed, this might be the best answer of all.

The one video game my kids played all year long


I was cooking this weekend when my eight-year-old son looked up from the couch, where he was listening to the Stardew Valley game soundtrack on Apple Music.

"Dad," he announced, "I'm going to read you the name of every song on this album."

"Cool," I said as minced the garlic.


Team that made gene-edited babies sentenced to prison, fined

Chinese geneticist He Jiankui speaks during the Second International Summit on Human Genome Editing at the University of Hong Kong, days after he claimed to have altered the genes of the embryos of a pair of twin girls before birth, prompting outcry from scientists of the field. (credit: S.C. Leung/SOPA Images/LightRocket via Getty Images)

On Monday, China's Xinhua News Agency reported that the researchers who produced the first gene-edited children have been fined, sanctioned, and sentenced to prison. According to the Associated Press, three researchers were targeted by the court in Shenzhen, the most prominent of them being He Jiankui. He, a relatively obscure researcher, shocked the world by announcing that he had edited the genomes of two children who had already been born by the time of his public disclosure.

He Jiankui studied for a number of years in the United States before returning to China and starting some biotech companies. His interest in gene editing was only disclosed to a small number of advisers, and his work involved a very small team. Some of them were apparently at his companies, while others were at the hospital that provided him with the ability to work with human subjects. After his work was disclosed, questions were raised about whether the hospital fully understood what He was doing with those patients. The court determined that He deliberately violated Chinese research regulations and fabricated ethical review documents, which may indicate that the hospital was not fully aware.

He's decision to perform the gene editing created an ethical firestorm. There had been a general consensus that the CRISPR technology he used for the editing was too error-prone for use on humans. And, as expected, the editing produced a number of different mutations, leaving us with little idea of the biological consequences. His target was also questionable: He eliminated the CCR5 gene, which is used by HIV to enter cells but has additional, not fully understood immune functions. The editing was done in a way that these mutations and their unknown consequences would be passed on to future generations.


Surveillance camera company Wyze confirms leak of user data

Wyze web-connected personal surveillance camera, August 2019. (credit: Smith Collection | Gado | Getty Images)

Loads of folks found brand new Wyze surveillance cameras under their trees or in their stockings this Christmas. And on Boxing Day, the company itself unwrapped a whole new world of trouble for everyone who uses its products, confirming a data leak that may have exposed personal data for millions of users over the course of a few weeks.

Wyze first found out about the problem in the morning on December 26, company cofounder Dongsheng Song said in a corporate blog post. The company's investigation confirmed that user data was "not properly secured" and was exposed from December 4 onward.

The database in question was basically a copy of the production database that Wyze created to work with, Song explained. Data points left exposed include user email addresses, camera nicknames, WiFi network information, Wyze device information, some tokens associated with Alexa integrations, and "body metrics for a small number of product beta testers."


36C3: All Wireless Stacks Are Broken

Your cellphone is the least secure computer that you own, and worse than that, it’s got a radio. [Jiska Classen] and her lab have been hacking on cellphones’ wireless systems for a while now, and in this talk she gives an overview of the wireless vulnerabilities and attack surfaces that they bring along. While the talk provides some basic background on wireless (in)security, it also presents two new areas of research that she and her colleagues have been working on over the last year.

One of the new hacks is based on the fact that a phone that wants to support both Bluetooth and WiFi needs to figure out a way to share the radio, because both protocols use the same 2.4 GHz band. And so it turns out that the Bluetooth hardware has to talk to the WiFi hardware, and it shouldn’t entirely surprise you that when [Jiska] gets into the Bluetooth stack, she’s able to DoS the WiFi. What this does to the operating system depends on the phone, but many of them just fall over and reboot.

Lately [Jiska] has been doing a lot of fuzzing on the cell phone stack, enabled by the work of one of her students, [Jan Ruge], on emulation, codenamed “Frankenstein”. The coolest thing here is that the emulation runs in real time and can be threaded into the operating system, enabling full-stack fuzzing. More complexity means more bugs, so we expect to see a lot more coming out of this line of research in the next year.

[Jiska] gives the presentation in a tinfoil hat, but that’s just a metaphor. In the end, when asked about how to properly secure your phone, she gives out the best advice ever: toss it in the blender.

Wired for sound: How SIP won the VoIP protocol wars


Update: We're in the last throes of winter break 2019, which means most of Ars' home office phones can stay dormant for a few more days. As such, we've been resurfacing a few classics from the archives—the latest being this look at how SIP (Session Initiation Protocol) won the VoIP protocol wars once upon a time. This story first appeared on December 8, 2009, and it appears unchanged below.

As an industry grows, it is quite common to find multiple solutions that all attempt to address similar requirements. This evolution dictates that these proposed standards go through a stage of selection—over time, we see some become more dominant than others. Today, the Session Initiation Protocol (SIP) is clearly one of the dominant VoIP protocols, but that obviously didn't happen overnight. In this article, the first of a series of in-depth articles exploring SIP and VoIP, we'll look at the main factors that led to this outcome.

A brief history of VoIP

Let's go back to 1995 in the days prior to Google, IM, and even broadband. Cell phones were large and bulky, Microsoft had developed a new Windows interface with a "Start" button, and Netscape had the most popular Web browser. The growth of the Internet and data networks prompted many to realize that it's possible to use the new networks to serve our voice communication needs while substantially lowering the associated cost. The first commercial solution for Internet VoIP came from a company called VocalTec; their software allowed two people to talk with each other over the Internet. One would make a local call to an ISP via a 28.8K or 33.6K modem and be able to talk with friends even if they lived far away. I remember trying out this software, and the sound was definitely below acceptable quality. (It frequently sounded like you were attempting to speak while submerged in a swimming pool.) However, the software successfully connected two people and introduced real-time voice conversation for a bandwidth-constrained network.


Parallel Pis for Production Programming; Cutting Minutes and Dollars Off of Assembly

Assembly lines for electronics products are complicated beasts, often composed of many custom tools and fixtures. Typically a microcontroller must be programmed with firmware, and the circuit board tested before assembly into the enclosure, followed by functional testing afterwards before putting it in a box. These test platforms can be very expensive, easily into the tens of thousands of dollars. Instead, this project uses a set of 12 Raspberry Pi Zero Ws in parallel to program, test, and configure up to 12 units at once before moving on to the next stage in assembly.

Fixing Fixture Bottlenecks

The company where I work, Propeller Health, develops IoT products that are assembled in a way similar to many other companies; there is a circuit board and a plastic enclosure. The bare PCBs go through SMT twice (components on the front and back), then they go through ICT (In-Circuit Test) where they are programmed and pogo pins on each of the test points verify all components on the circuit. They are also given a unique MAC address for Bluetooth based on a 2D barcode sticker placed on the board after SMT. After that they are assembled into the plastics and run through a functional test fixture before they are put into inventory mode and boxed. We have a product line of around a dozen different devices, and each uses a different PCB and enclosure.

Production runs need test and programming fixtures, a topic I wrote about a few years ago. To increase scale and reduce costs we started working more closely with our contract manufacturer to identify and fix the bottlenecks in the assembly line process, and we noticed that the ICT stage was taking 6 minutes for a panel of 6 boards. The ICT section itself was relatively fast, but the firmware programming and MAC programming were taking a long time because the test platform was not capable of multiple simultaneous serial port connections.

This was a $30k fixture with 6 Segger programmers and 6 Keyence scanners, sitting on top of a $100k Teradyne platform, and it was our biggest bottleneck and greatest expense. Further, each device in our product line requires different fixturing. The combination of these expensive fixtures and slow cycle time was making our COGS (Cost Of Goods Sold) untenable.

On an assembly line, time is literally money, as every cycle time for every operator on the line is measured and added to the cost of the product. Reducing a cycle by even a second adds up quickly over a volume of thousands. Over 10k units, a 2 second reduction works out to roughly 6 hours of labor; at $15 per operator hour that's about $0.01 per unit, and CMs, especially US-based ones, are charging more than $15/hour. Having a 6 minute cycle time for 6 pieces was ripe for improvement.

Stage One – Arduino

The first stage was more a proof of concept. Could we pull some of the functionality off of the expensive ICT fixture onto a cheaper, faster fixture so that we could scale in serial instead of buying another fixture to scale in parallel?

For this stage we implemented a feature in our firmware we call self-test, which is similar to a JTAG boundary scan. Rather than placing pogo pins on every net, we put a feature into the firmware to run a self-test on as many components as it could, and report the results over serial. This way we’d be able to connect only with the power and UART pins and get a complete and very fast readout of what exactly failed. The solution was eight Arduinos connected via a USB hub to a tablet PC running a custom Python script and interface.

Diagram of the self-test fixture.

A single 40-pin IDE connector was enough to provide power, an RGB LED, RX, TX, and a single button that would kick off the cycle for all 8 at once. The Arduino would request a self-test from the DUT (Device Under Test), which would test its components and report back. Tests included verifying that sensors were reading values within an expected range, but also turning on the LED and buzzer and measuring the voltage drop to verify that outputs worked as well.

The microcontroller executes a series of tests on its own components and verifies they are within preset ranges, then reports success or failure.
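The tablet’s side of that conversation is mostly serial plumbing. The production script isn’t published, so the port name, baud rate, and report format below are assumptions, but a stripped-down sketch of the idea with pyserial looks something like this:

import serial

def self_test(port):
    with serial.Serial(port, 115200, timeout=5) as dut:
        dut.write(b"SELFTEST\n")               # ask the firmware to test itself
        results = {}
        while True:
            line = dut.readline().decode(errors="ignore").strip()
            if not line or line == "DONE":
                break
            if ":" not in line:                # ignore anything unexpected
                continue
            name, status = line.split(":", 1)  # e.g. "ACCEL:PASS"
            results[name] = status
    return results

if __name__ == "__main__":
    report = self_test("/dev/ttyUSB0")
    print(report)
    print("PASS" if all(v == "PASS" for v in report.values()) else "FAIL")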

This worked, and proved the concept of creating a box with most of the electronics paired with a simple customized fixture for the panel, but it wasn’t enough. Pulling out some of the circuit tests was insufficient to reduce time significantly. We needed to pull out more tasks.

Stage Two – Raspberry Pi

One of the more expensive parts of the ICT fixture was the Segger J-Link programmers. If we could pull programming out of ICT with something cheaper, we could reduce fixture costs significantly. Many chip manufacturers offer programming at the factory so that your microcontrollers come on a reel with your firmware pre-programmed. This is useful when you don’t trust your manufacturer or don’t want to have a special fixture for programming, but it has an added cost, there’s a lead time, changing your firmware is more difficult, and you can’t use that part on any other product.

Programming in house had a lot of appeal for us, simplified our supply chain, and gave us more flexibility to change our firmware and shift inventory as needed. Enter the Raspberry Pi Zero W and OpenOCD. The Raspberry Pi is more than capable of running OpenOCD to program a microcontroller, and was surprisingly easy to configure and get running. It also had a serial port and GPIO for controlling WS2812B LEDs, and was small enough that it was easy to put a bunch in a box.
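Invoking OpenOCD from Python on the Pi is little more than a subprocess call once the config files are sorted out. The exact target part isn’t named here, so treat the interface and target configs below as placeholders rather than the production setup:

import subprocess

def flash(hex_path):
    # The interface config bit-bangs SWD on the Pi's GPIO header; the target
    # config is a placeholder for whatever micro is actually on the board.
    cmd = [
        "openocd",
        "-f", "interface/raspberrypi-native.cfg",
        "-c", "transport select swd",
        "-f", "target/nrf52.cfg",
        "-c", f"program {hex_path} verify reset exit",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode == 0, result.stderr

if __name__ == "__main__":
    ok, log = flash("firmware.hex")
    print("programmed OK" if ok else log)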

Block Diagram of Neptune

The connector expanded to a second 40-pin IDE cable, with support for up to 12 parallel units, the size of our largest panels. Each Pi runs completely independently, without communicating with each other. The only thing they have in common (besides the power supply) is that all 12 are connected to the same pushbutton on the front of the fixture. The button is used by the operator to start the programming cycle, but each Pi runs its process independently.

The prototype box contains all the components, wired together in a giant nest with Dupont cables that easily disconnected, in a custom laser-cut and glued acrylic enclosure for extra fragility.

Drawing on the Arduino experience further, the Raspberry Pi process would first program the unit, then verify it was programmed correctly by communicating over the UART, then execute the self-test on the device, which would test the accelerometer, microphone, IR sensors, LED, buzzer, etc, and report back the results and a PASS/FAIL status. We had accomplished our primary goal of removing a significant portion of the process time and cost from the ICT fixture, but we could go further.
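Tied together, each Pi’s cycle is a short, linear script: wait for the shared button, program, self-test, light the LED. A hedged outline of that flow (pin numbers, file names, and the helper details are illustrative, not the production code):

import subprocess
import serial
from gpiozero import Button
from rpi_ws281x import PixelStrip, Color

button = Button(17)            # all 12 Pis listen to the same physical button
led = PixelStrip(1, 18)        # one WS2812B status LED per Pi
led.begin()

def show(r, g, b):
    led.setPixelColor(0, Color(r, g, b))
    led.show()

def flash_ok():
    # OpenOCD invocation as sketched above
    return subprocess.run(
        ["openocd", "-f", "neptune.cfg",
         "-c", "program firmware.hex verify reset exit"]).returncode == 0

def self_test_ok():
    with serial.Serial("/dev/serial0", 115200, timeout=10) as dut:
        dut.write(b"SELFTEST\n")
        return b"PASS" in dut.read(256)

while True:
    button.wait_for_press()
    show(255, 255, 0)          # yellow: in progress
    if flash_ok() and self_test_ok():
        show(0, 255, 0)        # green: good board
    else:
        show(255, 0, 0)        # red: something failed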

Stage Three – Adding Features

A panel of 10 PCBs with unique MAC address stickers. The MAC address is scanned and programmed into the module.

With the addition of cameras to the fixture, we could read the MAC address on the sticker and use the serial port to program the MAC address into the unit. This allowed us to tie our process to the MAC address that was used to track the unit through the rest of the assembly line.

If you’re wondering why we didn’t pre-assign numbers and then have the operators put on the correct number, it comes down to reducing the chances for operator error. Giving the operator a sheet of stickers and telling them to put them on the units is way less error-prone than having them put specific stickers on specific units, and faster than printing out barcodes one at a time for individual units.

A fixture with 12 cameras for scanning barcodes. Notice the big green button in front of the fixture for starting a cycle.

Next we added the unique key stored on the unit for authenticated communication. The Pi generates a 64-byte key, programs and verifies that key on the sensor, then encrypts the key and stores it in the log file. That key gets decrypted on our servers and allows us to communicate with our sensors.
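That key-handling step can also be sketched in a few lines. The real wire commands and the cipher used for the at-rest copy aren’t disclosed, so the SETKEY/GETKEY exchange and the use of Fernet below are stand-ins, not the production scheme:

import os
import serial
from cryptography.fernet import Fernet

# Key for encrypting log entries, shared with the server; assumed to have
# been generated once with Fernet.generate_key().
LOG_CIPHER = Fernet(open("log_key.txt", "rb").read())

def provision_key(port):
    key = os.urandom(64)                                # 64-byte device key
    with serial.Serial(port, 115200, timeout=5) as dut:
        dut.write(b"SETKEY " + key.hex().encode() + b"\n")
        dut.write(b"GETKEY\n")
        echoed = dut.readline().strip()
        if echoed != key.hex().encode():                # read back and verify
            raise RuntimeError("key verification failed")
    return LOG_CIPHER.encrypt(key).decode()             # safe to drop in the log

if __name__ == "__main__":
    print(provision_key("/dev/serial0"))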

In addition, we added thorough logging; a system log tracks the version numbers of the script, config file, and firmware file, and a separate file logs each cycle with diagnostic data about each stage of the process. These logs are stored on a thumb drive, but also uploaded regularly to a secure FTP so that we can import the logs (and the encrypted keys) immediately, as well as keep tabs on our CM and monitor when they are running our jobs and help them diagnose any problems.

We also made all of this run off a thumb drive. On boot the Pi starts a Python script whose sole job is to wait for a thumb drive to be inserted, then run the script on that thumb drive. The thumb drive contains the main Python script, config file, firmware hex file, and logs. This way we have a single box that is completely agnostic to the assembly line, and then we have a dozen thumb drives that stay with the line for that particular device. We don’t need an interface to the Pi because we can just plug the thumb drive into a computer to configure it however needed. Anything that makes the job easier for operators makes the assembly line run smoother, and saying “plug these into the back of the box” worked pretty well.
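The boot-time watcher itself is only a handful of lines; a simplified version of the idea (the mount point and script name are assumptions) might look like this:

import os
import subprocess
import time

MEDIA = "/media/usb"     # where the drive ends up mounted (assumption)
JOB = "main.py"          # the per-product script carried on the drive

while True:
    job_path = os.path.join(MEDIA, JOB)
    if os.path.ismount(MEDIA) and os.path.exists(job_path):
        # Hand over to the line-specific script; its config, firmware image,
        # and logs all live next to it on the same thumb drive.
        subprocess.run(["python3", job_path], cwd=MEDIA)
        break
    time.sleep(1)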

At this point we could program the microcontroller, execute a self-test, program the MAC address, generate a unique key, output the status of the process to an LED (green for success, yellow for in progress, and red for failure), and save the log to a file. This shaved MINUTES off the cycle time. Before, a panel of 12 was split in two, then each half went through the ICT fixture for 6 minutes, for a total cycle time of around 12 minutes for 12 devices. After the change, we had the full panel of 12 take under a minute, then split and spend under 2 minutes each at ICT. We went from 12 in 12 minutes to 12 in 5 minutes, with more control over the process and greater transparency.

Stage Four – Neptune is Alive

The assembled Neptune box contains 12 Raspberry Pi Zero W, 12 LEDs, 12 USB cables, and cost less than $500 total.

After our first box worked, our CM asked for more so they could service multiple assembly lines, plus have backups and one for test/development. The design was formalized, we found a new enclosure that could be modified easily, and designed a PCB. Total cost for all components is around $500, and the PCBs are easy enough to assemble by hand in a couple hours. We called the project Neptune and the boxes started coming together. Now we have plenty of Neptune boxes, and we have a spec for our fixture developer that allows them to have a very simple circuit they can replicate over and over. Fixtures are rugged beasts that require special knowledge and experience that we don’t have and don’t want to develop, so we outsource that part. The most recent fixture worked immediately with no changes, which is usually nearly impossible.

Stage Five – Changing Cameras

A MAC address sticker is placed on every unit after SMT, but this address must be programmed into the microcontroller itself. To do this, a barcode scanner is used. This is a production environment where we are packing 12 scanners into a small space, and each scanner must accurately and reliably detect a 2D barcode about half a square inch in size, so not just any scanner will do.

Originally, on the suggestion of our CM, the fixtures were outfitted with Keyence scanners. After we discovered that these are $1400 each, we realized a huge potential for cost reduction. Each of our fixtures has 10-12 devices per panel, so replacing the Keyence scanners with $250 Zebra scanners was an immediate gain. The downside is that our Neptune PCB was designed for RS232 communication with the Keyence scanners using only 2 UART pins, and the Zebra scanners required 5V logic levels with hardware RTS/CTS pins as well. Fortunately we were able to build a cable with an inline circuit that contained an ATtiny84 for each of the cameras.

The adapter converts RS232 to 5V logic and triggers the Zebra camera.

With a small config file change, we can tell the system to use a slightly different communication protocol. The Pi sends a serial signal, which is stepped up to RS232 inside the Neptune box, then on the cable dongle it steps down to 5V and goes into the ATtiny, where it is parsed; if it is the expected trigger command, the ATtiny sends a pulse to the trigger pin of the Zebra scanner, which then scans the barcode and sends the result back to the dongle, where it is stepped up to RS232 and heads back to the Pi. The dongle meant that we didn’t have to change anything about the box, and the fixture didn’t need any additional circuitry either.
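From the Pi’s side, nothing much changes: it sends a trigger command and reads back the decoded barcode, with the dongle handling the level shifting and the trigger pulse. A hedged sketch, with the trigger byte and framing invented for illustration:

import serial

def read_mac(port="/dev/serial0"):
    with serial.Serial(port, 9600, timeout=3) as scanner:
        scanner.write(b"T\n")                 # dongle pulses the scanner trigger
        barcode = scanner.readline().strip()  # decoded sticker comes back as text
    return barcode.decode() or None

if __name__ == "__main__":
    print("MAC sticker:", read_mac())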

Results

The results of Neptune have saved the company a lot of money in fixtures and in unit cost, improved our scalability, given us greater transparency in our production, added features, and made development of new features easier. The Raspberry Pi was instrumental in this, as was OpenOCD. We learned a lot and developed some techniques that would likely be valuable to anyone else doing manufacturing of electronic devices. We’re still working out exactly how we can share this without compromising security or trade secrets, but this is our first step.

The 2010s were a veritable golden age of opening credits in television

Our picks for the decade's best TV main title sequences include Jessica Jones, Star Trek: Discovery, Manhattan, and The Chilling Adventures of Sabrina. (credit: Collage by Sean Carroll)

The era of peak TV has brought with it a veritable Golden Age of distinctive, innovative main title sequences. So much so, in fact, that there is an entire website, The Art of the Title, devoted to this burgeoning art form, featuring in-depth interviews with the creative people behind all those stunning title sequences. (Fair warning: it's way too easy to lose yourself down the rabbit hole on that site.)

We made a list of our favorite opening credits over the last decade and quickly noticed that two production houses in particular dominated our selections: Imaginary Forces—which produced the main titles for Counterpart, Stranger Things, and Jessica Jones, among others—and Elastic, the creative minds behind the title sequences for Halt and Catch Fire, Masters of Sex, Game of Thrones, and so forth. Along with Prologue (Star Trek: Discovery, Elementary), it's fair to say that these houses have played a major role in shaping the art form over the last ten years.

HBO had already flirted with an artier approach to opening credits with its Emmy-award winning sequence for Six Feet Under in 2001 (produced by Digital Kitchen) and the main title sequence for The Sopranos. "It was the premium networks, like HBO in the late 1990s, that began to set a higher standard for viewer expectations," Alan Williams, a director with Imaginary Forces, told Ars. "TV could be more than just TV. It could be more like film. That type of thinking translated over into other networks."


Image Sensor from Discrete Parts Delivers Glorious 1-Kilopixel Images

Chances are pretty good that you have at least one digital image sensor somewhere close to you at this moment, likely within arm’s reach. The ubiquity of digital cameras is due to how cheap these sensors have become, and how easy they are to integrate into all sorts of devices. So why in the world would someone want to build an image sensor from discrete parts that’s 12,000 times worse than the average smartphone camera? Because, why not?

[Sean Hodgins] originally started this project as a digital pinhole camera, which is why it was called "digiObscura." The idea was to build a 32×32 array of photosensors and focus light on it using only a pinhole, but that proved optically difficult as the small aperture greatly reduced the amount of light striking the array. The sensor, though, is where the interesting stuff is. [Sean] soldered 1,024 ALS-PT19 surface-mount phototransistors to the custom PCB along with two 32-channel analog multiplexers. The multiplexers are driven by a microcontroller to select each pixel in turn, one row and one column at a time. It takes a full five seconds to scan the array, so taking a picture hearkens back to the long exposures common in the early days of photography. And sure, it’s only a 1-kilopixel image, but it works.
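Turning those 1,024 readings back into a picture is the easy part. Assuming the values are streamed out over serial one sample per line (an assumption about the data path, not a detail from [Sean]’s write-up), the host-side reconstruction is only a few lines of Python:

import numpy as np
import serial
from PIL import Image

# Pull in 1,024 readings, one integer per line (data format is assumed).
with serial.Serial("/dev/ttyACM0", 115200, timeout=10) as port:
    samples = [int(port.readline()) for _ in range(32 * 32)]

pixels = np.array(samples, dtype=float).reshape(32, 32)
pixels -= pixels.min()                        # stretch to the full 0-255 range
pixels *= 255.0 / max(pixels.max(), 1)
img = Image.fromarray(pixels.astype(np.uint8))
img.resize((320, 320), Image.NEAREST).save("frame.png")   # scale up for viewing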

[Sean] has had this project cooking for a while – in fact, the multiplexers he used for the camera came up as a separate project back in 2018. We’re glad to see that he got the rest built, even with the recycled lens he used. One wonders how a 3D-printed lens would work in front of that sensor.