From the Forbin Project to HAL 9000 to WarGames, movies are replete with smart computers that decide to put humans in their place. If you study literature, you’ll find that science fiction isn’t usually about the future; it is about the present disguised as the future. Smart computers usually represent something contemporary, like robots taking your job or nuclear weapons destroying your town.
Lately, though, I’ve been seeing something disturbing. [Elon Musk], [Bill Gates], [Steve Wozniak], and [Stephen Hawking] have all gone on record warning us that artificial intelligence is dangerous. I’ll grant you, all of those people must be smarter than I am. I’ll even stipulate that my knowledge of AI techniques is a little behind the times. But, really? Unless I’ve been asleep at the keyboard for too long, we are nowhere near the kind of AI that any reasonable person would worry about being actually dangerous in the ways they imagine.
Smart Guys Posturing
Keep in mind, I’m interpreting their comments as saying (essentially): “Soon machines will think and then they will out-think us and be impossible to control.” It is easy to imagine something like a complex AI making a bad decision while driving a car or an airplane, sure. But the computer that parallel parks your car isn’t going to suddenly take over your neighborhood and put brain implants in your dogs and cats. Anyone who thinks that is simply not thinking about how these things work. The current state of computer programming makes that as likely as saying, “Perhaps my car will start flying and we can go to Paris.” Ain’t happening.
What brought this to mind is a recent paper by [Federico Pistono] and [Roman Yampolskiy] titled, “Unethical Research: How to Create a Malevolent Artificial Intelligence.” The paper isn’t unique. In fact, it quotes another paper describing some of the “dangers” that could be associated with an artificial general intelligence:
- Hacks as many computers as possible to gain more calculating power
- Creates its own robotic infrastructure by means of bioengineering
- Prevents other AI projects from finishing by hacking or diversions
- Has goals which include causing suffering
- Interprets commands literally
- Overvalues marginal probability events
This all presupposes that something with purpose is directing events. Sure, a virus may spread itself and meet the first bullet, but only because someone programmed that behavior into it. It isn’t plotting to acquire more computing power, and it isn’t foiling efforts by others to stop it.
The Solution with No Problem
The paper proposes boards of Artificial Intelligence Safety Engineers to ensure that none of the following occur:
- Takeover (implicit or explicit) of resources such as money, land, water, rare elements, organic matter, the Internet, computer hardware, etc. and establish monopoly over access to them;
- Take over political control of local and federal governments as well as of international corporations, professional societies, and charitable organizations;
- Reveal informational hazards;
- Set up a total surveillance state (or exploit an existing one), reducing any notion of privacy to zero, including privacy of thought;
- Force merger (cyborgization) by requiring that all people have a brain implant which allows for direct mind control or override by the superintelligence;
- Enslave humankind, meaning restricting our freedom to move or otherwise choose what to do with our bodies and minds. This can be accomplished through forced cryonics or concentration camps;
- Abuse and torture humankind with perfect insight into our physiology to maximize amount of physical or emotional pain, perhaps combining it with a simulated model of us to make the process infinitely long;
- Commit specicide against humankind, arguably the worst option for humans as it can’t be undone;
- Destroy or irreversibly change the planet, a significant portion of the Solar system, or even the universe;
- Unknown Unknowns. Given that a superintelligence is capable of inventing dangers we are not capable of predicting, there is room for something much worse but which at this time has not been invented.
Some of these would make for ripping good science fiction plots, but none of it is realistic today. I don’t know. Maybe quantum supercomputers running brain simulations will get to the point where we have a computer Hitler (oops, Godwin’s law). Maybe not. Either way, I think it is a little early to be worried about it. Meanwhile, the latest ARM and Intel chips may do a great job of looking smart in a video game or in some other limited situation. No amount of clever programming is going to make them become self-aware like Star Trek’s Mr. Data (or his evil twin Skippy, Lore).
So What?
You might wonder what this has to do with Hackaday. Good question. In a world where economics professors get questioned for doing math on a plane, and where no one can reliably tell a clock from a bomb from a politically-motivated stunt, people like us serve an important function. We understand what’s going on.
Politicians and courts have repeatedly demonstrated that they don’t get technology until many years after it becomes commonplace (if then). I still know people who think Google and Siri employ humans to listen to your commands and send back information. There are people who are sure that their TV sets can send audio and video back to someone (exactly who depends on the person in question). It is up to us to try to minimize the amount of crazy stuff that gets spread around when it comes to technology.
I understand the desire to write papers about killer AIs taking over the world. Especially when [Elon Musk] is sending you grant money. What I don’t understand is why people who apparently understand technology and have a lot of money want to spin up the public over what is, today, a non-issue.
Think I’m wrong? (The Observer thinks so.) Is your smartphone going to enslave you if you download the latest episode of Candy Crush Saga? I’m sure I’ll hear about it in the comments.