The fear of robots has been there from the beginning — where I’m including creations like Frankenstein’s monster. At a mythological level, robots are our dark reflection. (They often look like us.) They instantiate our creeping fear that our own technology will rise up to destroy us.
That destruction need not be malevolent. We may simply become obsolete. In that sense, Blade Runner and the ballad of John Henry are much closer than they might appear.
It isn’t that there’s no cause for concern. Still, I find it very odd that we’re more worried about murderous machines than about the murderous apes who build and wield them — a distinction writers like Isaac Asimov and Arthur C. Clarke understood. In 2001, HAL kills not because he’s evil but because he’s been given secret, paranoid instructions by the military.
Some people are terrified at the prospect of killer robots, but consider: if we must war — if it’s inevitable — then I’d rather a proxy one, where armies of weaponized Roombas slaughter each other on our behalf. The best defense against killer robots might be… robot-killing robots, which would require them to possess the same skills.
The question with all of these technologies, from nuclear weapons to nanotech, is how you stop them once they become not merely feasible but cheap. Even with a legal moratorium, there is still a strong incentive to cheat, if only out of suspicion that your enemy is — or might be.
Even so, they are all just tools. I can use a baseball bat to play ball with my nephew or to beat his head in. The difference, of course, is that, having beaten his head in, the baseball bat doesn’t go on to beat in everyone else’s.
Technology has become an existential threat. That is not unique to robots (or AI). In all scenarios, I’m still much more worried about what the humans will do with the machines than what the machines will do on their own.