Rogue Robots

Posted on 2014 September 18




Strong AI is like a cosmic lottery ticket: if we win, we get Utopia; if we lose, Skynet substitutes us out of existence. — Peter Thiel

Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? — Steve Wozniak

Computers keep getting smarter. Soon they may become so smart we won’t know what they’re up to anymore. Will this be a good thing, or not?

Computer scientists like to refer to this coming event as “The Singularity”, a moment beyond which all bets are off. Once computers are brighter than we are, there’s no telling what they’ll think up, invent, plan, etc. This could cause massive, unpredictable changes to society. And since computers process information much faster than do humans, an artificial intelligence (AI) would likely learn how to improve itself so quickly that the rest of us would soon be left far behind.

What might it be thinking? This would depend on its initial programming. Computers perform functions based on software commands. The program I’m using to write this essay has a fairly straightforward mandate to produce a few pages of written words based on what I type. More sophisticated applications are designed specifically to manage checking accounts or play streaming videos or control phone systems or run airports or search images for criminal suspects or perform surgeries. They have their distinct, limited purposes.

None of them can outthink ordinary people in day-to-day human activities. Not yet.

What would we want from the first true AI? If we told it, “Find and implement processes that benefit humankind,” we’d hope thereby that this Singularity would usher in a utopia for all. But what if the term “benefit” were poorly defined or we couldn’t agree on the definition? And what if the AI computer reckons that the “processes” it chooses require the death of millions who, perhaps in all innocence, stand in the way of the computer’s idea of the general benefit?

Writer Isaac Asimov proposed “The Three Laws of Robotics”, an attempt to corral the decisions of artificial minds so they wouldn’t harm people. In his science-fiction stories, a robot could not injure a human being (or, through inaction, allow one to come to harm); it had to obey human orders, unless obeying would violate the first law; and it had to protect its own existence, unless doing so would violate the first two laws.
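In software terms, the Three Laws amount to a priority-ordered list of vetoes. Here is a toy sketch of that structure in Python. Every name in it is invented for illustration, and the hard part (deciding what actually counts as “harm”) is hidden inside the input flags, which is exactly where a superintelligent interpreter could surprise us.

```python
# Toy illustration only: the Three Laws as an ordered series of vetoes.
# All names here are hypothetical; real "harm" judgments cannot be
# reduced to boolean flags, which is the essay's point.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool                 # would this action injure a person?
    disobeys_order: bool              # does it ignore a human command?
    obeying_would_harm_human: bool    # would following the order hurt someone?
    harms_self: bool                  # would it damage the robot?
    self_harm_required_by_laws: bool  # do laws 1 or 2 demand the self-sacrifice?

def permitted(a: Action) -> bool:
    # First Law: harming a human is an absolute veto.
    if a.harms_human:
        return False
    # Second Law: disobedience is vetoed, unless obedience would
    # have violated the First Law.
    if a.disobeys_order and not a.obeying_would_harm_human:
        return False
    # Third Law: self-harm is vetoed, unless the first two laws demand it.
    if a.harms_self and not a.self_harm_required_by_laws:
        return False
    return True

# Example: a robot ordered to attack a person is allowed to refuse.
print(permitted(Action(harms_human=False, disobeys_order=True,
                       obeying_would_harm_human=True,
                       harms_self=False,
                       self_harm_required_by_laws=False)))  # True
```

The checking logic is trivial; everything interesting lives in how those flags get computed, and that is precisely what we cannot pin down for a mind brighter than ours.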

Across the decades, thinkers have debated whether these or similar commands would be enough to prevent a disaster at the hands of mad machines. Yet no matter how carefully we ponder, there’s no way to predict how such rules would be interpreted by a digital intelligence far superior to our own. Asimov himself suggested that robots might deduce additional laws which follow logically from his first three, laws that could have surprising consequences.

But let’s assume we manage somehow to come up with the perfect string of commands to constrain AIs. Would we be able to implement those commands universally? Could we guarantee that no AI would ever become destructive?

Bear in mind that scientists and engineers all over the world are working on the problem of artificial intelligence. AI may arise not merely in one place or time; it could erupt from several locations. And not all of its inventors are likely to have benevolent motives. Suppose a lab in a country beset by war produces a form of AI and releases it as a weapon against the enemy. Once it is on the battlefield — attacking enemy servers, say, or leading a robotic advance along the front line — can that AI easily be recalled?

Already, millions of programs exist “in the wild” on computers and the Internet, many undergoing continuous tweaking by hackers and spammers and political extremists, often for nefarious purposes. The world has suffered from computer viruses that commandeered millions of processors merely to show off the programmer’s cleverness. As The Singularity approaches, a lot of crackpots may get their hands on portions of the new AI code, and, intoxicated with their own brilliance, design and release rogue versions. What then?

In effect, AI may very well come into existence as outlaw software and then evolve until at least one form becomes malignant, its purpose merely to reproduce endlessly at any cost, like a mechanical cancer. If it were to escape the confines of its birthplace, it might learn quickly how to commandeer industrial machinery, wresting resources from us for its own purposes. A future headline: “Fugitive AI Hijacks 3-D Printers, Makes Mobile Versions of Itself.” An obsessive AI could reproduce endlessly, sapping everything it touches, and then — much cleverer than its human opponents — absorb the very resources we threw against it, using them to reproduce itself on a grander scale.

If a rogue AI were to produce copies of itself that, in turn, reproduce, in no time there’d be zillions of them searching the planet for more resources. There wouldn’t be room on Earth for anything except AIs.
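How fast is “no time”? Exponential doubling answers that. Here is a back-of-the-envelope sketch in Python; every number in it is an invented assumption, chosen only to show the shape of the curve.

```python
# Back-of-the-envelope only: every figure below is an invented
# assumption, chosen to show how quickly doubling runs away.

REPLICATION_HOURS = 24    # assume each copy builds one copy per day
EARTH_MASS_KG = 5.97e24   # rough mass of the Earth
COPY_MASS_KG = 100.0      # assume a 100 kg machine

copies = 1
hours = 0
while copies * COPY_MASS_KG < EARTH_MASS_KG:
    copies *= 2           # every existing machine duplicates itself
    hours += REPLICATION_HOURS

print(f"{copies:.3g} copies after {hours / 24:.0f} days")
# With these numbers, 2**76 machines of 100 kg each already outweigh
# the planet: the doubling exhausts Earth's mass in about 76 days.
```

Halve the assumed replication interval and the timetable halves with it; exponentials forgive no slack.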

“But we’ll hunt them down and stop them!” All it takes is one vastly superior intelligence that can hijack resources at will, and the game is over.

“But people can defeat a machine!” Not this one. It will be resourceful enough to anticipate and counteract human strategies before we even think of them.

“But we will bring our own computers to the fight!” The first AI might very well zoom past our understanding within seconds of its launch. If it attacked us, we wouldn’t even know where to aim our resources, including our distinctly inferior computers.

Okay, let’s review: AI is coming. We can’t stop it. We can’t prevent rogue versions. We can’t defeat malignant AI.

…Or can we? Here are some possible pushbacks:

• Crowdsourcing and Big Data: Masses of people can come up with surprisingly intelligent solutions to problems. Already we sift purchasing patterns, voting trends, surveys, questionnaires, and real-world behaviors for ideas. Networked computers, working together, also produce sophisticated answers to our questions. In this sense, it doesn’t always take super-intelligence to come up with super answers. Humanity as a whole — and its computers — might be a match for an evolving AI. (But for how long? A year? A minute?)

• An upper limit on brilliance? It’s possible that intellect itself has a ceiling. Your local friendly genius may already be working near the maximum possible quality of intellect for anything in this universe. AIs of the future may think much faster but not therefore much better than humans. This isn’t to deny that computers can come up with a lot of smart answers or crunch huge numbers. But it’s possible an AI’s individual decisions won’t be that much better than the best of our own. It will have an advantage in speed, though not necessarily in quality.

• Creative ideas are limitless: Answers to questions often require leaps of insight which are essentially unpredictable. In that sense, computers — even AIs — have no inherent advantage over humans, beyond sheer processing capacity. (For which see “Crowdsourcing” above.) All of us together may be able to muster large handfuls of smart countermoves against an attacking AI.

• Anticipate AI’s weaknesses: Chess grandmasters have managed to put up pretty good defenses against computers, learning to anticipate machine vulnerabilities they’ve noticed during play. Likewise, we might be able to thwart much of a bad AI’s attack if we can find chinks in its armor.

• Anti-AI viruses: We could develop code that seeks out and attacks and/or weakens a rogue AI. (This would require, though, that someone on our team gets a glance at that particular AI’s programming before it wakes up.)

• An international anti-AI alliance: Major powers could meet, agree to regard a destructive AI as a form of “alien invasion”, and marshal resources accordingly.

• Public awareness: Alerts — warning people, for instance, to unplug their computers (as during an electrical storm) — could blunt the depredations of a malignant AI. As with another threat that has never come to pass, namely nuclear war (against which we built bomb shelters, laid in supplies, trained and drilled, etc.), we might mobilize locally, ahead of time, to face a future attack.

• Fail-safe protections: We could disconnect certain resources from the Internet and electric grids and/or bury them in hardened bunkers, giving us extra time.

• Accelerate AI development: This sounds counter-intuitive, but perhaps a crash program by major industries and governments could help the less irresponsible of us to pull ahead of the crackpots, thereby improving chances that the first AI would be benevolent. The good AI would thus have a huge head start and, presumably, be more than a match for any hostile artificial brainiacs that subsequently emerged. (Some would argue that governments themselves might try to use AI for belligerent purposes, thereby worsening the situation. In that case, virtuous outsiders could finance their own crash program, hoping to save the day. But loose cannons could do the same, which brings us back to square one.) 

…We’re in danger. But most of us are too busy worrying about ordinary problems to notice a totally new peril we can barely imagine.

It’s possible the reason we’ve never been visited by alien civilizations is that they all were destroyed by their own versions of AI. It’s also possible that we’re the first high-tech society in our arm of the galaxy, and it will be we who invent the AI that dooms us. And then that creation, with its self-made minions, will venture out to the stars to invade and conquer other worlds. Sorry, aliens! It was an accident. We didn’t mean it.

There it lies, artificial intelligence on the horizon of our future, glowing with the promise of what most of us conceive — if we think of it at all — as yet another invention to grace our lives in this age of ingenuity. But is it the gleam of utopia that we glimpse or the flames of doom?

Maybe I’ve watched too much sci-fi on TV. Where’s my tin hat when I need it? 

As if that would help.

* * * *

UPDATE: A long — but entertaining — article about the dangers of the coming AI Singularity

