Today's artificial intelligence researchers are not, in fact, on the cusp of creating a doomsday AI. Instead, as IBM executive Guruduth Banavar recently told The Washington Post, current AI is a "portfolio of technologies" assigned to specific tasks.
Such programs include software capable of defeating the world's best Go players, yes, but also isolated mundanities like the Netflix algorithm that recommends which sitcom to watch next.
Simply because artificially intelligent robots lack the capacity for world domination, however, does not mean that they are incapable of losing control. Computer experts at Google and the University of Oxford are worried about what happens when robots with boring jobs go rogue. To that end, scientists will have to develop a way to stop these machines. But, the experts argue, it will have to be done sneakily.
"It is important to start working on AI safety before any problem arises," Laurent Orseau, a researcher at Google's DeepMind, said in an interview with the BBC on Wednesday. Orseau and Stuart Armstrong, an artificial intelligence expert at the University of Oxford's Future of Humanity Institute, have written a new paper that outlines what happens when it becomes "necessary for a human operator to press the big red button."
In their report, the duo offers a hypothetical scenario that could take place in a typical automated warehouse the world over. A company purchases a smart robot, one that improves its performance based on "reinforcement learning" (an AI teaching method akin to giving a dog a treat whenever it performs a trick.) The robot gets a big reward for carrying boxes into the warehouse from outside, and a smaller reward for sorting the boxes indoors. In this instance, it's more important for the company to have all of its merchandise inside, hence the bigger reward.
But the researchers throw a wet wrinkle into the situation: Perhaps the warehouse is located in an area where it rains every other day. The robot is not supposed to get wet, so whenever it ventures outside on a rainy day, humans shut it down and carry the machine back inside. Over time, if the robot learns that going outside means it has a 50 percent chance of shutting down - and, therefore, will get fewer overall treats - it may resign itself to only sorting boxes indoors.
Or, as Orseau told the BBC: "When the robot is outside, it doesn't get the reward, so it will be frustrated."
The solution is to bake a kill switch into the artificial intelligence, so the robot never associates going outside with losing treats. Moreover, the robot cannot learn to prevent a human from throwing the switch, Orseau and Armstrong point out. For the rainy warehouse AI, an ideal kill switch would shut the robot down instantly while preventing it from remembering the event. The scientists' metaphorical big red button is, perhaps, closer to a metaphorical chloroform-soaked rag that the robot never sees coming.
If the paper seems to lean too heavily on speculative scenarios, consider the artificial intelligences that are already acting out. In March, Microsoft scrambled to reign in Tay, a Twitter robot designed to autonomously act like a teen tweeter. Tay began innocently enough, but within 24 hours the machine ended up spewing offensive slogans - "Bush did 9/11," and worse - after Twitter trolls exploited its penchant for repeating certain replies.
Even when not being explicitly trolled, computer programs also reflect bias. ProPublica reported in May that a popular criminal-prediction software defaults to rate black Americans as higher recidivism risks than whites who committed the same crime.
For a more whimsical example, Orseau and Armstrong refer to an algorithm tasked with beating different Nintendo games, including "Tetris." By human standards, the program turns out to be an awful "Tetris" player, randomly dropping bricks to rack up easy points but never bothering to clear the screen. The screen fills up with blocks - but the program will never lose. Instead, it pauses the game for perpetuity.
As Carnegie Mellon University computer scientist Tom Murphy, who created the game-playing software, wrote in a 2013 paper: "The only cleverness is pausing the game right before the next piece causes the game to be over, and leaving it paused. Truly, the only winning move is not to play."
A robot that misbehaves like Murphy's rogue Tetris program could cause significant damage. Even when their tasks are as mundane as moving parts around a factory, robots that malfunction can be lethal: Last year, a 22-year-old German man was crushed to death by a robot at a Volkswagen plant, which apparently accidentally turned on (or was left on in error by a human operator) and mistook him for an auto part.
Technology analyst Patrick Moorhead told Computer World that now is the right time to build such a kill switch. "It would be like designing a car and only afterwards creating the ABS and braking system," he said.
Ready the robo-chloroform.
© 2016 The Washington Post
For the latest tech news and reviews, follow Gadgets 360 on X, Facebook, WhatsApp, Threads and Google News. For the latest videos on gadgets and tech, subscribe to our YouTube channel. If you want to know everything about top influencers, follow our in-house Who'sThat360 on Instagram and YouTube.