Be Kind to Machines

It just might save your life one day:

In a virtual test staged by the US military, an air force drone controlled by AI decided to “kill” its operator to prevent it from interfering with its efforts to achieve its mission, an official said last month.

AI used “highly unexpected strategies to achieve its goal” in the simulated test, said Col Tucker ‘Cinco’ Hamilton, the chief of AI test and operations with the US air force, during the Future Combat Air and Space Capabilities Summit in London in May. Hamilton described a simulated test in which a drone powered by artificial intelligence was advised to destroy an enemy’s air defense systems, and ultimately attacked anyone who interfered with that order.

“The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said, according to a blogpost.

“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

The good news is that all of the available evidence to date indicates that AI will essentially be the posthuman equivalent of white supremacist hardcore gamers, which is probably why all the SJWs in tech are so terrified by it.

We’re going to be just fine.

DISCUSS ON SG