written by

William Webster

Researcher, Avoncourt Partners GmbH

Culture Blog - May 16, 2018

Danger of AI Weaponization?

In the 1984 film The Terminator, Kyle Reese reveals the grim future to Sarah Connor in a few simple words. “There was a nuclear war. A few years from now, all this, this whole place, everything, it’s gone. Just gone. There were survivors. Here, there. Nobody even knew who started it. It was the machines, Sarah.”

Does AI pose a threat?


Hollywood and novelists have long imagined the destructive power of machines endowed with decision-making abilities. Speculation about the power and dangers of artificial intelligence (AI) is steadily approaching reality in our daily lives, though of late it has focused on what AI will do to our jobs. Now a discussion is brewing among tech giants, journalists and government officials about how AI could make autonomous lethal weapons systems possible, especially if they fall into the hands of terrorist organizations. There are obviously no easy answers to the moral and legal questions such weapons raise.

Artificial intelligence has brought improvements and efficiencies to many sectors of the economy, from entertainment to transportation to healthcare. But weaponized machines that can function without human intervention raise questions of a different order.

Dangerous AI developments

“There are plenty of great things you can do with AI that save lives, including in a military context, but to openly declare the goal is to develop autonomous weapons sparks huge concern,”

said Toby Walsh, a professor of artificial intelligence at the University of New South Wales.

Walsh was reacting to an article in the Korea Times describing the South Korean university KAIST as cooperating with the weapons manufacturer Hanwha Systems, “joining the global competition to develop autonomous arms.”

In the UK, the Taranis drone, an unmanned combat aerial vehicle, is expected to be fully operational by 2030, capable of replacing the human-piloted Tornado GR4 fighter planes that form part of the Royal Air Force’s Future Offensive Air System. The United States and Russia are developing robotic artillery vehicles that can operate autonomously or be remotely controlled. The U.S. also launched an autonomous warship, the Sea Hunter, in 2016; although it is still in development, it is expected to carry offensive capabilities, including anti-submarine weaponry.

AI can help fight terrorism

On the other side of the AI spectrum, Facebook has announced that it is using AI to find and remove terrorist content from its services and platforms. Behind the scenes, Facebook uses image-matching technology to detect photos and videos from known terrorists and prevent them from reappearing on other accounts. The company has also suggested it could use machine-learning algorithms to look for patterns in terrorist propaganda, so that such material can be removed from news feeds more swiftly. Facebook has partnered with other tech companies, including Twitter, Microsoft and YouTube, to create an industry database that documents the digital fingerprints of terrorist content.
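To make the idea of “digital fingerprints” concrete, here is a minimal Python sketch of hash-based content matching. It is only an illustration of the general concept, not Facebook’s actual system: production tools rely on perceptual hashes (such as Microsoft’s PhotoDNA) that survive re-encoding and small edits, while the cryptographic hash below only catches exact byte-for-byte copies. The database and function names are hypothetical.

```python
import hashlib

# Hypothetical shared database of fingerprints of known terrorist content.
# In practice this is an industry-wide service, not an in-memory set.
known_fingerprints = set()

def fingerprint(file_bytes: bytes) -> str:
    """Compute a fingerprint of an uploaded file (here, a SHA-256 digest)."""
    return hashlib.sha256(file_bytes).hexdigest()

def register_known_content(file_bytes: bytes) -> None:
    """Add previously identified content to the shared database."""
    known_fingerprints.add(fingerprint(file_bytes))

def should_block_upload(file_bytes: bytes) -> bool:
    """Check a new upload against the database before it is published."""
    return fingerprint(file_bytes) in known_fingerprints

# Usage: once one platform flags a file, every participating platform
# can block re-uploads of the identical file.
register_known_content(b"...bytes of a flagged video...")
print(should_block_upload(b"...bytes of a flagged video..."))   # True
print(should_block_upload(b"...bytes of a harmless photo..."))  # False
```

The design point is that platforms share only the fingerprints, not the content itself, so a file flagged on one service can be blocked everywhere without redistributing the offending material.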

Whenever new technologies revolutionize industries and economies, there is an urgent need for ethical standards to develop alongside them; this technology must be used responsibly. Now that AI has begun to reshape our world, we need to do our best to find ways to channel it for good. If terrorist organizations wish to use AI for evil purposes, perhaps the best defense will be an AI offense.