What Would an “AI Doomsday” Actually Look Like?

In Brief

•   Some of the finest minds in artificial intelligence got together to hypothesize about the worst things that could happen once AI becomes a prevalent part of society.

•   While it was fairly easy to come up with situations in which AI could be misused, devising solutions and ways to prevent that misuse proved far more challenging.

Imagining AI’s Doomsday

Artificial intelligence (AI) is going to transform the world, but whether it will be a force for good or evil is still subject to debate. To that end, a team of experts gathered for Arizona State University’s (ASU) ‘Envisioning and Addressing Adverse AI Outcomes’ workshop to talk about the worst-case scenarios we could face if AI veers toward becoming a serious threat to humanity.

“There is huge potential for AI to transform so many aspects of our society in so many ways. At the same time, there are rough edges and potential downsides, like any technology,” says AI scientist Eric Horvitz.


As an optimistic supporter of everything AI has to offer, Horvitz has a very positive outlook on the technology’s future. But he’s also pragmatic enough to recognize that for AI to keep advancing, it has to earn the public’s trust, and for that to happen, every possible concern surrounding the technology has to be discussed.

That conversation is precisely what the workshop hoped to tackle. Forty scientists, cyber-security experts, and policy-makers were divided into two teams to hash out the numerous ways AI could cause trouble for the world. The red team was tasked with imagining all the cataclysmic scenarios AI could incite, while the blue team was asked to devise solutions to defend against those attacks.

These situations had to be realistic rather than purely hypothetical, anchored in what’s possible given our current technology and what we expect from AI over the next few decades.

If AI Goes Rogue

Among the scenarios described were automated cyber attacks (wherein a cyber weapon is intelligent enough to hide itself after an attack and prevent all efforts to destroy it), stock markets being manipulated by machines, self-driving technology failing to recognize critical road signs, and AI being used to rig or sway elections.

Not every scenario was met with a sufficient solution, either, illustrating just how unprepared we are at present to face the worst situations AI could bring about. In the case of intelligent, automated cyber attacks, for example, it would apparently be quite easy for attackers to use unsuspecting internet gamers to cover their tracks, hiding the attacks within something like an online game.

As entertaining as it may be to dream up these wild doomsday scenarios, doing so is actually a deliberate first step toward real conversations and awareness about the threats AI could pose. John Launchbury of the US Defense Advanced Research Projects Agency (DARPA) hopes the exercise will lead to concrete agreements on rules of engagement for cyber warfare, automated weapons, and robot troops.

The purpose of the workshop, after all, isn’t to incite fear, but to realistically anticipate the various ways the technology could be misused and, hopefully, to get a head start on defending ourselves against that misuse.

