Tech Expert Warns That AI Could Become “A Fascist’s Dream”

AI: Ripe For Abuse

In her March 12 talk at the 2017 SXSW Conference, Kate Crawford of Microsoft Research warned that artificial intelligence is ripe for abuse. Crawford, who researches the social impact of large-scale data systems and machine learning, described how encoded biases in AI systems could be abused to target certain populations and centralize power in the hands of authoritarian regimes.

“Just as we are seeing a step function increase in the spread of AI, something else is happening: the rise of ultra-nationalism, rightwing authoritarianism and fascism,” she said in Dark Days: AI and the Rise of Fascism, her SXSW session.

Crawford believes the problem is that AI systems are often invisibly encoded with human biases, and that those systems can serve the characteristic goals of fascist movements: demonizing outsiders, tracking populations, centralizing power, and claiming neutrality and authority without accountability. AI becomes an especially potent tool for those ends when the biases built into it go unexamined.


Avoiding Coding Bias

As an example of this kind of biased coding, Crawford described research from China's Shanghai Jiao Tong University. The authors claimed to have built a bias-free system, trained on Chinese government ID photos, that could predict criminality from facial features. The study concluded that "criminal" faces were more unusual in appearance than "law-abiding" faces, and interpreted this as showing that law enforcement was less likely to trust people whose physical appearance deviated from "the norm." As Crawford explained it:

“We should always be suspicious when machine learning systems are described as free from bias if it’s been trained on human-generated data. Our biases are built into that training data.”

Crawford's concerns center on AI being used as a black box of algorithms that masks discrimination. AI could also be misused to build registries that are then used to target specific populations. As historical precedents, she cited IBM's Hollerith Machine, used by Nazi Germany to track ethnic groups, and the Book of Life system's role in South African apartheid.

Back in 2013, researchers could already predict religious and political affiliations from Facebook data with more than 80 percent accuracy, and AI has advanced by leaps and bounds since then.

In the U.S., an AI system to assist in mass deportations has been in the works at Palantir since 2014, and the company's co-founder Peter Thiel is an advisor to President Trump. Crawford also argued that predictive policing has already failed: research has shown that it results in unfair targeting of, and excessive force against, minorities.

To avoid building biased systems on bad data, Crawford argues, AI must be developed to be more accountable and transparent, so that unintended effects can be mapped despite the systems' complexity. "We want to make these systems as ethical as possible and free from unseen biases," she says, and she is putting in the work: with these goals in mind, she founded AI Now, a research community focused on the social impacts of AI.

