Why it's dangerous to outsource our critical thinking to computers
- Submitted by: Love Knowledge
- Category: Philosophy
The lack of transparency around the processes of Google’s search engine has been a preoccupation among scholars since the company began. Long before Google expanded into self-driving cars, smartphones and ubiquitous email, the company was being asked to explain the principles and ideologies that determine how it presents information to us. And now, 10 years later, the impact of reckless, subjective and inflammatory misinformation served up on the web is being felt like never before in the digital era.
Google responded to negative coverage this week by reluctantly acknowledging and then removing offensive autosuggest results for certain searches. Type “jews are” into Google, for example, and until now the site would autofill “jews are evil” before recommending links to several rightwing antisemitic hate sites.
That follows the misinformation debacle that was the US presidential election. When Facebook CEO Mark Zuckerberg addressed the issue, he admitted that structural issues lie at the heart of the problem: the site financially rewards the kind of sensationalism and fake news likely to spread rapidly through the social network regardless of its veracity or its impact. The site does not identify bad reporting, or even distinguish fake news from satire.
Facebook is now trying to solve a problem it helped create. Yet instead of using its vast resources to promote media literacy, or encouraging users to think critically and identify potential problems with what they read and share, Facebook is focusing on algorithmic solutions that rate the trustworthiness of content.
This approach could have detrimental, long-term social consequences. The scale and power with which Facebook operates means the site would effectively be training users to outsource their judgment to a computerised alternative. And it gives even less opportunity to encourage the kind of 21st-century digital skills – such as reflective judgment about how technology is shaping our beliefs and relationships – that we now see to be perilously lacking.
The engineered environments of Facebook, Google and the rest have increasingly discouraged us from engaging in an intellectually meaningful way. We, the masses, aren’t stupid or lazy when we believe fake news; we’re primed to continue believing what we’re led to believe.
The networked info-media environment that has emerged in the past decade – of which Facebook is an important part – is a space that encourages people to accept what’s presented to them without reflection or deliberation, especially if it appears surrounded by credible information or passed on from someone we trust. There’s a powerful, implicit value in information shared between friends that Facebook exploits, but it accelerates the spread of misinformation as much as it does good content.
Every piece of information appears to be presented and assessed with equal weight: a New York Times article followed by some fake news about the pope, a funny dog video shared by a close friend next to a distressing, unsourced and unverified video of an injured child in some Middle East conflict. We have more information at our disposal than ever before, but we’re paralyzed into passive complacency. We’re being engineered to be passive, programmable people.
Read more https://www.theguardian.com/technology/2016/dec/10/google-facebook-critical-thinking-computers
Courtesy of Guardian News & Media Ltd