Future Intelligence: Will A.I. Be A Friend or Potential Enemy?
- Submitted by: Love Knowledge
- Category: Science
In Brief
• Robots and artificial intelligence are excellent fodder for science-fiction writers, but the writers seldom get them right. Human fears about AI stem from a misunderstanding of what these systems do and how they manage to do it.
AI: Artificial Imitation
Artificial Intelligence (AI) is enjoying one of its periodic moments in the limelight. Why this interest now? Some of it we can put down to Hollywood’s ongoing fascination with AI. From Stanley Kubrick’s 2001 to Ridley Scott’s Blade Runner, and Steven Spielberg’s A.I. Artificial Intelligence to Alex Garland’s Ex Machina, Hollywood has made enjoyable films and good money out of AI.
These films have inspired generations of AI students. Indeed, AI was once described as making computers that behave like the ones in the movies! However, Hollywood invariably takes a dystopian view of the subject: the computers and robots are usually mad, bad and dangerous to know. Yet this doesn’t seem to hinder AI’s popularity.
A second reason that AI is in vogue is that some of the planet’s greatest scientists and innovators have been telling us to take care. Stephen Hawking worries that super-smart computers could spell the end of the human race. Elon Musk donated $10 million to research aimed at keeping AI beneficial. And a year ago we saw the publication of an open letter from leading artificial intelligence experts, arguing for vigilance to ensure that this fast-developing field benefits humanity.
A third reason that AI is much talked about is that our machines seem to be getting ever more prescient, even anticipating our needs. This is what struck Stephen Hawking when he upgraded the system that enables him to write and communicate despite his motor neuron disease. He was surprised by just how smart the computer was: it seemed to anticipate what he wanted to write next. This set him thinking about how intelligent computers were becoming, and how quickly that was happening.
The fact is that our computers just get better and better. Propelled by the exponential trends that drive the science and engineering of hardware and software, our devices become roughly twice as powerful every eighteen months. The machines my students use are now one million times more powerful than those I used when I began my studies in AI.
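That million-fold figure is just the doubling rate compounded: a million is roughly 2^20, so at one doubling every eighteen months it corresponds to about thirty years of progress. A quick back-of-the-envelope check (illustrative only; real hardware trends are messier than a clean exponential):

```python
# Back-of-the-envelope check on the "one million times" claim:
# at one doubling every 18 months, how long does a million-fold
# increase in computing power take?
import math

doubling_period_years = 1.5           # one doubling every 18 months
target_factor = 1_000_000             # a million-fold improvement

doublings = math.log2(target_factor)  # ~19.93 doublings
years = doublings * doubling_period_years

print(f"{doublings:.1f} doublings -> about {years:.0f} years")
# Output: 19.9 doublings -> about 30 years
```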
These more powerful machines have more data to access and more sophisticated algorithms to process it; they contain more sensors that deliver more functionality. And, of course, they can draw on even more power and data from the Cloud. The result is a supercomputer in our pockets: one that sometimes acts as a phone, but can also recognize speech, faces and patterns of all kinds, connect to the colossal repository of human knowledge that is the World Wide Web, and answer questions on anything from Genghis Khan to the weather in Houston, from current traffic conditions in London to a book I might like given my reading preferences over the past few years.
AI research has exploited the increased power of computers and access to huge amounts of data to write programs that can “learn”, “understand” and “anticipate”. This is why, when you ask your phone “When did Michelangelo die?”, certain brands will answer in synthesized speech “Michelangelo died on the 18th of February 1564, aged 88”, and will also present a screen full of additional information about the Italian genius and polymath. This kind of response is now accepted as normal and routine.
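Under the hood, answers like this typically come from lookups against structured knowledge bases rather than anything resembling human understanding. As a rough sketch (this is not how any particular phone assistant is implemented), the same fact can be fetched from Wikidata’s public SPARQL endpoint, where Q5592 is the identifier for Michelangelo and P570 the “date of death” property:

```python
# Illustrative sketch: answering "When did Michelangelo die?" by
# querying a structured knowledge base (Wikidata's public SPARQL
# endpoint) rather than by "understanding" the question.
import requests

SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"

# Q5592 = Michelangelo, P570 = date of death
query = """
SELECT ?deathDate WHERE {
  wd:Q5592 wdt:P570 ?deathDate .
}
"""

response = requests.get(
    SPARQL_ENDPOINT,
    params={"query": query, "format": "json"},
    headers={"User-Agent": "qa-sketch-example/0.1"},  # courtesy UA for the public endpoint
)
response.raise_for_status()

bindings = response.json()["results"]["bindings"]
print(bindings[0]["deathDate"]["value"])  # e.g. 1564-02-18T00:00:00Z
```

A real assistant adds speech recognition, entity linking and answer ranking on top, but the point stands: retrieving a stored fact requires no comprehension of who Michelangelo was.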
AI has delivered many components of smart behavior, but on reflection we don’t expect these systems to really “learn”, “understand” or “anticipate” in the way we humans do. There is no attempt to cognitively emulate the task, to build a system that recognizes faces or speech in the way humans do. These programs deliver on tasks that were once seen as the high frontier of AI, yet it is worth remembering that the software on our phones has no interest in, or sense of pride about, a task well executed.
It’s Just a Game
Nevertheless, AI systems, with their unnervingly swift and accurate responses, lead us to assume that a formidable intelligence sits behind the performance. The US has a hugely popular game show called Jeopardy, in which contestants are presented with general-knowledge clues in the form of answers and must phrase their responses in the form of questions.
Watch a YouTube clip of IBM’s computer Watson playing Jeopardy and you will see it ultimately beat the best human players. It is easy to think of Watson as a super-intelligent machine. But Watson is smart at playing Jeopardy; it couldn’t suddenly start playing Go or Monopoly. It is good at one particular task.
The imminent arrival of a general AI was also much discussed in the late 1990s, when IBM’s Deep Blue beat the world chess champion, Garry Kasparov. The machine’s performance psychologically undermined Kasparov; he was convinced it was reading his mind. In fact, Deep Blue played chess by searching deep into millions of possible moves and by drawing on databases holding huge numbers of openings, endgames and so on.
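Deep Blue’s real engine relied on purpose-built hardware, alpha-beta pruning and handcrafted evaluation functions, but the core idea of looking ahead through possible moves can be sketched as a plain minimax search (a toy illustration, not Deep Blue’s actual algorithm):

```python
# Toy minimax search: the skeleton of "looking deep into millions
# of possible moves". The game itself is supplied via three
# functions, so nothing here is chess-specific.

def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Best achievable score looking `depth` moves ahead.

    moves(state)      -> list of legal moves in `state`
    apply_move(s, m)  -> successor state after playing move m
    evaluate(state)   -> heuristic score from the maximizer's view
    """
    legal = moves(state)
    if depth == 0 or not legal:  # search horizon reached, or game over
        return evaluate(state)

    scores = (minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, apply_move, evaluate) for m in legal)
    return max(scores) if maximizing else min(scores)
```

The catch is combinatorial explosion: the number of positions grows exponentially with search depth, which is why Deep Blue needed specialized hardware to examine hundreds of millions of positions per second, and why the resulting skill is locked to the one game its search and evaluation were built for.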
But Deep Blue could no more transfer its skills to Jeopardy than Watson could use its methods and algorithms to play chess. We have no real idea how to endow our computers with overarching general intelligence. We have no real idea how to build an intelligence that is reflective, self-aware and able to transfer skills and experience effortlessly from one domain to another.
That doesn’t mean the achievements of AI should be diminished. I have mentioned a few of the services we now enjoy thanks to AI research; I could have added machine translation, robotics and software agents. We have new variants of machine learning that can reach super-human levels of performance on specific tasks with only a few hours’ training. But our programs are not about to become self-aware or to decide that we are redundant. The threat, as ever, is us: not artificial intelligence but our own natural stupidity.
We most certainly need to think about the limits we place on ever more capable autonomous systems, whether pilotless drones or automated trading systems, and about the restraints and safeguards we should engineer into the hardware, software and deployment policies of our current AI systems. But the next self-aware computer you encounter will only be appearing at a cinema near you.
Read more: https://futurism.com/future-intelligence-will-a-i-be-a-friend-or-potential-enemy/
Related items
- Bill Gates Warns Silicon Valley of Technology’s Dangerous Potential
- A Fully Solar-Powered Car May Be Hitting the Road by 2019
- Rippling Graphene Sheets May Be the Key to Clean, Unlimited Energy
- New Device Lets You Charge Your Phone Just By Walking
- We May Finally Have a Way to Cheaply Manufacture Pure Graphene