
Another victory for artificial intelligence

by The Economist | Feb 18th 2011

All the smart money was on the machine.

Since the first rehearsal over a year ago, it had become apparent that Watson—a supercomputer built by IBM to decode tricky questions posed in English and answer them correctly within seconds—would trounce the smartest of human challengers. And so it did earlier this week, following a three-day contest against the two most successful human champions of all time on “Jeopardy!”, a popular quiz game aired on American television.

By the end of the contest, Watson had accumulated over $77,000 in winnings, compared with $24,000 and $21,600 for the two human champions.

IBM has a long tradition of setting “grand challenges” for itself—as a way of driving internal research and innovation as well as demonstrating its technical smarts to the outside world.

A previous challenge was the chess match staged in 1997 between IBM’s Deep Blue supercomputer and the then world champion, Garry Kasparov. As shocking as it seemed at the time, a computer capable of beating the best chess-player in the world proved only that the machine had enough computational horsepower to perform the rapid logical analysis needed to cope with the combinatorial explosion of moves and counter-moves. In no way did it demonstrate that Deep Blue was doing something even vaguely intelligent.

Riddles and puns

Even so, defeating a grandmaster at chess was child’s play compared with challenging a quiz show famous for offering clues laden with ambiguity, irony, wit and double meaning as well as riddles and puns—things that humans find tricky enough to fathom, let alone answer. Getting a mere number-cruncher to do so had long been thought impossible.

The ability to parse the nested structure of language to extract context and meaning, and then use such concepts to create other linguistic structures, is what human intelligence is supposed to be all about.

Brainchild

Four years in the making, Watson is the brainchild of David Ferrucci, head of the DeepQA project at IBM’s research centre in Yorktown Heights, New York. Dr Ferrucci and his team have been using search, semantics and natural-language processing technologies to improve the way computers handle questions and answers in plain English. That is easier said than done. In parsing a question, a computer has to decide which word is the verb, which the subject, the object, the preposition and the object of the preposition. It must disambiguate words with multiple meanings, by taking into account any context it can recognise. When people talk among themselves, they bring so much contextual awareness to the conversation that answers become obvious. “The computer struggles with that,” says Dr Ferrucci.

Another problem for the computer is copying the facility the human brain has to use experience-based shortcuts (heuristics) to perform tasks. Computers have to do this using lengthy step-by-step procedures (algorithms). According to Dr Ferrucci, it would take two hours for one of the fastest processors to answer a simple natural-language question. To stand any chance of winning, contestants on “Jeopardy!” have to hit the buzzer with a correct answer within three seconds. For that reason, Watson was endowed with no fewer than 2,880 POWER7 processor cores spread over 90 Power 750 servers. Flat out, the machine can perform 80 trillion calculations a second. For comparison’s sake, a modern PC can manage around 100 billion calculations a second.
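A rough back-of-the-envelope check (a sketch, not IBM’s actual sizing method) shows why that much hardware was needed: shrinking a two-hour single-processor job into a three-second buzzer window calls for roughly 2,400-fold parallelism, in the same ballpark as Watson’s 2,880 cores.

```python
# Illustrative arithmetic only, assuming perfect linear speed-up.
single_cpu_seconds = 2 * 60 * 60   # ~2 hours for one processor per question
deadline_seconds = 3               # buzzer window on "Jeopardy!"

cores_needed = single_cpu_seconds / deadline_seconds
print(cores_needed)                # 2400.0 -- Watson had 2,880 cores
```

Real speed-ups are never perfectly linear, so a margin above the theoretical minimum is exactly what one would expect.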

The sons of Watson

For the contest, Watson had to rely entirely on its own resources. That meant no searching the internet for answers or asking humans for help. Instead, it used more than 100 different algorithms to parse the natural-language questions and interrogate the 15 trillion bytes of trivia stored in its memory banks—equivalent to 200m pages of text.

Getting a computer to converse with humans in their own language has been an elusive goal of artificial intelligence for decades. (…) Should a machine manage the feat without the human participants in the conversation realising they are not talking to another person, then the machine would pass the famous test for artificial intelligence devised in 1950 by Alan Turing, a British mathematician famous for cracking the Enigma and Lorenz ciphers during the second world war.

It is only a matter of time before a computer passes the Turing Test. It will not be Watson, but one of its successors doubtless will.

2029

Ray Kurzweil, an engineer and prognosticator, believes it will happen by 2029.

He notes that it was only five years after the massive and hugely expensive Deep Blue beat Mr Kasparov in 1997 that Deep Fritz was able to achieve the same level of performance by combining the power of just eight personal computers. In part, that was because of the inexorable effects of Moore’s Law doubling the price/performance of computing every 18 months. It was also due to the vast improvements in pattern-recognition software used to make the crucial tree-pruning decisions that determine successful moves and countermoves in chess.

Now that the price/performance of computers has accelerated to a doubling every 12 months, Mr Kurzweil expects a single server to do the job of Watson’s 90 servers within seven years—and a single PC within a decade.
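The arithmetic behind that timeline can be sketched in a few lines, assuming a clean yearly doubling of price/performance and perfect scaling (a simplification, not Mr Kurzweil’s own model): the hardware needed for a fixed workload halves each year, so 90 servers shrink to one in about log₂(90) ≈ 6.5 years.

```python
import math

# If price/performance doubles every 12 months, the hardware needed for a
# fixed workload roughly halves each year (illustrative only).
servers_today = 90
years_to_one_server = math.log2(servers_today)
print(round(years_to_one_server, 1))   # 6.5 -- consistent with "within seven years"

# After 7 full doublings, well under one server's worth of capacity is needed:
print(servers_today / 2**7)            # 0.703125
```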

Master and pet

Mr Kurzweil believes that once computers master human levels of pattern recognition and language understanding, they will leave mankind way behind. By then, they will have combined the human skills of language and pattern recognition with their own unique ability to master vast corpora of knowledge. Will that mean game over for humans—with robots keeping people around merely as pets?

“Absolutely not,” says Oren Etzioni, director of the Turing Centre at the University of Washington in Seattle. But it does mean, he notes, that computers will be able to achieve vastly more than they can today.

Long term, Watson’s progeny could help people sift through the thousands of possibilities they confront in their public and private lives, and come up with handfuls of appropriate recommendations—whether in medical diagnosis and treatment, legal precedents, investment opportunities, design configurations or whatever.

  • Michel Hannas

    Before even reading the article, I was going to say that this is not artificial intelligence but artificial memory. A very interesting article. As for what computers will do with all the excess processing capacity created in the future, they will probably run far more powerful Windows 256 or Android Zebra Cake operating systems, probably with countless bugs, which will consume most of those resources. In the end, what matters is the innovation we create in user experiences. Remember iMode? NTT DoCoMo had mobile applications running over 9.6 kbps connections more than 10 years ago…