Good enough to fool a human?
On October 12, 2008, it seemed as though computers edged a little closer to fooling humans. I’m sure you know of the Turing Test. Alan Turing was a pretty clever dude – a mathematician, cryptographer and logician who devised the so-called Turing Test in his 1950 paper “Computing Machinery and Intelligence”. A human judge engages in a text-based conversation with a human and a machine. If the judge cannot tell human from machine, then the machine is said to be “thinking” and can be credited with “intelligence”.
At the University of Reading, as part of the 18th Loebner Prize, artificial conversational entities (ACEs, aka machines) slugged it out in an artificial intelligence competition. So how close have machines come to imitating human communication and fooling people into thinking they are sentient human beings? Pretty close, it seems.
Five of the world’s top-notch machines engaged in a series of unrestricted five-minute conversations with humans. The aim was to fool the humans into thinking they were chatting to another human. During the conversations, the human judges did not know whether they were talking to a machine or a human.
The winner of the 2008 Loebner Prize was a machine named Elbot, created by Fred Roberts (an American computer scientist living in Germany). Elbot fooled 25% (3 out of 12) of the human interrogators, which is pretty close to the 30% threshold set by Turing. So Elbot just needed to convince one more human, and that would have pushed the machines over the 30% mark.
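For what it’s worth, the arithmetic behind that “one more human” claim checks out. Here’s a quick back-of-the-envelope sketch (the judge and interrogator counts are taken from the figures reported above):

```python
# Back-of-the-envelope check of the 2008 Loebner Prize numbers:
# 12 human interrogators, 3 of whom Elbot fooled.
judges = 12
fooled = 3

elbot_rate = fooled / judges          # 3/12 = 0.25, the reported 25%
one_more = (fooled + 1) / judges      # 4/12 ≈ 0.333, past the 30% mark

print(f"Elbot fooled {elbot_rate:.0%} of judges")
print(f"One more fooled judge would have meant {one_more:.1%}")
```

So with twelve judges there is no way to land exactly on 30%: fooling a fourth judge jumps the score straight from 25% to about 33%, clearing Turing’s bar in one go.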
The part I found interesting was that even when the humans correctly identified a machine, they scored its conversational abilities at 80% to 90%. So it seems there will be a day, maybe in the not-so-distant future, when machines will be busy chatting to us and we’ll be quite comfortable with human/computer interaction.
Maybe Elbot could be commercialised and we could buy him to talk to on those lonely nights at home! If you want to check out what Elbot and the human spoke about, go here.