I quote a passage from Judea Pearl's The Book of Why, which provides a gentle introduction to the field of causal inference via path diagrams, a field developed largely by Pearl himself:
In 1950, Alan Turing asked what it would mean for a computer to think like a human. He suggested a practical test, which he called “the imitation game,” but every AI researcher since then has called it the “Turing test.” For all practical purposes, a computer could be called a thinking machine if an ordinary human, communicating with the computer by typewriter, could not tell whether he was talking with a human or a computer. Turing was very confident that this was within the realm of feasibility. “I believe that in about
fifty years’ time it will be possible to program computers,” he wrote, “to make them play the imitation game so well that an average interrogator will not have more than a 70 percent chance of making the right identification after five minutes of questioning.”
Turing’s prediction was slightly off. Every year the Loebner Prize competition identifies the most humanlike “chatbot” in the world, with a gold medal and $100,000 offered to any program that succeeds in fooling all four judges into thinking it is human. As of 2015, in twenty-five years of competition, not a single program has fooled all the judges or even half of them.
Note that this is a deeply POSITIVIST idea: if the surface appearances match, that is all that matters. The hidden and unobservable reality, the structures within the computer, does not matter. This resembles other major mistakes in the theory of knowledge made by childless philosophers, who never experienced and observed how children acquire knowledge. Turing came up with this ridiculous test and idea because he was just another childless philosopher. The Book of Why continues the passage above as follows, confirming that Turing was clueless about children:
Turing didn’t just suggest the “imitation game”; he also proposed a strategy to pass it. “Instead of trying to produce a program to simulate the adult mind, why not rather try to produce one which simulates the child’s?” he asked. If you could do that, then you could just teach it the same way you would teach a child, and presto, twenty years later (or less, given a computer’s greater speed), you would have an artificial intelligence. “Presumably the child brain is something like a notebook as one buys it from the stationer’s,” he wrote. “Rather little mechanism, and lots of blank sheets.” He was wrong about that: the child’s brain is rich in mechanisms and prestored templates.
Pearl does not cite a source for the last sentence, the claim that the child's brain is rich in mechanisms. However, by now many studies of child development show that children are born with a great deal of knowledge about the world they are coming into. Childless philosophers are prone to such gross mistakes about the nature of human knowledge, and also about how human beings learn about the world we live in. Unfortunately, their failures have had catastrophic real effects on the world; see “The knowledge of childless philosophers” and “Beyond Kant” for more discussion of this.