Saturday 11 January 2014

2b. Harnad, S. (2008) The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence

Harnad, S. (2008) The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence. In: Epstein, Robert & Peters, Grace (Eds.) Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Springer.


This is Turing's classic paper, with every passage quoted and commented to highlight what Turing said, might have meant, or should have meant. The paper was equivocal about whether the full robotic test was intended or only the email/penpal test; whether all candidates are eligible or only computers; and whether the criterion for passing is really total, lifelong equivalence and indistinguishability or merely fooling enough people enough of the time. Once these uncertainties are resolved, Turing's Test remains cognitive science's rightful (and sole) empirical criterion today.

57 comments:

  1. Harnad adds several nuances to the Turing Test proposed in Turing’s 1950 paper. To recap, a machine passes the Turing Test (TT) if it so successfully imitates a human being that an interrogator is fooled into thinking the machine is a human. Cognitive scientists want to build a machine that will pass the TT in order to explain how and why organisms do what they do (ex: read, write, do math, play chess …).

    First, Harnad argues that the machine must not only fool the interrogator in an email exchange (i.e., write and respond to emails so well that the interrogator thinks it is human); rather, the machine must be a full robot. The robot doesn’t need to look human, but it needs to be able to do everything a human can do (ex: bike, walk). Harnad’s argument for the robotic version of the TT is pretty compelling: namely, that if the point of the TT is to explain how we do what we do, it is pretty arbitrary to only care about our verbal performance capacities. Furthermore, “our verbal abilities may well be grounded in our nonverbal abilities.” I think this has something to do with the symbol-grounding problem, which I don’t really get yet. But I suspect that our ability to communicate verbally has something to do with us being creatures who have senses and who can move about in the world. So the fact that we can call a flower a flower is tied to the fact that we have sensed (seen/touched/smelled) flowers before. But I will need to wait for the class on symbol grounding to totally figure this out.

    Second, Harnad argues that the machine need not just be a computer. It can be any machine whatsoever! And a machine, for Harnad, includes any “dynamical, causal system,” which I think means anything that is subject to the laws of cause and effect. So even we are machines! But using a human being to pass the TT wouldn’t be that enlightening, because we have no idea how we do what we do. The point of building a machine that passes the TT is to explain how we do what we do – that is why we are interested in “reverse engineering” something that seems an awful lot like a human being (this is reverse engineering because humans already exist).

    Third, Harnad says that to pass the TT, the machine must fool not just some interrogators some of the time, but any interrogator, and for its entire lifetime. Only then can we say that we have actually explained human cognition!

    Replies
    1. Not fooling or imitating: Reverse-engineering and generating

      Jessica, you are understanding perfectly, so I can't understand why you keep saying "fooling"!

      The purpose of the TT is to really generate real human performance capacity, all of it, for a lifetime. It's not about fooling, imitating or pretending. The "imitation" was just to set up the game, with a woman imitating a man or vice versa. The rest is about reverse-engineering human performance capacities.

      The movie "The Imitation Game" makes too much of the somewhat saccharine notion that Turing was trying to reincarnate the mind of his lost love in a machine. My guess is that the early love and loss no doubt increased Turing's motivation to do something with his mind, but that it was not as silly or simplistic as "resurrecting Christopher." A better candidate for Turing's tribute to Christopher might be what's come to be called the "Church-Turing Thesis," which is about what computation (not cognition!) really is, and how powerful it is. (I'll discuss it in class.) Computation is something universal, and it's the same in everyone's mind (or computer, or pencil and paper).

      If Turing had Asperger's Syndrome (as the movie portrayed him, with a formal mind unable to read other people's feelings and unable even to grasp a metaphor or a joke), then the fact that what all mathematicians are really doing turns out to be just formal symbol manipulation, as done by a Turing Machine, would have been very satisfying to Turing; and perhaps he would have felt that that somehow brought him closer to his deceased first love, who had had -- or Turing thought he had -- the same sort of gift for mathematics as Turing.

  2. “The form in which we have set the problem reflects this fact in the condition which prevents the interrogator from seeing or touching the other competitors, or hearing their voices.” (Turing)

    Professor Harnad responds to this quote with: “Yes, but using T2 as the example has inadvertently given the impression that T3 is excluded too: not only can we not see or touch the candidate, but the candidate cannot see or touch anything either -- or do anything other than compute and email.”

    T3 is defined as “total indistinguishability in robotic performance capacity”. It seems to me that Turing’s statement agrees better with T2 than with T3; shouldn’t Turing then be arguing for T2? Turing purposely designed his test to be conducted through email so that the interrogator could not bias his or her decision by the visual appearance of the robot or its artificial voice. If T3 were excluded, T2 would be the level at which Turing intended his test. Since Turing’s statement seems to vouch for T2 rather than T3, I do not quite understand Professor Harnad’s counterargument that it would result in the exclusion of T3; it seems biased towards accepting only those arguments that support T3.

    Replies
    1. The Power of Language

      Even though Turing invented computation, the computer and the Turing Test, I don't think he would have described himself as a computationalist (cognition = computation). He knew the difference between physics (dynamical systems) and computation, and I doubt he believed that the only role of physics (or chemistry, or physiology) in generating cognition was to serve as the hardware on which to run the right software.

      I also don't think he would have left out all the sensorimotor things people can do as being irrelevant to cognition, or even irrelevant to passing the TT.

      I think the reason Turing presented the TT as T2 (verbal only) rather than T3 (verbal plus sensorimotor robotic) was partly because he thought it would be too hard (and unnecessary) to make a robot sufficiently lifelike to be indistinguishable from what a real person looks like -- but partly because he thought (and he was probably right) that as a test T2 was strong enough to test for cognition, because all that you can say in language (all that is "verbalizable") is so enormous and all-encompassing (and rather similar to all that you can do with computation -- all that is "computable"). (Btw, all that is computable is part of all that is verbalizable, because formal languages are a subset of natural languages, and hence part of the power of language.)

      But here's a simple way to understand this point and reconcile the seeming inconsistency:

      T2 is not strong enough to test all cognitive capacity directly, because you can't test with T2 whether the candidate has the T3 robotic capacity to recognize the objects it is talking about verbally.

      But T2 is strong enough to test all cognitive capacity indirectly, because without the T3 robotic capacity to recognize the objects it is talking about verbally the candidate would not be able to pass T2 in the first place.

      In other words, T2 capacity is grounded in T3 capacity, even if the T3 capacity is not tested directly: The successful candidate still has to be a T3 robot, in order to have the capacity to pass T2.

      Hope that helps.

      Delete
  3. " Surely the goal is not merely to design a machine that people mistake for a human being statistically as often as not! That would reduce the Turing Test to the Gallup Poll that Turing rightly rejected in raising the question of what "thinking" is in the first place!"

    This seems to be addressing a major and very common misunderstanding with regard to the Turing Test and its significance. For example, in Block's "The Mind as the Software of the Brain" (a reading which will be familiar to all PSYC532 students), the issue of selecting judges is raised, with the argument that somebody well-versed in computers is far more likely to correctly tell apart a computer from a human than a layperson selected at random from the street. However, as is made abundantly clear in Harnad's commentary, the Imitation Game is *not* a game, nor is it truly Imitation; it is indistinguishable performance capacity. Hence, all of the arguments brought forward by those such as Block (that the TT is flawed because it acts like a Gallup poll) are based on a fundamental misunderstanding of what "passing the Turing Test" means, and all crumble away with a solid understanding of the significance of perfect performance on this test.

    Replies
    1. Gaming the Turing Test

      And here's a more recent one, on this summer's claim to have passed the TT:

      Harnad, S. (2014) Turing Testing and the Game of Life: Cognitive science is about designing lifelong performance capacity, not short-term fooling. LSE Impact Blog, 10 June 2014.

  4. Harnad offers an insightful look into what he prefers to call Turing's “performance capacity” question. Harnad correctly states that it is not only computers that should be considered for the Turing Test: the test should also extend to robots with “sensorimotor” performance capacity that can perform at the T3 level. In imagining the possibility of building a machine with this capacity, we have never been closer to being able to observe the brain’s activity as it is in nature than we are today, with the CLARITY technique of Karl Deisseroth's lab at Stanford University (http://wiki.claritytechniques.org/index.php/CLARITY_Technique). In regard to Turing’s “skin-of-an-onion” analogy, CLARITY may well be the last skin left to be stripped off the brain, if you will. Whether or not this will reveal anything about the hard problem remains to be seen (and I don’t believe this subject is of main interest to the researchers), but it can reveal a lot about the “dynamical causal system” we are interested in, and about developing a machine that really does what we are able to do. In my opinion, neither Harnad nor Turing gives much credit to the fact that biology can tell us much more about this than may seem apparent at first sight.

    Replies
    1. Reverse-engineering our doing-capacity

      It's clear how CLARITY could be a help in studying the doings of the liver, spleen or pancreas -- perhaps even the "vegetative" doings of the brain (thermoregulation, homeostasis, posture). But I would be very surprised if it could help us understand the mechanisms of cognitive function. What the brain can do is everything we can do. And that's what trying to reverse-engineer the causal capacity to pass the TT requires. How do you think CLARITY can help us in that?

      And that's not the "hard" problem (feeling) but the "easy" one (doing)!

  5. "Like any other science, cognitive science is not the art of fooling most of the people for some or most of the time! The candidate must really have the generic performance capacity of a real human being -- capacity that is totally indistinguishable from that of a real human being to any real human being".

    Harnad raised an issue that I had discussed in the comment section for reading 2a: that a computer cannot pass the TT if it may fail in the future (in my last comment, it was regarding the possibly limited storage capacity of the computer). If a computer were to maintain its "cover" for an unlimited number of years, it would need, in my opinion, a personality. Otherwise, a person might notice contradictions in what the computer says, and doubts would be raised.

    I will give an example. Let's say a person writes the following to the "imitating computer": "I went to a big and loud party yesterday. I feel like I just charged my batteries for the week!". What the computer responds may leave clues as to what kind of personality it possesses. If the computer were to respond "Wow, lucky you! I always feel drained after parties. I prefer to relax at home with a good book", the person may deduce that the computer is introverted. On the other hand, if the computer were to respond "Wow, lucky you! I like parties too. I feel itchy if I stay alone for too long", then the person may deduce that the computer is rather extroverted.

    Thus, the computer will have to continue its imitation as an introvert or an extrovert (to a reasonable extent, of course, as people are never pure introverts or pure extroverts). Otherwise, the computer would fail the TT, because the person would detect inconsistencies in what it is saying (for example, the computer appears outgoing at some times and reserved at others). I do think that personality is part of cognition. Our personality shapes how we interpret information and how we react to it. With respect to the TT, giving the computer a personality will be an important factor in sustaining the "imitation game" for an unlimited amount of time.
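
    To make this concrete, here is a minimal sketch of what I mean (my own invented toy; the trait, the canned replies and the numbers are assumptions, not anything from the readings): a fixed trait parameter biases which reply is chosen, so the candidate's answers stay in character across exchanges.

    ```python
    import random

    # Toy sketch of a persona-consistent responder (invented example):
    # a fixed trait biases reply selection, so the candidate does not sound
    # extroverted in one exchange and introverted in the next.

    REPLIES = {
        "party": {
            "introvert": "Lucky you! I always feel drained after parties.",
            "extrovert": "Lucky you! I feel itchy if I stay alone too long.",
        }
    }

    class Persona:
        def __init__(self, extraversion):
            self.extraversion = extraversion  # fixed trait, kept for a "lifetime"

        def respond(self, topic):
            # stay in character ~90% of the time, with occasional human variation
            in_character = random.random() < 0.9
            extroverted = (self.extraversion > 0.5) == in_character
            return REPLIES[topic]["extrovert" if extroverted else "introvert"]

    penpal = Persona(extraversion=0.2)  # a consistently introverted candidate
    print(penpal.respond("party"))
    ```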

    Replies
    1. No doubt the TT-passer would have to have some sort of personality, otherwise how could it pass the test of emailing you for 40 years without giving you any reason to doubt it's a real person?

      But it's not an imitation game! It's really reverse-engineering and then generating full human performance capacity.

  6. “Here is the beginning of the difference between the field artificial intelligence (AI), whose goal is merely to generate a useful performance tool, and cognitive modelling (CM), whose goal is to explain how human cognition is generated. A device we built but without knowing how it works would suffice for AI but not for CM.” (Harnad 2008)
    I think this distinction is very important to make when one is looking to explain a human’s doings. Many times I found myself asking: how does having a robot that computes things help us understand how humans do what they do? Creating an algorithm (which transforms an input into an output through a string of squiggles and squoggles) produces an outcome, but how would this help one understand how human cognition occurs? All along I was confusing the goals of AI engineers and CM scientists. Originally I believed that both AI and CM were reverse engineering; however, only CM is reverse engineering, because it works with an already existing organism and tries to figure out the mechanism behind this device. AI, by contrast, is forward engineering, because it focuses on the creation of a mechanism.
    I wonder…
    A) If in order to better understand how humans do what they do, both AI and CM have to eventually combine their knowledge to get a more complete picture to solving this problem of inquiry?
    B) What would the intersection be between the two fields of AI and CM; that is, what would be the common foundational ground in which they could communicate similar ideas (ex: on the level of mathematics)?
    C) If (hypothetically) AI was to create a successful T3, would CM scientists need to explain how that robot’s cognition is generated, or would the algorithms be the complete explanation?

    Replies
    1. If anyone created a successful T3, that would certainly be cognitive modelling (CM), not just AI.

      So far AI is ahead of CM in generating performance capacity, but light years from the TT. Once either of them comes up with something that has a lot of performance power, no doubt everyone will take it up to see how far it will take them.

  7. “Here, with a little imagination, we can already scale up to the full Turing Test, but again we are faced with a needless and potentially misleading distraction: Surely the goal is not merely to design a machine that people mistake for a human being statistically as often as not! That would reduce the Turing Test to the Gallup Poll that Turing rightly rejected in raising the question of what "thinking" is in the first place! No, if Turing's indistinguishability-criterion is to have any empirical substance, the performance of the machine must be totally indistinguishable from that of a human being -- to anyone and everyone, for a lifetime (Harnad 1989).”

    That seems about right, but that also seems like quite a lengthy test. The Hard Problem might be impossible, but by this criterion the Easy Problem seems pretty darn difficult. Is this criterion a little too strict? What is our temporal baseline for what we deem human cognition? Would we take back our attribution of human cognition to a person who died or went comatose at the age of 20? I grant that Turing’s email-type format is limited in its scope of what humans can do, but how do you qualify a lifetime? This isn't a critique, but simply a question. Would this be at your T3, T4 or T5 level? Something being “totally indistinguishable” seems too dependent on how other people are attempting to distinguish it.

    “It is of interest that contemporary cognitive robotics has not gotten as much mileage out of computer-simulation and virtual-worlds as might have been expected, despite the universality of computation. "Embodiment" and "situatedness" (in the real world) have turned out to be important ingredients in empirical robotics (Brooks 2002, Kaplan & Steels 1999), with the watchword being that the real world is better used as its own model (rather than roboticists' having to simulate, hence second-guess in advance, not only the robot, but the world too).”

    The discrepancy Harnad elucidates here really drove the point home as to why we cannot use virtual worlds as anything but useful tools. For sake of understanding, I find it revealing to consider how evolution designed us. We, as protective vessels for self-replicating molecules, were molded to the world and its contingencies. What better way to understand the design of ourselves than to run the test in the same “laboratory” that we (our ancestors and their not so successful relatives) were built to make use of to promote our survival and reproductive success? Why go through the trouble of attempting to duplicate the world to run tests of attempted duplications of our cognitive abilities? Why double our problems when the platform for testing is already set up?


    “The real theological objection is not so much that the soul is immortal but that it is immaterial. This view also has non-theological support from the mind/body problem: No one -- theologian, philosopher or scientist -- has even the faintest hint of an idea of how mental states can be material states (or, as I prefer to put it, how functional states can be felt states).”

    That claim doesn't resonate with me. A perfect correspondence theory between mental and material states might be a pipe-dream (pun intended), but I still find deflationary accounts of the Hard Problem to provide some hints as to how it can be approached and/or subsumed under the rest of the so-called Easy Problems. I eagerly await seeing how Professor Harnad manages to squash my optimism. ; )

    Replies
    1. The "easy" problem is pretty darn difficult. Lots of Nobel prizes waiting along that long road.

      You don't have to actually conduct the TT for a lifetime. You just have to generate capacity that would be capable of continuing for a lifetime. No short-term tricks like the Loebner prize.

      If Ethan MacDonald has not already passed T3, he will within a few weeks: Test him!

      But a TT candidate that went comatose after a few minutes probably would not cut it. (Nor would its mechanism tell us much about cognition: just about how to play the imitation game and fool everyone for 10 minutes.)

      That's exactly what the TT is not. And that's why the capacity (though not necessarily the testing) would have to be lifelong.

      And so far we're talking only about T3.

      As to materialism: We're all materialists, of course. Of course it's the brain that generates feeling, just as it's the brain that generates doing. What else? Trouble is that explaining how and why the brain generates doing is doable in principle, hence "easy," whereas explaining how and why the brain generates feeling is out of sight (and -- "Stevan says" -- it looks like it's undoable -- for reasons we'll discuss later).

      So no one (here) is saying that there's an immaterial soul. Just that it's hard to explain how and why the brain generates feeling. No doubt it does. What's missing is the explanation of how (and why).

  8. This paper helped clarify some of the confusion I had with Turing’s paper. In my previous post (2a), I asked if passing the Turing test was equivalent to thinking or if it was equivalent to simply performing like a human. Harnad (2008) wrote that the goals of Artificial Intelligence (AI) and Cognitive Modelling (CM) are “to generate a useful performance tool, and… to explain how human cognition is generated”, respectively. Before reading Harnad’s paper, I was not completely aware of the distinction between the two fields. To me the difference between AI and CM is that AI is more focused on the completion of the task, whereas CM is more focused on how the task is being done.
    I was also very intrigued by the hierarchy of Turing tests. To me, the most interesting jump between levels was between T2 and T3. If I understand correctly, T3 is a real-life robot that can think and act as a human does while T2 is only digital and, hence, a simulation of T3. In other words, T2 can simulate something, but cannot actually BE or DO something. Therefore, T2 can simulate thinking, but cannot actually think. This distinction helped me understand why it was necessary that Turing used a T3 type machine (instead of a T2 type machine) for his test. A T3 machine can actually do the things that a human can do (in terms of verbal and sensorimotor performance). A T3 machine would be able to do what a human can do in the real world. Thus, this machine can actually generate what a human can do (in terms of thinking), not just simulate it.

    Replies
    1. Not quite.

      T2 is just verbal (formal, computational, symbolic) indistinguishability from a real penpal, lifelong. T3 can do everything a real person can do, verbally as well as "in the world" (as you put it).

      The thesis is that whatever passes T2 or T3 thinks, because it can do anything a thinker can do, and we can't tell it apart from someone that thinks (and we can't read minds).

      T3 tests everything a thinker can do. T2 only tests what a thinker can do verbally.

      "Stevan says" only a T3 robot could pass T2, but who knows?

      What's sure is that computation alone could only pass T2, not T3. And if it did, Searle's Chinese Room argument (and the symbol grounding problem) suggests that a purely computational T2 would not be thinking.

    2. "Stevan says" only a T3 robot could pass T2, but who knows?

      I wonder if one could fit people into these T categories? For example, what about a person who cannot move or speak, but can still communicate in writing? Essentially a T2 person. I would argue that this person would be able to pass T2, despite being limited to thinking verbally.

  9. First, Harnad rightly points out that "imitation" and "game" should not be understood as “caprice or trickery” nor as “fakery or deception”. The goal of the game is to empirically test theories about the mechanisms underlying intelligent behaviour, and the blind test is necessary merely to avoid prejudice against the machine for the fact that it is not made of meat or, on the flip side, to avoid giving it too much leeway for the same reason. In defence of the game terminology, however, I would say that it connotes an inherent interest in ‘winning’ for all the participants, an appropriate measure to make sure that the standards of human-likeness are unbiased.

    Harnad further contends that Turing’s criterion (being identified as a human, most of the time by most people) is insufficient and that the only empirically valid criterion is for “the performance of the machine to be totally indistinguishable from that of a human being -- to anyone and everyone, for a lifetime.” Considering the way Turing sets the game up, that is, with 3 players (one human interrogator, another human and a machine), I think Turing’s criterion is sufficient: a successful machine is one for which the odds of success are a coin toss. If we set up the game such that there are only two players and the interrogator has to say whether it’s a human or a machine, then Harnad is right: the successful machine must always win. The problem with this setup, however, is that there is no way to make sure that the thresholds of the interrogator are unbiased. Turing’s setup does ensure fair treatment. Finally, if we apply Harnad’s win-all criterion with Turing’s setup, then we have a machine that is more human than any human, i.e. one that most certainly has recourse to deceptive or manipulative techniques.

    Another point of contention is Turing’s satisfaction with an email test (or T2), which puts too much emphasis on humans’ verbal behaviour; we can do more than use language. Hence, Harnad argues that T3, i.e. “indistinguishability in robotic (sensorimotor) performance capacity”, is the only satisfying test of cognitive performance. Of course, not only does this raise the bar for the machine, but it also necessitates that we change the setup of the test, which is meaningless without a fair judgement between human and machine. To insist on sensorimotor performance capacity, though defensible, risks violating the exclusion of appearance or drawing some arbitrary line between what counts as performance and what does not. I can sing, and even people who say they cannot sing can sing just fine, and even a pathetic attempt to sing is still a very human performance. Same with dancing. Same with getting dressed. Same with dressing fashionably. Same with taking part in a beauty pageant. However pathetic our attempts, these are things we can do. Most of what we mean when we say “he/she looks (as though) x” is performance. This is still not T4 (indistinguishability of inner states), but I argue that T3 is a constraint which demands a body that is convincing to all of my senses.

    Replies
    1. One more thing: the claim that “[a]ccording to the Church/Turing Thesis, there is almost nothing that a computer cannot simulate, to as close an approximation as desired, including the brain” may be problematic and based on a misinterpretation of the thesis.

      “The Church-Turing thesis does not entail that the brain (or the mind, or consciousness) can be modelled by a Turing machine program, not even in conjunction with the belief that the brain (or mind, etc.) is scientifically explicable, or exhibits a systematic pattern of responses to the environment, or is ‘rule-governed’ (etc.). Each of the authors quoted seems to be assuming the truth of a close cousin of thesis M, which I will call Thesis S: Any process that can be given a mathematical description (or that is scientifically describable or scientifically explicable) can be simulated by a Turing machine.
      As with thesis M, neither the Church-Turing thesis properly so-called nor any result proved by Turing or Church entails thesis S. This is so even when the thesis is taken narrowly, as concerning processes that conform to the physics of the real world. (Thesis S taken in the wide sense is known to be false; see the references given earlier re the wide version of thesis M.)”
      (http://plato.stanford.edu/entries/church-turing/#Bloopers).

    2. The Weak and Strong Church-Turing Thesis

      The TT is not only not a game, it is not about trying to compete or win. It's just about trying to reverse-engineer and generate human cognitive capacity -- all of it, and totally indistinguishable from any of us humans, to any of us humans.

      Appearance is not relevant, and I think today we could interact open-mindedly with what clearly looks like a robot, to Turing-Test whether it has a mind, as I do.

      But the goal is only to generate generic human performance capacity. Leave generating Fred Astaire or Einstein for later...

      Thanks for pointing to this on the Church/Turing Thesis (CTT). Let's discuss this in class.

      Briefly, according to the weak CTT, formal computation (the Turing machine) captures all that mathematicians do and mean by "computing."

      The strong CTT is that computing can simulate or approximate just about any physical thing (clocks, atoms, solar systems, organisms, organs, devices, yes: brains too). But the simulation is just a simulation -- squiggles and squoggles that can be interpreted as corresponding to properties of the thing being simulated. A simulated plane does not fly and a simulated waterfall is not wet. (And don't mix up computer simulation with virtual reality, which can be a computer simulation that is hard-wired into generating inputs to your senses.)
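
      To make the "just squiggles" point concrete, here is a minimal sketch (a toy example of my own, with invented numbers): a computer can simulate a falling object as closely as you like by shrinking the time step, but all it produces is symbols that we interpret as heights and times; nothing in the machine falls.

      ```python
      # Simulating a falling object: the output is only numbers that we
      # *interpret* as heights and times. The approximation can be made as
      # close as desired, but nothing here actually falls. (Toy example.)

      g, dt = 9.81, 0.01          # gravity (m/s^2) and time step (s)
      h, v, t = 100.0, 0.0, 0.0   # height (m), velocity (m/s), time (s)
      while h > 0:
          v += g * dt             # Euler step for dv/dt = g
          h -= v * dt             # Euler step for dh/dt = -v
          t += dt
      print(f"'hits the ground' after ~{t:.2f} simulated seconds")
      ```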

      The strong and weak CTT are about the power of computation. Neither of them is the same thing as Computationalism (cognition = computation). (Everyone should make sure they understand the difference: potential exam question!)

  10. “… [W]hat Turing will be proposing is a rigorous empirical methodology for testing theories of human cognitive performance capacity… calling this an “imitation game”… has invited generations of needless misunderstanding” (Harnad, 2008).
    I especially liked this argument by Harnad, as I initially found it difficult to decipher what Turing meant by his “imitation game”, given the term. As Harnad points out, the words “game” and “imitation” suggest that trickery will be involved, and I also think that they downplay the seriousness of Turing’s empirical methodology for testing theories of human cognitive performance capacity. It may be that Turing was being modest by calling his methodology a game, although it is more likely that he did not expect his ideas to become so revolutionary in the cognitive sciences. Moreover, I also think that the term misses the point of what Turing was really trying to do, which was to discover human cognitive performance capacity by reverse-engineering it; Harnad suggests that the “game” is actually a branch of reverse bioengineering, which he sees as the future science of cognition. This idea was not made clear in Turing’s paper on computing machinery and intelligence. I think Turing describes the “imitation game” in an overly simplistic way, which distracts the reader from its true meaning. Harnad does Turing’s “imitation game” more justice by emphasizing its significance for discovering human cognitive performance capacity.

    Replies
    1. I think Turing knew all of this. He just did not spell it all out, the way great mathematicians often do when they do a proof, leaving out steps that are just dead obvious to them, but require laborious explanation for lesser mathematicians. It was this loose way of expressing his thoughts that led to the many misinterpretations of Turing (such as the howler this summer about the TT having been passed):

      Harnad, S. (2014) Turing Testing and the Game of Life: Cognitive science is about designing lifelong performance capacity, not short-term fooling. LSE Impact Blog, 10 June 2014.

  11. "Some have interpreted this as implying that "knowing" (which is just a species of "thinking") cannot be just computation. Turing replies that maybe the human mind has similar limits, but it seems to me it would have been enough to point out that "knowing" is not the same as "proving." Godel shows the truth is unprovable, not that it is unknowable. (There are far better reasons for believing that thinking is not computation.)"

    This comment makes me question how "knowing" can be captured in a machine. Knowing something does not always make it a true fact. A person can be fully confident about the answer to a question and still be completely wrong. That person can come up with creative arguments to support their false answer. They will believe that they know they're right. How can such a strong conviction in false facts be captured in a machine? If knowing is just a type of thinking, how can we get a machine to "know" something false when it has previously been programmed to compute the real truth value? Can we program computers to have the same gullibility that humans can display? I suppose this could be just another element of the T3 test. If the machine can behave as humans do, then it must be able to be tricked as humans are.

  12. "Surely the goal is not merely to design a machine that people mistake for a human being statistically more often than not! That would reduce the Turing Test to the Gallup poll that Turing rightly rejected in raising the question of what “thinking” is in the first place! No, if Turing’s indistinguishability criterion is to have any empirical substance, the performance of the machine must be equal to that of a human being - to anyone and everyone, for a lifetime." (p.26)

    But isn’t that as good as it could possibly get? It seems to me that any human being would, from time to time, “fail” the Turing test as it is proposed. If that is so (and I am not sure whether the test has been run as much with humans as it has with computers), how can we expect a computer to pass the test every time?


    "Or if one’s pen pal was totally ignorant of contemporaneous real-world events, apart from those we describe in our letters? Would not even its verbal performance break down if we questioned it too closely about the qualitative and practical details of sensorimotor experience? Could all of that really be second-guessed purely verbally in advance?" (p.31)


    Here is one way in which someone might fail the test: when they lack knowledge of some event that is, from the interrogator's point of view, essential. Take, for instance, someone who grows up in a remote location in a country not overtly subjected to war; they may very well not be aware of events that have been life-changing for others.

    Replies
    1. I think you are not thinking realistically about the TT: A lifelong penpal about whom you haven't the slightest suspicion that he may be just a computer. The kinds of occasional lapses and gaps you describe would not make you conclude he was a machine.

  13. “There is no one else we can know has a mind but our own private selves, yet we are not worried about the minds of our fellow-human beings, because they behave just like us and we know how to mind-read their behavior. By the same token, we have no more or less reason to worry about the minds of anything else that behaves just like us -- so much so that we can't tell them apart from other human beings”

    The philosopher in me reading this had to jump and shout, “Some people DO worry about the minds of our fellow humans!!” Following a Cartesian train of thought, it is impossible to know anything about any other mind except your own, which has caused much worry and many tests and questions. If only my mind is a sure thing, what else can exist for sure? Am I just a brain in a vat? Am I even a brain in a vat? Most of this may seem irrelevant, but I argue that it is very relevant, given that there is currently no robot (to my knowledge) capable of passing for a human indistinguishably. To me, that makes this a problem for both philosophy and psychology. The problem pointed out by Harnad is that such a robot would need to be able to imitate what human minds DO. We have some ideas about what human brains do and some idea of how this happens, but we are far from an extensive explanation of how the human mind can do all of the things we experience it doing.

    Replies
    1. Although I find questions about the other minds problem interesting, I have formulated my own way of sort of ignoring them.
      To me, it becomes a matter of faith. In nearly every aspect of science, even though empiricism tries to provide hard evidence, we make leaps that turn observations into proof, or ideas into laws. Nothing can be one hundred percent proven or certain, and things that are 'confirmed' are only so under particular circumstances and from our unique perspective. But if we are willing, on a daily basis, to make assumptions and believe things without complete causal relations, then why do we suddenly question that others think as we do, or have minds as we do?
      Really, we have ample evidence that people have the same physiology and the same neurological mechanisms, and we have language that allows us to share a common understanding or experience of being conscious and feeling. To me, this is enough to dispel any doubt that others think and feel as I do.

    2. I understand why this would cause an internal crisis, but I do not think it helps to question it further with regard to the Turing Test. Whether or not I am a brain in a vat does not really help me with, or prevent me from wondering about, the other-minds problem, and even more the hard problem. I think this issue is much more for philosophers, who will have better ways of introspecting than the cognitive scientist.

  14. Yes, there's the other-minds problem. You can't be sure other people have minds. But you can be pretty sure, if they act just like people who have minds. And that's the gist of the TT: Once you can't tell them apart from real people, you have no more (or less) reason to doubt that a TT-passer has a mind than any other person.

  15. ''What we mean by "think" is, on the one hand, what thinking creatures can do and how they can do it, and, on the other hand, what it feels like to think.''

    I am surprised by the use of the verb ''to feel''. Do we actually ''feel'' that we think?

    The material meaning of feeling (''I feel the soft sheets'') is irrelevant here; thinking does not require external receptors.
    Maybe one could argue, however, that thinking is a kind of internal perception because, for example, an individual can visualize a person he or she is thinking about. And there probably exist studies showing that the brains of people performing such a task show activation in visual system areas. Then we would feel we are thinking in the sense that our nervous system gets activated as if we were actually perceiving the person visually in the external world. The same seems to apply to language and activation of the visual and auditory brain areas, etc.
    But interpreting feeling that we think in this sense would mean that I necessarily think with images or language, be it actual words or mathematical signs. However, there is no evidence that blind and deaf people cannot think. So we cannot feel that we think in that sense.

    Feeling can also have an immaterial meaning as in ''I feel guilty''. But I do not see how thinking could be considered a sentiment.

    Thus, the argument according to which we would need to be the machine in order to feel that it feels becomes irrelevant.

    It seems rather that we know that we think. And as long as a person knows, he or she can tell that he or she knows.
    Now the problem goes back to defining what thinking is because a computer could only tell whether it thinks if it knew what thinking was.
    The question also is then how do human beings know they think if they don't feel it?

  16. "a computer, simulating the first system formally, but not actually doing what it does" - to be honest, this is not the idea of a computer I had in mind when reading Turing's paper. Just because a computer is made to do something in the same way as another system, doesn't mean necessarily that it is simulating it, does it? The way I was taught about the universal Turing Machine, it would literally just take on all of the instructions and assume the initial state and position on the tape of each Turing Machine it would 'simulate,' and by doing so essentially become that individual Turing Machine - not only would the UTM it act the way a regular TM did but it would literally be doing exactly the same computations as the regular TM. Would not both operations be exactly identical? That seems to me more real than a simulation.

    One other interesting note from Professor Harnad's paper - "Turing is dead wrong here. This is not solipsism... It is merely the other-minds problem." While I agree that the idea Turing describes doesn't seem like true solipsism, I find it amusing that it should be described just "merely" as the other-minds problem, which I would love to discuss more in class; I think it has a lot more to do with our relationship with machines, internal states, and thinking than Turing would give it credit for.

  17. “Turing’s proposal will turn out to have nothing to do with either observing neural states or introspecting mental states, but only with generating performance capacity (intelligence?) indistinguishable from that of thinkers like us” (Harnad, 2008, pg.1).
    This is true, but I still find it to be a fatal gap in Turing’s proposal, because I still fail to wrap my head around the idea that one could generate performance capacity like our own, without even a basic understanding of our own.
    My difficulty with this idea is furthered when Harnad (2008) says “Calling this an “imitation game” (instead of a methodology for reverse-engineering human cognitive performance capacity) has invited generations of needless misunderstandings” (pg. 3). However, I still find it difficult to understand how we can suggest that we are even coming close to “reverse engineering human cognitive performance capacity” (let alone well enough to create a machine that passes the Turing test) if we understand essentially nothing of human cognition.
    Later in the paper, Harnad (2008) also says that we will be able to speak of machines thinking without being refuted “only at a cost of demoting “thinking” to meaning only “information processing” rather than what you or I do when we think, and what that feels-like” (pg. 18/19). However, if we don’t understand human cognition and human thinking and we can’t define what it “feels-like”, then aren’t we demoting “thinking” to meaning only “information processing” anyway?

  18. I thought that Prof. Harnad's "Annotation Game" was quite clarifying, though I found myself disagreeing with it on multiple occasions in three fundamental ways.

    1. Although the goal of the Turing Test is to create a machine indistinguishable from a person, Prof. Harnad's criticism of Turing's "as often as not" criterion ignores the way in which Turing set up his test. If the examiner has to choose A or B to be the person, then - if they are indistinguishable - he will choose randomly between the two, with the machine chosen statistically as often as not. This is just an empirical reality of the test setup, not a pointless distraction.

    2. The "imitation" aspect of the test is absolutely imperative, contrary to Prof. Harnad's framing. It is a contradiction to ask something to appear indistinguishable from something that it is not without "imitation". Conceivably if the machine can think similarly to the way that we do, then it will know that it is not a human being. If we ask it then to be indistinguishable from something that it is not, and set the standard as something telling the truth, the criteria for the machine are too strict, as we are asking it not only to think like an average person, but like an exceptional liar to itself and to us as well. If the machine is allowed to be "fooling" in the same way that the man has to fool in the initial "imitation game", then the machine and the man need only match performance, the machine need not be superior.

    3. A less grounded critique, more of an opinion, with regard to all of the references to sensorimotor function as necessary for T2: I don't know if Turing would disagree with this for certain; I feel, rather, that he wanted to ask a more interesting question: why do we think what we think, instead of why do we do what we do. If all my senses were to be numbed right now, I would likely still be thinking (probably something like "am I dead?"). Even if sense is required to establish this ability, I think that it was whatever process drives this "senseless thought" that Turing was interested in: it certainly explains the verbal nature of the T2 requirements. Beyond that, I think that it is certainly an important step in understanding why we do what we do, as well as a somewhat more challenging problem for physiology to solve than the integration of sensory inputs.

  19. “But surely an even more important feature for a Turing Test candidate than a random element or statistical functions would be autonomy in the world” pg. 8

    This is Harnad’s response to the claim that a random element within the machine could constitute free will. He seems to separate out autonomy from free will, saying that autonomy is an important feature for a Turing robot to embody, while free will is beyond the scope (along with feeling). In what ways is autonomy separate from free will? It seems like autonomy is possibly one small subcategory of free will that the robot could conceivably embody. In fact, as Harnad says, a robot would need a degree of autonomy in order to successfully pass the Turing Test. According to dictionary.com, autonomy is “the freedom to determine one’s own actions and behaviours”. This seems to be a slightly more limited version of free will that does not necessarily require consciousness, just the ability to think and decide.

    I am interested in better understanding why some argue that randomness could constitute free will. The quality of being random seems entirely opposed to the quality of choice. Free will in humans manifests as choices and lots of factors inform rational choices. Randomness within a Turing robot could simply simulate the appearance of unpredictable free will, but how could a random element ever take past experiences and logic into account? I know that Harnad and Turing both disagree with equating randomness and free will, but I am interested in learning more about the philosophy behind this.

  20. "No, if Turing's indistinguishability-criterion is to have any empirical substance, the performance of the machine must be totally indistinguishable from that of a human being -- to anyone and everyone, for a lifetime (Harnad 1989)."

    For regular human beings, reciting, say, a string of numbers (e.g., 749232) is quite simple and can be done automatically. If asked to recite them in reverse order, it becomes challenging and takes noticeably longer. For a robot or machine, reciting a string of numbers is very simple, automatic, and quick; and it would be equally automatic and fast for the robot to recite the numbers in reverse order too!

    If we were to program a robot that models human cognition, it too would have to stumble and take its time in responding to the simple command of reciting a string of numbers in reverse order. I wonder: how could this seemingly human "flaw" of being slower at reversing the order of a string of numbers (and definitely not a flaw seen in computers) be engineered in a robot, and by the same mechanism as in humans? It could not be done by inserting a time delay whenever a question like that is asked, because humans do not just arbitrarily take longer. Humans go through some sort of biological process in the brain, which requires more effort (or energy). A robot would then have to be programmed and built so that the answer to a question like that would physically take longer to be broadcast.

    I am not the most educated in computer and hardware engineering, but I believe that we have here a fundamental physical difference between the electrical properties of man-made computers and the biological properties of nature-made human brains. Could this also lead to a fundamental chasm between modeled cognition and human cognition?
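
    To illustrate the asymmetry I have in mind, here is a minimal sketch (my own toy model, not a serious proposal): for an ordinary program, forward and backward recital are equally trivial, whereas in a mechanism that can only reach memory items by serial scanning, the extra latency of backward recall falls out of the mechanism itself rather than from an arbitrary time delay.

    ```python
    digits = "749232"
    print(digits, digits[::-1])    # both effectively instantaneous for a program

    # Toy mechanism (invented) in which backward recall is slower *because of
    # the mechanism*: each item must be reached by re-scanning a serial buffer.
    def forward_recall(buffer):
        steps, out = 0, []
        for item in buffer:        # one serial pass: one step per item
            out.append(item)
            steps += 1
        return out, steps

    def backward_recall(buffer):
        steps, out = 0, []
        for i in range(len(buffer) - 1, -1, -1):
            steps += i + 1         # re-scan from the start to reach item i
            out.append(buffer[i])
        return out, steps

    print(forward_recall(list(digits)))   # 6 steps
    print(backward_recall(list(digits)))  # 21 steps: the delay emerges from
                                          # the mechanism; it is not added on
    ```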


  21. "This is one of the many Granny objections. The correct reply

    is that (i) all causal systems are describable by formal

    rules (this is the equivalent of the Church/Turing Thesis),

    including ourselves; (ii) we know from complexity theory as

    well as statistical mechanics that the fact that a system's

    performance is governed by rules does not mean we can predict

    everything it does; (iii) it is not clear that anyone or

    anything has "originated" anything new since the Big Bang."

    Reading the above passage helped me clarify my understanding

    between being able to create new things and being able to

    explain the reasoning behind why a machine does what it does.

    That is, if a child-machine had been taught to say "I like

    apples" through punishment/reward conditioning -a child

    machine would have learnt that saying "I like apples" is

    good, but it would not be able to explain why saying "I like

    apples is good."

    That being said, I don't think that it is necessary for a

    thinking being to be able to explain the reasoning behind

    everything it does -my point goes towards the inability of

    Turning's child computer to learn without a punishment/reward

    method of reinforcement. How would the machine learn words

    that weren't presented in this a reward/punishment context?

    One could argue that the child machine could also have a

    scanning function, where it could scan media and construct

    sentences from what it read, although I do not think scanning

    sentences and creating sentences from them constitutes

    learning. (even though I am confused as to how exactly to

    differentiate between learning and scanning).
    Furthermore, this scanning-child machine would not be able to

    create it's own sentences -not in the sense that it has to

    create something original and never done before, but in the

    sense that if it created sentence A after scanning media, it

    would not be able to create a sentence B that analysed or

    expanded upon sentence A without scanning for further media.
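
    To make the conditioning scenario concrete, here is a minimal sketch (my own invented toy, not Turing's design): a child-machine that learns purely from reward which utterance to produce, and that therefore can never explain why the utterance is good.

    ```python
    import random

    # Toy child-machine learning by reward/punishment alone (invented sketch).
    # It learns *that* "I like apples" pays off, never *why* -- which is the
    # limitation raised above.

    utterances = ["I like apples", "I hate apples", "apples like me"]
    value = {u: 0.0 for u in utterances}   # learned worth of each utterance

    def teacher(utterance):
        # reward the target utterance, punish everything else
        return 1.0 if utterance == "I like apples" else -1.0

    for _ in range(200):
        if random.random() < 0.1:
            said = random.choice(utterances)       # occasional exploration
        else:
            said = max(value, key=value.get)       # otherwise say what pays
        value[said] += 0.1 * (teacher(said) - value[said])

    print(max(value, key=value.get))               # -> "I like apples"
    ```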

  22. Harnad

    "A reasonable definition of 'machine', rather than "Turing Machine," might be any dynamical, causal system."

    If everything is considered to be a "machine" in that it is a dynamical, causal system, where does feeling fit into this framework? If feeling can't be causally explained, then it exists beyond the physical realm (which is hard enough to conceive). If our intuition is that feeling does exist beyond physics, then can we conclude that a robot wouldn't feel? Is the idea that it wouldn't matter either way, as long as the robot convinces us it feels?


    "Turing's criteria, as we all know by now, will turn out to be two (though they are often confused or conflated): (1) Do the two candidates have identical performance capacity? (2) Is there any way we can distinguish them, based only on their performance capacity, so as to be able to detect that one is a thinking human being and the other is just a machine? The first is an empirical criterion: Can they both do the same things? The second is an intuitive criterion, drawing on what decades later came to be called our human "mind-reading" capacities (Frith & Frith 1999): Is there anything about the way the candidates go about doing what they can both do that cues me to the fact that one of them is just a machine?"

    The "intuitive criterion" doesn't seem obvious to me, it seems to hinge on putting the machine to the test against an "ideal" human. Does the Turing Test require a simplified view of human psychology?

    "Disabilities and appearance are indeed irrelevant. But nonverbal performance capacities certainly are not. Indeed, our verbal abilities may well be grounded in our nonverbal abilities (Harnad 1990; Cangelosi & Harnad 2001; Kaplan & Steels 1999). (Actually, by "disability," Turing means non-ability, i.e., absence of an ability; he does not really mean being disabled in the sense of being physically handicapped, although he does mention Helen Keller later.)"

    Maybe appearance is irrelevant, but I would argue that human physicality shapes/defines us in some way, as it is the extension that allows us to interact with the outside world. I could be wrong, but I feel like a robot with a significantly different appearance (e.g., missing eyes or legs) or with "non-abilities" (the inability to gesture with hands, for example) could not develop and learn from its environment the same way a human would.

    Replies
    1. continued...

      "With the Turing Test we have accepted, with Turing, that thinking is as thinking does. But we know that thinkers can and do do more than just talk. And it remains what thinkers can do that our candidate must likewise be able to do, not just what they can do verbally."

      If we agreed that animals think and feel (not necessarily in the linguistic way humans do), then would making a robot animal that convinces a real animal that it's real be sufficient to say a machine can think/feel? I realize this is purely hypothetical, since we can't know for sure whether the animal is convinced that its robot counterpart is real, but it's just something that crossed my mind. Or conversely, what if a robot animal convinced a human that it could feel? Could we decide that a machine can feel based on that? I understand that the point behind making a human-like robot is to reverse-engineer human cognition to achieve a better understanding of why we do what we do... but if we were just talking about seeing whether a machine could convince us it feels... maybe kind of far-fetched; any thoughts?

      "real performance capacity"

      Again, I'm a bit stumped by how we're defining how the robot should act. What is "real performance capacity"? I almost feel like if we were to present someone with two people and told them one was a robot (while actually presenting them with two real people), they would start finding "odd behaviours" and convince themselves that one of them is actually a robot. Alternatively, consider the example we talked about in class, where we find out that someone we've become acquainted with is actually a robot... what if our reaction was simply "well, that explains a lot"? We notice that they do odd things, but since they look like a human and tell us that they are human, we have a tendency to believe them, even if they do odd things. If we accept that the robot doesn't have to have the appearance of a human, then again I am left wondering what would constitute appropriate human performance.

  23. When distinguishing between levels of Turing tests (based on what capacities they measure), Harnad makes the point that in T2, an email-only machine would fall short of human performance when asked about configurations of things in the real world (e.g., the moon). When reading, I had the thought that perhaps the machine would be able to answer such questions were it connected to the internet. However, that method would (a) be unhelpful for understanding how humans generate performance and (b) rely on the observations of humans (humans have already constructed a representation of the world, which the robot would be tapping into, thus not utilizing its own capacities). As Harnad states on page 11, the Turing test is looking for performance capacity in the real world, not within some sort of representation of the world (i.e., the internet).

    p. 12: “…Turing testing which, even if it will not explain how thinkers can feel, does explain how they can do what they can do.”
    Something general that I've been wondering about, which this reading brought up, is whether we can solve the easy problem without solving the hard problem. It seems that what we feel strongly influences what we think. If we construct something that can think and not feel, we may have a very difficult time replicating human behaviors (which is what we want to do if our aim is to explain human performance). Sensory input may be present to a thinking thing, but my intuition is that another input to cognition is feeling. The feeling we get from doing or experiencing something frequently affects our mental states, and thus our behavior; feelings are an essential cog in the machinery that causes us to do what we do. If this is true, then, due to the other-minds problem, we might not be capable of building a machine that solves the easy problem for all human capacities (Harnad, p. 13: “There is no way to know whether either humans or machines do what they do because they feel like it…”).

    ReplyDelete
    Replies
    1. “If we construct something that can think and not feel, we may have a very difficult time replicating human behaviors”

      On the contrary, I think it is possible that human behaviours can be replicated without feeling. For example, when I touch something hot, I automatically retract my hand to stop the pain and to prevent damage. I know that it feels like something to be in pain. However, I think it is possible to perform this action without feeling. It is possible to build and program a robot without feeling but still have it behave exactly as I did: if the robot touches something hot, it could be programmed to limit the damage to itself (see the toy sketch at the end of this reply).

      In that sense, I think it is possible to solve the easy problem before solving the hard problem. In the future, I think it will be possible to build a robot that can do things indistinguishably from a human. This robot could explain how we do what we do, but the question of feeling remains up in the air: how do we know that this robot feels? And if it does feel, how?
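
      To make the hot-object example concrete, here is a minimal sketch of a feeling-free reflex (my own toy illustration, not anything proposed by Turing or Harnad; the threshold and names are made up):

```python
# A minimal sketch of a damage-limiting reflex with no "feeling" anywhere
# in the program: the withdrawal behaviour is generated by a bare
# threshold test. The threshold value is hypothetical.

PAIN_THRESHOLD_C = 50.0  # made-up temperature above which tissue is damaged

def reflex(surface_temp_c):
    """Map a (simulated) temperature reading to a motor command.

    Nothing in this function represents felt pain; the behaviour is
    produced purely by the comparison below.
    """
    if surface_temp_c > PAIN_THRESHOLD_C:
        return "retract hand"
    return "maintain contact"

print(reflex(120.0))  # -> retract hand
print(reflex(22.0))   # -> maintain contact
```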

      Delete
  24. "The points about determinism are probably red herrings. The only relevant property is performance capacity. Whether either the human or the machine are completely predictable is irrelevant. (Both many-body physics and complexity theory suggest that neither causal determinacy nor rulefulness guarantee predictability in practise -- and this is without even invoking the arcana of quantum theory.)"

    I would have to disagree and say that I do believe the relative predictability of a human compared to a machine has a great deal of relevance. Although it is stated that predictability in practice is not guaranteed, a human or a computer under normal conditions, when faced with a specific situation, will reliably yield an appropriate behaviour/output in response. Take our response to painful stimuli, for instance. We all have different thresholds for pain, and our level of arousal at any given moment modulates our experience of and responses to the environment; nevertheless, when we burn our skin, almost all of us respond by quickly retracting our finger from the flame (the exceptions being the very small percentage of people with extremely rare neurological conditions who are insensitive to pain, or those with motor disabilities). The behavioural reaction may vary in many ways, say in the speed of the retraction, but for the most part this reaction is predictable and is telling of the nature of human behaviour, physiology, and basically how we work. Similar predictable input/output relationships exist in computers (for example, holding the power button on my phone turns it on and off predictably, unless it is broken and thus in a condition that deviates from normal; a toy sketch of this is below). I understand that saying things like "for the most part" may not be precise enough for this type of discussion and does not sound convincing. However, I can't help but think that because causality is one of the dimensions connecting people and computers and putting them into the broad category of machines, shouldn't the extent to which each reacts, more or less predictably, to different types of stimuli matter a great deal?
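
    To illustrate the power-button point, here is a toy sketch (my own, not from the paper) of a fully deterministic machine whose behaviour under normal conditions is perfectly predictable:

```python
# A trivially deterministic machine: each (state, input) pair yields
# exactly one next state, so the same button press from the same state
# always produces the same result.

def press_power_button(state):
    """Deterministic transition function for a two-state 'phone'."""
    return "off" if state == "on" else "on"

state = "off"
for _ in range(4):
    state = press_power_button(state)
    print(state)  # on, off, on, off: reliably predictable
```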

    ReplyDelete
  25. “Godel's theorem shows that there are statements in arithmetic that are true, and we know are true, but their truth cannot be computed. Some have interpreted this as implying that "knowing" ... cannot be just computation. Turing replies that maybe the human mind has similar limits..."
    I wonder how an intelligent machine would react to Gödel’s Incompleteness Theorem. Can it grasp this concept of provable unprovability? Moreover, I’ve often found that mathematical theorems like this can sometimes be very difficult to wrap one's head around. For instance, I can still recall vividly all the confused faces with vacant expressions, staring at the blackboard, when this very theorem was first introduced in a philosophy lecture. In this sense, how would machines process such a piece of knowledge differently from humans who, for the life of them, cannot understand it and/or refuse to believe it’s true?

    “No, if Turing's indistinguishability-criterion is to have any empirical substance, the performance of the machine must be totally indistinguishable from that of a human being -- to anyone and everyone, for a lifetime."
    This is how I imagine a true T2-passing machine: not only does it "fool" you for years, even decades, into believing that it's a real human through and through, but when it finally tells you that it is in fact a machine, you laugh and think it's the stupidest joke you've ever heard. Because even when you start with the premise that you have been talking to a machine this whole time, you still can't seem to find any evidence that your pen-pal is anything other than a real, actual human being. (By "fool" I mean that the machine's verbal communication capability is completely indistinguishable from a human's.) If an interrogator is biased to believe that s/he is having a conversation with a human, s/he may not pay much attention to the subtle cues that might give away the machine's identity. This is why it's important to always carry out the TT with both the machine of interest and its human counterpart, so that distinctions, if any, can be made.

    ReplyDelete
  26. Harnad’s comments really helped clear up Turing’s paper. Here are a few of my own:
    “A reasonable definition of "machine," rather than "Turing Machine," might be any dynamical, causal system. That makes the universe a machine, a molecule a machine, and also waterfalls, toasters, oysters and human beings. Whether or not a machine is man-made is obviously irrelevant. The only relevant property is that it is "mechanical" -- i.e., behaves in accordance with the cause-effect laws of physics”

    This brings up a very interesting point, one that I had not thought about: machines can be anything, not just computers. Yet, as Harnad later explains and as I fully agree, there is no point in entering a human being in the TT; for us to be able to understand human cognition we must be able to build it, since that is the only way we can possibly understand everything that is happening within human cognition. This is why we need CM (cognitive modelling) and not AI.

    …[fatal] flaw in T2 itself: Would it not be a dead give-away if one's email T2 pen-pal proved incapable of ever commenting on the analog family photos we kept inserting with our text? (If he can process the images, he's not just a computer but at least a computer plus A/D peripheral sensors, already violating Turing's arbitrary restriction to computers alone)

    This to me is quite a fascinating point. Turing spends the whole time explaining that this "game" is restricted to digital computers, and Harnad indeed brings out a fatal flaw. To be able to pass the TT for a lifetime (as is required), the computer would without a doubt receive images to which it would need to respond. Yet to process these images, it would need some kind of sensors, making it more than a digital computer, as Harnad states. The only other way to solve this issue would be to make the human think that the computer is blind. The problem here is that not every candidate could plausibly be blind, and taking away the ability to see takes us one step further away from the fundamental question: "How do we do what we do?"
    The fact that the computer might be totally ignorant of "contemporaneous real-world events" doesn't seem as big a problem (who doesn't know someone who isn't up to date with current events?). More importantly, these events could be periodically integrated into the computer's system, so as not to blow its cover; or, for certain events, its reply by email could simply ask about the start or the outcome of the event, allowing the human correspondent to explain the situation.

    ReplyDelete
    Replies
    1. To rule out the senses, T3 would not just have to be blind but deaf, and unable to smell, taste, touch or move -- in other words, not T3!

      Delete
  27. 2b. Turing: “The popular view that scientists proceed inexorably from well-established fact to well-established fact, never being influenced by any unproved conjecture, is quite mistaken. Provided it is made clear which are proved facts and which are conjectures, no harm can result. Conjectures are of great importance since they suggest useful lines of research.”

    Harnad continues to "clean up" Turing's paper, if you will, by unpacking, clarifying, and expanding on Turing's core ideas. From a scientific standpoint, it is important that Harnad differentiates between formal conjectures and empirical hypotheses. While conjectures are speculative conclusions, a hypothesis is a proposition to be explored and tested. I think that conflating the two the way Turing does, by writing that conjectures offer "useful lines of research," is part of the reason why the TT has been so misinterpreted over the years.

    For example, taking the Church-Turing thesis as a hypothesis, as a starting point for examining thinking, would be difficult to test (you would need a Turing Machine, and that alone rests on hypotheses upon hypotheses). It makes sense, then, that there would be all sorts of backlash against the actual hypothesis that one possible way to look at thinking is that it is "just" computation: people had already been misled into believing that a conjecture about computation was the be-all and end-all. Therefore, as Harnad points out, it is important to keep in mind that the TT is a useful method, but not necessarily the explanation, when it comes to thinking about thinking.

    ReplyDelete
  28. Harnad offers a very clear dissection of the principal elements of Turing's article on Computing Machinery and Intelligence. He begins by explicitly defining thinking as "an internal state" which is "introspectively observable as our own mental state when we are thinking". Following this, he gives the definition of a machine as a "dynamical, causal system". I believe both definitions are necessary in order to fully appreciate the ideas presented in both papers.
    Harnad then considers the many misunderstandings associated with the Imitation Game, such as how using the word "game" causes people to believe deception is involved. I must admit I am guilty of this myself. Harnad makes fully clear that what is meant is "a rigorous empirical methodology for testing theories of human cognitive performance capacity", which is very helpful in understanding the rest of the paper.
    Furthermore, Harnad gives a hierarchy of Turing Tests and concludes that Turing intended T3: total indistinguishability in robotic sensorimotor performance capacity, meaning that the machine must be a full robot and not just an email performer. He then explains that the machine need not be just a computer, claiming that "any engineered device should be eligible; and it must be able to deliver T3 performance, not just T2". This is where my understanding becomes less clear. I still do not quite understand what distinguishes a machine from something that would not be accepted in the Turing test. It is claimed that the "Turing Test is about finding out what kind of machine we are, by designing a machine that can generate our performance capacity", but that cloning is not permitted, because of the explanatory power we want for what thinking is. Therefore the machine must be man-made and capable of being reproduced, so that we understand it and can compare it to human thinking. It is also claimed that this machine must be able to learn and change its model. The ultimate goal of the Turing test is to model human thinking in order to understand it, and I think this will be very hard if we program a machine to change itself every time it learns a new thing (a toy sketch of such a self-updating learner is below). How will we know what has changed, and understand it, if the machine is continuously changing its programming? This, I think, is also what is ultimately so hard about understanding thinking: it is so different from other biological functions because it is continuously changing with no physical manifestation.
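
    As a toy sketch of the worry above (my own example with made-up numbers, not a mechanism proposed in either paper), consider a one-parameter learner whose entire "model" changes with every new experience, which is exactly what makes its current state hard to inspect after the fact:

```python
# The machine's whole "model" is a single weight that drifts with each
# observation, so the system an examiner inspects today is no longer the
# system that existed yesterday.

weight = 0.0          # the entire model
LEARNING_RATE = 0.1   # hypothetical constant

def learn(inp, target):
    """Nudge the model toward the target on every new experience."""
    global weight
    error = target - inp * weight
    weight += LEARNING_RATE * error * inp

for inp, target in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]:
    learn(inp, target)
    print(f"model is now weight={weight:.3f}")  # keeps changing as it learns
```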

    ReplyDelete
  29. Harnad's Annotation Game definitely helped clarify my understanding of parts of Turing's article; however, it also left me a little confused as to what Turing was really saying.

    I agree with Harnad that T3, the indistinguishable robot, is the "correct" level of the TT for determining whether a machine can "do everything that we do". After all, a machine must be able to ACTUALLY do what we do and not just do it virtually, and it seems to make sense that this is necessary for a cognitive model.

    However, I am not sure whether this is what Turing intended. The article distinguishes between an AI and a CM approach. In Turing's article, the level of TT settled on is T2: the email test. As long as a machine can convince a human that it is human, isn't that enough? If the machine can virtually simulate walking or seeing a starry sky, then surely it can convince an email pen-pal that it is human. From Turing's article I keep getting the feeling that it wasn't a matter of whether the machine could ACTUALLY do everything that humans can do, but rather: without having physical contact with the Turing machine, could it still convince you that it can do everything that we can do?

    ReplyDelete
    Replies
    1. “In Turing’s article, the level of TT settled on is T2: the email test. As long as a machine can convince a human that it is human, isn’t that enough? If the machine can virtually simulate walking or seeing a starry sky, then surely it can convince an email pen-pal that it is human.”

      I believe that the main point of the Turing Test is to build a machine that can do what a human can do. It's not enough for a computer simply to convince another human that it is human in order to pass the Turing test. As you said in your commentary, the machine must "do everything that we do". I don't disagree that a robot could potentially convince a human pen-pal that it is human, but I don't think that this is enough.

      We humans are more than just emails; we are more than just verbal creatures. What we do every day is more than just talk to one another. If that were all we did, then T2 would be the correct level, because T2 only tests verbal capacities. T3, on the other hand, tests everything verbal and everything in the real world (just like humans).

      Delete
  30. “But surely an even more important feature for a Turing Test candidate than a random element or statistical functions would be autonomy in the world.” People who think and feel do a lot more than sit at the computer and write emails, so in order to be truly indistinguishable from a human, the machine that passes the Turing test will have to be independently mobile. According to Harnad, “the question is whether simulations alone can give the T2 candidate the capacity to verbalize and converse about the real world indistinguishably from a T3 candidate with autonomous sensorimotor experience in the real world.” Surely simulations of seeing and hearing and tasting and running and jumping don't have an equal impact on feelings/personality/thinking as the real thing. I think that if the machine that passes the Turing test is to be considered relevant to human cognition, it should at least be able to do some of the sensorimotor things that we do. A robot/computer without vision or mobility would be writing emails about things without actually being able to identify them in reality. If we’re going to declare something’s performance capacities indistinguishable from a ‘thinker’s,’ I’d want to know that it would at least notice, and ideally do something to change the situation, if it were about to be run over by a bus.

    ReplyDelete
  31. “We all have some sensorimotor capacity.” (p. 6)

    Harnad breaks down the Turing test into different levels. T0 refers to the ability of a computer to complete some specific verbal task, where "verbal" is expanded past our typical understanding to include any task that can be verbalized. For example, tic-tac-toe, which is typically played with pen on paper, can be verbalized by assigning a number to each square (a toy sketch of this encoding follows this comment). Although T0 is obviously incomplete relative to the Turing Test, it provides a useful foundation for understanding the type of system Turing envisions passing the test. It is also useful for understanding what a failure of the Turing Test looks like, because we have all already encountered such systems in our lives.

    T2 involves a computer communicating with a person only via email, and matches the classic understanding of the Turing Test and its application. Once again, this version is much more accessible now than when Turing originally wrote the paper, because the majority of people have experienced email. Something to consider is that we do not actually know whether an email we have received is from a person or from a computer (operating as a self-directed unit), unless we then discuss the sending of the email face to face with the sender; we simply work on the assumption that our correspondence is with a person and not a computer. As Harnad points out, our uncertainty could ultimately be resolved by invoking a sensory experience that a computer could not have had.

    Harnad then introduces a third level of the Turing Test, T3, which he argues is the level at which Turing really intended the test to occur. This level involves a robotic sensorimotor element, wherein a robot (a type of computer) receives sensory input and produces output based on this constantly changing and evolving input. This level of the test is probably less accessible to readers because classic digital computers figure so heavily in their lives, but the proposal makes the most sense strategically for passing the test. The plan is not without its own issues, though, as certain sensory experiences are more easily computed than others. Visual input is already on its way to being part of a computer; consider the barcode scanners popular on smartphones right now. But how does one impart touch to a computer? Is touch necessary? Arguably yes, since failing to experience a hug or a tender caress is something that could expose an imposter in the Turing Test. On the other hand, an appreciation of the gesture could lead to a similar, if not the same, effect: knowing that a person is trying to hug you is definitely a part of the hugging experience, and a robot could register this aspect with limited senses like vision. Clearly, passing T3 is complicated, since new elements are added by the inclusion of sensorimotor capacities, and more research is needed to establish what is necessary to achieve them.
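
    As a toy sketch of the numbering idea above (my own illustration; the message format is made up), here is how a tic-tac-toe move could be reduced to a purely verbal exchange:

```python
# Assign each square a number 1-9 so a whole game can be conducted as an
# exchange of sentences, i.e., entirely within a verbal channel.

board = [" "] * 9  # squares 1..9 stored at indices 0..8

def say_move(player, square):
    """Record a move and return the verbal message describing it."""
    board[square - 1] = player
    return f"{player} takes square {square}"

print(say_move("X", 5))  # -> X takes square 5
print(say_move("O", 1))  # -> O takes square 1
```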

    ReplyDelete
  32. The impossibility of second-guessing the robot's every potential "move" in advance, in response to every possible real-world contingency also points to a latent (and I think fatal) flaw in T2 itself: Would it not be a dead give-away if one's email T2 pen-pal proved incapable of ever commenting on the analog family photos we kept inserting with our text? (If he can process the images, he's not just a computer but at least a computer plus A/D peripheral sensors, already violating Turing's arbitrary restriction to computers alone). Or if one's pen-pal were totally ignorant of contemporaneous real-world events, apart from those we describe in our letters? Wouldn't even its verbal performance break down if we questioned it too closely about the qualitative and practical details of sensorimotor experience? Could all of that really be second-guessed purely verbally in advance?

    While I agree with the main premises of your article, namely that a proper TT can only be passed by an indefinitely indistinguishable robot, not one that is merely statistically believed to be human for five minutes, I do think that in this quote the line between verbal processing and sensorimotor interaction gets a little blurred. By saying that a machine would be "incapable of ever commenting on the analog family photos we kept inserting with our text", you are essentially removing all input processing from the T2 machine. If one were talking through an email chat, all analog information, such as pictures, could be converted to a digital format that a T2 machine could process. In the example of the picture, the family photo would be converted to a colour-coded array of pixels, which a T2 machine could then process (a toy sketch of this is below). Using facial-feature-recognition software, as well as some knowledge about how humans interact with their families, a T2 machine could easily come up with answers like "is that your daughter on the right? She is very beautiful".

    Taking a step back, a T2 machine could have access to all analog information transcribed into digital information, including real-world knowledge and a simulated motor experience, and as such should have no problem passing the original TT.
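
    As a toy sketch of what "converted to a digital format" means here (my own illustration with made-up pixel values, not a real recognition system), a photo handed to a T2 pen-pal is just an array of numbers it can scan:

```python
# A tiny 4x4 grayscale "photo": 0 = black, 255 = white. Once digitized,
# even a purely computational pen-pal can compute simple facts about it.

photo = [
    [ 20,  30, 200, 210],
    [ 25,  35, 205, 215],
    [ 22,  33, 198, 208],
    [ 21,  31, 202, 212],
]

bright = sum(value > 128 for row in photo for value in row)
total = sum(len(row) for row in photo)
print(f"{bright} of {total} pixels are bright")
# -> 8 of 16 pixels are bright (the right half of the 'photo' is light)
```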

    ReplyDelete
  33. As I progressed through Harnad’s “The Annotation Game: On Turing (1950) on Computing, Machinery, and Intelligence”, several points were brought to my attention. In particular, the following quote struck me: “There is no one else we can know has a mind but our own private selves, yet we are not worried about the minds of our fellow-human beings, because they behave just like us and we know how to mind-read their behavior.” He then continues: “By the same token, we have no more or less reason to worry about the minds of anything else that behaves just like us -- so much so that we can't tell them apart from other human beings. Nor is it relevant what stuff they are made out of, since our successful mind-reading of other human beings has nothing to do with what stuff they are made out of either. It is based only on what they do.”
    I don’t necessarily agree with this statement. To rephrase the above quote: if we consider T3 robots (total indistinguishability in robotic, sensorimotor performance capacity), we should not worry about their minds, because they behave just as we do and/or because it is impossible to know (whether they have a mind). Referring back to the example we discussed in class (whether or not we would kick Ethan if he were a robot), it is possible that Ethan thinks and feels just as we do, but there is no way for us to know for sure. This is an example of the other-minds problem. This does not mean, however, that we should disregard the minds of T3 robots. We are simply more likely to believe that other human beings have minds than that T3 robots do. In other words, there are more factors that lead us to doubt the existence of a mind in a T3 robot. To elaborate: it is easy to assume that other humans have minds because of the astounding similarities humans have to each other; for example, we share similar behaviours, physiological composition, and so on. These similarities help support the hypothesis that other human beings have minds, more so than T3 robots, not because there’s no way to know (the other-minds problem) but because these robots differ from human beings in more respects (e.g., physiological composition).


    ReplyDelete
  34. With regard to Turing's designation of which systems may be included in the definition of a machine, Harnad notes: “any dynamical system we build is eligible (as long as it delivers the performance capacity). But we do have to build it, or at least have a full causal understanding of how it works. A cloned human being cannot be entered as the machine candidate (because we didn’t build it and hence don’t know how it works).” (pp. 6-7)

    It is clear that both computational and dynamical systems may serve as appropriate machines when conducting the Turing test, as long as it is the performance capacity and not the physical appearance that is being assessed. But it is the point in the aforementioned quote concerning the ineligibility of cloned human beings that initially confused me. At first, a clone seemed indistinguishable from the system described by the T5-level Turing Test, which raised the question of why a system must necessarily be built before its cognitive capacity can be investigated. Is performance capacity alone not enough? Although it is premature to say that all the bio/physiological mechanisms explaining human behavior are understood, it is fair to say that certain causal mechanisms are. Distinct neural pathways, for example, are causally related to visual perception, and their electrical stimulation can influence the perceived movement of visual stimuli as well as their perceived shape and semantic associations. The reason, it seems, why the "machine" to be investigated must be built (with a full causal understanding of its function) is to avoid making the empty claim that a human is equal to itself and is therefore a cognitive being. That would tell us nothing about why and how we do what we do; it would merely say that one assembly of organic material behaves indistinguishably from another. It would tell us nothing about how and why cognition occurs, and would do nothing further to help us understand how bio/physiological mechanisms lead to cognition. Instead it would confirm the already known fact that humans behave like humans.

    If, however, an investigator builds a "machine", whether computational or dynamical, understands the mechanism by which it functions, and observes it pass the Turing Test (at the T3 level or higher), then the investigator may claim to have successfully understood the mechanism that generates the performance capacity of a human, and by definition of a cognitive being. In this case, the complete mechanism for cognition would be understood.

    ReplyDelete
  35. In the paper The Annotation Game: On Turing (1950) on Computing Machinery and Intelligence, Stevan Harnad gives many details and explanations of Turing's paper to help readers understand it; at the same time, he also comments with his own opinions, some of which argue against Turing's positions.

    Harnad points out what he takes to be a contradiction in Turing's paper regarding the engineered devices that could be used in the imitation game. Harnad quotes:
    "This prompts us to abandon the requirement that every kind of technique should be permitted. We [accordingly] only permit digital computers to take part in our game."
    and comments:
    "This is where Turing contradicts what he said earlier, withdrawing the eligibility of all engineering systems but one, thereby introducing another arbitrary restriction  -- one that would again rule out T3. Turing earlier said (correctly) that any engineering device ought to be eligible. Now he says it can only be a computer."
    Regarding this part, I would say Turing is not really contradicting himself. Instead, he is being thorough as a researcher. It is very common in research that there exist many options; researchers present all of them, weigh their pros and cons, and finally choose the one they want. Turing is doing much the same thing: he is actually saying that a computer is not his only option; he has already considered the other options, and computers stand out.

    Harnad also discusses the hierarchy of Turing Tests and comments on T2 and T3. He quotes:
    "the interrogator cannot demand practical demonstrations"
    And writes
    "This would definitely be a fatal flaw in the Turing Test if Turing had meant it to exclude T3 -- but I doubt he meant that"
    Later he says:
    "So the restriction to computer simulation, though perhaps useful for planning, designing and even pre-testing the T3 robot, is merely a practical methodological strategy. In principle, any engineered device should be eligible; and it must be able to deliver T3 performance, not just T2"
    I am not sure if I understand this right, but I do think that Turing meant to exclude T3. It is easier for a computer to answer using typewritten words, and typewritten words are a good indicator of whether it can think like a human being; they are simply an easier way to present the result. If we were using a robot that does not have a screen, it might be easier for it to do T3 than T2. In that case, the way we carry out the imitation game would have to be adjusted to make the game harder and its candidates more indistinguishable.

    Later, Harnad also comments on the mathematical objection. Given that I have discussed Gödel's incompleteness theorem in my previous commentaries, I would say the idea Harnad proposes is very surprising to me, because it does not look at the technical part of the objection; instead, he looks at the definitions of "knowing" and "proving". Although he does not give clear definitions for them, I agree that there is a difference between knowing and proving, and that Gödel's theorem may not be the best way to prove that cognition is not computation.

    ReplyDelete
  36. Harnad offers explanations of some details in Turing's article that are responsible for later equivocations. Some confusion-creating terms are machine (between machines that merely simulate and machines with an actual effect on the world), thinking (the test is not about thinking as such but about generating the capacities of thinkers), and game (whereas it should be called a test of the reverse-engineering of our capacities).

    Most importantly, Harnad describes how the Turing Test (TT), indistinguishability from a human, must be understood at different levels. T0 is completion of some specific task, but this is so limited that it doesn't really count as the TT. T2 is complete reproduction of verbal capacity; T3 is complete sensorimotor capacity, which includes the verbal; T4, on top of the previous capacities, demands identical internal structure and function; finally, T5 subsumes all the previous levels and demands identity "right down to the last molecule".

    According to Harnad, the level Turing intended to attain in order to reproduce cognition is T3, which needs T2 and has a fuzzy boundary with T4. Therefore, the author suggests that Turing was wrong in insisting that digital computers alone were to be used in this ‘game’, because they lack sensorimotor capacities.

    ReplyDelete
  37. This comment has been removed by the author.

    ReplyDelete