Saturday 11 January 2014

2a. Turing, A.M. (1950) Computing Machinery and Intelligence

Turing, A.M. (1950) Computing Machinery and Intelligence. Mind 59: 433-460

I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words. The new form of the problem can be described in terms of a game which we call the "imitation game." It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B. We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"





1. Video about Turing's work: Alan Turing: Codebreaker and AI Pioneer 
2. Two-part video about his life: The Strange Life of Alan Turing: Part I and Part II
3. Le modèle Turing (video, in French)

62 comments:

  1. To pass the Turing Test: to have a model that does all the cognitive acts that humans can do, to the point that an interrogator cannot distinguish the machine from a real human.

    We, humans, however, are complex and dynamic. We change over time: we learn new cognitive acts and new ways of doing them, forget others, and are altered significantly by our environment. On this topic, I am impressed with Turing's substantial writing on the possibility of "learning machines", which is analogous to the field of machine learning today.

    The cognitive behaviour of humans is dynamic and ever changing. The dependence on technology today to aid with cognitive acts (versus decades ago) is an example of this. The answers given by a human now are different from those of the past, and will be different in the future. When we speak of passing the Turing Test nowadays, the model/machine/robot would also have to be able to adapt as human cognition adapts. If we had a machine that passes the TT today, there could never be a guarantee that it would pass the TT in the future. So whatever model is used to pass the TT - no matter how gloriously it passes the test - could never, with certainty, be an accurate model for human cognition.

    Replies
    1. Turing does propose that one create a ‘child machine’ and mentions three components that affect the state the adult human mind is in, those being:

      “(a) The initial state of the mind, say at birth,
      (b) The education to which it has been subjected,
      (c) Other experience, not to be described as education, to which it has been subjected”

      I believe that his proposal of such a ‘child machine’ could account for the dynamic aspect of humans. If such a machine were programmed and passed the Turing Test, it could adapt (since it is able to ‘learn’) and still pass the TT in the future.
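
      To make that concrete, here is a toy sketch (hypothetical Python, not Turing's design) of a 'child machine': (a) an initial state at "birth", whose later behaviour is shaped by (b) education and (c) other reward-bearing experience, both of which arrive as the same kind of reinforcement signal.

```python
import random

class ChildMachine:
    def __init__(self):
        # (a) the initial state of the "mind", say at birth: nothing learned yet
        self.scores = {}  # (stimulus, response) -> learned score

    def respond(self, stimulus, options=("approach", "withdraw")):
        # Prefer the best-scoring response; the random tie-break lets an
        # untaught machine explore.
        return max(options,
                   key=lambda r: self.scores.get((stimulus, r), 0) + random.random())

    def reinforce(self, stimulus, response, reward):
        # (b) education and (c) other experience both arrive as reward signals
        key = (stimulus, response)
        self.scores[key] = self.scores.get(key, 0) + reward

machine = ChildMachine()
for _ in range(20):  # a short course of "education"
    r = machine.respond("hot stove")
    machine.reinforce("hot stove", r, +1 if r == "withdraw" else -1)

print(machine.respond("hot stove"))  # now reliably "withdraw"
```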

    2. Part of passing the Turing Test is to be capable of the dynamic changes that a generic human is capable of, including learning. And the capacity to pass the TT is supposed to be for a lifetime, not just 10 minutes.


  2. “In short then, I think that most of those who support the argument from consciousness could be persuaded to abandon it rather than be forced into the solipsist position. They will then probably be willing to accept our test.”

    I am confused about this statement. Prior to this statement, Turing discusses the argument from consciousness. He discusses a possible sonnet-writing machine that answers in the same manner as the witness in the interrogator/witness conversation about summer vs. winter days. It seems to me that you can pass the TT (play the imitation game) without having to deal with consciousness. Although the point of artificial intelligence is to put our theories of how the mind works to the test (reverse engineering), the TT is to determine whether a machine can imitate a human agent well enough that it is not distinguishable from a human. If a machine can behave and interact like a ‘normal’ human, does it necessarily have to be conscious, or even feel, to pass the Turing Test? Why would we need to abandon consciousness or take the solipsist position - could we not accept that the test is an important standard, while granting that it is not enough to determine whether the machine is equal to the brain?

    Replies
    1. The other-minds problem

      I don't think Turing meant "solipsism" here ("I am the only mind that exists in the universe"). I think he means scepticism about other minds ("I can't be sure I am not the only mind that exists in the universe").

      And so his point (and it's a very strong one) is that if a candidate can pass the Turing Test then you have no better or worse reason for being sceptical about whether it has a mind than you do about any other person.

      (Think about it.)

  3. Turing proposes the question "can machines think?" In the text, he discusses and counters many possible objections to the idea. My thoughts and comments on several of them are below:

    In reading several of the possible objections, I was reminded of our discussion in class, particularly concerning the problem of other minds. That is, given that we only have access to our own mind, we can never be certain that others have a mind.

    Dealing with the argument from consciousness, the claim is that a machine would need to feel, and to think that it is a machine. But we can never be certain whether someone else feels, nor whether a machine feels. All we can do is assume based on their external actions, which may be programmed into a machine.

    The same goes for the arguments from various disabilities: that machines may do y, but will never do x. Many of the suggestions for x, we cannot be certain of in other humans. Do other humans genuinely fall in love, or is it just an act? Coming back to the problem of other minds, we can only know our own experience, and merely deduce the experience of others through their behaviors; that holds for machines as well.

    As for the theological argument, the basis of the evidence that man has a soul is merely religious tradition from an ancient text. Would someone who accepts this argument also accept that a machine has a soul (and may therefore think) if another ancient text suggested so? Otherwise the standards of evidence are different, and flawed.

    Finally, with the argument from continuity of the nervous system, I would disagree with Turing's argument. My understanding of the nervous system is that it, too, is discrete, at least in substantial respects. Action potentials either fire or don't. Neurotransmitters are released in specific discrete quantities. Receptors and enzymes exist in discrete numbers in neural synapses.

    Replies
    1. "Soul" = "mind" = feeling

      Hi Bandi, good points. Yes, the methodology of the TT is predicated on the limits imposed by the other-minds problem: Don't ask or expect more from a human-made machine than of any other machine (e.g., humans).

      To have a mind, a machine would have to feel (not necessarily feel that it's a machine, just feel). But whether it feels, we cannot know. We can only know what it does, and can do.

      The notion of a "soul" is just a symptom of the "hard" (mind/body) problem: The "soul" is merely the capacity to feel. We cannot explain how or why organisms are able to feel. Only how and why organisms are able to do what they can do. Because of that explanatory gap, supernatural ideas (usually dualistic ones) have been proposed, such as that the "soul" is something immaterial and immortal.

      Before you ask whether religions would accord machines a "soul" (which would just mean that they accept that machines feel), consider that many religions do not accord nonhuman animals a soul (even though it's obvious that they feel). I would say this is a symptom of the fact that religions are not very moral, or kind. And of the other-minds problem (when it is more convenient for us not to give the other body the benefit of the doubt about whether it has a mind). (Draw your own conclusions.) And even the religions (like Buddhism and Hinduism) that do accord nonhuman animals a soul, do so at a huge speculative price (belief in reincarnation!).

      But all that religious stuff is just a fanciful attempt to deal with the "hard problem," which is to explain the how/why of feeling. If it were possible to solve the other-minds problem then we would know what does and does not feel, and the question would not be "Do machines feel" but "Which machines feel," since a "machine" is merely a causal mechanism, and all organisms, human and nonhuman, are causal mechanisms of some kind -- just not all of them are the kinds of machines that feel (e.g., plants probably don't; and neither do living livers or hearts in isolation, any more than toasters or rocks do).

      The question of whether cognition is computation does not hinge on continuity/discontinuity but on the software/hardware distinction: is a mental (felt) state just an algorithm, that merely needs to be run on some hardware? An alternative is: no, a mental state is a hybrid dynamical/computational state, and the dynamical part (e.g., a certain electrical or biochemical activity) is essential to its being a felt state -- and not just the way hardware is essential in order to run an algorithm.


  4. “I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localize it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper” pg. 12.

    With this statement, Turing refutes the “Argument from Consciousness” by deeming it irrelevant to his central question - “Can machines think?” The “Argument from Consciousness” maintains that no machine would ever be able to feel, or that at least no one can ever be sure whether or not a machine is feeling. In his rebuttal, Turing says knowing whether something feels is not a prerequisite to knowing that something thinks. Turing’s claim suggests that consciousness is made up of more than just thinking and that thinking can, in theory, exist without feeling. Despite possessing the necessary underpinnings for thought, a machine may still not feel. Turing seems to believe that we can localize thought (to a mechanical implementation of algorithms) but not feeling. In fact, the more we try to localize feeling, the more we get trapped in the solipsist view that we can only know what exists in our own experience. Even if Turing’s machine can replicate human cognition to an outside observer, the observer will still not know whether the machine has feeling or consciousness. Is this the paradox to which Turing refers? And is this just an example of weak equivalence? This robot displays input-output relationships that are indistinguishable from a human’s, but a human may have consciousness while this robot may not. In this case, different processes are yielding the same output. I find it hard to understand how thought could be separated from feeling. It feels like something to think. And many thoughts are about feelings. How can a robot exhibit human cognition if feeling is not part of that?

    Replies
    1. Turing and the "Hard" and "Easy" Problems of Cognitive Science

      Good points, Lila. See what I said above about Turing's reference to "solipsism."

      Let me suggest a simple way of putting it:

      Yes, cognition is thinking. And it feels like something to think. But because of the other-minds problem we can never know for sure whether anyone or anything else feels. So we can never know for sure whether anyone or anything else thinks either. The best we can do in either case (organisms or robots) is the Turing Test.

      So if you can successfully reverse-engineer what thinkers can do -- i.e., build a system that can pass the TT, which means you have solved the "easy" problem of explaining how and why organisms can do what they can do -- that's as close as you can come to explaining what thinking is. Because of the "hard problem," you cannot explain how or why the TT-passer feels (if it feels). And if the TT can be passed without being able to feel, then that just makes the hard problem all the harder!

      N.B. The "hard problem" (the mind/body problem of explaining how or why any system can feel) is not the same as the "other minds problem" of how you can know whether anyone or anything else feels. But the two problems are related.

    2. Thanks! I better understand the distinction between the "hard problem" and the "other minds problem" now. It does seem like they go hand in hand. If we were able to overcome the other-minds-problem and see into someone else's consciousness, we would have a good start on solving the hard problem. Then again, we are able to introspect but still unable to solve the hard problem, so who knows how helpful it would be to peer into someone else's mind.

    3. Actually, the reason the hard problem -- of explaining how and why organisms feel rather than just do -- is hard is not just because of the other-minds problem. Even if we could accurately sense what other organisms were feeling it still would not explain how and why they feel (any more than introspection explains how or why I feel). The hard problem is hard because of two things: (1) all evidence says that there is no such thing as "mind over matter" -- which means all doings are fully explained without any reference to feeling; there is no room for an extra "psychokinetic" force. (2) Because of that, causal explanation of feeling is bound to be a problem.

  5. Throughout the course of my reading of Turing’s paper, I couldn't help but notice that the test isn't in fact a test of human cognition, but rather a test of human-like language use as a proxy for human cognition. I don’t mean to imply the test is flawed. In fact, I find the test to be the most adequate way of addressing the question “Can machines think?”. It’s simple and draws a line in the sand, which is more than can be said of any alternative experiment I have yet come across.

    Having said that, I think this points to what makes human cognition so special: language. Not only does language somehow allow us to think and express an infinitude of possible thoughts, but it just might be where we come to exist. It seems like no matter what, we’re stuck in our linguistic thoughts. We make stories about ourselves where we are the central characters, and we perceive ourselves to be more real than the so-called real objects out there which we perceive. Even when we try referring to our non-linguistic thoughts (I’m using “thought” rather loosely), we are stuck within whichever language we've acquired to express that thought. Some take “feeling” or “subjective experience” to be the critical factor in being conscious (in the Hard Problem sense), but the only way we are able to allude to the mystery and capture it is through the use of those acquired terms. One question, to which I don’t yet dare posit an answer, is whether we have any reason to disbelieve a machine that would speak in such terms about the subjective nature of its own consciousness. The TT doesn't seek to answer that question. In fact Turing seems to explicitly see it as a complete side-issue: “I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.” I raise the question because I think it’s a nice way of understanding why the Hard Problem is so very hard. It seems like we are stuck in some sort of Cartesian (or Harnadian) solipsism when it comes to non-human robots. We might grant a robot the ability to think (passing the TT), but we’d have to tell ourselves that when it thinks that, it’s mistaken, whereas when we do, we get it right, because we really understand (or feel) what we’re talking about. We could then posit two unconscious robots having this very same conversation. What does this seem to say about the Hard Problem? To me, it seems to say we've created an Impossible Problem.

    Replies
    1. Language is pretty powerful and general. It covers (through words) just about everything you know and have seen and can do. We'll get to it in later weeks.

      Yes, the only way we can talk about consciousness (feeling) -- or anything else -- is in words. But organisms without language feel too, and they can do a lot of the things we can do (and many we can't).

      We evolved a language-prepared, obsessively verbal brain. It's almost impossible for us to see any event or scene without subtitling it with a verbal narrative. But that's just us...

      The hard problem is not hard because of localization but because of explanation: We cannot explain how or why the brain generates feeling (though of course it must).

      By the same token, we could not explain how or why a T2 or T3 or T4 or T5 generates feeling.

      Yes, I think the hard problem is impossible to solve ("Stevan says").

  6. “Instead of producing a programme to simulate the adult mind, why not rather try to produce one that simulates the child’s mind? If it were then subjected to an appropriate course of education one would obtain the adult brain.”

    Turing is discussing how to build a machine that would imitate a human being so successfully that you wouldn’t be able to tell the difference between the machine and a human being. This machine would be said to pass the “Turing Test.” Turing suggests that the best way to build this machine is to program a few basic things into a machine, rules that would allow the machine to learn, and then to teach the machine! He thinks this approach is better than programming everything into a machine that it would need to know to pass the Turing Test.

    The idea of a learning machine passing the Turing Test raises some interesting questions. First, would the machine need sensorimotor capacities (ex: the ability to see, move, smell) in order to learn enough things to pass the Turing Test? My sense is that Alan Turing isn’t certain, at least when he wrote this paper. Turing writes that the machine “will not, for instance, be provided with legs so that it could not be asked to go out and fill the coal scuttle. Possibly it might not have eyes.” He seems pretty set on the fact that the machine cannot move, repeating later that the machine “has no limbs.” However, at the end of the paper, he writes that one approach for teaching the machine would be “to provide [it] with the best sense organs that money can buy, and then teach it to understand English,” and affirms that this approach should, at the very least, “be tried.”

    What if the machine needs sensorimotor capacities to learn enough to pass the Turing Test? If this is the case, and if it is also true that the only way to program a machine to pass the Turing Test is to program it to learn, the implications would be pretty interesting. This would provide a strong argument that cognition is not merely computation. In other words, programming alone would not be able to explain how you and I do everything that we can do (write poetry, do math, avoid getting hit by buses ….). Our ability to sense and to move must have something to do with it!

    The point of building a machine that passes the Turing Test is to explain how and why we can do what we do. But if the machine that passes the Turing Test is a learning machine, will we have really explained much? Turing writes that “an important feature of a learning machine is that its teacher will often be very largely ignorant of what is going on inside, although he may still be able to some extent to predict his pupil’s behaviour.” We might have explained a bit about how we do what we do (ex: we have some basic rules “programmed into us” when we are born that allow us to learn stuff), but will we really know much about how we learn?

    Replies
    1. I too was struck by Turing's getting caught on the physical dimension of things. I suppose that one way to have a computer learn would be to send it out into the real world, armed with built-in sensory-motor equipment and, as Turing mentions, not teased too much upon attending school; but surely there are more economical alternatives? If we want a computer to think and don't have any plans for it to manipulate materials in any way, would it not be possible to simply simulate the real world such that it is able to learn or deduce simple propositions? For example, say we want the computer to learn, as a child does, that it is in general a bad idea to carry containers upside-down if we want the content of the container to stay put. Could we not give the computer the formulas governing the forces of gravity and particle interactions, as we would encode in a physics engine of a simulator, as well as the goal "get a bowl of soup from point A to point B", and expect it to derive "things fall down", "liquids in containers will fall out of upside-down containers", "it is undesirable for liquids to fall out of containers", and "to achieve goal, carry bowl right-side-up"? We could go one step further and have the computer learn that eating is important, and hence that if it wants to eat, it should keep its soup bowls upright. So I don't see the physical/aphysical contrast as too big of a concern, given enough computing power.
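
      A toy version of this idea (hypothetical Python; the "physics engine" is reduced to a single rule) shows how an agent could derive "carry the bowl right-side-up" purely from trials in a simulation. (The replies below take up whether learning in a simulated world can count toward passing T3.)

```python
def simulate_carry(tilt_deg):
    """Toy physics engine: unsupported liquid falls, so soup stays in the
    bowl only while the opening points upward (tilt under 90 degrees)."""
    return "soup arrives" if abs(tilt_deg) < 90 else "soup falls out"

# The agent never touches a real bowl: it tries tilts in simulation and
# keeps whichever ones achieve the goal "get soup from A to B".
outcomes = {tilt: simulate_carry(tilt) for tilt in range(0, 181, 30)}
good_tilts = [t for t, o in outcomes.items() if o == "soup arrives"]

print("carry the bowl tilted at most", max(good_tilts), "degrees")  # 60
```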

    2. Though, coming back to this after taking a look at the Harnad paper: "a computer-simulated robot cannot really do what a real robot does (act in the real-world) -- hence there is no reason to believe it is really thinking either" - a TT-passing robot appears to be required to operate in the real world, achieving formal equivalence. I am not sure I agree with this, as I do not think that manipulating physical materials ought to have a special place in determining TT-equivalence. After all, we are of course willing to allow the physical components of a robot to be different from those of a human - why not the physical components of what it manipulates?

    3. Jessica, I think you've managed to invoke T3 and the symbol grounding problem. If you did that before reading 2b, bravo!

      But of course learning and the capacity to learn are an essential part of the TT. How could Ethan (or any of us) be passing the TT if we were not learning all the time? You can't even have a coherent, continuing verbal interaction without that.

    4. Dia, no, a virtual robot in a virtual world is not passing T3 (or T2). But it's an open question whether a real robot (or a real child) could grow up in a virtual-reality world, simulated for its real senses. (Maybe, if the VR designers manage to think of all the essentials in advance.)

      By the way, the way dogs catch frisbees is not by computing Newton's law. More likely their brains have an internal dynamical system that obeys Newton's law, just as planets do. Neither dogs nor planets "compute" Newton's law. Physical dynamics is not computation (and not just because it's continuous rather than discrete).

  7. When explaining the different parts contained in a digital computer, Turing states that "most actual digital computers have only a finite store. There is no theoretical difficulty in the idea of a computer with an unlimited store. Of course only a finite part can have been used at any one time".

    It would have been interesting for Turing to expand more on the possibility of a computer with unlimited storage. Indeed, as a psychology student, I have often seen claims that the capacity of the human mind (especially in terms of long-term memory) is infinite. Thus, a proper imitation game on the part of a digital computer should, logically, require storage that is also unlimited.

    In this line of reasoning, I do not think that a computer with a "finite store" could truly pass the Turing test. Turing's comparison of a computer's storage system with the "human computer's paper" implies that, one day, the computer would eventually run out of paper! Obviously, this is not what happens with humans; we forget information as we progress through life. We have some ideas of what happens at the neural level, but we are still far from a definitive answer. In my opinion, once we truly understand how connections between neurons work, cognitive scientists will be able to program a computer with unlimited storage.

    In the meantime, a computer may pass Turing's imitation game. But will that be a guarantee that it will always maintain its "cover" in the future if it has limited storage? Computation cannot be equal to cognition if the imitation game is bound to fail (even if it takes, say, decades before that happens) because the digital computer is limited by its lack of storage.
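
    For what it's worth, the "unlimited store" Turing mentions is easy to approximate in practice: storage can grow on demand, so only a finite part is ever in use at any one time. A minimal sketch (illustrative Python, not from the paper) of a Turing-style machine with such a tape:

```python
def run_machine(rules, tape_input, state="start", blank="_"):
    tape = dict(enumerate(tape_input))  # sparse tape: cells are created
    head = 0                            # only when written -- an "unlimited"
    while state != "halt":              # store, finitely used at any one time
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Rules for unary increment: scan right past the 1s, then append one more.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run_machine(rules, "111"))  # -> "1111"
```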

    Replies
    1. Neither computers nor our brains have unlimited storage. Nor do they need it. (And we pass T2, T3, T4 and T5 without it.)

      (The Turing machine's tape is input, not memory.)

  8. Computing Machinery and Intelligence – by A.M. Turing

    “It will not be possible to apply the same teaching process to the machine as to a normal child. It will not, for instance, be provided with legs, so that it could not be asked to go out and fill the coal scuttle. Possibly it might not have eyes…. The example of Miss Helen Keller shows that education can take place provided that communication in both directions between teacher and pupil can take place by some means or other.”

    The general claim in this quote emphasizes an important detail of the Turing test: the method, or function, by which a machine computes information is not what explains cognition so much as the system's being capable of replicating human behavior. According to Turing, theoretically, a digital computer with unlimited storage is not limited in its computational power and, irrespective of whatever lag time occurs during the computing, may provide an output indistinguishable from any human's.

    On a quick side note, Turing further argues against those who claim that the infallible computations of a machine differentiate it from a human. In this case, an interrogator performing the Turing test would easily distinguish between a human's difficulty and a machine's ease at responding to an extremely complex arithmetic problem, for example. Turing simply argues that a machine programmed to participate in a Turing test would not attempt to give right answers to an arithmetic problem, but would integrate into its response the mistakes that a human participant would likely make. This is all to say that erroneous thinking can be programmed into a machine's processing capacity, and it establishes a reason why right or wrong answers to ANY question do not define a system's cognitive capacity.
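
    Turing's own sample dialogue illustrates this: asked to add 34957 to 70764, the machine pauses about 30 seconds and then gives a wrong answer. A toy sketch of that strategy (hypothetical Python; the parameters are invented):

```python
import random
import time

def humanlike_sum(a, b, error_rate=0.1, pause_seconds=30.0):
    time.sleep(pause_seconds)       # humans do not answer instantly
    answer = a + b
    if random.random() < error_rate:
        answer += random.choice([-100, -10, 10, 100])  # a plausible slip
    return answer

print(humanlike_sum(34957, 70764))  # usually 105721, occasionally "human"
```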

    This, however, is where I believe the principle of falsifiability comes in and explains why the Turing test is not a theory that explains cognition. According to Turing, the digital computer is expanded theoretically to account for unlimited storage of information, as well as the capacity to perform (universally) any task computed by a discrete-state machine. This explains the enormous computational power of a digital computer, providing a theoretical explanation for why and how it can mimic human behavior. In this case, the input and processing capacity of the computer will govern and produce an output no matter its level of correctness (as is the case for a human asked a particular question). So, if a machine is asked "what does it taste like when you bite into an apple", it will most likely access its vast storage of semantic symbols, syntactically connect the relevant items, and produce an output that is semantically very interpretable by a human interrogator. However, although the machine produces an output indistinguishable from a human's, that says nothing about whether the computer actually knows the feeling of taste. It has merely found a way to connect a vast storage of syntactic information. This follows somewhat the line of Roger Penrose, who might have argued that a machine based purely on logical programming can describe something sweet vs. something salty, but might not have any intuition for what salty tastes like vs. sweet.

    Replies
    1. Lastly, I believe aspects of this quote distinguish a major difference between the so-called T2 level (email-based) and T3 (robotic-based) level of Turing testing. The T2 level relies primarily on the processing capacity of a machine, and depends on syntactic manipulation of its input, resulting in an interpretable output. However, the output seems to be limited to semantics alone, and not to any real sensorimotor transformation that may occur. In a sense, computation's limited approximation of dynamics does not show that it can actually reproduce dynamical states. This is where the relevance of an eye and a leg comes into play. A robotic system is more likely capable of interacting in ways that define what humans “can do”, as opposed to just semantically describing what humans “can do” (the latter primarily resulting from an email-based Turing test).

    2. Yes, a lot hinges on sensorimotor (i.e. robotic) input and output. Symbols can describe anything, but only senses can show it.

      And to understand words, you have to know their meaning. Otherwise verbal descriptions are meaningless.

      This is the symbol grounding problem, and we will get to it soon.

      The same applies to computational simulations; they are just like verbal descriptions: if you don't understand the symbols -- or they are not hard-wired into some interaction with the outside world -- they are just meaningless squiggles and squoggles. A simulated waterfall is not wet. And a simulated planet in a simulated solar system does not move -- even though it can be used to predict the position of a real planet in our solar system... as long as someone either understands that the squiggles and squoggles mean planets revolving around the sun, or hardwires their output to a telescope whose position the computations can control, so it is aimed at where the simulation says Mercury should be right now.
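
      The planet example can even be made concrete (a toy Python integration; units chosen so that G times the sun's mass equals 4*pi^2, with distance in AU and time in years): the squiggles below predict where a planet will be, yet nothing in the computer moves.

```python
import math

GM = 4 * math.pi ** 2        # sun's gravitational parameter (AU^3/yr^2)
x, y = 1.0, 0.0              # "planet" starts 1 AU from the "sun"
vx, vy = 0.0, 2 * math.pi    # speed for a circular orbit (AU/yr)
dt = 0.001                   # time step (years)

for _ in range(1000):        # integrate one simulated year
    r3 = (x * x + y * y) ** 1.5
    vx -= GM * x / r3 * dt   # gravitational acceleration
    vy -= GM * y / r3 * dt
    x += vx * dt             # semi-implicit Euler step
    y += vy * dt

print(round(x, 2), round(y, 2))  # back near (1, 0): one full "orbit",
                                 # though no planet has moved anywhere
```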

      For the same reason, a simulated T3 robot in a simulated world is not moving, nor is it passing T3. Except for the algorithm it is executing, the computer on which the T3 simulation is running is doing the same kind of thing as when it is calculating weekly payrolls.

      The only difference with T2 is that there it's symbols in and symbols out, and symbols in between. If the computer is passing T2 successfully, the person understands what the computer is saying. But the computer does not. (And next week Searle will show us why.)

  9. “A machine can never ‘take us by surprise’”

    I am not convinced by Turing’s objection to a variant of the Lady Lovelace argument: the surprise argument. The programs that run on such machines (no matter how sophisticated or close to the workings of our minds) are the result of the work of another man’s brain. Even as far as we are from elucidating how our brains do what they do (and contrary to Harnad’s argument that “whether or not the machine is man-made is irrelevant”), a program running on such a machine in the real world is designed by the brain of its programmer, and whatever the machine does to surprise Turing is as much the result of the workings of the mind of that programmer as is what the programmer might miss when designing it. The surprise factor is still man to man and not machine to man. On this line of thought I am driven to conclude that whether or not this machine gives any insight into how we do what we do will be based on its level of independence; as Turing asserts, it “leads us back to the argument from consciousness”, and as we have studied, this is still unresolvable.

    Also, I wish to start a conversation about something that would be extremely unethical and inverse to the current direction of the conversation: what would we learn from taking a human being and raising it to act like a machine? What constraints would we apply in this case? Could we argue that “consciousness” or our “soul” is something that we learn and absorb from the environment rather than being God-given, or whatever your beliefs tell you about its origin?

    Replies
    1. I can write an algorithm that takes inputs and does computations that I haven't the time to do. I give it a problem, and, although I wrote the algorithm, I am surprised by the answer it gives me next day. I did not know the answer because I did not do the computations. Even though I wrote the algorithm, I still did not do the computations, and I am still surprised.

      (If this still doesn't trigger your intuitions, here's a more sobering example: suppose you wrote a computer programme that did extremely accurate health statistics. By analyzing genetic data from a cheek smear, data on health history, eating habits, exercise, air quality, etc. etc. it can compute with great accuracy when a person is likely to die. You wrote the programme, you feed it the data on yourself, it takes a few weeks for it to run the computations, and then it tells you you will die today. You are surprised, even though you wrote the programme.)
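
      Here is the tiniest version of that point (an illustrative Python sketch; the Collatz rules are a standard example, not from the text): three fixed rules that the author wrote, whose output for a given input the author nevertheless cannot state without doing the computation.

```python
def collatz_steps(n):
    steps = 0
    while n != 1:                           # rule 3: stop at 1
        n = 3 * n + 1 if n % 2 else n // 2  # rules 1 and 2
        steps += 1
    return steps

# Try to predict this before running it -- the author of the rules can't:
print(collatz_steps(27))  # 111
```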

      So much for Lady Lovelace, whether on the subject of surprise, or error, learning or creativity: Yes, a computer programme can exhibit every one of those things, even though it is just a set of rules that someone wrote.

      And none of this has anything to do with the hard problem of consciousness. Nor with the question of whether a computer could pass the TT.

      The purpose of the TT is to design a machine that can do everything we can do so we know how we can do what we can do. That's reverse-engineering cognition. A cloned human being would pass the TT, but it would not tell us how we can do what we can do, because we did not design it. Raising a human as a "machine" would also not tell us how we can do what we can do. (And besides, what is a "machine"? And what does it mean to raise a person to act like a "machine"? Turing may have had Aspergers: was he acting like a machine? A machine is just a causal system, so we are all machines. But there are many kinds, and we want to know what kind we are -- the kind that can do everything we can do, and feels.)

      Having a "soul" just means being able to feel.

      On gods I have no authoritative information...

  10. “We may now consider again the point raised at the end of §3. It was suggested tentatively that the question, "Can machines think?" should be replaced by "Are there imaginable digital computers which would do well in the imitation game?" If we wish we can make this superficially more general and ask "Are there discrete-state machines which would do well?" But in view of the universality property we see that either of these questions is equivalent to this, "Let us fix our attention on one particular digital computer C. Is it true that by modifying this computer to have an adequate storage, suitably increasing its speed of action, and providing it with an appropriate programme, C can be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a man?””

    Turing begins his essay by asking if computers can think, and near the end of his discussion of the imitation game and digital computers he concludes that “The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion.” He means that machines will be able to do what humans are capable of doing, and that humans won’t second-guess whether or not the computers are thinking.
    In the quotation above Turing is equating thinking to a process similar to the executions a human can produce. I believe it is possible to admit that robots have proven capable of doing things humans can do (such as playing chess), and of being much better than us at them. However, what I find troubling is that even if the computer is “thinking” (in Turing’s sense of equating it to production), I as a human would imagine that its “thinking” occurs on a level where it rules out all its possible moves to get to the best possible one. What is neglected in Turing’s definition of “thinking”, however, is a self-reflexive kind of thinking, where one makes a choice and then thinks back on the choice one has made (for example in chess: moving one’s king into checkmate position and then, when the other player makes a move, thinking to oneself that one shouldn’t have moved there). Reflexive thinking is therefore something that was not picked up in Turing’s argument.

    Similarly, when Turing discusses the idea of a program being able to learn, it is learning on the basis of code and not from reflexive thoughts. I wonder: if a program (robot) had memory storage of its past failures, would it be possible for it to think reflexively and make choices based on its past decisions, rather than by the multiple general codes it runs through (which tend to be superhuman cognitions)?
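
    Something like this "memory of past failures" is easy to sketch (hypothetical Python): an agent that stores which (situation, move) pairs lost and avoids them afterwards. Whether that counts as reflexive thinking, rather than just more rule-following, is exactly the question.

```python
import random

class LookbackPlayer:
    def __init__(self, moves):
        self.moves = moves
        self.failures = set()     # memory of (situation, move) pairs that lost

    def choose(self, situation):
        safe = [m for m in self.moves
                if (situation, m) not in self.failures]
        return random.choice(safe or self.moves)

    def review(self, situation, move, lost):
        if lost:                  # the "I shouldn't have moved there" step
            self.failures.add((situation, move))

player = LookbackPlayer(["a", "b", "c"])
player.review("opening", "b", lost=True)  # "b" lost us the game last time
print(player.choose("opening"))           # now only ever "a" or "c"
```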

    Replies
    1. Reflections on "Self-Reflexivity"

      "Self-reflexive" thinking is still just squiggles and squoggles unless it feels like something to be doing it (but then it may as well be just ordinary thinking!). It is only the fact that it feels like something to be doing it that would guarantee that a T2-passer was really thinking.

      Ditto for a grounded T3 robot. But because of the other-minds problem, we can never know for sure whether a T2-passer or a T3-passer or a person other than oneself is thinking. All we know is that we can't tell them apart. So we're no more likely to be right (or wrong) about one than the other.

      That's the gist of the TT; and the message is that that is the best that cogsci can hope to do: solve the "easy" problem of explaining how and why we can do everything we can do (including learning). (That includes "self-reflexivity," whatever that means, but it does not guarantee feeling.)

      By the way, don't feel bad about self-reflexivity. It's what everyone always thinks of first, including me... (See especially the embarrassing passages about "auto-referentiality".)

  11. "Why not try to produce one which would simulate the child's? .... How can the rules of operation of the machine change? They should describe completely how the machine will react whatever its history might be, whatever changes it might undergo. The rules are thus quite time-invariant. This is quite true. The explanation of the paradox is that the rules which get changed in the learning process are of a rather less pretentious kind, claiming only an ephemeral validity"

    I personally found this to be the most interesting aspect of the whole paper, because of the way in which it presents the possibility that, more so than symbol manipulation, cognition may ultimately be no more than high-level learning. I think that the possibility of creating a paradoxically sophisticated machine which is still primitive insofar as it has the ability to learn and improve is an exciting prospect, especially because it puts programmers in a very god-like situation - how many initial, time-invariant rules do you provide the machine with, and how much room do you leave open for growth and learning?
    It is not quite clear from the whole of Turing's paper, but since its publication it has come to be equated not only with the question of whether computers can think but also with the idea of how thinking computers would shed light on how humans think. If a learning computer were successfully built, one that could simulate very closely the intellectual development of a human child, I would say that would be incredibly illuminating of the phenomenon of human cognition, since it is steeped so much in the idea of learning, punishment and reward, growth, and potential. I think that, using this kind of logic, it doesn't make sense for programmers to try to create an 'adult' computer; there seems to be something a little more 'natural' or 'organic' in the idea of creating a 'baby' computer which, with the help of additional, instructive programming along the way, as well as clever pre-programming of certain elements necessary for growth, might yield more successful answers to our questions about computation and cognition.
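
    Turing's "explanation of the paradox" quoted above is easy to make concrete (a minimal Python sketch, not from the paper): the time-invariant rules are like a fixed interpreter, while what learning changes are humbler rules held as data, "claiming only an ephemeral validity".

```python
def interpreter(rule_table, stimulus):
    """Time-invariant: this never changes, whatever the machine's history."""
    return rule_table.get(stimulus, "no rule yet")

rule_table = {}                        # the ephemeral, learnable rules

print(interpreter(rule_table, "2+2"))  # "no rule yet"
rule_table["2+2"] = "4"                # learning edits the data,
print(interpreter(rule_table, "2+2"))  # not the machine: prints "4"
```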

    Replies
    1. We already have lots of computer programmes that can learn, including learning to re-write some of their own programme. All neat computational powers, but nowhere near the TT, either child or adult.

      And whether the TT-passing candidate starts learning as a child, or later in the day, it's clear that learning is an essential part of even being able to partake in a simple 2-way conversation, let alone the whole TT.

      It was always implicit in Turing's paper that creating a TT-passer should reveal to us what thinking really was. We already know what computation is. And clearly not all computation is thinking. But is all thinking computation?

  12. We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"

    I find Turing's initial argument intriguing. The concepts of "machine" and "think" are very complicated and tend to change rapidly as the trends in psychology, neuroscience, and computer science change. He is right that we cannot simply ask the question: can machines think? A machine could be something as simple as a stapler or as complicated as the hardware in a computer. Obviously, something mechanically simple, like a stapler, cannot think. But a computer, put inside a robot that simulates artificial intelligence, might. So what do we mean when we ask, can machines think? In what way can we capture "thinking" in the subgroup of machines that could be capable of thinking? I think Turing's imitation game captures some of these aspects but not all.

    Is fully imitating humans the same thing as "thinking"? I find that in acting as a human, the machine need not "think". All it has to do is follow commands that are programmed into it. If a machine follows the intricate program and is able to pass the Turing test, does that mean it can "think"? I find this a bit troubling. If the machine passes the Turing test and is therefore thinking, what does this tell us about the brain? That we are just running on the same programs as the machines we built? What does this tell us about the act of decision making and free will?

    Replies
    1. Putting a computer in a robot, even a T3-passing robot, does not mean the computer can think -- only that computation is part of a dynamical system that can think.

      And the idea is not to simulate or imitate thinking but to generate it. Real thinking. By generating the capacity to do everything a thinker can do.

      A computer programme is a set of rules for manipulating symbols. Whether the programme is in our DNA, or in our brain, or in a computer or in a robot does not matter. It also does not matter whether the programme was written by a person or it grew on a tree.

      What matters is whether the programme can pass T2 (verbal TT). It is certain that just a computer running a computer programme cannot pass T3 (robotic TT). Does that matter?

      Next week we will hear Searle's argument for why a computer programme running on a computer that is passing T2 would not be thinking.

      So computationalism is wrong (according to Searle -- and I agree).

      So if not a T2 computer, then what? I think it requires a T3 robot, with its symbols grounded in its robotic capacities: Only a T3 robot could pass T2.

      T3 tells us little about the brain. But, more important, the brain tells us little about how to pass T3 (or T2). T4 would have to explain it all, brain function too. But do we need T4? Or is T3 enough?

      Free will is a special problem. Ask me about it in class...

  13. Turing (1950) states that his original question "’Can machines think?’ should be replaced by ‘Are there imaginable digital computers which would do well in the imitation game?’"
    This raises an interesting question for me: are thinking and doing well in the imitation game the same thing? I imagine that by creating a machine that could simulate a human’s responses in the game, one would hope to gain insight into cognition (how humans do what we do) and learning. But it seems to me that the easiest way of passing the test would be to have the machine simply behave like a human; the internal processes do not have to be exactly the same, but the outcome of those processes must be similar enough that the results of the game are indistinguishable from when no machines are playing. Turing also talks about the Executive Unit of digital computers, “which carries out the various individual operations involved in a calculation”. Another way to look at this question is: do the executive units of the human computer and the digital computer act in the same way, or is it just that the outcomes of the two different computers are the same?

    Replies
    1. To make a computer (or anything) capable of emailing with a person (for a lifetime if need be) indistinguishably from a person is the "easy" problem, but it's not so easy as that! Try it...

    2. I have a clarification question here: if we exclude those instances where we cannot predict a computer's answer because our human calculating capacities, for example, are limited (and we would need another computer to predict the answer), can we predict everything a computer will do if we know exactly how it is built? If so, I agree that internal processes do not have to be the same to get the same outcome; but since a computer is made by a programmer, the programmer knows what makes the computer give the response it gives. So we know the internal processes the machine is carrying out, and the aim of the game is just to see whether the internal processes we have programmed are sufficient to produce human-like writing behaviour.

  14. It seems that Turing is arguing less for machines’ ability to think as a human does than he is affirming the test’s capacity to constitute sufficient evidence for intelligence in computers. For instance, in argument (4) Turing writes “I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it.” But he goes on to say that when a computer can adequately imitate the dialogue of a creative thinker, that should be reason enough to speak of a machine as intelligent. He is really setting aside the abstractions of what “actually” goes on in the background of human artistic thinking, and manages to retain a very grounded and straightforward thought experiment. The idea of creativity comes back again in (6), when he confronts the critique that machines cannot act surprisingly. To this, Turing responds with something equivalent to “My machines DO surprise me,” and in the context of the imitation game a surprise is a surprise, despite any arguments about to whom (the engineer or the machine) the surprise is due. I believe that the straightforward nature of this thought experiment is what makes it such an encouraging foundation for artificial intelligence.

    Replies
    1. Justin, could you please put in a real photo? I can't identify you and credit your skywritings without it.

      Turing is not just talking about imitating thinkers but about doing what thinkers can do, indistinguishably from real thinkers, as judged by real human thinkers. That is not an imitation game but the reverse engineering of cognition.

      The problem of consciousness (feeling) is not localizing it but explaining how and why the brain (or any thinking system) does it.

  15. Conversations on the Turing Test usually involve arguments drawing on mathematics, engineering, computer science: there are lots of interesting things to say about these, but I would like to try out a different line of thought. If we interpret the test as a standard for our theories about mechanisms of cognition, then we are entirely taken with the question “How do we do the things we do?” This is interesting enough, but it is only half the riddle. The other half is “How do we recognize that something has been done?” The genius of the Turing Test, I believe, is that it acknowledges that human judgment is the only criterion for humanhood (or human performance, the point is the same): no other measure of success, be it survival, creativity, etc., will do. Better still, the criterion of humanhood is other humans’ judgment. The Turing Test is convincing because it imposes on machines the same criterion for humanhood that we get, i.e. other people’s judgment. This means exactly what it says: I know I am human, not because I think/feel it, but because others recognize me as such (evidence for this is the collapse of selfhood that accompanies long periods of solitary confinement). Human performance and recognition is a two-way street and as such, the interrogator is also being tested: her own sense of humanhood is on the line.

    If we recognize the validity of the test, then the other question is “Can computers succeed at least in principle?” To argue against this, we need to find a human performance which, in principle, cannot be modeled or approximated to an arbitrary degree by any discrete state machine. This would address directly Turing’s thought on the issue. (See comment on Harnad (2008) about whether Church-Turing Thesis already answers this question.)

    Of course, we can, with Harnad, relax the discrete state constraint and allow some dynamical processes to do some of the “computing” (implicitly, that is, as is done in “embodiment” and “situatedness”). This does not affect the validity of the test, but it does go beyond Turing’s claim about performance being computable.

    Replies
    1. We all have a pretty good idea of what normal humans can do. Passing the TT requires being able to do that, and do it indistinguishably to real humans from the way a real human does it.

      According to the Church-Turing thesis, computation can simulate just about anything -- airplanes, molecules, hearts, pendula, spiders, robots. But simulating something does not mean generating it. A simulated waterfall is not wet; a simulated airplane does not fly. It is simply computationally equivalent to a waterfall or airplane: it can be interpreted as a waterfall, or it can predict (or even explain) the dynamics of a waterfall; it can even be wired into a virtual reality simulation that looks (or even feels) like a waterfall to human senses.

      But if what a thinker can do is simulated computationally in this sense, then the candidate may no more think than the simulated waterfall is wet, or the simulated airplane flies.

      So the Church-Turing thesis ("computation can simulate just about anything") is not the same thing as Computationalism: cognition is computation.

      Thinking is whatever it is. We all know what it feels like to think. Because of the other-minds problem, we cannot feel what or whether anyone else is feeling. So we can only guess that others can think from what they can do. The TT requires the candidate to be able to do whatever a real human thinker can do. Then, either the TT-passer really thinks, in which case it feels like something to be that TT-passer, and we have explained thinking, or it does not feel like anything to be that TT-passer, and it is not really thinking. So we have not explained thinking. But we can never know either way. That's the power (and weakness) of Turing indistinguishability.

      This is true whether the TT-passer is a computer passing T2 purely computationally, or a robot, passing T3 in a hybrid way. But Searle shows that a computer alone is not thinking even if it is passing T2 purely computationally.

  16. “Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education one would obtain the adult brain” (Turing, 1950).
    Turing, in the section describing learning machines, suggests that it may be better to design a program that would resemble a child’s mind, which could then be subjected to education. He offers this as an alternative to trying to create a program that resembles an adult’s mind. Turing proposes this because the adult mind, in its current state, is composed of three components: the initial state of mind (at birth), the education to which it has been subjected, and other experiences. If one tries to design a program to imitate the adult mind, it may not include these components. I found this idea similar to Locke’s tabula rasa, whereby the human mind is a “blank slate” at birth and is only formed through the child’s extensive interactions with the environment. Locke also believed that human capabilities, such as intelligence, are inborn, which I think Turing would agree with. Turing expresses his hope that there would be so little mechanism in the child brain that it could be easily programmed, which could also be likened to Locke’s tabula rasa. If subjected to a sufficient amount of socialization and formal education, the program could advance from a child’s to an adult’s brain. I am skeptical of Turing’s suggestion, as it seems unlikely that one could design a program to progress through the human developmental stages from child to adult. Also, I do not believe that a computer program could interact with the world in the same manner that a human child can. Thus, I think that it would be better to design a program that would emulate the adult human brain, with a focus on including the three components that Turing suggests.

    Replies
    1. I also have difficulties imagining how a machine could interact with the world, in the sense that the experience it has with the external world would influence its behavior. However, I guess one could come up with a programme, for example, mimicking the memory loss that is characteristic of certain diseases that come with age. The machine would have more and more difficulty answering questions involving the use of memory. One could probably programme a machine to start exhibiting symptoms of Alzheimer's or another disease from a certain point in time, so that the interrogator would think he is talking to a real person, since he would perceive developmental changes.

    2. Dear both: it's not about programming a machine to pretend! It's about designing a machine that generates our full performance power, indistinguishable from us, for a lifetime.

      Delete
  17. "What is important about this disability is that it contributes to some of the other disabilities, e.g., to the difficulty of the same kind of friendliness occurring between man and machine as between white man and white man, or between black man and black man." (p. 51)

    Throughout the paper, I could not help but be reminded of how fuzzy the boundary is between what is considered "like me" enough to be attributed the same feelings, thoughts and rights, and what is not. Indeed, if one cannot be sure that anyone but oneself can think/feel, then the attribution of thinking/feeling becomes a function of how much "like me" any other being is determined to be by the self. Although this boundary seems, at first, to be an obvious one (i.e., humans are more like me and thus thinking/feeling, while inanimate objects are not), the paper reminds us that in fact the boundary is not that clear.

    For instance, in the above passage, Turing overtly implies that there exists an important difference between black and white men, such that a white man cannot share with a black man the same kind of friendliness as he would with another white man. This line of reasoning, which clashes with today's ideas, is just one example of how this "like me" boundary is influenced by current ideas, and thus may not be separable from the biases of the time and place in which it is experienced.

    ReplyDelete
    Replies
    1. This idea that our demarcation between us and not-us is an arbitrary one is very interesting to me. Although not specifically a cognitive science topic, it has its applications, and I find it even relates to the systems reply. How do we in fact decide where our brains end/begin (I mentioned this in another skywriting about the extended mind thesis), and how do we decide what constitutes a computer, or the parts that make up a computer?
      I can think of an assembly line as an analogy. People make up the parts of this giant machine that builds, let's say, cars. Is the machine the greater production and assembly line? Or is it the smaller parts, individually? And in this analogy, do the people count as machines?

      As a side note, I am still unclear about the difference between a machine and a computer. Are the parameters that make a computer a computer stricter than those that make a machine a machine?

      Delete

  18. "…If the man were to try and pretend to be the machine he would clearly make a very poor showing. He would be given away at once by slowness and inaccuracy in arithmetic." … "It might be urged that when playing the 'imitation game' the best strategy for the machine may possibly be something other than imitation of the behaviour of a man. This may be, but I think it is unlikely that there is any great effect of this kind. In any case there is no intention to investigate here the theory of the game, and it will be assumed that the best strategy is to try to provide answers that would naturally be given by a man."

    Though Turing dismisses the notion that the Turing test should be reversible (i.e., if a machine can pass as a man, then a man should be able to pass for the machine), I think it deserves to be considered a bit more. Suppose we agree with Turing that the best strategy for the machine is to try to give the answers that a human would give: would the machine have to account for the "slowness and inaccuracy in arithmetic" in order to really prove that it is a human? For me, I say yes. Though I don't know whether the types of machines Turing had access to could have been programmed to account for the human tendency toward inaccuracy, or other distinctly "human" ways of responding. Perhaps my standards are more appropriate for our current period in computing. This leads us to what Harnad discusses: the idea that a robotic Turing machine is needed that can imitate what human minds DO.

    ReplyDelete
    Replies
    1. No need for people to pretend to be machines. All you need is a set of email pen-pals you never see or meet, some of them really people, some of them machines. Get to know them well. And see if you think any of them are not really people. If they really have full T2 power, you should be unable to pick them out.

      Delete
  19. This article by Turing cleared up a lot for me in terms of how we are to test whether a machine can attain human-like intelligence and the ability to think the way we do. I understand how the Imitation Game is supposed to test whether a machine is able to trick an interrogator as well as a human can. I just don't quite agree that this is the best means of determining whether a machine has human cognition. The main objection I have to the test is what Turing calls Lady Lovelace's objection. This objection claims that a machine is not able to originate anything independent of what the computer programmer has placed in it. This seems like quite a valid problem for the Imitation Game, because in order to trick the interrogator the man will be continuously changing his method of deception, and thus creating original ideas for deceiving. He will base the method he uses on the interrogator's subsequent questions and will decide which method to use from there. The Lady Lovelace objection seems to hit the mark that a machine will not be able to do this: it does not seem likely that it could come up with a completely new method if others fail.

    Turing seems to suggest that this is mainly a problem of programming and will be solved in the future. He argues that once we are able to program a machine similar to a child's mind, then all we need to do is teach the machine. My problem with his solution is that it has yet to happen. We cannot say that the issue is closed simply because we think we will be able to do it in the future. As Turing says, "The reader will have anticipated that I have no very convincing arguments of a positive nature to support my views." This statement is completely true: he does not show any positive evidence that we will be able to create a "Learning Machine", other than to say that advances in technology have made the storage capacity of the mind attainable for machines. Until we have actually shown that we are able to program a "Learning Machine", I do not believe we can say that the Lady Lovelace objection is answered.

    ReplyDelete
    Replies
    1. The TT is not a trick: it's meant to really generate full human capacity, lifelong.

      There are already computational models that can learn, and do new things. (See the replies to the other commentaries.)

      Delete
  20. " a machine undoubtedly can be its own subject matter ", "by observing the results of its own behaviour it can modify its own programmes as to achieve some purpose more effectively".

    All this is true, but it doesn't imply that machines are thinking, if we are talking about the same kind of thinking we do. If the machine were really thinking, then it is only the machine that could really be sure of it. If thinking were limited to computation, then yes, the machine in question would be thinking, because in the end all it is doing is following a set of rules which allow it to do what it does. On the other hand, when Turing says a machine "can be its own subject matter", it implies that the machine must have a sense of self and see itself as an entity apart from the rest, which also comes back to the problem that we can't be sure of that unless we are the machine itself.

    In the next paragraph, Turing addresses the fallacy "that as soon as a fact is presented to a mind all consequences of that fact spring into the mind simultaneously with it", which I think he is right to call a fallacy. However, this doesn't mean that a machine can achieve the same kind of surprise people can. Turing claims that "original work" can be done by a machine if it is the growth of "a seed planted in him by teaching or the effect of following well-known general principles". Machines shouldn't be able to create a new theorem, or even produce a new invention of their own, because they lack imagination. Even in science, many famous scientists have arrived at their findings through their imagination, which planted an idea in their minds that was then confirmed through scientific investigation: for example Albert Einstein, or Kary Mullis, who invented PCR through creative insight.

    ReplyDelete
    Replies
    1. See the replies to other commentaries about "self-reflexiveness" and whether a programme can do something new, unanticipated by the programmer.

      (The idea is to generate generic human capacity: leave creating Einstein for later...)

      Delete
  21. Turing (1950) says “If one wants to make a machine mimic the behaviour of the human computer in some complex operation one has to ask him how it is done, and then translate the answer into the form of an instruction table. Constructing instruction tables is usually described as “programming”. To ‘program a machine to carry out the operation A’ means to put the appropriate instruction table into the machine so that it will do A” (Pg. 438).
    This passage highlights the fundamental operational difference between humans and computers: we do not know how we are programmed. We understand our abilities and limitations, we understand what we know and what we remember, but we have no real idea of how this comes to be. With the extremely limited understanding of the human brain that we currently have, it is impossible to say we understand it well enough to create computer programs analogous to its workings. Everything we program into a computer is an extremely simplified version of what occurs in the human brain, so much so that it is almost meaningless to compare them. When you ask a computer to compute a complicated question of long division, it inserts the inputs into algorithms and provides you with an output (whether that output is correct or incorrect depends on what you have programmed it to do, and is irrelevant here; a toy contrast between a literal instruction table and an algorithmic rule is sketched below). When you ask a human being to compute a complicated question of long division, on the other hand, they may remember their fourth-grade math teacher, or picture the scientific calculator they used in middle school. So many tangential anecdotes play a role in memory and consciousness, and make the human brain both superior in complexity, and inferior in efficiency, to any written computer program.

    The failures, flaws, and glitches that we try desperately to prevent from affecting our computers are, meanwhile, a defining feature of the human brain, and perhaps what may prevent computers from passing the Turing test no matter how advanced technology gets, and no matter how much "randomness" one programs into the computer. Turing (1950) here speaks of programming computers using "appropriate instruction tables" (pg. 438); however, the human brain often uses inappropriate instruction tables, and this is characteristic of human consciousness. If you were to program a machine with "the appropriate instruction table", this would certainly not include a bunch of missteps and side tracks for the computer to take before reaching the correct answer. And yet this is often how the human brain operates. It seems foolish, and perhaps impossible, to create a computer that is as flawed as our brains are while also remaining as powerful; and yet without this, the power of consciousness will be lost in the translation to "instruction tables".
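    To make the contrast concrete, here is a minimal sketch (in Python; the function names and the toy task are my own illustration, not anything from Turing's paper) of two senses in which an "instruction table" can be put into a machine: as a literal list of anticipated cases, or as a rule covering unboundedly many cases:

        # Two ways to "put the appropriate instruction table into the machine"
        # for integer division (illustrative names, toy example):

        # (1) A literal table: every input/output pair written out in advance.
        TABLE = {(6, 2): 3, (9, 3): 3, (8, 4): 2}

        def divide_by_table(a, b):
            return TABLE[(a, b)]   # fails on any case the programmer did not anticipate

        # (2) A rule: a finite recipe that covers unboundedly many inputs.
        def divide_by_rule(a, b):
            quotient = 0
            while a >= b:          # schoolbook repeated subtraction
                a -= b
                quotient += 1
            return quotient        # e.g. divide_by_rule(17, 5) == 3

    Either counts as "programming" in Turing's sense; the difference matters only for how far beyond the programmer's explicit anticipations the machine can go.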

    ReplyDelete
    Replies
    1. You are right that asking people how they do what they can do (introspection) won't give much help in designing a system that can pass TT. It will require creative thinking by cognitive theorists, just as in other branches of science and reverse engineering. (And the brain certainly won't tell you how it does it either.)

      On the question of "error," see the replies to other commentaries.

      Delete
  22. In my previous experience with the Turing Test, I had always considered it a two-party game, but Turing's inclusion of a third party removes (in my mind) some of the fundamental issues of the two-party Turing Test (TT). In the two-party TT, it is not made explicit that the computer is attempting to trick the examiner. This is a flaw, as conceivably a non-lying computer would give itself away instantly, even if it cognized as we do. Secondly, the three-party version provides a stronger form of equivalence between the state of the computer trying to trick the examiner and the state of the person trying to trick the examiner: if one were trying to trick and the other were not, they could have exactly opposite forms of cognition going on, where one asserted the opposite of what it took to be true, and the other only truths. By positioning the man and the computer both as lying, the TT as outlined in Turing (1950) gives the machine a fair chance relative to the two-party TT, where the computer, in having to lie while the person does not, is disadvantaged.

    ReplyDelete
    Replies
    1. See the other commentaries and replies, and see also reading 2b: The TT is not a trick. It is an attempt to reverse-engineer (real) human capacity, all of it.

      Delete
    2. I don't disagree that the goal of the TT is real: a true reverse-engineering of human capacity. What I was trying to say is that the framing of the problem as a trick is important. I like to think that I am somewhat aware of what I am; at least physically, I have a sense of how I am constructed in terms of cells and tissue, etc. If we are asking a computer to think like us, it would conceivably have a similar sense of self. Now, if we were testing the computer, certainly having it believe that it is human should not be a requirement for what we consider thought. In fact, it is more human-like to have a vague sense of what it is and where it has come from. The computer is only able to answer a question like "are you a machine?" without betraying itself if it is told to trick the examiner; otherwise any self-awareness will ruin the test, which is the exact opposite of what we want if we are attempting to reverse-engineer our capacity, which includes self-awareness.

      Delete
  23. “Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted” pg. 8

    This statement glared at me and immediately triggered some reflection on today's world. We definitely have not reached this point in the scientific sense of the word "thinking": we don't have robots or machines that are indistinguishable from humans, and we still place more value on the human mind than on the machine. That said, in a colloquial sense, Turing is right that today's society would not blink in response to the claim that one's iPhone or other gadget is thinking. Technology has a tight grip on our daily lives, and I think most people would admit to relying on machines as an extension of their own brain power. The bulk of our knowledge and mental processes lives on the internet and in "apps", and we can summon this information almost instantly. We think, press a button, see a computation, and proceed seamlessly to incorporate the machine's output into our feelings and thoughts. Where does this idea of extension fit into Turing's general conception of machine thinking? Machines have always existed to help us do more, but how have recent advances and the prevalence of personal devices traversed new territory and exhibited something more characteristic of "thinking"?

    ReplyDelete
  24. The objective of Turing's "Computing Machinery and Intelligence" is to answer the question "Can machines think?" After concluding that this question is ambiguous, he reformulates it so it can be more readily addressed. The question then becomes "What will happen when a machine takes the part of A in this game? Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?" I don't completely agree with this reinterpretation; I do not see the two questions as equivalent and synonymous. If a computer can fool a human being into thinking that he/she is conversing with another human being, I do not see how that translates into "Can machines think?" If the answer is yes, there are machines that can successfully mimic human beings, then we have simply answered that exact question: all we know is that we can build a machine that can fool an average person into thinking they are conversing with another human being.
    Another aspect of the paper I found unsettling was the argument on punishment and reward. Turing states, "The machine has to be so constructed that events which shortly preceded the occurrence of a punishment signal are unlikely to be repeated, whereas a reward signal increased the probability of repetition of the events which led up to it. These definitions do not presuppose any feelings on the part of the machine…" Relating the latter to my previous skywriting, I do not see how a machine could learn through patterns of reward and punishment if it does not feel. Rewards and punishments are mediated by emotions, so how could a computer learn if it doesn't have emotions?
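    For what it's worth, Turing's definition can be realized with nothing but arithmetic on weights, which is presumably why he says it presupposes no feelings. A minimal sketch (in Python; the class and its names are my own illustration):

        import random

        class RewardLearner:
            # Built so that "events which shortly preceded the occurrence of a
            # punishment signal are unlikely to be repeated, whereas a reward
            # signal increased the probability of repetition" (Turing, 1950).
            def __init__(self, actions):
                self.weights = {a: 1.0 for a in actions}
                self.last_action = None

            def act(self):
                # Choose an action with probability proportional to its weight.
                choices = list(self.weights)
                ws = [self.weights[a] for a in choices]
                self.last_action = random.choices(choices, weights=ws, k=1)[0]
                return self.last_action

            def reward(self):
                self.weights[self.last_action] *= 1.5   # more likely to recur

            def punish(self):
                self.weights[self.last_action] *= 0.5   # less likely to recur

    Whether such mechanical updating deserves to be called learning, let alone whether it could ever involve feeling, is of course exactly the open question.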

    ReplyDelete
  25. I was surprised to find myself beginning to understand and agree with Turing's point of view by the end of this reading. In the discussion of Professor Jefferson's argument that machines cannot equal brains until we can prove that they can in fact feel emotion, I instantly became aware that Professor Jefferson was missing some fundamental points that must be considered when evaluating and categorizing consciousness and its supposed behavioural outputs. Considering the solipsist position brought up by Turing, we cannot scientifically prove the conscious thought of anyone or anything in the environment, which makes machines just as plausible candidates for conscious thought as our own friends and family (though learned and innate empathy might make truly entertaining this feel unnatural). That being said, and all things being equal, it is not absurd to project our own conscious experience of thought onto a lifeless machine if its behavioural output can be manufactured to match our own. Perhaps, in a more rational world, there would be a distinction between our own personal experience of thinking and a separate definition of thinking applied to all other complex stimuli in the environment, 'alive' or not. Whether or not this would be beneficial to society is another question, and brings us back to Turing's discussion of the Theological objection and the 'Heads in the Sand' objection.

    During his discussion of the mathematical objection to the imitation game, Turing states: 'The short answer to this argument is that although it is established that there are limitations to the powers of any particular machine, it has only been stated, without any sort of proof, that no such limitations apply to the human intellect.' This made me wonder whether what we believe to be extremely complex beliefs, cognitions and analyses of art (such as that of Picasso) are just large combinations of single inputs and outputs of information, not all that different from 'Yes' or 'No' questions and answers. If so, it becomes increasingly conceivable that computers may in fact be programmed to do all that we do. This idea becomes even more relevant with Turing's discussion of Laplace's views: 'It will seem that given the initial state of the machine and the input signals it is always possible to predict all future states. This is reminiscent of Laplace's view that from the complete state of the universe at one moment of time, as described by the positions and velocities of all particles, it should be possible to predict all future states.' Perhaps the functioning of the human mind is not that much more complex than that of machines?

    ReplyDelete
  26. I understand that the point of the Turing test is to see whether a computer could be designed that would amount to a reverse-engineering of human cognition. Initially, when reading the text, it seemed to me that a machine built up of algorithms would always respond the same way (give the same output) to the same input (though see the sketch below this comment). For example, Tommy, who needs to pick up his mother's shoes, will only take the "post-it" down once the shoes have been brought home; yet in a real scenario, there are many different reasons why Tommy could take the post-it down without having picked up his mother's shoes. However, Turing addresses this problem, replying: "The criticism that a machine cannot have much diversity of behaviour is just a way of saying that it cannot have much storage capacity". Throughout all this, however, I fail to see how the Turing machine could imitate human spontaneity and free will.
    In his section on the informality of behaviour, Turing writes: "To attempt to provide rules of conduct to cover every eventuality, even those arising from traffic lights, appears to be impossible. With all this I agree." But he links this to learning, and free will is not just the ability to learn. It is a part of consciousness, yet at the same time much bigger than that. Free will, to me, seems to be the ability to make your own, sometimes spontaneous, choices. How would Turing explain the notion of free will? Would it be possible to have a robot that could be programmed for free will as well? The robot doesn't necessarily need free will to pass the T2 Turing test, but what about T5?
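    As for the worry that an algorithmic machine must always give the same output for the same input: that holds only for a machine with no internal state. A deterministic machine whose state changes with every input can answer the same question differently each time. A minimal sketch (in Python; the class and its wording are my own illustration):

        class StatefulMachine:
            # Fully deterministic, yet it need not give the same output to the
            # same input twice, because every input changes its internal state.
            def __init__(self):
                self.seen = {}

            def respond(self, question):
                n = self.seen.get(question, 0)
                self.seen[question] = n + 1
                if n == 0:
                    return "Let me think about that."
                return "You have already asked me that %d time(s)." % n

    This is not spontaneity or free will, but it shows that determinism and behavioural diversity are compatible, which is close to Turing's storage-capacity point.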

    ReplyDelete
    Replies
    1. T3 can do anything a human can do. That includes learning anything a human can learn. What (besides a feeling that accompanies, and feels like it precedes and causes, our doing) is free will?

      Delete
  27. “ Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain. Presumably the child brain is something like a notebook as one buys it from the stationer's. Rather little mechanism, and lots of blank sheets. (Mechanism and writing are from our point of view almost synonymous.) Our hope is that there is so little mechanism in the child brain that something like it can be easily programmed.”

    I think Turing shows fantastic intuition in his proposal to model a child's brain, though perhaps for the wrong reasons. By asserting that a child's brain has "rather little mechanism, and lots of blank sheets", he seems to support Locke's now-defunct "tabula rasa" view of human nature. However, let us be generous and assume that by "mechanism", Turing was not referring to the design of the brain but rather to the knowledge and experience acquired by it. Under this conception, we can separate "what is learned" from "what learns", a rather useful distinction if we wish to avoid our robot becoming nothing more than a glorified Wikipedia.

    From a design point of view, programming a child machine that can "mature" (through self-reprogramming) into a TT-passing machine would provide much greater insight into the nature of cognition than one with built-in facts, for the simple reason that the acquisition, maintenance, and selective retrieval of these facts forms an essential component of cognition. If we made a supercomputer that could brute-force its way into passing a TT by collecting all the stored data in the world (estimated at 295 billion gigabytes as of 2011) and statistically determining the response most likely to fool an interrogator, would that really be intelligence? Perhaps it would be, but it would tell us nothing other than the obvious fact that using more information leads to better guesses. In this case, the information processing is much more information than processing, and lacks the multitude of heuristics humans use to intuitively process information in particular ways.

    A child machine, on the other hand, could (and should) be programmed with no world knowledge. The only way it would be able to gain information would be to interact with the world, as we all do. Even without much world knowledge, it is readily apparent that a child is an intelligent being, and so too must a child machine be intelligent if it is one day to pass the TT.

    ReplyDelete
  28. Turing (1950)

    Closer than we think? The Turing Test proposes that if an individual cannot determine whether the subject A they are interacting with is a computer or a person, then the digital machine has achieved a type of cognition. While I do not know of any such machine, I am sure we have all come closer than we think to a machine capable of "cognizing". One example is playing any type of game against a computer; I will use Scrabble as an example. When we play this game against a computer, the computer attempts to combine the letters in its hand to form a word. When it is my turn, I do the same. Eventually, we each produce an output that may or may not be the ideal output given the particular combination of letters. Of course the computer could be programmed to always maximize the points; however, if I select to play against an intermediate-level computer, it will make the same type of strategic errors a human might make. This exercise does not encompass the breadth of the Turing Test, but it provides a tangible example of how such a test could exist.

    Another example is Siri, the iPhone's friendly personal assistant. Siri can find information for its user, "perform" tasks such as opening an app, and "remember things" for the user at a later date. Siri learns about the user by analyzing their search history and their preferences, and personalizes search returns for the user. This process sounds very similar to what a person does when they meet someone new: they remember the interactions with this new person, and try to enrich future interactions with more personalization, for example by recommending a restaurant serving the new person's favorite type of food. Although Siri does not pass the Turing Test, Siri is an interesting example of the type of computer that could eventually pass this test. When I read Turing's paper, I thought that the only computer that could pass the test would need to be new and different, somehow beyond the computers we have now. Upon reflecting on the scope of the power present-day computers have, I admit that perhaps the foundation already exists for the computer that will one day pass the Turing test.

    ReplyDelete
  29. “Can a machine think?” is Turing’s original question, which he later re-states as, “are there imaginable digital computers which would do well in the imitation game?”
    As Turing points out, while the question "can a man think" is an interesting one, it is not a very useful one. How is "thinking" measured? We can never really know. Turing brings up this idea again in the Argument from Consciousness section: "the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking." I think Turing is saying that if a machine can do well enough in the imitation game to convince us, over an entire lifetime, that its mind is no less human than ours (that is, if we have no more reason to think the machine cannot think than to think it can, like the example in class about whether a robotic friend can feel or not), then that is enough.

    Turing also brings up an interesting point, although he sets aside its importance for the question of whether digital computers can perform well in the imitation game: "May not machines carry out something which ought to be described as thinking but which is very different from what a man does?" I agree that it is irrelevant to answering the question of whether a machine can do what we do, but it was interesting for me to consider: although how a machine performs an action or behaviour may not be the same mechanism by which we do it, does that necessarily mean it is invalid? I would think not. So perhaps a machine can do everything we do, but we can't say that it thinks the WAY we do; that does not mean that it cannot think IN SOME CAPACITY.

    ReplyDelete