Saturday 11 January 2014

(2b. Comment Overflow) (50+)

16 comments:

  1. REPLY TO JOCELYN WU, part 1

    I urge anyone who is faint-hearted not to read this! It's not required for the course, and it might give you a headache!

    If you're really interested in the problem of "knowing," see the Gettier Problems.

    Soft Knowing. In a nutshell, there's a soft meaning of "knowing X" in which (1) you believe X, (2) X is true, and (3) you believe X for the "right reasons" (not by luck, or for wrong reasons). This soft sense of knowing is called "justified, true belief." (It's flawed.)
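
    In standard epistemic shorthand (just a compact restatement of the three clauses above; K, B and J are conventional symbols, nothing beyond what the paragraph already says):

    ```latex
    % S knows X (soft sense) iff: S believes X, X is true,
    % and S's belief in X is justified.
    K_S(X) \iff B_S(X) \land X \land J_S\!\left(B_S(X)\right)
    ```

    The Gettier problems are cases where all three clauses hold and yet we would hesitate to say that S knows X -- which is why this analysis is flawed.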

    Hard Knowing. And there's a hard sense of knowing -- the kind that Descartes talked about -- where not only is X true and believed for good reasons, but you can be certain that X is true.

    Necessary Truths. Only two kinds of truth meet this strong criterion. One kind is the statements that are proved to be necessarily true in mathematics, because if they were false, that would be self-contradictory. A super-simple example of a necessary truth X is the so-called "law of non-contradiction": "it cannot be true that it is both true that 1 + 1 = 2 and not true that 1 + 1 = 2." You can be sure of that X. And so you know it. It's not just a justified, true belief.
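
    Written out formally (the same example, nothing new; P stands for "1 + 1 = 2"):

    ```latex
    % Law of non-contradiction, with P standing for "1 + 1 = 2":
    \lnot (P \land \lnot P)
    % Denying it would assert P \land \lnot P, a self-contradiction,
    % so the law is necessarily true -- knowable in the "hard" sense.
    ```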

    But that only works for the formal truths of mathematics and logic.

    Descartes' Cogito: There's another example of hard knowing and that is the Cogito (or, as I prefer to call it, the Sentio): When I am feeling X, I can know for sure that I am feeling X. (X may not be true, but it's certain that whatever I'm feeling, I'm feeling it.) If I feel a toothache, it may not be a toothache; it may be referred pain from my jaw, it may be phantom tooth pain (I may have no tooth). Or it could even be that I have no body, or that there's no outside world, and it's all a dream or a hallucination. But when I'm feeling whatever I'm feeling, I can't doubt that I'm feeling THAT.

    Goedel's Proof. Now, back to your question about knowing: There is a proof in mathematics by Goedel that shows that in arithmetic you can always construct a statement that is true but unprovable from the axioms (because the statement says about itself "I am unprovable," and it is provable that it is unprovable; therefore the statement is true, but unprovably true, from the axioms).
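
    For anyone curious about the shape of the construction (again, not required for the course), here is a compressed sketch, where Prov is the arithmetized provability predicate:

    ```latex
    % G is constructed as a fixed point of the provability predicate:
    G \iff \lnot\,\mathrm{Prov}(\ulcorner G \urcorner)
    % If the axioms proved G, they would prove Prov("G") (the proof could
    % be checked) and also, via G itself, not-Prov("G") -- a contradiction.
    % So G is unprovable; and since that is exactly what G asserts,
    % G is true, but unprovably so (from the axioms).
    ```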

    Provability/Computability. I won't go into the Goedel proof, but some philosophers (e.g., Lucas) -- and even a great mathematician, Penrose -- have argued that Goedel's proof (besides showing that there are unprovable truths in arithmetic) shows that cognition cannot be computation, because you can know that the Goedel statement is true, yet its truth cannot be proven (which is the same as saying that its truth cannot be computed from the axioms of arithmetic). So since we know it's true, we must know it in some non-computational way.

    Computability vs. Computationality. This argument is incorrect because although the truth of the Goedel statement cannot be computed, there's no reason that the mental state of knowing that it's true can't be a computational state: That computational state need not be a proof that the Goedel statement is true. It need only be the state of knowing that the Goedel statement is true.
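
    A toy sketch of that distinction (purely illustrative; both function names are invented for this example): the computational state that "knows" the Goedel statement G is true need not itself be a derivation of G from the axioms.

    ```python
    # Illustrative only: "knowing that G is true" as a computational state
    # that is NOT a proof of G from the axioms of arithmetic.

    def provable_from_axioms(statement: str) -> bool:
        """Stand-in for a proof-searcher over the axioms of arithmetic.
        For the Goedel statement G, no such proof exists."""
        return False  # G is unprovable from the axioms

    def knows_g_is_true() -> bool:
        """Arrives at G's truth by reasoning ABOUT the axiom system, not
        WITHIN it: G says 'I am unprovable'; it is (meta-)provable that G
        is unprovable; hence what G says is the case."""
        g_is_unprovable = not provable_from_axioms("G")
        return g_is_unprovable  # G asserts exactly this, so G is true

    assert not provable_from_axioms("G")  # no proof from the axioms...
    assert knows_g_is_true()              # ...yet the knowing-state is a computation
    ```

    The point of the sketch is only that the second function is an ordinary computation, even though no computation from the axioms yields G.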

    Replies
    1. REPLY TO JOCELYN WU, part 2

      I'm not a computationalist, but I see no reason why the mental state of knowing something to be true -- whether soft knowing, as in the Gettier problems, or hard knowing, as in the necessary truths of maths -- cannot be a computational state. All proofs are computations, but not all computations are proofs. The truth of a statement that says of itself that its truth is uncomputable from the axioms of arithmetic -- when it is indeed provable that its truth is uncomputable from the axioms of arithmetic -- is necessarily unprovable from the axioms of arithmetic. But the state of knowing that it is true is not necessarily noncomputational. It could come from a simple computation: this statement says "I am not provable," and it is indeed provable that it's not provable, so the statement is true, but not provable (from the axioms of arithmetic). That's not self-contradictory. It is true. And computably true (but not from the axioms of arithmetic).

      Note that I didn't say that I myself believe that the state of knowing the truth of the Goedel statement is computational; I just said that it is not necessarily noncomputational.

      False Beliefs. Now if you are still reading this, I have one more point to add about Jocelyn's point. She notes that we can also have false beliefs -- where we feel sure we know something is true, but we are wrong. That feeling of knowing is yet another thing. It's obviously neither soft nor hard knowing; in fact it's not knowing at all. It just feels like knowing, much the way referred jaw pain or phantom tooth pain can feel like a toothache.

      The "Hard" Problem. Again, I am not a computationalist, so I don't really believe that the state of feeling that you know for sure that something is true is a computational state. But again it's not necessarily true that it cannot be a computational state. Explaining how and why any state is a felt state is the "hard" problem. It's just as hard to explain how and why a computational state is a felt state as to explain how and why a dynamic state is a felt state. (And those are the only two options.)

      The Secret of the Universe. The example I always give in class (from William James, via Bertrand Russell) is the person (probably William James himself) who, when he got high on laughing gas, knew the secret of the universe, but when it wore off, he always forgot it. So one time he made up his mind that while he was under the influence he would write it down. And so he did. Then, when it wore off, he rushed to see what he had written. And it was "The smell of petroleum pervades throughout."

    2. REPLY TO JOCELYN WU, part 3

      Intuition. In other words, a piece of nonsense had felt to William James like the secret of the universe while he was under the effects of the laughing gas. Now Russell used this example to warn mathematicians that although their intuitions may sometimes help guide them, they can't really be sure those intuitions are true until they've proved them.

      Surprisingly, Penrose tried to cite mathematical intuition for the opposite purpose -- as further evidence that cognition is not computation, because a mathematician may "know" intuitively that a theorem is true even though he cannot yet prove it. (Well, maybe mathematical giants of the stature of Turing, Goedel and Penrose never have intuitions that turn out to be false -- maybe only mathematical pygmies do -- but a mathematical intuition still does not become a Cartesian certainty until it is proved on pain of contradiction: "hard" knowing.) Until then it's just a feeling. And, like feelings of toothaches, intuitions can be wrong about everything except that they feel like they're true.

      So, Jocelyn, this has nothing to do with having to "programme gullibility." Part of being able to do anything that a human thinker can do is being able to be wrong -- and able to believe you are right. And, again, the TT is not about tricking either the TT candidate or the people interacting with the TT candidate. So even if computationalism were true, and feeling was a computational state, the computational state of feeling that X is true does not require X to be true.

      Most likely, though, feelings are at least in part pharmacological states rather than computational states.

      (Anyone who got through all this and really understood it -- rather than just believing they understood it -- will surely get an A in this course -- but you do not need to know all this for this course!)

    3. I think a lot of my confusion resulted from thinking of the Imitation Game as a game of trickery rather than a question of how to reverse engineer human cognition. The lecture really cleared it up!

      I suppose that in some ways "knowing" is similar to "feeling" and faces the same problems as the representation of feeling.

  2. I was just reading Kevin Poulin’s defense of Turing’s “game” reference when it hit me: it has to be a game. Our hypothesis is that program x computes human behavior. It would follow that the computer (T2 or T3, I don’t think it matters) is thinking. But why is it thinking? If it’s a human, we can answer that question: motivation. But if it’s a machine? What incentive does it have to converse with the interrogator? The question “why is the machine doing so and so” cuts through the indistinguishability because there is a fundamental difference between human acts and machine acts: motivation causes the former. And so we ought to say the machine we’re testing is motivated to win, and therefore playing a game.

    But if we put this aside, I’m pretty convinced we can make a robot with a soul. In fact, it would be a testament to how far we’ve come. The soul is that thing we point to when we want to point to the best of human accomplishment. More than our intuitions (feelings), our faculty of analysis (symbol manipulation) has allowed us to bootstrap ourselves to whatever moral ground we stand on today.

    I’m not saying this to be cheeky; rather, I genuinely think that reason is not only enough, but most of what makes us human.

    Side-step for a moment: refusing to concede that machines can think for lack of “empirical [evidence] about what computers will eventually be shown to be able to do” is what Harnad calls a “Granny Objection.” (14) Behold, a Granny Objection from Stevan Harnad himself.

    “But does it subsume star-gazing, or even food-foraging? Can the machine go and see and then tell me whether the moon is visible tonight and can it go and unearth truffles and then let me know how it went about it?” (6)

    Harnad says (Stevan says? what’s protocol here?) “mental states (feelings)... are moot because of the other-minds problem.” The above behaviors (my particular favorite was the moon-gazing) depend on feelings.

    But let’s suppose we’re dealing with a T2 model which, for want of sensory data, cannot perform the above behaviors; suppose the T2 machine fails the test.

    Does it matter? If it can conduct philosophical discourse, write ethical treatises, give cognitive-behavioral therapy? And perform all the rule-determined behaviors humans do? Is it too much to say that the regulation we’re taught in grade school is a kind of program that allows us just to “operate” and follow taboos? Don’t get me wrong, I’m not mocking the program that has been set up. It’s been thousands of years in the making, and we’re still not all vegan. The question is: does our “humanity” come from our intuitions, or from our analytic capacities? The ideal of justice is sometimes depicted as a blind woman holding a scale. T3 may be able to ground the symbols with which it performs computations, but T2 can get us the kind of human progress we want and still fail the Turing Test.

    But does the T2 think? It thinks in the same way the behaviorists figured the rest of us do. There’s input, there’s output; homunculus controls gears.


    Replies
    1. “Reverse bioengineering”
      Heidegger wrote in his Being and Time, “only when the ‘Sum’ is defined is the cogitationes comprehensible.” (paragraph 46)
      Instead of asking “can machines think,” ought we ask what makes us human?
      We don’t really care if machines can think, or at least Alan Turing doesn’t. We do want to know, however, what causes human behavior. Basic question of Psychology.
      “Thinking is as thinking does.” By creating something that successively approximates human behavior, we will have created a cause of human nature, and therefore of thought. So I guess I could say: don’t worry about the program; just figure out how to get a computation similar enough to ours.
      To build and replicate the function of human nature (and this was one of David Marr’s anagnorises), we need to know what function we’re after. If there’s no computation, what’s the point of making a program? What’s the program for? And that’s exactly what the Turing Test answers. “The Turing Test is about finding out what kind of machine we are, by designing a machine that can generate our performance capacity, but by causal/functional means that we understand.”
      I hope that wasn’t too circumlocutious.

  3. "even though we are all machines in the sense of being causal systems"

    I know this sentence is just one sentence of the whole paper, but it really got my attention. I agree that, yes, we are machines insofar as we are causal systems, but I think that this is just one part of what makes us human. It's true that nowadays this analytical part of our minds, and consequently of our behavior, is predominant; however, our brain also has a way of processing information holistically.

    By doing so we can escape the fatality of having to respond to something because this or that has caused us to behave in this way. We have the ability to bypass this system by using this capacity of our minds to process information holistically instead of linearly. This mode of information processing explains, in a way, our intuitive capabilities, our creativity and a lot of other processes that don't depend on the linear processing of information.

  4. Although Professor Harnad earlier cites the use of "imitation" in describing Turing Testing as unfortunate in its "connotations of fakery or deception", I find his later analysis of the Theological Objection to, in fact, justify it: "The real theological objection is not so much that the soul is immortal but that it is immaterial. This view also has non-theological support from the mind/body problem: No one -- theologian, philosopher or scientist -- has even the faintest hint of an idea of how mental states can be material states (or, as I prefer to put it, how functional states can be felt states). This problem has been dubbed "hard". It may be even worse: it may be insoluble. But this is no objection to Turing Testing which, even if it will not explain how thinkers can feel, does explain how they can do what they can do."

    The same hard problem that prevents us from using Turing Testing to ascribe human-like states of feeling to any machine that exhibits human-like mental capacity should also prevent us from calling such a "rigorous empirical methodology for testing theories of human cognitive performance capacity (and thereby also theories of the thinking that presumably engenders that capacity)" anything but a game of imitation. After all, we cannot presume that a machine's exhibiting a material state necessitates the simultaneous presence of the appropriate mental state; we can only say whether it has reached a state that is satisfactory for its/our purposes (emulating a human). Were Turing Testing determinate of anything more than imitation, it would be a game that could never knowingly be won.

  5. The Turing Test clearly must be a method “for reverse-engineering human cognitive performance capacity,” but the objection to the term “imitation” is odd. In Turing’s own description of how a machine might try to behave more like a human, he advocates trickery on the part of the machine. Coming up with an incorrect answer, even though the calculation would be simple for the machine, requires some sort of program telling the machine to go against the general algorithm it would otherwise run on this type of problem. The machine is in fact imitating human behavior. It was created in order to do so, in order to deceive the interrogator into believing it is human. Creating a cognizant being without knowledge of what we perceive as human behavior would be impossible, a fruitless endeavor. In Turing’s 1950 paper, I see no proposal for a rigorous method of testing; rather, it is just a litmus test to see whether the machine can pass for human.
    Meanwhile, the issue of the other-minds problem is still troubling for me. If we are trying to create another mind, mustn’t we understand it to the point of knowing that it exists and how it creates? Behavior is not enough to get to the center of the issue. More thought is required for this one on my part.
    Last, I completely agree with the note on ESP. I truly wondered why it was in the paper in the first place, and it gave me some serious doubts about the credibility of Turing’s scientific pursuits.

  6. "An experiment must not just fool most of the experimentalists most of the time! If the performance-capacity of the machine must be indistinguishable from that of the human being, it must be totally indistinguishable, not just indistinguishable more often than not."

    Consider the following scenario: the Turing test is set up, except the machine is replaced with a human. I doubt that the interrogator would correctly guess that a human is speaking to them 100% of the time. As such, the goal of creating a totally indistinguishable machine may not be realistic or well suited to the problem at hand. I believe it makes more sense to aim for a distribution of guesses that closely resembles the distribution of guesses an interrogator would likely make when interrogating another human.
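
    As a rough sketch of what that criterion could look like in practice (all numbers here are hypothetical, purely for illustration):

    ```python
    # Sketch: compare verdict rates instead of demanding perfection.
    # All probabilities below are invented for illustration.
    import random

    random.seed(42)

    def verdict_rate(p_judged_human: float, trials: int = 10_000) -> float:
        """Fraction of interrogations ending in a 'human' verdict."""
        return sum(random.random() < p_judged_human for _ in range(trials)) / trials

    human_baseline = verdict_rate(0.90)  # hypothetical: judges call real humans "human" 90% of the time
    machine_rate = verdict_rate(0.88)    # hypothetical machine under test

    # Proposed pass criterion: the machine's verdict distribution should be
    # statistically indistinguishable from the human baseline, not equal to 1.0.
    print(f"human baseline: {human_baseline:.3f}, machine: {machine_rate:.3f}")
    ```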

  7. Turing may have stuck with the T2 version of the test, in which he is satisfied with a computer with verbal competence instead of a fully performing robot, because of the difficulties in recognizing nonverbal abilities without giving the robot a full human appearance. To test a robot's nonverbal abilities, the player would need to evaluate the robot without ever seeing it, because, since we can't imitate skin, as soon as the robot was spotted the test would be over.

    It is possible to run the game with the robot behind a curtain: for example, the human and the robot could both be asked to put on a marionette show. However, there are artifacts of appearance that would need to be dealt with: the robot would have to move its fingers to pilot the marionette in a fluid, human way, yet if the motions were too perfect or too complex it would again seem inhuman. Perhaps there are tasks in which the aspect of appearance is minimized, but in T2, where only verbal competence is tested, there is a medium (email) in which only verbal information is passed. This greatly facilitates the test, and perhaps some technology analogous to email, but scaled up to something nonverbal, would be necessary to test nonverbal abilities without appearance giving the game away.

  8. Harnad explores the detailed reality of a machine capable of passing the Turing test in his 2008 paper “The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence.” A machine capable of passing the Turing test would be indistinguishable from any other human based on its answers in the equivalent of email communication. Is this enough for us to give the machine credit for ‘thinking’?

    Harnad brings up a good point when he says, “With the Turing Test we have accepted, with Turing, that thinking is as thinking does. But we know that thinkers can and do more than just talk. And it remains what thinkers can do that our candidate must likewise be able to do, not just what they can do verbally.” The machine proposed by Turing is quite limited when you start asking, “If I can ask any question, how is it going to answer me adequately if it’s never been in the outside world?” How would this machine know how to describe any sort of feeling, like how the bitter cold wind makes your eyes tear up when you are standing on top of Mont Orford? Would this machine just deny having the chance to experience one such particular sensation? If the interrogator continued down that line of questioning, I imagine the machine would keep giving the same answer – no, I haven’t had the opportunity to experience that feeling for myself. Eventually this complete lack of sensory experience would seem fishy to the interrogator and, in my opinion, make it increasingly difficult to believe the email correspondent is indistinguishable from another person.

  9. Two points to comment on here:

    ''A virtual-reality simulation [VR] would be no kind of a Turing Test; it would merely be fooling our senses in the VR chamber rather than testing the candidate's real performance capacity in the real world''

    There are some problems here in defining the ''real world.'' We must ask how we navigate the real and the virtual in the first place. When we are experiencing or navigating virtual space, e.g. the internet, we know it is virtual because our senses tell us so. Looking at a video online only satisfies the visual and auditory aspects of the event/experience the video is representing, and even that not fully by any means. But if a virtual reality can adequately ''fool our senses'' to the point where we can't differentiate, where can we draw the line? One possible solution to this is the historical status of the thing. Knowing where things come from, the way they have behaved, and where they have been before the moment you experience them allows us to know what they are. It is not just the way something is behaving that tells you what it is (i.e. virtual or real) but the way it has behaved or arisen in the past.

    The second point is to Harnad's endorsement of Turing's rejection of randomness as free will and his differentiation of autonomy from randomness.
    ''But surely an even more important feature for a Turing Test candidate than a random element or statistical functions would be autonomy in the world - which is something a T3 robot has a good deal more of than T2 pen pal.''

    I wonder where and how Harnad makes the distinction between autonomy and randomness, outside of an intuition that our supposed free will in the world is anything but randomness that is attributed meaning. For me, randomness and autonomy differ only in that one has meaning (autonomy) and the other doesn't (randomness). Both are unpredictable as to what will happen next, and both defy any regular repeating patterns. With something as loose as meaning differentiating the two, it seems the distinction is arbitrary and based on a feeling/intuition that human unpredictability has meaning, that it is teleological.

  10. Turing's paper is quite misleading in how he phrases the TT as an imitation game. I think instead of the game part (which many people, I too, focused on), its emphasis is actually on imitation: the machine is not trying to use the same mechanism as the brain. It can use whatever algorithm, as long as it will ALWAYS give you the impression that it is a human. In this paper, Prof. Harnad suggests what the TT actually should be: a lifetime pen pal whom you never doubt is a human turns out to be a machine. Thus the machine cannot just fool the judge for 10 minutes, but for a lifetime.

    "So too does the question of whether a TT-passing machine would have any feelings at all (whether free or otherwise)."

    The TT does not answer the question of whether a machine will ever gain feelings. And, as suggested, the TT is the best we can do. We cannot go any further to investigate whether a machine has feelings or not, because that is impossible due to the other-minds problem: internal mental states are observable only by the feeler itself. Even if we look into the electric wires (or the brain) and poke around, the best we can get is a reaction ("Ouch!"), not the feeling itself. And one could still be feeling without any reaction.

  11. I agree that T4 and T5 are not worth looking into (yet) because they place focus on an aspect which is dismissible. The form of the machine has no bearing on what it is capable of. T3 is the best level for reverse engineering cognition because it adds sensorimotor capabilities and therefore gives it the ability to respond in a more human way. T4 is preoccupied with the structures being identical to human beings whereas T5 requires the structures be composed of materials indistinguishable from human beings. T2 is underdetermined because it cannot describe experience or explain how it can execute an action. T2 is too limited.

  12. I enjoyed reading Harnad's critique of Turing's paper. Harnad places many of Turing's arguments in the current context of cognitive science. One important point Harnad makes is the hierarchy of Turing Tests. T0 is where a machine can locally perform some arbitrary task indistinguishably from a person. T2 is indistinguishable performance in a verbal task (like emailing), but with the limitation that the candidate can't perceive the environment with sensorimotor capacities (i.e. touching, hearing, smelling, seeing, tasting, and maybe feeling). T3 has those sensorimotor performance capacities; this is the level Harnad thinks the Turing Test needs to aim for when assessing whether a machine can "think". A T4 robot has indistinguishable performance capacity and also the same internal structure and functions as we humans have. T5 is a robot indistinguishable from humans even at the molecular level!
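
    To keep the levels straight, the hierarchy described above can be restated as a small lookup table (labels paraphrased from this comment, not quoted from Harnad's paper):

    ```python
    # The Turing-test hierarchy as summarized in this comment.
    TURING_HIERARCHY = {
        "T0": "performs some arbitrary local task indistinguishably from a person",
        "T2": "indistinguishable verbal (pen-pal/email) performance capacity",
        "T3": "T2 plus indistinguishable sensorimotor (robotic) capacities",
        "T4": "T3 plus indistinguishable internal structure and function",
        "T5": "T4 plus indistinguishability down to the molecular level",
    }

    for level, capacity in TURING_HIERARCHY.items():
        print(f"{level}: {capacity}")
    ```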

    Harnad argues that T3 is the level the Turing Test should aim to evaluate, because our sensorimotor capacities are really what give meaning to our thinking. Meaning is grounded in our sensorimotor experiences, and "our verbal abilities may well be grounded in our nonverbal abilities."

    Harnad further supports the point as he exposes a fatal flaw in T2: "Would it not be a dead give-away if one's email T2 pen-pal proved incapable of ever commenting on the analog family photos we kept inserting with our text?" T3 would be able to pass the TT, but T2 would not!

    Another important element of Harnad's paper is his pointing out the other-minds problem: the question "Can machines think?" is deemed irrelevant by Turing, but Harnad argues that it is instead undecidable by outsiders.
