Saturday 11 January 2014

10e. Harnad, S. (2012) Alan Turing and the “hard” and “easy” problem of cognition: doing and feeling.

Harnad, S. (2012) Alan Turing and the “hard” and “easy” problem of cognition: doing and feeling. [in special issue: Turing Year 2012] Turing100: Essays in Honour of Centenary Turing Year 2012, Summer Issue.



The "easy" problem of cognitive science is explaining how and why we can do what we can do. The "hard" problem is explaining how and why we feel. Turing's methodology for cognitive science (the Turing Test) is based on doing: Design a model that can do anything a human can do, indistinguishably from a human, to a human, and you have explained cognition. Searle has shown that the successful model cannot be solely computational. Sensory-motor robotic capacities are necessary to ground some, at least, of the model's words, in what the robot can do with the things in the world that the words are about. But even grounding is not enough to guarantee that -- nor to explain how and why -- the model feels (if it does). That problem is much harder to solve (and perhaps insoluble).

35 comments:

  1. I think it's getting harder and harder to write Skywriting commentaries without being repetitive as the course goes on. I found this paper a succinct and clear summary of the Turing Test as it relates to the Hard and Easy problems of cognition. I wanted to ask a bit about the following section:

    "Questions arise: (1) Communicate about what? (2) how long? (3) with how many
    humans?
    The answers, of course, are: (1) Communicate about anything that any human can
    communicate about verbally via email, (2) for a lifetime, and (3) with as many people as any human is able to communicate with."

    I am aware that the TT is a bona fide reproduction of a human capacity - that is, the capacity of cognition. And if we want a Turing machine to pass the TT in any respect, it should be T3, with sensorimotor capabilities allowing it to ground symbols. However, ought we not be looking as well at what the machine has "experienced"? A T3 machine that was just built yesterday would not have the experiences of a human; it would not have learned how to employ its grounding capacities to build up a repertoire of experience that could be drawn on in a conversation. Although it might seem banal, it is not entirely trivial to be able to ask a computer about the best restaurant it's ever been to, where it went to school and what it studied, and other distinctly "human" experiences. Nor can we expect that an absence of all such experiences would not at all influence other aspects of the conversation or speech patterns. If we argue that this is not what the TT is designed to evaluate at all, then what are we roping off when we speak of human cognition? A baby has extremely limited cognizing abilities; it is only through living its life (what we deem as an "ordinary life", anyways) that it develops the cognition patterns we see as being "human." A child raised feral might not pass the TT themselves; so in order to "do all we can do", a computer must also have experienced all (or most) that we can experience. (To a computationalist, this can be simulated - which seems to simplify matters.)

    ReplyDelete
    Replies
    1. Time: Real and Virtual -- Memory: Faithful and False

      For Ethan -- our class T3 robot -- to be passing T3, as he is doing, he does not have to have a real past. He could have a virtual past (but that virtual past would not be just computational). He could be only behaving as if he had really had Penny Ellis as his 3rd grade school teacher, because that's the way he was built (not just programmed, built). And if Ethan (though T3) really feels, he would really feel as if he had had Penny Ellis as his 3rd grade school-teacher. If that was just an illusion that we had built into him, then he would certainly feel as if he was telling the truth, even if what he remembered had not really happened. Just a case of False Memory Syndrome.

      Of course, to pass T3 Ethan would have to have enough real memory for what he had said to us as of this year, since we met him.

      One of the pieces of metaphysical superstition that people fall into is to imagine that in order to be what we are at this moment, we had to go through a real-time history. In fact, we did; so in most cases that would be true. But there's nothing metaphysically magic about real time. So just as an instantaneous clone or copy of me would feel as if he had lived my past, but in reality did not, an instantaneous Ethan could be created with a fictional past, though it feels real to him (if he feels).

      So there's nothing magic about real time. All you need is the end-state that was either generated by real time or just generated a moment ago by the T3 designer.

      And the nature of feeling is such that you cannot feel the difference between real memory and false memory, not even an instant after something has happened (as I tried to show in my story about the Xmas trip to the gas station). The only way to settle the matter is with an objective video. The only Cartesian certainty is that you are feeling right now (if you are) and that it feels like what it feels like...

      (You see, there's still something new to say, Dia; though repetition is good too, until everyone gets it; once no one needs it any more, the goals of this course have been met!)

      Delete
    2. I certainly don't think of time as being metaphysically special (or "really experiencing" something being special) - hence my comment about a simulated past being sufficient. My question remains: pure capability is not enough. A human that might have been able to cognize had it lived a life, but has instead been raised in a suspension completely deprived of all sensory experience, is hence unable to cognize in a way that we would recognize, and so would definitely fail the TT. Analogously (since we are not speaking about a human's ability to cognize here), someone might have coded a computer that has the same "potential", but until they also figure out which programs to run (which experiences to make the computer experience) to generate the target end-state, the computer, which might be "able" to cognize just fine, will continue to fail the TT. Doesn't this make the TT a SUFFICIENT but not a NECESSARY condition for proving cognition?

      Delete

    3. If the computer in your analogy could run these programs and still generate the target end-state but fail the TT, then doesn't that mean it is not cognizing? We cognize, and cognitive science aims to reverse-engineer that property; thus far, the TT is the standard used to determine whether it 'cognizes'.

      I still think TT is a necessary condition to proving cognition, but maybe not sufficient. If we could create some robot that would do everything that we can do, then it should be able to pass the TT. However, as stated in the paper:

      “Turing was perfectly aware that generating the capacity to do does not necessarily generate the capacity to feel. He merely pointed out that explaining doing power was the best we could ever expect to do, scientifically, if we wished to explain cognition. The successful TT-passing model may not turn out to be purely computational; it may be both computational and dynamic; but it is still only generating and explaining our doing capacity. It may or may not feel.”

      So to be able to cognize like humans, we would need some way of determining whether the robot can feel – which comes to the hard problem, and it may or may not be solvable.

      Delete
    4. Dia, let's separate what's impractical from what's impossible. It's of course impractical to give a T3 robot a completely fictitious virtual past. The best way to give it a past is through a real history. Ditto for grounding itself: The best way to ground its symbols is through direct trial-and-error experience.

      But it still remains true that if you have a T3 robot -- let's say one that has reached its current grounded state through its real time history and learning -- you could duplicate its structures, functions and current states in an identical T3 robot, which would then have the same forward-going capacity. And this is true whether you imagine that most of its structure and function would be computational or dynamic. The end-state is the end-state. Duplicate it and it's still the same end-state, with the same capability. Ditto if you manage to design an end-state with an equivalent but different fictitious pseudo-past.

      There is no "proving" cognition (except maybe the Cogito, for me). The TT provides evidence of cognition, and in fact T3 is the evidence we go by with one another. So the TT simply notes that we can't ask for more evidence from a robot than we have for one another.

      But that doesn't mean that T3-capacity is either a necessary or a sufficient condition for cognizing. (Humans as well as animals may be cognizing without being able to pass T3.) Ethan (T3) is just the best we can do for contending with the other-minds problem and explaining cognitive capacity (unless you think T4 would be better -- but then you have to say why!).

      Vivian, if Dia's robot fails T3 it fails, and that's all there is to it. All bets are off about what end-state the robot may or may not have been in: it fails T3. (Dia still keeps talking about computation, but I have to remind you both that T3 cannot be just computation.)

      Determining whether the (successful) T3 (e.g., Ethan) feels is not the hard problem but the other-minds problem. (Important to remember that.) And passing T3 is the best we can do for that. But the hard problem would be there, and unsolved, even if a god guaranteed to us that Ethan feels: We still could not explain how or why. We could only explain how and why Ethan can do what he can do.

      Delete
    5. I think I've gotten the other minds problem mixed in with the hard problem. The hard problem is explaining how and why we feel, which is not solvable because there is no way to make some robot that we know can feel BECAUSE OF the other minds problem (so we'll never know if it really does feel)? If some higher power could guarantee that robot Ethan feels, and we have reverse engineered this robot, wouldn't there be something in this feeling-robot-Ethan that would be different from the non-feeling-robot-Ethan? Wouldn't that be an indication that we have solved the hard problem, since we have overcome the other-minds problem?

      Delete
    6. Causal explanation

      Vivian, even if a god guaranteed that this T3 (or T4) robot feels and that one does not, thereby sparing us from the other-minds problem, that still would not solve the hard problem of how and why the one that feels, feels. Yes, we know the difference between the T3 (or T4) that feels and the one that doesn't, but that doesn't tell us how or why that difference makes that difference. It's the cause, but we still don't understand the causality.

      We always knew (because T3 and T4 are both underdetermined -- but especially T3 -- and because the TT is a test of "weak equivalence" [same outputs for the same inputs]) that it was possible that some T3's might feel and others might not (or that all, or none, might feel). (We don't worry about that because it's hard enough to solve the "easy" problem of reverse-engineering just one successful T3, and the other-minds problem prevents us from knowing whether or not it really feels.)

      But even if we could tell apart the feeling T3s (or T4s) and the zombie T3s (or T4s), all TTs are based on differences in doing. So whatever differences there turned out to be between the feeling and zombie T3s (or T4s), those differences would still be differences in doings -- internal doings in this case, which would make even the difference between a feeling T3 and a zombie T3 into a difference that is more "T4-like," because it is internal doing rather than external doing (behavior).

      Either way, however, we would not be able to explain how or why those extra internal doings cause feeling. We could only say that they do indeed cause feeling, somehow, for some reason, not how or why. Since the feeling T3s (or T4s) and the zombie T3s (or T4s) would both be functionally (causally) equivalent, we still could not explain how or why the god-certified feeling ones would have an adaptive advantage over the zombies, nor what causal function the feeling serves, other than just being there!

      And it would still be a mystery how and why the internal difference that correlated with the god-certified feeling would cause feeling. (Don't forget that we already know that the brain causes feeling, but not even T5 plus heterophenomenology tells us how or why.) So we could certainly say, truly, that the internal difference between the zombie T3s (or T4) and the feeling T3s (or T4s) must be what causes feeling -- but we still would not be able to explain how, or why.

      Perhaps this is a good time to remind you all again that there is a perfectly good solution to the hard problem: psychokinesis: Feeling is an independent fifth force in the universe, alongside electromagnetism, gravity, and the leptonic and hadronic forces.

      Trouble is that all evidence to date indicates that this solution is wrong. There is no fifth "psychic" force. The other four forces causally account for everything. There's no causal room for more forces.

      So feeling in biological organisms seems to be doomed to be causally superfluous, hence causally inexplicable, even if we know (with some divine help) what correlates (hence must cause) it. This is what makes the hard problem so hard.

      Delete
  2. “Turing was perfectly aware that generating the capacity to do does not necessarily generate the capacity to feel…It may or may not feel.”

    This seems like a MAJOR problem to me!!

    Quick recap: cognitive science is interested in the “easy problem”: how and why can organisms do what they do? And it attempts to answer this problem by reverse-engineering cognition (aka building a robot that passes T3, which means it is so good at doing what we can do that it is indistinguishable from a human for its entire lifetime). Cognitive science doesn’t tackle the hard problem: how and why do organisms feel?
    Harnad notes that a T3 robot may not feel. If the T3 robot cannot feel, this raises two questions for me:

    (a) Is the T3 robot providing an explanation for cognition? Cognition is felt cognition—it feels like something to understand English, to see the colour red, to play chess. If cognition is felt cognition, then the answer must be a resounding NO. Which means that the Turing Test won’t be able to tell us definitively if we have provided a causal explanation for cognition.

    (b) Doesn’t this mean that a T2 robot could pass T2 without being a T3 robot? Remember that a robot passes T2 if it is indistinguishable from a human in an email exchange. Harnad suggests that T2 is the wrong level of the TT because of the symbol-grounding problem: a computer without sensorimotor capacities could never be able to understand an email because the words wouldn’t be meaningful. We need to interact with the world via senses for words to gain meaning. Meaning and feeling are tied—it feels like something to understand a word! But if it is possible that a robot could pass T3 without feeling, then we don’t need meaning or symbol grounding to pass T2!

    What these questions suggest to me is that it isn’t so easy to separate the hard and easy problems. Throughout the course so far, we have assumed that the capacity to do what we can do goes together with feeling.

    ReplyDelete
    Replies
    1. Grounding and Feeling

      Jessica, good points. Let me take them one at a time:

      (1) There are things we cannot know, because of the other-minds problem. (There's no point objecting about those, because there's nothing we can do about it.)

      (2) One of those things we can't know is whether there can be zombies: We can never know (without divine help), because of the other-minds problem. ("Stevan says" there can't be zombies.)

      (3) That is why reverse-engineering Turing-indistinguishable T3 capacities is the only method open to CogSci (augmented by the internal neural "micro-capacities" of T4 or even T5, if we wish).

      (4) T2 capacity is clearly not enough, because we are capable of so much more than just verbalizing (and because of the need for grounding symbols) but it's still possible that T2 is a strong enough test, because nothing could pass T2 unless it was grounded, and hence could have passed T3, if we had tested it. (But just grounding is not meaning unless it is felt.)

      (5) The only T2-passing that would definitely not be good enough would be T2 passed by computation alone, because of Searle's Periscope, which penetrates the usually impenetrable other-minds barrier, using the implementation-independence of computation, and shows that in that special case (only), the T2-passer would not be understanding, because it feels like something to understand, and Searle would not be feeling that something.

      (6) But apart from that, the other-minds barrier would remain intact, so we could not know whether any other kind of T-passer felt. In other words, whether or not there can be zombies, there's no way of knowing whether or not a T-passer feels (except in the special case of T2, if it could be passed by computation alone).

      (7) ("Stevan says" that T2 cannot be passed by computation alone, because of the symbol grounding problem.)

      (8) But for any other kind of T-passing, the other-minds barrier stands. And we will not only be unable to know, because of the other-minds problem, whether or not feeling kicks in for some T3 or T4 or T5, but even if feeling does kick in (and even if we could know when and where), we still could not explain how or why it kicks in, because of the hard problem.

      (9) (Feeling probably kicks in very early in phylogenesis, already there in clams, way before human T3 level -- but we cannot Turing-test clams.)

      (10) In a dynamical physical and biological world of doing, there is no independent causal role for feeling except as a fifth fundamental force of nature -- alongside electromagnetism, gravitation, the leptonic force and the hadronic force -- but all evidence indicates that there is no fifth "psychic force." Everything is fully explained by the four known forces.

      (11) It (sometimes) feels as if we do things purely because we feel like it. The fact that it feels as if we do things purely because we feel like it is not an illusion.

      (12) But that we really do things purely because we feel like it probably is an illusion (like the phantom tooth ache): Feeling is not a force, even though it feels like one.

      Delete
  3. “I want to enlarge on just one thread in all he has done: The Turing Test set the agenda for what later came to be called ‘cognitive science’ -- the reverse-engineering of the capacity of humans (and other animals) to think.”
    I just wanted to briefly comment on this statement because I think that it is a bit of an exaggeration; I do not think that cognitive science is solely based on the reverse-engineering of the capacity of human thinking. A lot of cognitive science is completely disconnected from this notion (whether purposely or not). For example, a lot of cognitive science examines the performance capacities of humans (such as psychometric studies on attention or memory) without actually trying to explain why these phenomena occur, and certainly without trying to reverse-engineer the capacity of humans to think.
    “The successful TT-passing model may not turn out to be purely computational; it may be both computational and dynamic; but it is still only generating and explaining our doing capacity. It may or may not feel.”
    I agree and think that the successful TT-passing model will be both computational and dynamic. That being said, I do not see how we could create a successful TT-passing model without also including the capacity to feel, especially if we want to successfully reverse-engineer human capacity. It may be the case (and I think it will be the case) that we will only be able to create a machine that explains our ‘doing’ capacity. Realistically, I do not think that it will ever be possible to reverse-engineer the human capacity to feel, partly due to the other-minds problem; the “hard problem” may in fact be the “impossible problem.” Nevertheless, if we can at least create a TT-passing model that can explain all of our ‘doing’ capacity, I feel that cognitive science will have made a huge advancement which will allow for other cognitive capacities to be discovered and examined.

    ReplyDelete
    Replies
    1. Tomasz, you are right that CogSci first needs to find out what we can do, before reverse-engineering how and why. And that includes our psychophysical capacities.

      The only hope for designing a mechanism that generates feeling is if only a feeling mechanism can generate our doing capacity. But even then, we can't know whether that's true, nor whether it feels; and if we did know it feels, we still could not explain how or why.

      Delete
    2. I wonder if we could be able to BEGIN to explain the why using a similar argument as for Universal Grammar. It was proposed that it was a biological advantage to have UG and so it managed to stay in and spread throughout the gene pool of offspring. Could we not suggest the same thing for our capacity to feel? Wouldn't it make sense that it was an adaptive feature for survival?

      Delete
    3. In reply to Corinne: but how would feeling be an adaptive feature for survival? If we correctly manipulate objects in the environment and react appropriately in response to certain stimuli in order to survive, what need is there for us to feel that we are doing that? It's interesting that you brought up Universal Grammar, because that is a tool that we can use to communicate and collaborate with each other and thus help our species survive and reproduce. However, we could argue that our species could survive using language purely computationally (when I tell you not to pick the poisonous berries, you recognize those symbols and act accordingly). What use is there in being able to feel and experience the fact that I just told you not to pick those berries? It's a shame that we do not have a clear, scientifically proven reason for why we feel, because that would answer countless questions about life and how we work.

      Delete
    4. That is an interesting point, and the only thing I can think to reply is motivation. This too opens up another door of mysteries, but to answer your specific question, I believe there is a possibility that feeling is a biological advantage because we need to feel to develop certain skills. To take a behaviorist stance, if you did not derive pleasure from eating something, why would you continue doing it? If we never felt anything, we would never do anything. But it feels good to eat something when you are hungry, so it acts as a reinforcement. In the same way, feeling sick when you have eaten something you shouldn't have will deter you from eating the same thing in the future.

      Delete
  4. I found this article such a good summary of the course so far.
    I have a brief question. Suppose we were able to build a robot that felt, and suppose we were able to believe that the robot felt, how do we know that we have in fact reverse engineered the same mechanism as our own?
    In philosophy of mind the idea of "multiple realizability" I think really just means that the same outcome can be produced in multiple ways. This is precisely what I mean, even if we succeeded in reverse engineering feeling, would it provide insight into how we feel? Or would it just provide a mechanism that produces feeling?

    ReplyDelete
    Replies
    1. Stephanie, you are asking about underdetermination. All (nontrivial) causal theories are underdetermined by evidence, whether it's atomic physics or reverse-bioengineering. But if the evidence is challenging enough, coming up with even one causal theory that explains it all will be a big enough accomplishment. And if there's several rival theories that all predict and explain all possible data, then only the gods can know which one is right!

      Another way to put it is that "multiple realizability" is a form of underdetermination (and so is weak equivalence and even implementation-independence, in computation).

      The only added twist is that if you get the wrong theory of the atom, nothing is at stake if you explain all the data. We have no idea what, if anything, we might have left out. But if T3 (or T4) doesn't feel, we know exactly what we've left out.

      Delete
  5. “The successful TT-passing model may not turn out to be purely computational; it may be both computational and dynamic; but it is still only generating and explaining our doing capacity. It may or may not feel.”

    This article is a very good summary of many of the topics covered throughout this semester. The main point that Harnad develops in this article is that the easy problem – explaining how and why we can do things – will be solved, but that this is not the complete story for reverse engineering cognition. We are still missing explaining how and why we feel – this is the hard problem of consciousness.
    The other minds problem is that we will never be able to know if something or someone other than ourselves can feel. However, empathy is a feeling I get often because I believe so strongly that other things are able to feel. I feel as though if we get too discouraged by the other minds problem then we are left with nothing. As a feeling being I have intuition and this intuition tells me that other beings are able to feel in the same way I am.
    Other scientific disciplines have many philosophical problems to face that are similar to the other minds problem and yet they can still progress. It seems that if we are going to get anywhere with cognitive science, we need to trust our intuition that other beings can feel. Yes – we will never know if we have fully reverse engineered cognition completely, but at least producing a robot with sensorimotor capacities that is able to do everything humans are able to do is a step in the right direction. We should not assume that it can feel, but since this is a problem we will never be able to solve, then maybe we should move past it.

    ReplyDelete
    Replies
    1. Danielle, you are quite right that the other-minds problem should not stop us -- and it doesn't. We do perceive that others feel (i.e., we do have what you call that "intuition"); it's called "mind-reading," we probably evolved it, and we do it all the time, successfully, with one another. It's the very same thing as Turing-testing, and this is also the reason Turing-testing is the way to test whether we've successfully reverse-engineered the mind.

      But the other-minds problem is not the "hard problem," which is not the problem of knowing whether other organisms feel, but explaining how and why they feel -- rather than just do. That should not discourage us either, since we have not yet even solved the "easy problem" of explaining how and why we can do everything we can do (and that's probably all that CogSci can ever hope to do).

      I wonder what you mean, though, about other sciences having their equivalent of either the other-minds problem or the hard problem: Maybe underdetermination of theories by data -- the fact that more than one theory may be able to explain all the data -- is a bit like the other-minds problem: You can never be sure you've found the right theory. But even so, what would be the counterpart, in other sciences, of the thing that you have failed to explain if you found the wrong theory, namely, in CogSci: feeling?

      And what's the equivalent of CogSci's hard problem in other sciences? It seems to me that the other sciences have only the "easy problem," and underdetermination. There is no other phenomenon, like feeling, of which we can be sure it really exists, but we cannot explain, causally, how and why...

      Delete
    2. Yes, that is all true. I believe that the main issue I have is that (as you pointed out) we have not solved the easy problem yet. Therefore before worrying about the hard problem, I think we need to take the tasks one step at a time. Maybe once we solve the easy problem, the solution to the hard problem will reveal itself or show a method that gets us closer to it – but until then the speculation seems redundant and that we are getting ahead of ourselves.
      By saying the other sciences have the equivalent of the other minds problem, I was mainly referring to Heidegger and how the sciences have not explained the nature of existence yet. Most of the sciences are reducible to physics, and yet physics cannot be sure of the existence of anything. Just as Descartes has shown -- we cannot be sure of anything except our own existence, so how can we be sure that there is a world for physics to study? This is very philosophical, but I guess what I’m trying to say is that other sciences have exactly the other minds problem but in a broader sense -- instead of being unsure of the existence of other minds, they are unsure of the existence of a world altogether.

      Delete
  6. This article would have been a great resource for my midterm had I been eager enough to look a few weeks beyond where we were in the course. Alas, this is not the case.

    Descartes' "Cogito" finally made sense to me due to the explanation provided in this reading.
    "I cannot doubt that what it feels like is what it feels like right now."
    Since this idea is often coupled with the incomprehensible idea that I might not really be a human, or live on a planet that circles a sun (I can't know for sure what is real), it was a bit tricky to move past that huge question mark of a concept onto the one exception stated above. If I can't be sure of anything, then how can I be sure of what I'm feeling?

    There was a study done at MIT that looked at human-machine interaction. Children about age 7 were asked to hold each of the following things upside down for as long as they wanted: a Barbie doll, a Furby toy, and a live gerbil. The Furby toy has sensors in it, so when it is upside down, it says it's scared. The children had no qualms about holding Barbie upside down for any length of time, and they quickly became uncomfortable holding the gerbil upside down for more than a handful of seconds. What about the Furby, the thing that's telling you it doesn't want to be upside down? The kids held it upside down for less time than Barbie, more than the gerbil, and most important, the Furby's time upside down was closer to the gerbil's. The children felt that the Furby was alive, -ish, that he felt pain, maybe, and it felt wrong to hurt him, if he could even hurt.

    The hard problem reminds me of an asymptote - we may never reach the answer, but we will damn well try to approach it as best as we can. What happens when we develop artificial intelligence that passes the Turing Test? Where, or how, do you start to evaluate feeling? We want it to be a product of the physical environment, to exist outside of ourselves, obeying cause and effect. Until then, it is our own feelings, like the feelings of the children in the MIT study, that inform us about the feelings of outside beings. We can rationalize all we want about why it's no big deal to squish an ant, yet the feeling in the pit of my stomach is grinding my reasoning to a halt. Maybe the code of feelings is not analogous to an empirical, evidence-based mindset.

    ReplyDelete
  7. This article was very satisfying to read because of how wholeheartedly I agree with it, just as I did after reading Searle's Chinese Room Argument. I have been struggling to wrap my mind around how cognition can be computation, so it is a comfort to know that Turing himself likely did not agree with this. I even discussed this point in my midterm (that cognition cannot be computation), but I am wondering about the language I used. We have that it feels like something to do or know something. We know what it is like to know English and we know what it feels like to see red. Can this be considered awareness? Is there a difference between feeling and awareness?


    ReplyDelete
  8. This article was a very nice summary of most of the course, and I do agree with the central premise. The one thing that is still fuzzy for me is the concept of "sensorimotor dynamics" as an alternative to computation. I think I get what they do: they are the bridge between the external world and computation. They are an analog brain/world interface that serves to ground referents so that symbols used in later computation actually refer to something other than different symbols. Literally, they are the interactions between your body, your perception, and the environment.

    However, I can't shake the feeling that sensorimotor dynamics are more or less peripheral, because I just can't imagine neurons doing anything other than computation. Categorization, which Harnad considers the basis of cognition, requires detection of invariant features, which in turn requires the ability to abstract away features from a perceptual scene in the first place. Can abstraction really be done by anything other than computation? What exactly would an analog abstraction process look like?

    I'm not a computationalist, by the way; I really am just confused about all this, and the more I think about it the more confusing it is. Every time I feel like I'm close to (really) understanding any of this, it just dissolves in my hand.

    ReplyDelete
    Replies
    1. “Categorization… requires detection of invariant features, which in turn requires the ability to abstract away features from a perceptual scene in the first place. Can abstraction really be done by anything other than computation?”

      Perhaps, I am misinterpreting your question, but I do not believe abstraction is solely done by computation. Inputs must first be received by the T3 robot via sensorimotor capacities. Then, if the robot is trying to find invariant features in the inputs it received, it can do so by a different mechanism.

      “The one thing that is still fuzzy for me is the concept of ‘sensorimotor dynamics’ as an alternative to computation.”

      I don’t think that sensorimotor dynamics are an alternative to computation. Rather, I see sensorimotor capacities as necessary for T3 to do what it needs to do. To pass T3, a machine would need to do anything that a human can. This includes doing things verbally and doing things in the real world (such as categorizing objects). To do things in the real world, T3 needs to receive inputs via sensorimotor capacities. Just as T2 needs written inputs (for example through email), T3 needs inputs as well. The inputs that T3 receives from the real world can then be manipulated in whatever way necessary (which could be computation). So, sensorimotor capacities allow the robot to receive the inputs necessary to do the computations to categorize objects.

      Delete
  9. It could be that it's quite late at night, or perhaps I've read too many Turing papers, but right now what's on my mind is the set-up of the hard problem - how and why we feel. How different are these two questions, how and why? I don't think they necessarily get at the same essential inquiry. For example, when a child asks "Why is the sky blue?", an adult who tries to give an honest, scientifically supported answer will actually be answering the 'how' version of the question - i.e., they may talk about the composition of the atmosphere and how light is scattered in such a way as to render the sky blue. But that wouldn't actually answer the question 'why' - which is possibly why certain children will tend to continue asking "why" - repeatedly. We all must have had this experience, where we keep answering this 'why' question to the point of absurdity, until nothing really makes sense anymore - could it be that we may be getting too close to the unanswerable when we try to answer 'why' we feel? I'm not saying it's absurd to ask 'why do we feel?', but maybe it's the kind of question we can only answer in the 'how' form (and even that may be impossible).

    ReplyDelete
    Replies
    1. Hey Esther,

      I really like this inquiry, and have often wondered about it myself. After giving it a little bit of thought, I always return to the notion of cause and effect. When looking at the 'how' questions, it seems like the subject of inquiry, namely feeling, plays the role of the effect. In this case, a researcher looks for a causal mechanism in order to explain, scientifically, what processes lead to a particular phenomenon, such as feeling. On the other hand, when one inquires into the nature of 'why', one seems to put the subject (in this case feeling) into the causal position, and ask what purpose, or effect, the subject might have, given that it exists and probably has one.

      When considering the difference between these two questions, they become easier to tell apart when placed in a scientific context: when we consider the design and usage of the human eye, we might first ask how it comes into existence. This might require a thorough explanation of genetics and cellular development. In contrast, if we are to ask why the eye has come into existence, an evolutionary explanation for its selective advantage in human survival might suffice.

      Delete
    2. This comment has been removed by the author.

      Delete
    3. This comment has been removed by the author.

      Delete
    4. Hey Adam,
      The cause-and-effect way of looking at things could help clear it up, but on the other hand you could keep asking why. It's easy to show a chain of events / causal mechanism, but I also think it's possible to keep pushing the envelope when it comes to questioning the original cause of something. E.g., if you're going to give an evolutionary explanation for the eye, you could also ask 'why' about the conditions that gave rise to the evolutionary context, and when given the answer to that, keep asking why. I'm not sure - maybe you could keep asking why all the way back to the Big Bang, and then I don't know where you would go after that. The more I think about it, the more I empathize with and understand children who keep asking why. As adults we're satisfied with the answer to this question at a certain point, depending on our field of study or interest, I guess, but I think there's no limit to how far we can take it.

      Delete
    5. About the article:
      Well, “Alan Turing and the “Hard” and “Easy” problem of Cognition: Doing and Feeling” just summarized the course.

      To Esther and Adam:
      I've found myself in the same position: why at all is it that we feel? I probably won’t succeed, and I am sure Prof. Harnad will argue that what I think only classifies as a “just-so story.” But feeling seems to me a way of permitting bonding, not exclusively among human beings but also between human beings and animals (even if our society has managed to unjustly objectify animals). I fail to see how bonding, and having empathy for someone else, can classify only as performance capacity. When I think more about it, I think any question about ‘why’ is tantamount to asking ‘why do we exist at all’. Unlike Adam, I don’t think “an evolutionary explanation for [something’s] selective advantage” will ever be sufficient. If we finally accept that the feeling problem belongs to the existential questions that philosophers have failed to explain, that no one can explain, then we can agree with Prof. Harnad, who has persistently stated that this question is simply impossible to answer.
      Delete
    6. Esther I didn't see your answer before I posted mine, but I agree.

      Delete
  10. This essay gives what I see as kind of a wrap-up of this course so far. We are trying to figure out the causal mechanism that allows us to do what we do. Turing came up with a way to test something to see if it acts enough like a human acts that it fools all of us. The idea that this could be done by symbol manipulation (computation) alone was shown to be ridiculous by Searle because the symbols need to have meanings or at least have something they refer to.
    For all this, though, Turing's test does not prove that something that passes it is capable of feeling. I myself am not comfortable even considering the question, because at the end of the day the only thing I can be certain of, according to Descartes, is that I cannot doubt the existence of my own thoughts and feelings, because even that doubting is thinking. It feels like something to feel, and that's kind of all there is.

    ReplyDelete
  11. This paper provided a really clear, understandable review of many of the topics we covered in this class. It also answered several questions I was still stewing on, and clarified some things I hadn’t fully understood, especially about the symbol grounding problem in particular. This straightforward description of the symbol grounding problem as “what is missing to make symbols meaningful”, accompanied with the presented examples, highlighted to me some misunderstandings I had about the symbol grounding problem, namely how universal of a problem it is in language, and how much it is a problem of categorization.

    As a side note, maybe it would be interesting to present this paper to students at both the very beginning and very end of the course? Sort of like the introductory and conclusion paragraphs of a very very long essay. I think referencing this paper throughout the course would have helped me to better understand how different concepts fit together and related to each other, and the sort of “kid sib” explanations presented in this paper would have been a clarifying accompaniment to some of the more complex papers assigned throughout the semester.

    ReplyDelete
  12. “The successful TT-passing model may not turn out to be purely computational; it may be both computational and dynamic; but it is still only generating and explaining our doing capacity. It may or may not feel.”

    It may be both computational and dynamic, meaning this machine would have all our know-how coming from our computational abilities and our sensorimotor grounding. This means the machine would have the ability to know what something means, but will it really be able to feel what knowing what something means feels like? Personally, I don’t think it would be possible for this machine to feel. I don’t think that feeling comes from the ability to perform as we are able to perform; I think that it is either there or it isn’t, and we can’t just create something from scratch and produce feeling, especially when we don’t even know how and why we feel. The how is more important than the why in this case.

    ReplyDelete
  13. I agree that Turing is not a computationalist, even though the Turing test as described in his paper, when taken the wrong way, might seem to suggest that we can explain cognition using only computation and that feeling is not necessary for cognition. I definitely read Turing that way initially. However, Turing doesn't exclude the possibility of feeling, which leaves room for Searle's conclusion. If we build a Turing Test-passing machine (T2), the possibility remains open that that machine might feel, or even that it must feel in order to pass the test (as of yet, we don't know).

    In this paper, Harnad ties it all together. Most likely, symbols have to be grounded in sensorimotor capacity/categorization ability, so that the T3-passer can actually talk about the entire scope of things that a human is capable of talking about in the same way. I'm guessing that Turing did not have in mind that the T2-passer might need to have sensorimotor capacities, but that possibility is certainly left open by his formulation.

    "The successful TT-passing model may not turn out to be purely computational; it may be both computational and dynamic; but it is still only generating and explaining our doing capacity. It may or may not feel."

    I was confused about this point for a while. I figured that if we create a T3-passer that happens to do everything we can do as well as feel, we must have explained how and why we feel as well. But because of the other-minds problem, we can't know whether it is feeling, or whether it is feeling what we feel when we do the same things.

    ReplyDelete
  14. "What is thinking? It is not something we can observe. It goes on in our heads. We do it, but we don't know how we do it. We are waiting for cognitive science to explain to us how we -- or rather our brains -- do it."

    So the hard problem only applies to feeling, right? Not thinking. I have faith that thinking will be explained by science more and more, within the framework of the four forces (gravitational, electromagnetic, strong and weak nuclear). Neuroscience has made many advances over the last 50 years, and imaging studies, although they rely on neural correlates for their conclusions, hold great promise. But then, putting the other-minds problem aside, why is it impossible to also explain how we feel?

    I think the whys of feeling and thinking are the harder problem. And as there are probably multiple reasons for many things in life, I don't think we'll find one definitive answer to this question.

    ReplyDelete