Saturday 11 January 2014

10d. Harnad, S. & Scherzer, P. (2008) Spielberg's AI: Another Cuddly No-Brainer.

Harnad, S. & Scherzer, P. (2008) Spielberg's AI: Another Cuddly No-Brainer. Artificial Intelligence in Medicine 44(2): 83-89

Consciousness is feeling, and the problem of consciousness is the problem of explaining how and why some of the functions underlying some of our performance capacities are felt rather than just “functed.” But unless we are prepared to assign to feeling a telekinetic power (which all evidence contradicts), feeling cannot be assigned any causal power at all. We cannot explain how or why we feel. Hence the empirical target of cognitive science can only be to scale up to the robotic Turing Test, which is to explain all of our performance capacity, but without explaining consciousness or incorporating it in any way in our functional explanation.




43 comments:

  1. "Now exactly what does it mean to be "programmed" to love?"

    There is a distinction between being "programmed to love," being "programmed to be able to love," and being "programmed to be able to feel." I'm not sure what the driving point of the distinction between the three is in this paper - it is mentioned that "programmed to love" (one specific person) is being set aside because of issues of free will - but isn't that the crux of the issue? If we had the ability to program a brain - biological or digital - to love a specific person, might this not indicate that we understand the details of its ability to love well enough to make it feel love towards any other entity? This might require us first to be able to create an entity that can love in general, but a test of our abilities would be to manipulate this entity's "code" to cause it to display love towards another given entity (although we are limited to judging its performance in this matter, of course, as per the other-minds problem). Conversely, we might be able to program something to feel, but with "love," for some reason, exempt - an emotion it is not able to feel. There are humans for whom this is the case; why not robots? Would this mean we "almost" passed the Turing Test? Or do we see every emotion as being equally important to reproducing the human capacity to cognize? Is every emotion vital to our being human and hence required in a TT-passing robot?

    1. Feeling vs. Doing

      Dia, I think you have not yet quite got the point.

      Forget about love in particular, and about either programming or dynamically designing a robot to do or feel anything in particular:

      To see the real problem, any feeling will do, including "loving sweets" or "feeling warm."

      The point is that if you stipulate that you have built a T3 robot that can love sweets or feel warm, you've bypassed the other-minds problem (by fiat) and begged the question of how you know that your T3 feels (rather than just behaving as if it does).

      And that's still without touching the "hard problem" of explaining exactly how and why it feels -- for which I hope by now everyone realizes that the right answer is not: "I know how and why my T3 can do everything it can do, because I'm the one that reverse-engineered and then built it!". The problem is that feeling is not doing. If feeling were just doing (dynamics), there would be neither an other-minds problem nor a hard problem: Feeling would be an observable property of the world, like moving -- maybe even an independent physical force, like gravitation or electromagnetism. Then the hard problem would just become part of the easy problem.

      But feeling is not observable (except by the Cartesian feeler), so there is an other-minds problem.

      And feeling is not an independent dynamical force, so there is a hard problem of explaining its causal role.

  2. Why do we not simply do? Why must feeling accompany doing? This is the hard problem, and I think the film AI gets at part of it with the difference between the old and new generations of robots. The old generation of robots simply do; they are limited to this by their programming (or at least that's what I intuit from the clip, and what I remember of the movie from when I saw it years ago). There is no problem of other minds with these robots. Humans in the film feel no real connection to them, nor do they have a problem destroying them.

    However, the little boy robot does feel. They chose not to kill him because they think he might be human, and therefore might feel. From his reactions and behavior, the crowd assumes he feels. This raises moral concerns, whereas destroying the old feelingless robots does not.

    A question and comment on the following excerpt:
    "A counterintuition immediately suggests itself: 'Surely I am conscious of things I don't feel!' This is easily resolved once one tries to think of an actual counterexample of something one is conscious of but does not feel (and fails every time). The positive examples of feeling are easy, and go far beyond just emotions and sensations: I feel pain, hunger and fear."

    Right now I'm hungry, and I know what this feels like. I'll eat, and I'll feel satiated. This is another positive example. And at the same time, I'll know I'm not hungry by how I feel. I'll know to stop eating based on this. Is this not a counterexample of something I am conscious of, but not feeling? That I am not feeling hungry, but conscious and aware of this?

    1. What it feels like to not feel anything at all...

      Seeing red is different from seeing green. And seeing red is the same as not-seeing green. These are just categories and their complements, positive and negative instances of one another. (Green is not-red and red is not-green.) No problem.

      But now try this with the category "feeling" itself. We all think we know perfectly well what it feels like to feel (anything): red, not-red, green, not-green; and that feeling red is not-feeling green.

      But what does it feel like to not-feel anything at all? The analogy suggests itself: "Well, it's something like what it feels like to not-feel green when I am seeing red" -- but is it?

      With all of the other X/not-Y examples, not-feeling Y is always the same as feeling X: That's the way we feel the "absence" of Y. (I won't complicate the picture by pointing out that every positive feeling, X, is in fact "negative evidence" not just for Y, but for every other feeling: red isn't just an instance of not-green, or not-seeing green, but also not-blue, not-yellow, not... and (why not?) not ice-cream too!)

      Funny kind of negative evidence, but never mind. The analogy does not work for "what it feels like to not feel anything at all." It doesn't even work if you try it by imagining feeling less and less, till you feel almost nothing: what it feels like to feel nothing would then just be that one last step -- if you could take it. But that's a bit like being a tiny bit pregnant: If you're pregnant, you're pregnant -- 100% pregnant (just early in gestation).

      Well, consciousness (feeling) is like that too. You can feel this or that, this and not-that; you can also feel more or feel less; but you can't feel nothing at all. When there's nothing at all being felt, you're not there! There's nothing at all being felt inside a stone, or a computer -- or (I hope) a plant (to take a living example). The stone, computer and plant are not feeling what it feels like to not feel anything. They are simply not feeling...


  3. “There would have been ways to make "AI" less of a no-brainer. The ambiguity could have been about something much deeper than metal: It could have been about whether other systems really do feel, or just act as if they feel, and how we could possibly know that, or tell the difference, and what difference that difference could really make -- but that film would have had to be called "TT" (for Turing Test) rather than "AI" or "ET," and it would have had to show (while keeping in touch with our "cuddly" feelings) how we are exactly in the same boat when we ask this question about one another as when we ask it about "robots."”

    Spielberg's definition of the TT does not align with what we have learned in class. He seems to suggest that knowing whether or not other systems really do feel would constitute the TT -- but to us, that is the other-minds problem. The TT is not able to test whether or not something else feels, but merely the capacity to do -- like the old robots in the movie, which can do but cannot 'feel'.

    On another note, assuming that this boy feels and that we do not have interference from the other-minds problem, I think this movie highlights the fact that feeling is what makes us human. Being able to feel gives us different motivations, making each of us unique. If we could simulate those actions without having felt them, would the simulations from each of us not be the same?

    1. What Makes Us Human...

      Vivian: "I think this movie highlights the fact that feeling is what makes us human."

      If there is one profound moral lesson that I hope you will all take with you from this course for the rest of your lives, it is that feeling is definitely not what makes us human -- because non-human animals feel too. Feeling is what makes us conscious, which means sentient, which means feeling: All feeling animals, human and non-human, are sentient. And feeling means you can be hurt. So if it is wrong (inhumane) to hurt a feeling being needlessly, then it is inhumane to hurt a feeling non-human being needlessly. And what makes human beings human (besides language) is not being sentient, but being humane toward sentient beings.

      Vivian: "Being able to feel gives us different motivations, making each of us unique."

      For what it's worth, each individual feeling organism is "unique." (So is each individual non-feeling living organism, and each individual non-living organism and non-organism or object.)

      But what is (almost) unique about human beings (besides language) is that they can be humane (though I believe that other animals can also be humane, and not only toward their own young, or mates, or kin, or conspecifics, but toward other species too).

      The cult of "uniqueness," though, is undoubtedly uniquely human... (It would be a lot better for all other species, and for many humans too, if we cultivated our humaneness rather than just our uniqueness.)

      Vivian: "If we could simulate those actions without having felt them, would the simulations from each of us not be the same?"

      I'm not sure what you're asking here, but if it's about the difference between acting happy and feeling happy, I think we would all agree it's not the same (though sometimes going through the motions can get some of the feelings started too).

      The other-minds problem stands, however: Acting as if you feel something (but not feeling it) is not the same as feeling that something. -- And if it were all just acting and no feeling, then you'd be a zombie...

    2. The first part of the article summarizes key points that we learned throughout the class. Now when we think of the boy robot programmed to love, it makes me wonder whether we could program a robot to respond as if it were feeling. However, because of the other-minds problem, we could never know if the boy was actually feeling or if he was just acting out the reactions humans have when they feel. If we were to encounter a robot that acted out all the human emotions, I would assume that that robot does actually feel, the same way I assume other students in the class feel: because I know they are human and I can identify and associate the emotions they project. But again, within all this, I still don't know how and why that robot feels (or whether it feels).

      Watching the clip from this movie made me think of one of the first questions Harnad asked us in class, which was something along the lines of: if we have a robot (Ethan), and we know that it is a robot but we also know that it is indistinguishable from a human being to our naked eye, then would we kick it? This started our discussion on feeling, where most people answered that if the robot could feel, then they wouldn't kick it.

      In the Xenophobia section of the paper, Harnad and Scherzer write: “But what the film misses completely is that, if the robot-boy really can feel (and, since this is fiction, we are meant to accept the maker's premise that he can), then mistreating him is not just like racism, it is racism, as surely as it would be if we started to mistreat a biological boy because parts of him were replaced by synthetic parts. Racism (and, for that matter, speciesism, and terrestrialism) is simply our readiness to hurt or ignore the feelings of feeling creatures because we think that, owing to some difference between them and us, their feelings do not matter”. The important question now is not "is he/she/it human?" but rather "can it/she/it feel?" I think Harnad makes a very good point of this in answering Vivian's first comment: we need to focus on whether something feels, not on whether it is human.

  4. “Correlation and Causation. First, let us be sure to separate feelings from their functional correlates (Harnad 2000): We feel pain when we have been hurt and we need to do something about it: for example, removing the injured limb from the source of the injury, keeping our weight off the injured limb, learning to avoid the circumstances that caused the injury. These are all just adaptive nociceptive functions.”

    Pain becomes an overall felt state after one has been physically (or socially, though I won't deal with that here) hurt. When cells become damaged (e.g. skin cells), a message is transferred through nociceptor neurons to the spinal cord, which will retract one's hand (in the case of burning your hand), and the message will be passed on to the somatosensory cortex (and, I am assuming, to other parts of the brain), where one will have the overall felt state of pain. The way I have come to understand pain is that it is an after-effect which allows you to process everything within the circumstance and to learn from it. However, the nociceptor alone was the one to convey the message for the immediate response of doing, and to pass the message on to a centre that transfers messages to account for feeling.
    Because my finger will have a burn sore, it will hurt and I will not be able to use it; but is that pain actually what made me learn to avoid the circumstance that caused me to have this pain?
    I think that moment of correlated nociceptor attachment to the image of me putting my finger on the stove is where I learn never to touch another stove again. However, is that all that goes on? It can't just be the function of nociceptor adaptation, because those nociceptors are not capable of distinguishing what is and is not a stove. All they can do is register the point of damage and send a message, which helps create a fundamental correlation with other things that are happening in your brain (such as seeing your finger touch what you have come to know as a stove).

    I am still stuck on this dualistic perspective: Does a neural network code for feelings, and do those coded feelings have an effect on function?
    For example:
    (1) nociceptors fire to a specific thing and attach to other neural communications (i.e. the nociceptor message that codes for "skin cells burnt" plus the neural messages that code for colour, motion, etc.: "stove was touched");
    (2) this neural communication results in an understanding/'feeling' ("when I touched the stove it hurt my finger");
    (3) feeling can therefore have an effect on function ("I'm scared to get burnt by touching a stove again," so those neural circuits will code for not allowing other neural circuits to open: not touching the stove).
    A robot will be able to not touch a stove because it has been programmed not to touch a stove, and will know to say that it will burn itself if it touches it, but the robot will not have learned this through experience.
    (This was brought up in the third week of class, I believe, but I still did not catch the gist of the counterargument.)

    1. “Everything just described can be accomplished, functionally, by merely detecting and responding to the injury-causing conditions, learning to avoid them, etc.”

      Well, what about a social pain? If I lost my mother in a crowd when I was really young, I would feel scared and would start to panic. The injury caused is the fact that you are alone and you don't know what to do. And you learn from that experience never to lose sight of your mom. But if a robot were in that situation, would it ever learn from such an experience of being detached from a mentor?


      “All those functions can be accomplished without feeling a thing; indeed, robots can already do such things today, to a limited degree.”

      ***Professor Harnad, is it possible to post an article about this please?***


      “So when we try to go on to explain the causal role of the fact that nociceptive performance capacity's underlying function is a felt function, we cannot use nociception's obvious functional benefits to explain (let alone give a causal role to) the fact that nociceptive function also happens to be felt: The question persists: how and why?”

      But it is true that nociceptors do have a causal role as one part of a bigger picture of being able to feel. Why are we focusing on only one aspect rather than on the connection of things that are occurring at a given time?
      [EX: What about the example of phantom limbs? It is explained that a pinched nerve is what produces the feeling of pain in a limb that no longer exists; however, when a mirror is put in front of the person to provide the image that the limb is there, the feeling of pain goes away. (http://www.medicinenet.com/script/main/art.asp?articlekey=88097)
      Does this example not show that nociceptors are not enough to account for one's overall felt feeling?]
      Why can't the 'how' question just be a more complex mechanism in humans which we haven't understood yet?

      Related to the ‘why’ question:
      Harnad & Scherzer (2008) mention that it is “very likely that all vertebrates, and probably invertebrates too, feel”.
      If consciousness is equated with feeling, are there different kinds of feelings/consciousness that exist in the world? And perhaps, just as different living beings have affordances that mark their categorical perceptions, do they too have affordances in the way they feel?
      And then why are only animals taken into account, and not trees and plants?
      And then, relating this to the 'how' question again, could it be another kind of mechanism, independent of the brain's function, that accounts for feeling?

    2. Demi, I expect we learn to avoid tissue injury much the way we learn everything else, by classical conditioning (unsupervised learning), by trial-and-error induction (supervised learning) or instruction. No causal role for feeling there, just detection, correlation, feedback, feature detection.
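
      To make "no causal role for feeling" concrete, here is a minimal sketch -- my toy illustration, not anything from the paper; the objects, probabilities and update values are all arbitrary assumptions -- of a feelingless agent that learns, by trial and error with a corrective damage signal, to stop touching a "stove." Nothing in it but detection, correlation and feedback:

```python
# A toy sketch (illustrative assumptions throughout): trial-and-error
# learning with corrective feedback, and no felt pain anywhere.
import random

class FeelinglessAgent:
    def __init__(self):
        # Learned tendency to touch each object; just a number, nothing felt.
        self.touch_value = {"stove": 0.0, "apple": 0.0}

    def act(self, obj):
        # Explore occasionally; otherwise exploit what feedback has shaped.
        if random.random() < 0.1 or self.touch_value[obj] >= 0:
            return "touch"
        return "avoid"

    def learn(self, obj, damage_signal):
        # Nociception as pure function: a damage signal lowers the value
        # of touching; no hurting is needed for the update.
        self.touch_value[obj] += -1.0 if damage_signal else 0.1

agent = FeelinglessAgent()
for _ in range(100):
    obj = random.choice(["stove", "apple"])
    if agent.act(obj) == "touch":
        agent.learn(obj, damage_signal=(obj == "stove"))

print(agent.touch_value)  # touching the "stove" ends up negative: avoided
```

      After a few dozen trials the learned value of touching the stove is negative and the agent avoids it, with nothing anywhere that hurts. Whether anything like this could scale up to T3 is another matter (see the references appended below).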

      The question is not whether organisms (or robots) can feel this or that, more or less, but how and why they can feel at all. (Plants have no nervous systems and hardly behave, so it is very unlikely that they feel -- but there is some controversy about this.)

      A feelingless robot can learn to not-touch the stove. It can also learn (or be designed, as we are) to seek its parent if lost in a crowd. I've appended some recent robotics references (but don't expect too much).

      You asked "could it be another kind of mechanism that is independent of brain’s function that accounts for feeling?"

      I can't imagine what you mean? Telepathy? Or dualism? ("Stevan says" it's almost certainly the brain that causes feeling: the hard problem is explaining how and why.)

      Li, K., & Meng, M. Q. H. (2015). Learn Like Infants: A Strategy for Developmental Learning of Symbolic Skills Using Humanoid Robots. International Journal of Social Robotics, 1-12.

      Brooks, R. A. (1991). The role of learning in autonomous robots. In Proceedings of the Fourth Annual Workshop on Computational Learning Theory (pp. 5-10).

      Steels, L. (2003). Evolving grounded communication for robots. Trends in cognitive sciences, 7(7), 308-312.

    3. “I expect we learn to avoid tissue injury much the way we learn everything else, by classical conditioning (unsupervised learning), by trial-and-error induction (supervised learning) or instruction. No causal role for feeling there, just detection, correlation, feedback, feature detection.”
      In Jessica Magonet's 10d. skywriting you wrote: “(11) It (sometimes) feels as if we do things purely because we feel like it. The fact that it feels as if we do things purely because we feel like it is not an illusion.
      (12) But that we really do things purely because we feel like it probably is an illusion (like the phantom tooth ache): Feeling is not a force, even though it feels like one.
      The question is not whether organisms (or robots) can feel this or that, more or less, but how and why they can feel at all. (Plants have no nervous systems and hardly behave, so it is very unlikely that they feel -- but there is some controversy about this.)”

      I really am so stuck on this idea: the fact that we feel is not an illusion, but the fact that we do things because of a feeling is an illusion. So is feeling being given a kind of epiphenomenalist explanation? Just like steam from a steam engine, it is something that is there and apparent, but it does not have an effect on anything?

      I am so stuck because humans (having the capacity for creating/using symbol systems, or for some other reason) were capable, unlike other species, of conceptualizing the idea of an afterlife. Through behavioural doings/constructions we can show that humans had a 'fear' of death/the unknown, and this has led them to produce things such as burial housings, pyramids, sacred texts, and the list can go on. Is that feeling not the cause of their behaviour? What has then caused such mortuary practices to come into play? I understand that the creation of these things is at the level of doing, but what is the specific trigger point for making those creations? Is that not feeling? Or is it just a mechanical coping system that just comes about?

      Harnad (2012) mentioned that explaining the ‘how’ and ‘why’ of feeling is really hard because “doing alone already does the job”.
      And I guess the nitty-gritty of all these questions above is: do these two different processes, 'doing' and 'feeling', really not interact at all?

    4. “The question is not whether organisms (or robots) can feel this or that, more or less, but how and why they can feel at all. (Plants have no nervous systems and hardly behave, so it is very unlikely that they feel -- but there is some controversy about this.)”

      This brings it back to the "correlation and causation" / "feeling versus functing" dilemma. If we set aside the other-minds problem and instead assume that a nervous system is the basic tenet of being able to feel, we are left with differences in what is capable of being felt. Some animals grieve the loss of a loved one, and other animals may not do so. However, even if some animals do not feel pain at the loss of a loved one, they do all share the feeling of pain if their body is sliced in two. Although this is not a question of this/that, more/less, would this difference in feeling not be a clue to understanding feeling? If we ask the question of how or why, shouldn't we start at the very basic point -- with an animal that does not show grieving for a loved one -- to explain 'how'? And wouldn't we try to explain 'why' by observing an animal that does grieve?
      What I can't quite conceptualize is multiple T3s interacting only with each other: how would their inter-robotic interaction be the same as or different from that of biological animal species? And can't we use the example of positive evidence in this case? If T3s do not have or show the dynamic property of feeling, and if they were secluded and made to form a community together, how would their interaction (through the various doings of/in life) be different from that of the animals that have and share feeling?


      “A feelingless robot can learn to not-touch the stove. It can also learn (or be designed, as we are) to seek its parent if lost in a crowd. I've appended some recent robotics references (but don't expect too much).”

      Thank you for the postings, I have peeked but not read one in full.


      You asked "could it be another kind of mechanism that is independent of brain’s function that accounts for feeling?"
      I can't imagine what you mean? Telepathy? Or dualism? ("Stevan says" it's almost certainly the brain that causes feeling: the hard problem is explaining how and why.)

      Not telepathy or dualism, but just something that hasn’t been discovered yet.

    5. Demi, I think you have not yet quite grasped the full extent of the problem of the causal role of feeling.

      There's no doubt that we feel. And there's no doubt that it feels as if we do things because of the way we feel (and not vice versa).

      But the question is: what is the real (not just the felt) causal power -- and hence the functional role -- of feeling?

      Because, on the face of it, it looks as if the T3/T4 causal mechanisms of doing (once we have reverse-engineered them) can do -- and are doing -- the whole job, without any extra causal contribution by feeling, or any need for it. (We don't even have any way of knowing whether T3/T4 really does feel!)

      Yes, we feel, and feel that we are doing (some of) what we do because of what we feel -- yet our doings are completely accounted for by the ("easy") mechanisms of our doing, and there does not seem to be either any room or any need for anything more, to explain the causes of our doings. That makes feeling itself seem superfluous. And that's why finding a causal explanation of feeling is so hard.

      It's alas no use to point out that we make burial mounds because of feelings. It feels as if we do all sorts of things because of feelings. But when you look more closely at the causal mechanisms of what we do, you find evolutionarily adaptive traits, learning mechanisms (another evolved trait) and whatever shared cultural practices we (as social T3/T4 robots) have evolved or learned (burial practices, religion, wars, and democracy being among them).

      There is no doubt (despite the other-minds problem) that the accompanying feelings are there; and no doubt that they feel like causes. But how and why do feelings cause doings, rather than just being, like doings, themselves the effects of our evolved and learned T3/T4 doing mechanisms -- but problematic effects, because their causal role seems redundant, superfluous?

      It doesn't help to call feelings "epiphenomena." That just confuses kid-sib by re-naming them without explaining them. We already know there's a hard problem about explaining their causal role...

      (As an exercise, and to show how hard it really is, it might help if you put the problem of explaining the causal role of feeling in a very explicit, kid-sib way:
      "The reason a feelingless mechanism is not enough to enable us to do X is...")

    6. Okay, I think I am getting a little bit more of a hold on this problem. Looking back on the commentary on Dennett on Chalmers, there's a particular response that made it a little clearer, in addition to your helpful and thought-provoking comments on my commentary: "That they correlate (feelings and function) is an interesting fact. Explaining how and why is another matter."
      In this way the two happen to co-occur without our being able to know of any causal power between feelings and function. Therefore, although I do feel that there is a causal power, this feeling is introspective and falls into the same mistake that introspection led us into in studying human cognition: it is not possible to deduce a causal mechanism this way.

      I'm very much looking forward to tomorrow's class!

  5. "In the case of the reverse-engineering of life itself, it turned out that no extra "vital" force was necessary to explain all the structural and functional properties of living matter. It is no longer even apparent today why anyone would ever have imagined that there might need to be a special life force, for there was never really any "life/matter" problem. The structure, function and I/O (Input/Output)performance capacities of biological systems are all perfectly objective, observable, and explicable properties, like all other physical properties. In contrast, with the "other-minds" problem, we each know perfectly well what it is that would be missing if others did not feel at all, as we do: feeling. But “living” has no counterpart for this: Other systems are alive because they have the objective, observable properties of living systems."

    I am not sure I follow the line of argument here. The paragraph just before this one argues that, a while back, it seemed necessary to posit things we now know are not necessary. How can we be sure that feeling isn't just one of those things for which, when we do find the mechanism, we will realize that no supra-natural cause was needed?

    1. For feeling, unlike for living, there is the Cogito: We each know that we feel. The hard problem is to explain how and why. Feeling is what the Turing explanation of doing leaves out; there is no counterpart of this in the explanation of life.

  6. “Freud slipped on this one, with his incoherent notion of an unconscious mind -- a zombie alter-ego. Unconscious knowing makes no more sense than unfelt feeling. Indeed it's the same thing. And unconscious know-how is merely performance capacity, not unconscious 'know-that.'”
    I completely agree with Harnad and Scherzer's description of Freud's concept of the unconscious mind as a "zombie alter-ego." I never thought that the unconscious was something that was even remotely relevant or meaningful as a psychological concept; I also really do not understand why we spend so much time learning about it in psychology. I suspect the unconscious remained an accepted concept in clinical psychology for so long because it allowed clinicians the dogmatic authority to diagnose their patients without really having any tangible evidence in support of the diagnosis. If we agree that to be conscious of something means to be aware of something (which in turn means to feel something), then there is really no point in studying the unconscious, because there is really no way of proving that it exists. We cannot even describe what it is like to feel, but we know what it is like to feel; it is something that (I believe) is common to all living things, including plants and animals.
    “The real question, then, for cognitive robotics (i.e., for that branch of robotics that is concerned with explaining how animals and humans can do what they can do, rather than just with creating devices that can do things we'd like to have done for us) is whether feeling is a property that we can and should try to build into our robots. Let us quickly give our answer: We can't, and hence we shouldn't even bother to try.”
    I also completely agree with this point made by Harnad and Scherzer. I do not think that there is any way to reverse-engineer a robot that would have the capacity to feel. This might sound pessimistic, but it is difficult enough to try to explain what feeling is and what it feels like to feel. Also, because of the other-minds problem, we would never actually know whether a reverse-engineered robot was feeling (or whether it was feeling in the same way we feel). Thus, as the authors state, I really do not think that there is a point in trying to build a robot that could feel, because I think that this is impossible.

    1. Animals with nervous systems feel. No one knows whether plants, without nervous systems, feel. (My own guess/hope is not.)

      We can't know whether other organisms (or robots) feel, so we can't design them to feel and test whether they do. We can only design them to be able to do what we can do.

      But the hope is that a T3 robot, if it could be built, would feel (because there is in fact no way to have full T3 capacity without being able to feel -- though we cannot say how or why (hard problem), nor even whether (other-minds problem)).

  7. “1. Are models of consciousness useful for AI? No. First, consciousness is feeling. Second, the only thing that can be ‘modeled’ is I/O performance capacity, and to model that is to design a system that can generate that performance capacity. Feeling itself is not performance capacity.”

    I have to agree with Harnad on the uselessness of discussing consciousness when reverse-engineering cognition. First of all, as he points out, consciousness is an after-effect of cognition. It is in no way the causal mechanism of any functionality of a person, so it has little bearing on Cognitive Science's question of why we do what we do. Additionally, the other-minds problem seems to invalidate the study of consciousness as a science at all. I can only know that I am feeling, while I can merely assume that others are feeling as well, because they behave and look similar to me. This assumption can hardly be grounds for implementing consciousness in AI. The discussion of racism and of what a robot is drives the point home. Truly racist individuals, who do not see members of other races as "like them" and have no empathy for them or belief in their consciousness, cannot really be disproved. To them, and to everyone else really, the only living thing that can feel, without a doubt, is themselves. Racism and xenophobia go to show that no Turing Test will ever prove a machine to have consciousness when not every individual can recognize that another human being with equivalent functions and appearance (besides maybe an accent or a different skin shade) can really feel in the same way as they do. The byproduct of cognition that is consciousness is better dealt with by poetry and art than by AI.

    1. And by laws. Without them, there's no hope for the victims -- by which I mean all feeling beings, not just humans...

    2. “Feeling itself is not performance capacity. It is a correlate of performance capacity.”

      I, too, agree with this—actually, this sentence is what stands out and makes the most sense to me when it comes to defining the hard problem. Not so much that we need a definition of feeling (after all, we all know what it’s like to feel) but I find that this section of the article very clearly presents the role of feeling and how it falls in line with more concrete ideas we have previously discussed, like performance capacity. A ‘squishy’ kind of concept like feeling is a little easier to wrap my mind around when I can associate it with something less squishy, like performance capacity. It makes it easier to say, “oh, okay, that’s where feeling comes in, and now we don’t have to worry about trying to model it.”

      “What Is/Isn’t a Robot? So, what is a "robot," exactly? It's a man-made system that can move independently. So, is a human baby a robot? Let's say not, though it fits the definition so far! It's a robot only if it's not made in the "usual way" we make babies. So, is a test-tube fertilized baby, or a cloned one, a robot? No. Even one that grows entirely in an incubator? No, it's still growing from "naturally" man-made cells, or clones of them.”

      I tried to make a similar argument when answering the midterm question about which TT level is the right level... it feels (hah) like people assume robots are indeed just beings with metal organs, and that those non-flesh organs are one of the key characteristics that set them apart; however, it is still important to recognize that many "real" humans have titanium knees or bionic hearts. As Harnad points out, we obviously still accept people with medical devices and transplants as human, as conscious, and as one of us. If we can do this, then why do we even pause to consider this question from the early weeks of class: if there were a T3 robot among us, would you kick him to see how he feels? You don't kick someone with a knee replacement, so why would having more non-human-celled organs justify kicking? If consciousness could be captured by AI, I think it would provide a much clearer idea of how to stop racism and xenophobia -- sadly, reverse-engineering can't capture that hard problem.

  8. "A counterintuition immediately suggests itself: “Surely I am conscious of things I don't feel!” This is easily resolved once one tries to think of an actual counterexample of something one is conscious of but does not feel (and fails every time)".
    If I try to work through this: I am conscious that my dog is sleeping next to me. I do not (physically) feel my dog sleeping next to me. I do feel however the feeling of knowing that the dog is sleeping next to me...I feel that I see him, smell him, hear him breathing?
    But what about things that I am not aware of and do not feel?
    Is 'awareness' synonymous with 'consciousness' (which is synonymous with 'feeling')? Is understanding the same as feeling?
    If so, then the question is how and why do I feel/understand? And how can we make robots feel? And finally, what advantage would feeling certain processes actually provide over just doing them? In other words, why are we at an advantage feeling rather than just producing the correct behavioural outputs?

    How awfully overwhelming. Like in the last post, I find myself struggling to understand how we might ever arrive at answers to these questions. Could feeling be necessary for learning (and therefore categorization) ? I was about to ask whether feeling might be a more 'efficient' way of learning and understanding, and therefore categorizing, but then I remembered that we have learned that speed/time is not the issue. Moreover, in my own life, I would say that feeling is actually a disadvantage in many ways - for instance in confusing my priorities, distracting me, swaying my behaviour in doing things that might be less desirable.

    1. Stephanie, yes, consciousness = awareness = feeling, and I suggest avoiding the other weasel words and just using "feeling," because it will keep things much clearer.

      I am conscious that my dog is sleeping next to me = I feel my dog is sleeping next to me = it feels to me like my dog is sleeping next to me. Whether I see him, smell him, touch him, or someone chatting with me by skype just told me "your dog is sleeping next to you" (and I believe him), either way, I feel my dog is sleeping next to me.

      If I turn and look, and it turns out that it's not my dog I smelled, but my neighbour's dog who came in for a snooze, or my skype partner made a mistake and mistook a cushion for my dog, either way, I did feel like my dog was sleeping beside me -- and now I no longer feel that my dog is sleeping beside me, and I realize that when I had felt it, I had been mistaken. That realization feels like something too.

      Not only does it feel like something to see (or to believe, rightly or wrongly, or to imagine) my dog sleeping beside me, but it feels like something to understand when my skype partner tells me that my dog is sleeping beside me.

      How to design a robot that feels? Design one that passes T3 (or T4), and hope. That's the best you can do, because of the other-minds problem.

      What is the advantage of felt doing over just done doing? That's the hard problem!

      Learning? Yes. But why felt learning?

      Categorization? Yes, but why felt categorization?

      When feeling confuses you, obviously it's a hindrance. But usually you do things, and the right things, because you feel like it (believe, know, perceive). Why not just do them, without the feeling? It's an internal mechanism that's doing it all for you anyway (as we learned with introspection and Penny Ellis).

      But before you get too disappointed about not being able to solve the hard problem, remember that we're nowhere near solving the easy problem (of explaining doing) either -- so there's plenty left to do...

    2. "How to design a robot that feels? Design one that passes T3 (or T4), and hope."

      Assuming that we feel, and that a human with synthetic organs can still feel, is an argument against the claim that a robot would need to be T4 or T5 to have feelings. As a matter of fact, if humans with synthetic organs can feel, then hardware is not relevant for feeling, right?

    3. This comment has been removed by the author.

    4. To Marion:
      I don't know if that is an argument against needing "to be T4 or T5 to have feelings." What I understand is that the best we can do is to model performance capacity, and (as Prof. Harnad said) we are not even there yet. So having a T5 cell-by-cell "robot" equal to us would not help; with it we would just assume that, owing to its similarities to us, it feels like us, just the way we already do with people (the other-minds problem). Thus performance capacity is the best we can do, and yes, that implies hardware is not important, only software matters; and then we will see if we can find feelings, although we probably won't really know then either, still because of the other-minds problem. Furthermore, I assume that even if performance capacity turns out to be enough to generate feeling, that probably won't answer 'why', so the hard problem wouldn't be resolved either.

    5. As Prof. Harnad mentions: "feeling is not performance capacity. It is a correlate of performance capacity."

  9. "To think something is to feel something"

    I'm still not completely clear as to why feeling can't be an emergent property of the underlying physical counterparts. The discussion seems to lead to a fork where either you believe feeling exists within physics and we just haven't figured out how it does so yet, or you are willing to assert that feeling exists outside of physics, within some dimension we cannot understand empirically. If we are inclined to believe the latter, then all we are left with is the faith/conviction that feeling is separate from physics and that we'll never understand it objectively (i.e., it can only be fully understood first-hand by the person). There is so much left to be discovered and learned about the human nervous system that I think it's early to make any assertions about consciousness and feeling. Perhaps I'm missing the point of the argument? As pointed out in the reading, I think it's fruitless to speculate so much on whether machines can be conscious and feel. The more interesting question (as mentioned in the Appendix regarding Spielberg's "AI") is how we would react if we did encounter AI in the future that attempted to convince us that they feel.

    1. "I'm still not completely clear as to why feeling can't be an emergent property of the underlying physical counterparts?"

      My impression was that feeling can be an emergent property of physical counterparts (in fact, it's most likely so), but that doesn't say anything about how or why it exists as an emergent property.

      Is there some sort of evolutionary purpose or causal mechanism that feeling exploits? How did evolution shape our bodies such that they support feeling? In building a robot that passes T3, will we have to use these same 'feeling' mechanisms?

    2. What if there isn't a particular reason for why feelings exist? What if 'feeling' is just a spandrel that came along because of all of those physical counterparts? What if feeling evolved as a by-product of a system that can do the things we (and animals) do? Undoubtedly, feeling does have some adaptive advantages, like being able to feel pain to avoid injury, but not all feelings seem to have as much of an adaptive advantage...in that case, maybe if we build a T3 robot that really can do everything that we can do, it will have these same 'feeling' mechanisms.


      The only thing is, 'feeling' isn't just what humans have, animals have it too...so maybe we're not looking for the right thing when we look for feeling. Animals don't have language, but they can feel. So, if we solve T3, maybe we should be looking more at the commonalities between all feeling organisms to find this 'feeling' mechanism...

  10. “The positive examples of feeling are easy, and go far beyond just emotions and sensations: I feel pain, hunger and fear.”

    After reading the 10c article I now find myself fixated on this idea of feeling pain. On the surface it seems to be a pretty obvious example of 'feeling' and of consciousness, but in practice it just makes the whole definition of 'feeling' much more complicated for me, because I keep thinking of the clear neurological distinction between the sensory-discriminative and affective-motivational aspects of pain. If someone has a traumatic brain injury that prevents them from experiencing the affective-motivational (or 'emotional', if you will) aspect of pain, do we say they aren't experiencing pain? In this case the sensory-discriminative aspect of pain (which tells a person about the location and nature of the pain) is still present, but no distress is experienced from the pain. So, while they're obviously experiencing the pain in some way, can we say they're feeling the pain, even though they aren't feeling it in any emotional sense? Harnad and Scherzer conclude this paragraph by writing, "To think something, too, is to feel something." In this sense, a clear argument could be made that a person lacking the ability to feel the affective-motivational aspect of pain is still 'feeling' the pain. However, they're not 'feeling' it in the way that you or I would. Maybe this just circles back in the end to the other-minds problem of how we really know whether anyone is feeling pain; but in this case I feel like it's still more obvious, because we can see clearly in neurological studies that the affective-motivational aspect of the pain is not present, and therefore a clear distinction between sensory feeling and emotional feeling exists.

  11. "But that is precisely what makes the mind/matter problem such a hard (probably insoluble) problem: Because we cannot explain how feeling and its neural correlates are the same thing".

    The reason I came to McGill was to study cognitive science, and the reason I wanted to study cognitive science was to know how we get from webs of brain cells with action potentials to me sobbing over a piece of music for no knowable reason. What is going on in that big fuzzy middle section when it all comes together? The closest I came to reaching an answer was going from the bottom up: physics & chemistry, then molecular biology and how the neuron works, and then parts of the brain related to behaviours. Still there is a huge gap between having a surgeon poke my motor cortex to have my arm rise, and feeling my arm rise while knowing it is a surgeon manipulating my cortex, and knowing it isn't voluntary. Finally someone actually acknowledged this mystery of feeling, consciousness - a big scary question that reminds us we don't know what the hell we are doing here, in Canada, Planet Earth, The Milky Way, The Universe.

    I was amazed to learn how little we know about the human brain. The best resources we have are case studies of lesions, and correlations of stimuli with neural activity traces thanks to neuroimaging. It's "amazing" when we scan advanced meditators and notice signal differences between them and non-meditators - obviously something different is going on! We knew that already by watching their behaviour! We know that the brain is not designed to have strict localization, and we know that it is not an equipotential organ. What we get are a lot of correlations where we aren't sure of anything. X is 'associated' with Y, in 7 out of 13 studies. Thanks.

    Neuroimaging, and neuroscience generally, is not the holy grail of answers I thought it would be. It's more of a murky soup of iffy, hesitant maybes. Even if one part of my brain lights up every time I see a picture of a loved one, it says nothing about how the action potentials there became feelings, or even whether those action potentials are capable of making feelings. This degree has not answered my main question, but it has definitely, bluntly shown me how much I didn't know that I didn't know.

  12. "I also feel what it is like to see blue, hear music, touch wood, or move my arm"

    A musing, though a subpoint: Personally, I find that when I think of feeling I automatically think of emotions, pain, etc. -- things that are all actively involved in the traditional definition of feeling. All these other components of feeling recall the idea of adaptation that arises in perception courses -- namely, that over time we become "immune" to the sensation of a stimulus because our body stops paying attention to it. Since we are constantly being exposed to the colour blue, we forget what it feels like to see blue -- our body has stopped paying attention to those other feelings. Perhaps the organization and modulation of feelings works in a similar framework?

    "More subtle, but no less feelingful (hence no less conscious), is feeling what it is like to think (or to understand or believe or know)..."

    Even tougher is to wrap your head around the idea that it feels like something to think. So, we feel something (like what it feels like to see blue), and then think about that feeling, and then feel what it's like to think about what it's like to feel. It's easy to get lost in a spiral of thinking about feeling about thinking...

    Of course, it's also a very interesting spiral to get caught in...

  13. So the premise of this paper is that consciousness is feeling. Somehow that idea bothers me. I always thought of consciousness as the awareness of our ability to feel. But perhaps if we're equating awareness with feeling, that makes no sense. Yet the concept still bothers me and I'm not completely sure why.
    I can agree, though, that what we do unconsciously - what we know how to do - is "performance capacity". That's to say that the ability we have to do things and maneuver through life and behave in certain ways is not part of consciousness. This kind of functionality is all that we would be able to program a robot to do: move around, analyze numbers, etc. There is no way of telling whether this kind of robot (even if it acted as a human would) would actually have the capacity to feel. But these robots have not been built and are probably never going to be built. Furthermore, if we were capable of such a task, I think it would be entirely unethical to even consider doing it. So I agree wholeheartedly that feeling is not something that we can or should try to build into robots. There is no point in something like that.
    Perhaps my aversion to calling consciousness feeling is because we can manipulate our own feelings (like distracting ourselves from pain) and also manipulate the thoughts of others. Perhaps to me consciousness is just thinking.
    I'm not sure that makes any sense, but all this thinking about it is not exactly easy.
    All I know is that consciousness cannot be "implemented", and really shouldn't be something we even attempt to implement.

  14. “If we could explain what the causal advantages of feeling over functing were in those cases where our functions are felt (the 'why' question), and if we could specify the actual causal role that feeling plays in such cases (the 'how' question), then there would be scope for an attempt to incorporate that causal role in our robotic modeling.”

    This reading has made me revert back to the issue of what exactly the how and why questions of the hard problem are. I'm assuming the how question that we are interested in isn't the kind of how question that can be answered with a brain scan showing what areas of the brain light up when we feel like we understand, plus an electrical and molecular theory outlining what causes those brain areas to be active; because it seems like it would be at least semi-feasible for such a question to eventually be answered for every feeling we feel. It's clearer to me what the why question is: Why do we feel certain functions/why does feeling correlate with certain functions and stimuli? Why is it beneficial for us to feel pain when we touch something that pierces our skin, when these nociceptive functions can be recreated in a robot that doesn't have the capacity to feel? The quotation at the beginning of this post clarified the how question for me. According to my current understanding, what we are really asking is, "How can we explain the fact that it feels like we cause ourselves to do things?" My gut reaction finds it exceptionally difficult to accept that when I decide to make a fist, that feeling is not causing me to make the fist; but intellectually I realize that feelings must be piggybacking on the four causal forces (electromagnetism, gravitation, and the strong and weak subatomic forces) stated in the paper, because there is no evidence that feeling is a force in itself.

    Part of the reason language is adaptively beneficial is because it allows for complex communication between group members. Could feeling be adaptive (partially) for social reasons? Sharing feelings fosters supportive interactions and having a strong social network is beneficial to our mental and physical health. I’m not saying I think this is a plausible all-encompassing explanation of why we feel, this is just more of an observation that the ability to feel enriches our social lives quite significantly.

    Finally – “The best that AI can do is to try to scale up to full Turing-scale robotic performance capacity and to hope that the conscious correlates will be there too.” – But we’ll never really know for sure will we?

  15. Based on the view of cognitive science from today's class and from the readings -- i.e., that cognizing involves not only how and why you do but also how and why you feel -- it would be difficult NOT to argue that all animals also cognize, correct?

    And where is the boundary? Might potatoes, which make high-pitched noises just not detectable by human ears, possibly also be cognizing, i.e., feeling the cutting?

    How would we ever be able to tell that T3-Ethan does NOT feel? It seems just doomed to failure.

    1. I'm not sure if this is correct, but cognizing includes things that not all animals can do; e.g., humans also have language. So even though feeling is part of cognition, it isn't all that cognizing is (it feels like something to do something, but feeling isn't everything that humans can do). So even though animals can feel, that doesn't necessarily mean they can cognize.
      I would say the boundary is that for something to feel, it must have a brain. Perhaps saying a 'brain' goes too far, as scallops don't have a brain. But I would say that having a nervous system and sensory receptors is where we would draw the line for whether or not things as we know them can feel. With that being said, though, how could we ever know, if we made a T3 robot, whether it could feel or not, if it doesn't have a nervous system or neurons? Even though we know that organisms that feel have nervous systems that allow them to feel, that still doesn't tell us how or why they do it.
      "How would we ever be able to tell that T3-Ethan does NOT feel? It seems just doomed to failure."
      That is the thing: we wouldn't be able to tell whether T3-Ethan can feel. Isn't that the other-minds problem?

  16. The how and why we feel problem makes me more and more confused after today's lecture and after reading this article. I feel it hard to understand the how and why problem itself rather than trying to answer it. There is no doubt that we have feelings, and there is no doubt for me that feeling has a causal purpose. But in class, it seems essential to distinguish from problems like why feeling is causal, why doing and feeling are different, whether we feel something when we do something, and the hard problem how and why we have feelings.

    At first, the hard problem itself did not confuse me that much, because to me feeling and doing are absolutely different, and it is almost impossible to test feeling because only I feel what I feel. However, when we talked about pain and feeling, I got confused. I can understand that we feel pain, and that we could do whatever needs to be done without feeling the pain; and then the hard problem, why and how we feel it, arises.

    So far, so good. But if we say we feel pain because otherwise we could not know we got hurt, that seems wrong, because from research we know that feeling comes after the brain's response. What if we say we feel pain in order to protect ourselves, that is, to let us know something is wrong here and that maybe we should not try this next time? It seems that feelings give us an explanation, or motivation, or a reason to do something, or not to do it next time. I guess the question right now becomes: why does such an explanation exist?

    I think this is why I will never get to an answer to this hard problem. It is not about the difficulty of the problem itself. It feels like, for all these questions, if I answer one, another comes up, and I can always ask how and why about the next one. The problem is that no matter what answer I give, and whatever questions follow, there seems to be no way to verify anything: neither the answers nor the questions themselves. It looks like a dead end to me.

    ReplyDelete
    Replies
    1. "So far, so good. But if we say we feel pain because otherwise we could not know we got hurt, that seems wrong, because from research we know that feeling comes after the brain's response."

      I originally stood behind the behaviorist standpoint that we experience feeling in order to encourage or discourage various behaviors. The counterargument frustrates me; it says that this is not necessarily true, because we can build robots that simply learn not to damage themselves without ever feeling anything: you can program certain rules such that one could survive without ever really feeling the way we do. This bothers me because, as a non-computationalist, I have so little faith in it, but perhaps I don't have enough information to support my position. Still, you are right: the behaviorists do not have an adequate response to the brain perceiving before the mind does, or to programmed learning (without feeling) in T3 robots, if indeed they don't feel.
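
      To make the counterargument concrete, here is a toy sketch of the kind of feelingless damage-avoidance learning it appeals to (the action names, numbers, and update rule are made-up assumptions, not anything from the paper):

        import random

        # Toy damage-avoidance learner: each action has a learned value,
        # updated from a "damage" signal. It functs; it does not feel.
        values = {"touch_hot_plate": 0.0, "stay_clear": 0.0}
        ALPHA = 0.5  # learning rate (an illustrative assumption)

        def damage(action):
            return -1.0 if action == "touch_hot_plate" else 0.0

        for _ in range(20):
            # Mostly pick the best-valued action, sometimes explore at random.
            if random.random() > 0.3:
                action = max(values, key=values.get)
            else:
                action = random.choice(list(values))
            values[action] += ALPHA * (damage(action) - values[action])

        print(values)  # "touch_hot_plate" typically ends up valued below "stay_clear"

      A learner like this "avoids harm" in the behaviorist's sense with nothing it is like to be it, which is exactly what frustrates me about the counterargument.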

      I agree with the "can of worms" problem that arises around the hard problem. Even when I was supporting behaviorist thinking, I was using the word motivation to explain feeling, but isn't motivation sort of a feeling? You can't really explain feelings with more feelings. Motivation in a more abstract sense, as something that is perhaps innate, is also useless, because then we wonder how that got there too and what its purpose is. As others have mentioned earlier, it seems like a rather futile effort to try to answer it.

      Delete
  17. "the "readiness potential," a brain process that precedes voluntary movement, suggests that that process begins before the subject feels the intention to move."

    This phenomenon is supposed to show that feeling is not a fifth force: the feeling didn't cause the readiness potential to occur; rather, it was the readiness potential that might be responsible for the feeling of willing the action.
    What still bothers me here is that this readiness potential wouldn't have occurred if there wasn't any willing on the part of the subject. This means that whether the feeling of willing occurs before or after the readiness potential, the potential wouldn't exist without the act of willing. So the feeling of willing might not be a fifth force, but there is still some force that causes this potential to arise. I don't know if I can call that free will, or the power of our thoughts, but it looks as if there is something driving and causing our actions.

    ReplyDelete
  18. “The positive examples of feeling are easy, and go far beyond just emotions and sensations: I feel pain, hunger and fear. I also feel what it is like to see blue, hear music, touch wood, or move my arm. More subtle, but no less feelingful (hence no less conscious)….”

    In line with my skywriting to 10c, I seem to find an answer here to the difference between “I feel fear” and “seeing purple feels like something.” Harnad and Scherzer argue that the difference here is just a matter of subtlety. If you can be conscious of both, then you can feel both. I am inclined to think that the difference here is more than just a matter of subtlety, but it is hard to explain.

    On the section “Feeling Versus ‘Functing’: How and Why Do We Feel?”, Harnad writes: “Why are some adaptive functions felt? And what is the causal role -- the adaptive, functional advantage -- of the fact that those functions are felt rather than just functed?” Harnad goes on to argue that this question is unanswerable, and I would assume that if anyone came up with an explanation like “feeling allows us to feel empathy, to bond and create connections, not only with human beings but also with animals,” he would classify it as a “just-so story.” For example, I fail to see how the bonding that occurs between a mother and her baby is just “functing” and not “feeling.” What am I missing here? Just because there might be a biological and hormonal explanation for it, does that imply that this bonding is mere performance capacity?

    ReplyDelete
  19. "To be conscious of something means to be aware of something, which in turn means to feel something. Hence consciousness is feeling, no more, no less. … More subtle, but no less feelingful (hence no less conscious), is feeling what it is like to think (or to understand or believe or know) that that is a cat, that the cat is on the mat, that 2+2 = 4. To think something, too, is to feel something."

    When I first read this, I thought: OK, so is feeling all cognition is? But then I quickly remembered that consciousness is part of cognition, yet many cognitive processes happen while we are unaware of them. That feeling is all there is to consciousness is hard for me to accept; I feel like thinking is also part of it. Can robots think? They can make calculations and decisions based on different inputs. But when we are thinking about what we are feeling, what does that mean? If we consider feeling as an input, then OK, feeling is all there is to consciousness. But I’m still not convinced that my thinking is not conscious. Maybe I am mixing up the terms.


    "So the problem is not with uncertainty about the reality of feeling: the problem is with the causal role of feeling in generating (and hence in explaining) performance, and performance capacity. Let us agree that to explain something is to provide a causal mechanism for it. The concept of force plays an essential explanatory role in current physical theory. Until/unless they are unified, there are four forces: electromagnetism, gravitation, and the strong and weak subatomic forces. There is no evidence of any further forces. Hence even when it feels as if I’ve just clenched my fist voluntarily (i.e., because I felt like it, because I willed it), the real cause of the clenching of my fist voluntarily has to be a lot more like what it is when my fist clenches involuntarily, because of a reflex or a muscle spasm. For feeling is not a fifth causal force. It must be piggy-backing on the other four, somehow. It is just that in the voluntary case it feels as if the cause is me."

    Why does feeling have to be piggy-backing on the other four?

    I realized that we reach an impasse with the other-minds problem plus the hard problem within this framework. In my mind there is one possible explanation for why we feel: because we randomly developed this ability, and it was beneficial to survival, so we kept feeling. This is within the evolutionary framework.

    But what about another framework?

    ReplyDelete
  20. In asserting that we should not worry about “feeling” and should just scale up our Turing devices in the hope that feeling will be there once we successfully design our T3 robot, are we not also suggesting that we should be satisfied with T2 and set understanding aside?

    After all, isn’t understanding just a “feeling of understanding” rather than a function or a doing? If we can’t get around the other-minds problem, why do we take Searle so seriously when he says that manipulating the Chinese symbols doesn’t feel like understanding?

    Aren’t we just interested in reverse-engineering “doing,” and not so much in trying to include “feeling like understanding”?

    ReplyDelete