Saturday 11 January 2014

10a. Dennett, D. (unpublished) The fantasy of first-person science


Extra optional readings:
Harnad, S. (2011) Minds, Brains and Turing. Consciousness Online 3.
Harnad, S. (2014) Animal pain and human pleasure: ethical dilemmas outside the classroom. LSE Impact Blog, June 13, 2014.


Dennett, D. (unpublished) The fantasy of first-person science
"I find it ironic that while Chalmers has made something of a mission of trying to convince scientists that they must abandon 3rd-person science for 1st-person science, when asked to recommend some avenues to explore, he falls back on the very work that I showcased in my account of how to study human consciousness empirically from the 3rd-person point of view. Moreover, it is telling that none of the work on consciousness that he has mentioned favorably addresses his so-called Hard Problem in any fashion; it is all concerned, quite appropriately, with what he insists on calling the easy problems. First-person science of consciousness is a discipline with no methods, no data, no results, no future, no promise. It will remain a fantasy."

Dan Dennett's Video (2012)



Week 10 overview:





and also this (from week 10 of the very first year this course was given, 2011): 

62 comments:

  1. “Chalmers fervently believes he himself is not a zombie. The zombie fervently believes he himself is not a zombie. Chalmers believes he gets his justification from his “direct evidence” of his consciousness. So does the zombie, of course.”


    Dennett’s statement above is re-iterating the so-called Hard Problem, albeit in slightly different words. We all know what it feels like to feel, but we have no proof that anyone else (human or zombie or robot) is feeling. One way to try to figure out if someone is feeling is to talk to them about feeling, because presumably a feeling needs to be experienced in order to be talked about. However, Searle’s Chinese Room Argument demonstrates that a set of formal rules could be used to generate output without actually understanding the output, which in this case would mean understanding what it feels like to have a certain feeling. So for input “love”, a formal set of rules may dictate using words like “happy, warmth, caring” which are all words that a human might use to discuss love. Making the situation even more difficult is the fact that feelings are particularly difficult to ground, because we can’t just point to them like we might a chair and say “see, that equals love”. Thus, even human-human interactions about feeling are difficult (impossible?) to know with certainty. We just assume that since someone is human, then they feel.

    Dennett’s position is that the same courtesy should be extended to zombies (or T3 robots that we’ve discussed in class). If the zombie says he feels, then he must be feeling since that is the exact same proof we use for believing (knowing) that humans other than ourselves feel.

    I think that both Chalmers and Dennett are really looking at two sides of the same coin. They are both looking for proof of whether a zombie or a robot experiences consciousness, that feeling of understanding something. Both agree that the robot could say that he is feeling, understanding, and experiencing consciousness. Chalmers says that there is no way to prove that the robot actually feels, and that he is not just following a formal set of rules dictating the appropriate response. Therefore, the robot does not feel. On the other hand, Dennett says that there is no way to prove that the robot is just using a set of formal rules, and that he isn’t actually feeling. Therefore, the robot must feel. In both situations, these scientists are taking a leap from empirical facts (that the robot says he feels) in order to make their own conclusions. There is no way to know which position is actually right without understanding consciousness more than we currently do. We need to find a way to show the experience of consciousness outside of just saying what we feel. This of course brings us full circle back to the Hard Problem.

    If I have overextended Chalmers’ position and he is actually just arguing that there is no way to know if the zombie is experiencing consciousness, then we are actually on the same page. (Or actually I am on his page, since these are just his conclusions to which I eventually arrived.) Instead, my argument above applies more to any zealots who just “know” that only humans can feel.

  2. 1. It is not clear whether computation alone could pass T2. Searle just shows that even if it could, it would not generate (felt) understanding.

    2. Yes, feelers can talk about feeling. But talking about feeling certainly doesn't "prove" feeling is being felt.

    3. The hard problem is not knowing whether someone else feels: That's the other-minds problem.

    4. Knowing that I feel is Descartes' Cogito. It proves (for me) that there is feeling.

    5. The hard problem is explaining how and why organisms feel rather than just do. (You are mixing it up with the other-minds problem, Jessica.)

    6. "Feeling" includes everything that's felt, not just emotions like love. It includes what it feels like to see green, to hear an oboe, to smell lilacs, to touch a rough surface, to feel hungry, to make a movement, to will a movement, to believe it's Thursday, to know you are feeling a toothache (but not that you have a tooth), to understand Chinese, to mean "2+2=4."

    7. What is impossible for just computation (T2) is to connect symbols with their referents in the world: for that you need T3 grounding.

    8. But grounding does not guarantee feeling (or meaning) either: If zombies are possible, a T3 robot could be a zombie.

    9. But even if there cannot be zombies, the hard problem is to explain how and why not.

    10. It is not feelings that are grounded, but words. Whether you are referring to the taste of apples or the feeling of anger, the word picks out the referent but the hard problem is to explain how and why that (or anything) is felt.

    11. If computation could pass T2, then that includes being able to talk about anything, including about feeling (and about the other-minds problem, and about the hard problem).

    12. T3 has to be able to talk about feeling Turing indistinguishably too -- even if there can be zombies and there can be T3 robots, and T3 robots can be zombies.

    13. With other human beings we are safe to assume that they, too, feel. With T3 robots, it's not quite that safe (but safe enough).

    14. But even if a god comes and guarantees that a T3 robot feels, that still does not solve the hard problem (of explaining how and why it feels, rather than just does.)

    1. I am a bit confused about feeling--do we need to experience something to feel?

      Say you have a perfect functional robot and you feed into it all of your memories (this includes all the feelings that accompany the 1st-person narrative). The robot relives your life, essentially, as you. Before this implantation of memories, this robot obviously does not FEEL. It only responds functionally. But how about afterwards? Will this robot be able to feel based on these memories? Would the robot be able to experience the memories the same way you did, with feelings?

    2. Week #10 requires more reflection (and introspection) than any of the other weeks... Part I

      Jocelyn, "experience" is a weasel word: It's actually synonymous with feel. (Can you experience something without feeling it? Can you feel something without experiencing it? Both are really just asking: Can you feel something without feeling it?)

      [By the way, this also opens up the (incoherent) Freudian Pandora's Box of the "unconscious mind," which boils down to the incoherent but widespread notion of "unfelt feelings." Think about it...]

      If you have a robot into which you can "feed" ("implant") feelings, you've already begged the question of whether the robot feels (or, rather, you've pre-answered it, positively, by pre-supposing it, as in Spielberg's AI!).

      Time is a very subtle matter here: You can only feel what you are feeling at the instant (Funes-style). If you are remembering a feeling, you are not feeling it, now; what you are feeling now is what it feels like to remember it. But a memory, like any sensation (feeling) can be an illusion or a hallucination. When I feel a toothache, that does not guarantee that something's wrong with my tooth. Maybe it's phantom-tooth pain or referred pain from my jaw. Maybe I have no tooth, or jaw, or head, or body. Maybe there's no outside world. Maybe everything's a hallucination. The only thing that's guaranteed to be true right now (Descartes' Cogito) is that it feels like a toothache right now. Well, ditto for a memory of something I felt yesterday (or a second ago): It feels (now) like I once felt that; but maybe that too is an illusion, and I am mis-remembering. Feeling is always in the here-and-now, even if it feels like it's about yesterday. Déjà vu...

      Try thinking about this again, without the weasel words, and facing squarely the fact that the only sure thing about a feeling is what it feels like, now, not what it is "about" -- whether in the world "outside" you now, or in the past.

      So the only thing you've really said about your hypothetical "feeling-implanted" robot is that it does feel -- because you said so. That's what you pre-supposed.

      By the way, the "1st-person states" and the "1st person narrative" are just more weasel words. "1st-person state" just means felt state. And "unfelt states" are just unfelt states. A rock is in an unfelt state, not in a "3rd person state" (who's the 3rd person -- or the 2nd -- or the 1st?). And if my body is in an unfelt state -- say, I am in delta sleep, hence not there at all -- it's a state of my body, not my "mind," because neither I nor anyone else is feeling it, nor feeling anything.

    3. Week #10 requires more reflection (and introspection) than any of the other weeks... Part II

      So "1st person state" is redundant. The "person" is the feeler. And the only feelings a feeler can feel are its own.

      And the "narrative" is simply the verbal description of that felt state. A second or third person can understand that narrative if their own words are grounded in the same (or similar enough) referents and they feel the same (or similar enough). But whether someone else really feels the same -- or feels anything at all -- cannot be known for sure, because of the other-minds problem. The closest you can get is T3 indistinguishability...

      [And "the machinery experienced a jolt at 4:45 am" is clearly just a figure of speech for "the machinery was jolted at 4:45 am." Nothing was "experienced" ( = felt ) by anyone (by any person or sentient entity, whether 1st or 3rd...)]

      Optional reflections for the intrepid: The notion of 1st-person vs 3rd-person states is related to Locke's notion of primary and secondary qualities -- and it is incoherent for related reasons. Locke thought (1) some feelings (what square or round things look like) "resembled" the causes in which they were grounded (their referents), because some things really are square-shaped or circle-shaped, whereas (2) other feelings (like what red looks like or what salty tastes like) do not "resemble" the causes in which they were grounded (because redness is just in the eye of the beholder). Locke called these (1) primary and (2) secondary qualities.

      But a little reflection will show that feelings ("1st person") and their "external" causes ("3rd person") are actually incommensurable, always. Who's to say that the feeling of "squareness" resembles "squareness" whereas the feeling of "redness" does not resemble "redness"? They are both just the causal outcomes of inputs to T3, and the other-minds problem prevents their "similarities" from being "measured" on the same scale, comparing feelings with their external correlates and causes. The only thing that links them is a "narrative," and there's no "Searle's Periscope" to test whether there is any resemblance, or even what "resemblance" would mean -- other than T3 grounding itself.

      To put it another way: all feelings are "secondary" qualities, and the causes in which they are grounded are "primary" qualities. The "hard problem" is hence explaining how and why there are secondary qualities at all...

    4. Right now I feel that I don't feel anything at all.

      It seems that the feeling is there--I feel that I don't feel. But then I hit a contradiction. How can I feel that I don't feel? How do I know that I do feel? There's nothing empirical about feeling, so why should I believe that it even exists? Especially if there's no way I know if anyone else is feeling as I do, or for that matter feeling at all.

      It feels like no matter how I approach a counter to the presence of "feeling", I go in a circular argument. If I question the existence of feeling, then I obviously have to be feeling it--bringing us back to the hard problem of why and how. Is there even a way around it?

    5. What Does It Feel Like To Feel?

      Jocelyn, nothing circular at all. You know what it feels like to see red. It feels different from seeing green. Well, when you're awake, every conscious Funes-instant feels like something. (Not seeing anything at all feels like something too: it feels like something to be blind.)

      But there is something deeper here, and it's related to that example of the category "Laylek" (positive evidence only) that I pointed out a few times in class. If everything you sample is in the category Laylek, and nothing is not in the category, then maybe it's not really a category at all, or at least not a "normal" category. A category necessarily has members and non-members. That's how you learn (by trial-and-error induction, implicitly or explicitly, or by instruction) the features and rules that distinguish the members from the non-members.

      And that is definitely a problem for the category "what it feels like to feel": Everything you feel is a member of the category "what it feels like to feel." But it is impossible to feel what it feels like to not-feel: And I really do mean not feel anything at all, not just not feel this but feel that (e.g., red vs green).

      So feeling is an "uncomplemented category." Everything is a member; there exist no non-members (complement). So, rather than circularity, the problem you are having is with how to tell the difference between what it feels like to feel (something) and what it feels like not to feel (anything at all). But it doesn't feel like anything to not feel anything at all. So although that doesn't make the category "feeling" empty, it does make it problematic, perhaps paradoxical (like the self-denial paradox -- "This sentence is false" -- which is false if it's true and true if it's false. Or perhaps more like the "ungrounded" sentence -- "This sentence is true" -- which is true if it's true and false if it's false -- but which is it?)...
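      (An aside for the computationally-minded, purely illustrative -- the code and its examples below are my own invention, not anything from the readings: a rule-learner inducing a category by trial and error can only converge on the features that distinguish members from non-members if it ever samples non-members. For an uncomplemented category, the degenerate rule "everything is a member" can never be falsified.)

```python
# Toy sketch of category learning by induction over positive and
# negative evidence. All names and examples here are hypothetical.

def consistent(rule, members, non_members):
    """A candidate rule fits the evidence iff it accepts every
    sampled member and rejects every sampled non-member."""
    return (all(rule(x) for x in members)
            and not any(rule(x) for x in non_members))

accept_everything = lambda x: True   # the degenerate, vacuous rule
accept_even = lambda x: x % 2 == 0   # a substantive rule

# A normal category ("even numbers") has members AND non-members,
# so the evidence can falsify the vacuous rule:
assert consistent(accept_even, [2, 4, 6], [1, 3, 5])
assert not consistent(accept_everything, [2, 4, 6], [1, 3, 5])

# An uncomplemented category: everything sampled is a member.
# Nothing can ever rule out "everything is a member", so no
# distinguishing features can be induced:
assert consistent(accept_everything, [2, 4, 6, 1, 3, 5], [])
```

      Like the category "feeling," a positive-only sample leaves the learner with no boundary to find.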

      Another reason the hard problem is hard. (But there are many other reasons, most of them having to do with causality and the causal role of feeling -- especially explaining it.)

      By the way, "Laylek" (in Hungarian Lélek) is the Hungarian word for "soul" or "spirit." And of course feeling, and the capacity to feel, was all that soul or spirit (or mind or consciousness, or any of the other weasel words) ever really meant. So, as an uncomplemented category, it was bound to cause problems...

      If this hasn't already scared you off, have a look at this:

      Harnad, S. (1987) Uncomplemented Categories, or, What is it Like to be a Bachelor? Presidential Address: Society for Philosophy and Psychology.

    6. Thank you for pointing out that I indeed did confuse the Hard Problem with the Other-Minds Problem.

      I still don't think either Dennett or Chalmers is getting to the point of the Hard Problem though. Although reverse engineering has been very helpful in the past in helping us understand how and why something (I say something because we've used this technique not only for humans) does what it does, it seems to me that an impasse has been reached. There are two solutions: give up (be it from frustration, or the acceptance/belief that we will never know how and why we feel), or try to devise a novel technique for solving this problem.

      Dennett and Chalmers seem to go with option C) focus on other things, like a he said/she said argument about feeling. Interesting stuff as well, but not only does it not get to the root of the problem, but I argue that they have reached an impasse as well.

  3. “Heterophenomenology is nothing but good old 3rd person scientific method applied to the particular phenomena of human (and animal) consciousness”

    I am slightly confused about heterophenomenology. During week 4, we talked about how looking at correlates between brain activity and what we do simply tells us that a specific part of the brain is active when we do something. Looking at these correlates does not tell us anything about how we do what we do. I see a lot of similarities in heterophenomenology.

    What I gathered from this reading is that heterophenomenology measures “every blush, hesitation, and frown, as well as all the covert, internal reactions and activities that can be detected”. Please correct me if I’m wrong, but I think heterophenomenology simply confirms these things exist, but doesn’t help us study consciousness more deeply.

    For example, knowing that specific parts of the brain show more activity when doing math problems simply shows us that and nothing more. Having this knowledge does not help us understand how we actually solve these math problems. Similarly, noting down when people blush won’t help us understand how we blush or what it feels like to do so; it simply tells us that we blush under certain circumstances.

    Perhaps I misinterpreted something or missed something in the reading, but I am curious to know how heterophenomenology helps us understand how we feel.

    1. You are perfectly right: Heterophenomenology is just mental meteorology (weather forecasting): Predicting mental states from correlations. If it includes T3 and T4, it is also predicting and explaining all doings and doing capacity (which is a lot more than just heterophenomenology). But predicting when and what we feel is not the same as explaining how or why.

    2. “We construct therefrom the subject’s heterophenomenological world. We move, that is, from raw data to interpreted data: a catalogue of the subjects’ convictions, beliefs, attitudes, emotional reactions…but then we adopt a special move, which distinguishes heterophenomenology from the normal interpersonal stance: the subjects’ beliefs (etc.) are all bracketed for neutrality” (p. 2)

      I’d like to add my thoughts to Reginald’s comment that heterophenomenology doesn’t actually tell us about the how or why of cognitive processes. I agree; it seems that heterophenomenology simply paints a picture of someone’s mental “landscape”, while completely failing to tackle the bigger task of explaining it. Dennett seems to promote this approach as an objective way to get information about the mind. He proposes it as an alternative to mere introspection, which we’ve repeatedly determined is not enough to explain anything. Instead of going with people’s homunculus-driven accounts of their mental experiences, using metaphors and broad language, heterophenomenology is a way to measure first-person experience - supposedly. From what I can tell, Dennett isn’t claiming that heterophenomenology will do anything more than this. He seems to propose it as a first step towards eventual explanations. It maintains “neutrality” as he claims above, which introspection certainly does not.

      It is certainly more objective than introspection; the question in my mind is: can heterophenomenology really measure everything (and then some) that introspection can reach? On page 3, Dennett addresses this question, saying that heterophenomenology can in fact reveal more about our mental processes than introspection alone. He provides examples of experiments in which subjects’ qualitative experiences do not tell the whole story of what was really going on. This is his way of showing that heterophenomenology is more objective and informative than introspection. I have some trouble agreeing with this. Even though subjective reports can miss things and distort reality, it still seems more important to study this subjective experience for what it is than to consult the “neutral” data. After all, the hard problem is all about how we feel. It might be beside the point to study what it feels like side-by-side with what is actually going on.

    3. Lila, "heterophenomenology" is just introspection plus T2 + T3 + T4.

  4. I would have to admit that this is the paper that, so far, has given me the strongest reaction.

    I have difficulty agreeing with Dennett’s arguments; he articulates his heterophenomenology approach extensively (which he claims to be neutral and objective), while avoiding operational definitions of the very terms that he employs.

    What is a belief? How “scientific” can any method be if one cannot even define what a belief is (which is probably the most recurrent concept in his heterophenomenology)? Moreover, I cannot understand why Dennett separates beliefs into “true beliefs” and “false beliefs”. I could argue that every subjective experience is “true”; this is your own current conscious experience (and not someone else’s). Although there might be some “false beliefs” (though I don’t see how they are any different from illusions and biases of perception), they are as relevant and as much “part of one’s world” as true beliefs.

    (I might be wrong on my last sentence. In fact, I am not entirely sure why Dennett is trying to separate beliefs into “true” and “false”. My guess is that one will gain more neutrality by doing so, but this does not explain much.)

    Again: what is neutrality? What does it mean for our subjective beliefs to be neutral? Wouldn’t “bracketing our beliefs for neutrality” rob the very essence of our subjective experience?

    Again part II: What are “consciousness” and “internal states”? In his zombie example, Dennett claims that one can have internal states without conscious contents (he makes it all the more confusing when he adds that they might be “pseudo-conscious contents”).

    Thus, Dennett seems to gratify his own heterophenomenology proposition by arguing that Chalmers’s approach has “no methods, no data, no results, no future, no promise”. From what he provided in his text, I do not see how heterophenomenology is any better at answering the hard problem.

    PS: I also don’t see how Dennett’s camp A gains more evidence by only providing arguments against camp B. What if there is a camp C, a camp D, etc.? I don’t believe that the argument is that dichotomous.

    1. Beliefs About Feelings

      A belief is something you take to be true. I believe I am typing right now. My belief is (probably) true. I may also have beliefs that are false (but I don't know it). If I believed I was reading a book rather than typing right now, I would be wrong. If I knew it was false, it would not be a belief.

      It feels like something to believe something. It feels like something to believe it's Friday today. It would feel different to believe it was Saturday. Either belief could be true or false.

      Dennett thinks that I believe that I feel, and that that belief is false (or does not mean what it is usually taken to mean).

      I think it feels like something to believe something, and hence the belief that my belief that I feel is false is in fact self-contradictory (and violates Descartes' Cogito).

      It is not true that every belief is true, but it is true that every belief feels true (to the believer). Otherwise it is not a belief, but something else (maybe a lie or a pretence).

      There is nothing especially neutral about heterophenomenology, as between felt and unfelt states. To a believer in feeling (as any sensible Cartesian would have to be), "heterophenomenology" is just mental weather-forecasting: predicting feelings from their neural, behavioral and verbal correlates. To a nonbeliever in feelings, all there is is neural, behavioral and verbal correlations with one another. There is nothing else (feeling) of which they are correlates. ("I feel hot" just means "I believe I am hot," and that belief is either true or false; so heterophenomenology is just predicting verbalizations from their neural and behavioral correlates.)

      Both Dennett and Chalmers have the same "evidence" (neural, behavioral and verbal); Chalmers just believes it when told that I am feeling; Dennett thinks it's a false belief. Chalmers has no solution to the hard problem (of explaining how and why organisms feel rather than just do); Dennett thinks there isn't any hard problem, hence no need for a solution, because organisms don't feel, they just do. Or, rather, feeling is just a kind of doing.

      I'm not sure what you think that a Camp C and D might believe. A says we don't feel, so there's no hard problem. B says we do, so there is. What are the other options? Maybe B1 thinks there's a hard problem, but that it can be solved, and B2 thinks there is a hard problem but it cannot be solved. (I am in B2, but with reasons: the falsity of psychokinetic dualism; the sufficiency of the four forces of nature to explain doing (the easy problem); the fact that that leaves no causal degrees of freedom for explaining feeling; and the hunch that feeling will prove to be superfluous in every attempted explanation.)

    2. If someone believes that the hard problem is unsolvable, yet maintains that cognitive beings do feel, is there an alternative way, besides providing a causal explanation, to prove that humans feel? It seems Descartes' Cogito is enough for a person to believe that he/she feels. But, if team A individuals claim that everyone doesn’t feel, wouldn’t they imply that they, themselves, don’t feel, and consequently not believe in Descartes' Cogito?
      I guess a team A individual would argue that Descartes' Cogito would be true for any individual, including the pseudo-conscious zombie replicas of any individual. Thus, the illusory notion of one’s own feelings merely results from “machines” that function in a manner that reproduces behavior in the same way that a human can. Support for this perspective seems to carry over to “machines” with cat-like capacities, insect capacities, and so on. These machines produce behaviors and use them to interact with the world, albeit not in the same way as humans. Although these machines presumably cannot formalize the notion that because they feel they must be able to feel, like I can, wouldn’t they still experience illusory feelings of something in accordance with their functioning biological make-up?

      On the other hand, it seems like a team B member accepts the Cogito, but it simply won’t get them anywhere! They can’t empirically measure feeling in another individual (unless you think that you can via a 3rd-party analysis of one’s subjective expression of their own beliefs, which brings us back to team A), and thus you’re left with the other-minds problem. Moreover, as Prof. Harnad has mentioned, the physical laws of nature do not show us anything about feeling either.
      Because a team B (I guess more specifically a B2) member accepts the Cogito, but has absolutely no way of discovering how feelings arise, it seems like their perspective is based on an introspective hunch. Of course I mean no offense (considering my own opinion has fluctuated between A, B and all of their subcategories on multiple occasions since the beginning of the course), but I guess I wonder how a B2 is any different from an A member, besides the fact that they have convinced themselves of the non-illusory nature of their own feelings? Why can’t the notion of one’s own feelings be an epiphenomenon of a machine’s processing of physical events? Why bother explaining its causative nature when the physical substance is all we will ever have access to?

      I guess a conclusive answer is unattainable, but so is the answer to whether God exists or not. Right? It seems like a question on the nature of feeling is literally just as profound.

    3. Beliefs

      Adam, don't ask me about what the A Team believes because to me they are doubting the undoubtable if they deny the Cogito!

      Besides, for the A Team everyone is already a zombie: They believe we all behave and talk as if there were such a thing as "feeling," but there's no such thing, other than behaving and talking as if there were. That's not an "illusion" (because it feels like something to have an illusion). It's just an empty word, like "phlogiston" -- or my example of an "uncomplemented" (no-negative-evidence) category: Laylek/Lélek (which in Hungarian happens to mean "soul" or "spirit"...)

      The A Team doesn't believe there are feelings, so for them there is no hard problem to solve. The B Team does believe there are feelings, and hence that there is a hard problem to solve. B1 (the optimists) think it can be solved; B2 (the pessimist) thinks it can't (for specific reasons I've listed in other comments).

      Calling feelings an "epiphenomenon" just re-names them; it doesn't explain them. "Illusion" is a weasel-word: It can mean something that feels one way, but reality is another way (like the Mueller-Lyer illusion); or it can be a mistaken belief (like believing there is a "soul"). But, as I noted earlier, it feels like something to believe something (even something that's wrong). So saying that feeling is just a mistaken belief does not make either feeling or the hard problem go away...

      I agree, though, that if there is no hope of ever finding a causal explanation of how and why organisms feel, then there's no point losing much sleep about it. But (1) B2 is just "Stevan says" (so maybe the B1 optimists will turn out to be right) and (2) even if we can never find a causal explanation of how and why organisms feel, no one can deny that it was reasonable to have expected there to be one: feeling, after all, is just another biological trait, isn't it?

      I disagree that the question of (i) whether or not gods exist and the question of (ii) how and why we feel are equally "profound" (or on a par in any other way). After all, there is no evidence at all that gods (or immaterial, immortal souls) exist, whereas every waking moment of every feeling organism's life is evidence that feeling exists... So the (unanswered and perhaps unanswerable) question about how and why organisms feel is surely the most profound question of all (if "profound" means anything at all), whereas the questions about gods and souls are more in the realm of speculative metaphysics (or SciFi).

      I do think, though, that the unanswered and perhaps unanswerable question about how and why we feel -- the hard problem -- is in fact at the root of our belief in immaterial, immortal souls or spirits, as well as in omnipotent, omniscient and omnipresent gods...

  5. I get the sense that Dennett is trying to clearly demarcate a few things:
    The underlying mechanism of feeling
    The feeling itself
    The heterophenomenology of feeling - what I understand to be the enactment of feeling, or the ways we express ourselves, behave or describe feeling. This includes both from our perspective and another person's perspective.
    This last part is the part that Dennett attacks (as Jessica pointed out above). He basically argues that it is not enough to rely on the behaviour or expression of feeling in order to assess that one is feeling. Moreover, what does knowing IF one is feeling do to help with understanding how and why we feel?
    Yet I have trouble with the way Dennett organizes phenomena. There is an assumption that something concrete exists out there that creates feeling. He assumes that there is an 'answer' or 'truth', which exists independently of our enactments, understanding, expression, or 'feeling of feeling'.
    In my mind though, it is impossible to access that, because (and maybe this is something Kant would say) we only have access to a reality that we shape - and so we will never actually be able to access an understanding of 'feeling' that exists in isolation from human conceptions of it. For example, even a neurological basis for an emotion is a conception that we have shaped, and is in no way neutral or 'true'.
    So in asking how and why we feel, can we ever analyze anything other than heterophenomena?

    I am nervous that my use of the word 'feeling' lost the scope of its meaning over the course of this post.

    ReplyDelete
    Replies
    1. Everyone knows what it feels like to feel something -- anything, whether what it feels like to see red, hear thunder, touch a rough surface, move one's arm, feel hungry, tired, hot, angry, etc. etc.. (Some) organisms (mainly animals) feel; rocks (and computers and today's robots) don't.

      Dan Dennett believes that feeling just amounts to whatever we do and say under various conditions (like looking at red surfaces, being present during thunder, etc.) along with the accompanying neural activity. He thinks that's all that feeling is -- not that it's something that accompanies what we do and say (and our neurons do) under those conditions...

      Delete
    2. Does this mean that Dennett would deny that feeling has any causal power? He calls for us to all "leap over the Zombie Hunch" and accept the idea that computation is the only thing happening in the brain. He repeatedly describes how subjective our introspection is and how much of the world we miss when we rely on intuition. Accordingly, it seems like he would dismiss our intuition that our feelings and thoughts have causal power in favor of a mechanical explanation of what is going on in the brain. Is this accurate?

      Delete
    3. Lila, Dan's not the only one who would deny that feelings have causal power. So would I (because I don't believe in psychokinetic dualism)!

      But Dan would also deny that feelings were anything but "beliefs" (internal true-or-false propositions). I certainly don't agree with that.

      And a large part of this course has been about why thinking can't be just computation.

      Delete
  6. Here is Chalmers’ definition of a zombie (his zombie twin):
     
    Molecule for molecule identical to me, and identical in all the low-level properties postulated by a completed physics, but he lacks conscious experience entirely . . .  he is embedded in an identical environment. He will certainly be identical to me functionally; he will be processing the same sort of information, reacting in a similar way to inputs, with his internal configurations being modified appropriately and with indistinguishable behavior resulting.  . . . he will be awake, able to report the contents of his internal states, able to focus attention in various places and so on.  It is just that none of this functioning will be accompanied by any real conscious experience.  There will be no phenomenal feel. There is nothing it is like to be a Zombie. . . 

    It seems to me, based on the course so far and two of the lectures from this section that I've watched, that Dennett is simply playing with semantics here. He is switching up the terminology in order to appear as if he is discussing something else; he is using weasel words. There is an experiential component to being awake. Can the Zombie know that he is awake without having the experience of knowing that he is awake? It strikes me that this is an impossibility. To report on internal states requires awareness of one's internal states, which entails an experiential component.

    And even if that phenomenal nothingness described of Chalmers's Zombie is accurate, that in itself is a feeling, is it not? In the same way that it feels like something to be happy, it feels like something to lack that emotion. Absence equally leads to feeling.

    Later, Dennett states:

    "Chalmers fervently believes he himself is not a zombie. The zombie fervently believes he himself is not a zombie. Chalmers believes he gets his justification from his “direct evidence” of his consciousness. So does the zombie, of course.
                    The zombie has the conviction that he has direct evidence of his own consciousness, and that this direct evidence is his justification for his belief that he is conscious. "

    Unless Dennett is defining these words differently, it seems like further evidence that this Zombie does feel, because it feels like something to believe, regardless of whether the belief is accurate or not, or whatever other point Dennett makes.

    ReplyDelete
    Replies
    1. CogSci and SciFi

      Lots of tricky bits here:

      1. Chalmers is talking about T5, and suggesting that there might be things that are T5-indistinguishable from us that don't feel: zombies. (If you want to know what Chalmers thinks, read the extra readings X1, X2, X3.) What Chalmers is talking about is speculative metaphysics (SciFi): Is a T5 zombie possible? But the "hard problem" is: "How and why are human beings not zombies?" That problem is exactly the same as "How and why do human beings feel?" The possibility of zombies is of no interest. The hard problem remains, whether or not there can be zombies. And if there can't be zombies, then the hard problem is the same as "How and why can't there be zombies?"

      2. You, Andras, seem to be saying that there can't be zombies. But you have not really explained why (and to do so you'd have to solve the hard problem). A zombie can behave as if he is awake or asleep: That's the (SciFi) premise: that he is T5-indistinguishable from us, in all the things he can do, yet he does not feel. Because of the other-minds problem, no one can know for sure whether the zombie can feel (except the zombie himself -- but if he does feel, then he is not a zombie!). The "zombie hypothesis" is simply the SciFi speculation that there could be a T5 that does not feel. And the rejection of the zombie hypothesis is to say there could not be. But as far as I'm concerned, that's neither here nor there, because the hard problem remains unsolved either way (see 1, above, again).

      3. On "phenomenal nothingness," see what I wrote in other replies about "uncomplemented categories" (like Laylek (Lélek): categories that do not and cannot have negative instances. We know what it feels like to not-feel this and instead feel that (to see green is to not-see red). But we definitely haven't the faintest idea of what it feels like to not-feel anything at all, because that is self-contradictory.

      4. You are right that it feels like something to believe, Andras. So if the zombie believed anything (rather than just behaved as if it believed) then it would not be a zombie.

      (All these questions are much more interesting and substantive at the CogSci T3 level, rather than the SciFi T5 level.)

      Delete
  7. The article about the ethical implications of cognitive science sort of explained to me something that has been staring me in the face all semester: animals feel, and so it is cruel to mistreat them, especially if we do not need to and if we are aware of what we are doing. It seems that in providing the scientific evidence that animals feel like we do (a simple argument in the end), you revoke any excuse the reader may have had for participating in the mistreatment of animals.
    I have a couple questions for you Dr. Harnad. Do you under no circumstances eat (or use) animal products? Or if you deem certain raising of animals to be ethical, do you make an exception? To you, what constitutes 'hurting'? Is it anything that is not necessary for one's survival?

    ReplyDelete
    Replies
    1. On Feeling and Hurting

      Stephanie, I was a (hypocrite) vegetarian from age 17 till a few years ago, when I became vegan. I now no longer (knowingly) eat or use any animal products.

      Sometimes hurting is necessary for survival: A victim sometimes has to kill in self-defence. A carnivorous predator cannot survive without killing its prey. The prey cannot survive unless it fights back against its predator. These are conflicts of vital (life-or-death) interest. But they don't cover even 2% of the indescribable horror that our species imposes on innocent, helpless, feeling animals completely needlessly -- just for taste, or some other pleasure, or habit -- every second of every minute of every day, all over the planet.

      Humans are not obligate carnivores. We are opportunistic omnivores who can live completely healthy lives without consuming any animal products at all.

      The fact that we continue the needless horrors, and at a mountingly monstrous scale, is by far the greatest moral shame on our species, and the unrelenting agony of every other feeling species on the planet.

      There's no horror that we inflict on animals that we don't inflict on members of our own species as well, but the profound difference (apart from the numbers) is that we have adopted laws making it illegal to do such things to people, and most people would never do them, or condone or support or sustain them -- whereas almost all of us still inflict them on other feeling species.

      That's not just a hard problem but an eternal Treblinka.

            “In his thoughts, Herman spoke a eulogy for the mouse who had shared a portion of her life with him and who, because of him, had left this earth. 'What do they know -- all these scholars, all these philosophers, all the leaders of the world -- about such as you? They have convinced themselves that man, the worst transgressor of all the species, is the crown of creation. All other creatures were created merely to provide him with food, pelts, to be tormented, exterminated. In relation to them, all people are Nazis; for the animals it is an eternal Treblinka'.”
      ― Isaac Bashevis Singer, "The Letter Writer"

      Bekoff, Marc & Harnad, Stevan (2015) Doing the Right Thing: An Interview With Stevan Harnad. Psychology Today Blog. January 2015.

      Desaulniers, Elise (2013) I Am Ashamed to Have Been a Vegetarian for 50 Years. Huffington Post Living. May 30 2013.

      Harnad, Stevan (2013) Luxe, nécessité, souffrance: Pourquoi je ne suis pas carnivore. Québec humaniste 8(1): 10-13

      Singer, I. B. (1989). The Letter Writer from the Seance and Other Stories. Jon Wynne-Tyson, The Extended Circle: A commonplace Book of Animal Rights, 335.

      Delete
  8. It involves extracting and purifying texts from (apparently) speaking subjects, and using those texts to generate a theorist’s fiction, the subject’s heterophenomenological world. This fictional world is populated with all the images, events, sounds, smells, hunches, presentiments, and feelings that the subject (apparently) sincerely believes to exist in his or her (or its) stream of consciousness. Maximally extended, it is a neutral portrayal of exactly what it is like to be that subject–in the subject’s own terms, given the best interpretation we can muster. . . . . People undoubtedly do believe that they have mental images, pains, perceptual experiences, and all the rest, and these facts–the facts about what people believe, and report when they express their beliefs–are phenomena any scientific theory of the mind must account for.

    An interesting experiment in cognitive neuroscience could help to discover whether this is possible. I firmly want to imagine a world in which computational psychology can actually be helpful to disciplines such as psychiatry. For example, to see whether heterophenomenological worlds exist for different people, it would be interesting to study this definitively. In an experiment, I would pair people who do not believe in this type of ideology with people who do believe it can exist. From this point, I would want to test the subjects both individually and together, using some type of neuroimaging; perhaps rTMS or fMRI would be the most useful. In the test I would have them "smell" things and tell them the wrong information about "where" it came from; listen to noises and create the same kind of fallacy; and again with each of the sensorimotor ideas implicated in this "fictional world". If I were capable of eliciting similar neurochemical changes with the fallacy in all conditions, I might want to conclude that this fictional world does exist in certain mental states.

    ReplyDelete
    Replies
    1. Zombies and Fiction

      Robyn, it feels like something to believe something (e.g., "I feel tired"). It also feels like something to "interpret" what I feel as something ("I am tired"), even when my body's not really tired and I'm just looking for an excuse to stop working. (In other words, I can put a fictional interpretation on a real feeling.) But it is just playing with words to say that it is just a matter of "belief" or of "interpretation" that it feels like something to feel tired: that the feeling itself is "fiction."

      Understanding language is of course a matter of interpretation. But it feels like something to interpret "the cat is on the mat" as meaning the proposition that the cat is on the mat.

      That's what's missing for Searle in the Chinese Room (T2). He does not feel that 貓在墊子上 (māo zài diànzi shàng) means that the cat is on the mat (or means anything at all) -- and not just because the symbols are ungrounded, but because they mean nothing to him; they feel like just squiggles and squoggles. In T3, the symbols would be grounded T3-indistinguishably in their referent categories. But -- if T3 zombies were possible and it didn't feel like anything to T3 to say, hear, mean and understand them -- then symbols would still not mean anything to T3: nothing would. T3 would just be a zombie with the capability of passing T3 (i.e., doing, not feeling).

      Stevan says: "I don't believe T3 zombies are possible. If a robot can pass T3, it feels -- but because of the other-minds problem, and because I cannot solve the hard problem, I cannot explain how or why."

      Actually, with your proposed experiment, Robyn, you would just be testing Virtual Reality (VR): simulated inputs to the real senses of the real brain of a really feeling T5 (human being). If your VR was good, you'd get the same behavior, report and neuroimagery (or even neurochemistry, if you could measure it) with VR sensory input as with real-world sensory input. And I wouldn't be surprised if, when you gave your subjects an ambiguous smell (whether a real smell or a VR smell) and told them that it came from an haute cuisine dish or from a latrine, you got a different pattern of neural activity each way, one like that of smelling food and the other like that of smelling a toilet (especially in subjects who were highly suggestible hypnotically).

      But so what? It would still feel like something in each case, and that's all that's at issue. The feeling itself would never become a fiction; the input would just be perceived (felt) differently, when coupled ("correlated") with different propositions (which also feel different to understand).

      And even if the feeling itself were "reinterpreted," it would no longer be the very same feeling: If I go to a doctor and say I feel as if I have had a heart attack and he says I am just feeling the symptoms of indigestion (and I believe him), the same symptoms begin to feel different. (But it still felt like a heart attack before, just like it felt like I was seeing an oasis before, even after I am told it was just a mirage or a hallucination.)

      (This, by the way, is not "heterophenomenology": Right now we're doing real phenomenology!)

      Delete
  9. This week’s readings have been both the most interesting and the most frustrating so far. The old saying "the more you know, the less you know" seems truer than ever this time around, and I must be doing pretty well, because I now know I have no idea how even to approach the hard question.

    So in lieu of offering anything substantial, I’ll just critique others’ attempts at doing so. Hopefully this doesn’t come across as parroting Dr. Harnad’s ideas; I mostly agree with them but am trying to formulate this from my own perspective.

    “Kant: How is it possible for something even to be a thought (of mine)? What are the conditions for the possibility of experience (veridical or illusory) at all?”



    So Kant, if I am interpreting this correctly, is asking: “What makes consciousness (experience) possible?” A thought requires a thinker, and, to avoid semantics, let’s think of a thinker in the Cartesian sense, i.e., something that experiences things (feels). With this clarified, we can interpret Kant’s question as the Hard Problem: “How do we feel? Why do we feel?”

    Dennett thinks that Turing is replying to Kant’s question by turning it into: “Can we build a machine that thinks?” When really all Turing wants to know is whether a machine can be made to do what we can do. Dennett presupposes that a good enough doer is a feeler, but that’s just avoiding the hard question. Of course Cog Sci should focus on finding out how to do everything we do, but that doesn’t mean that doing so will solve the mystery of consciousness. In fact, Turing makes this point explicitly in his “Imitation Game” paper, stating:

    “I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.”

    Dennett continues on to his approach for studying consciousness, heterophenomenology, which is admittedly as good an objective approach as we can get. Heterophenomenology, which Harnad correctly characterizes as correlating people’s experience with their physical states, IS something cognitive science should focus on, but it still doesn’t explain how or why those people are having experiences in the first place.




    Another thing I wanted to talk about is the zombie argument, as it’s felt wrong since I first heard it in a philosophy class last semester, but now I understand why.

    You can summarize it as:

    1. Zombies are physically identical to us, except without feeling.
    2. Zombies are conceivable.
    3. Conceivable things are possible.
    4. It is possible to separate mind from matter; therefore, dualism.

    What’s so egregious about this argument for dualism is that you have to presuppose dualism to even get it off the ground! Zombies are only conceivable if you think that minds aren’t a property of their physical substrates. It’s just as inconceivable to have a physically identical non-conscious twin as it is to have water that isn’t wet. Unless you already believe that water and wetness are separable, the premise is paradoxical.

    ReplyDelete
    Replies
    1. Dan Dennett is a (sophisticated) behaviorist: He thinks the brain is only a mechanism for doing. Nothing else to explain.

      No point wracking one's mind about whether zombies are or are not possible. Either way, they do not solve the hard problem. If zombies are not possible, the hard problem is: how and why not?

      If zombies are possible, then the hard problem is: how and why aren't we zombies?

      So forget about zombies and just ponder how and why organisms feel rather than just do (which is all Dan thinks they do anyway...)

      Delete
    2. Hi Nick,

      I don't know, I think zombies are conceivable.

      If you can think about it, it's possible;
      and if you can think of a person like me, except there's nothing between my ears but space (maybe I wear sunglasses),
      then you've just thought of mind and matter as two different things.

      The cause of my behavior, the me we all know and love, is my brain. But you can't see my brain, and even if you could, would you be able to say I am a property of my brain, like wetness is a property of water?

      It's super trippy because water is wet because of what it is, but I don't feel like me because of my brain. I think Fodor's right, you know, we're all just dualists. What's the alternative?

      To be aware of my brain and not myself?

      Delete
    3. Conceiving...

      Alex, if I don't know (or cannot understand) the proof, I can conceive that we could square the circle, trisect an angle, and prove any theorem in arithmetic.

      If I don't know (or understand) quantum mechanics or relativity, I can conceive that for every object, of any size, we can measure its position and momentum exactly, and that things can move faster than the speed of light (and that I can travel into the past and shoot my own great-great-great grandmother before she conceives my great-great grandmother).

      So much for things being possible if they are conceivable. (I suppose anything is conceivable if I can re-write all the laws of nature in any sci-fi way I please.)

      But I don't think this casts any light whatsoever on whether or not there can really be zombies, in this world -- besides which, it doesn't matter for the hard problem one way or the other (and, I suppose, I can "conceive" that the hard problem is solvable, or not solvable, whether or not it really is...)

      Delete
  10. “Heterophenomenology is nothing but good old 3rd-person scientific method applied to the particular phenomena of human consciousness. “
    Dennett’s theory of heterophenomenology is essentially the belief that there can be a 3rd-person scientific approach to studying consciousness through the use of personal verbal utterances. These utterances can be used as self-reports in order to determine the mental states of the individual. This seems a very vague way of identifying mental states, and it introduces a lot of unreliability. For example, verbal utterances are not always accurate depictions of a person’s internal beliefs; there is so much variability in verbal utterances that I do not see how Dennett can put so much confidence in them. People vary their tones and the way they describe things, so it seems almost impossible to equate an internal mental state to a verbal description.

    “That’s to say, no purely third-person description of brain processes and behaviour will express precisely the data we want to explain, though they may play a central role in the explanation.”
    This point reminds me of the unreliability of the scientific method. Nothing is purely objective because there are always subjective experiences in place. An experimenter will always bring in his or her own subjectivity in order to do scientific measurements or analyses. Not only is objectivity never attainable in scientific experiments, it seems that time is always a factor. We are never able to re-experience something and are therefore never completely sure of that phenomenon after it has been experienced. For example, reconsolidation is a problem for memory processes – any time an individual remembers something they change the memory. Therefore it seems as though heterophenomenology will never be able to explain the difference between someone verbally explaining an experience that has been reconsolidated and the experience as it was when it occurred. Consciousness is instantaneous and so it seems unreasonable to expect someone to be able to explain it after it has already occurred.

    ReplyDelete
    Replies
    1. Besides verbal reports, Dan's heterophenomenology would also include all doings and doing capacities (T2 and T3) and all neural correlates (T4 and T5). The problem is not the unreliability of verbal reports but the fact that none of this can explain the fact that it feels like something to see red, over and above looking at it, saying it's red, and having whatever the accompanying brain activity might be...

      Delete
    2. Someone may have already asked this, but: if all heterophenomenology has to probe consciousness is "personal verbal utterances," how different is it from the T2 test?

      Delete
    3. Alex, "heterophenomenology" includes all verbal behavior (T2), all other behavioral capacity (T3), and all measurable neural function (T4, T5).

      Delete
  11. "...the unanswered and perhaps unanswerable question about how and why we feel...is in fact at the root of our belief... in omnipotent, omniscient and omnipresent gods..."

    I'd love to hear you unpack that one.

    ReplyDelete
    Replies
    1. Omnipotent, omniscient gods and immortal, immaterial spirits

      Alex, I'd love to unpack it for you, but (to borrow a page from Fermat) there's not enough time, or room on this page. I'll hum a few bars, but ask me again in class:

      1. Gods: zero evidence (other than that when we were very small, we all knew them: our parents).

      2. Spirits: zero evidence (other than the Cogito -- for the fact that I feel -- and T3 -- for the probability that others like me do too).

      It's perfectly natural to ask: How and why is there feeling? But, because of the hard problem, the answer is a resounding "Dunno!"

      So what flows in to fill this blank space instead is Just-So Stories of eternal souls, separable from their bodies, all created and controlled by more powerful super-souls, wiser than them, the gods.

      Unless you really have a taste for paradox (or psychopathy)...

      ...in which case you throw in an extra bit about the gods being all powerful, and all-good, but humans having "free will," so they can be bad, if they feel like it (but that's not the gods' fault) and they'll be punished in the afterlife...

      That should do for a start. I could have sung the Just-So-Stories for ancestor worship, reincarnation, or what-have-you, but I thought I'd stick to the tales from closer to home.

      I"ll add only that -- for some bizarre reason that I really can't fathom -- a lot of people seem to think that these Just-So Stories were somehow optimized by reducing the number of gods to just one, the local tribal deity, and then began to tout monotheism as some sort of virtue in and of itself. Go figure...

      Homage to William of Ockham

      our forebears had it right
      the fewer gods the better
      monody just undershot
      the optimum by
      one

      Delete
  12. “Now faced with these failures of overlap–people who believe they are conscious of more than is in fact going on in them, and people who do not believe they are conscious of things that are in fact going on in them–heterophenomenology maintains a nice neutrality: it characterizes their beliefs, their heterophenomenological world, without passing judgment, and then investigates to see what could explain the existence of those beliefs.”

    Yes, a nice neutrality between “you don’t know what you are talking about” (false positive) and “you really have no idea what’s going on” (false negative). The arrogance of heterophenomenology is that somehow the scientist always knows more about my own experience than I do. But how is science going to learn about consciousness if it thinks it already knows more about it than those who have consciousness?

    To illustrate this tension, take the Checker Shadow Illusion.
    (http://ngcm.soton.ac.uk/blog/images/seminar/2014-11-13-optical-illusion.jpeg).
    This is probably an example where Dennett would look at my phenomenological report “A is darker than B” and tell me:
    “Ha, you are wrong about that. A and B are the same colour.
    -- But Dan, they are clearly different. Why do you say they are the same colour?
    -- Look, if I fold the image and put A next to B, you can see they are the same.
    -- But Dan, the image of the experiment was not the folded image. You cannot change the conditions and tell me I am wrong. Of course, under these new conditions, they are clearly the same. But a minute ago they were not.
    -- Fair enough, but bringing A next to B shows you that both parts of the image actually reflect the same amount of light with the same wavelength. Therefore, even when the image is not folded, the colour is the same.
    -- Excuse me but what’s that stuff about wavelength?
    -- Well, light is a wave and the colour of the light varies with the wavelength. If the wavelength is the same, then the colour is the same.
    -- No, colour is something I feel. Wavelength is an objective measure that is highly, but not perfectly, correlated with that feeling. So well correlated in fact that you forget it’s just a correlation and in your experiments, you define colour in terms of wavelengths, which allows you to say that I am under an illusion. The one-to-one relationship between wavelength and colour is a theory, and the Checker Shadow “Illusion” falsifies that theory.”

    ReplyDelete
    Replies
    1. Keven, very apt send-up. Bravo!

      Delete
    2. Your comment is interesting, Keven. It would be wrong to tell someone that they don't feel that A and B are different. However, I still don't get why this disqualifies the idea that there might be a difference between their belief/feeling and what is actually happening to their body. I think that there are some perceptions/experiences (I mean the body's reception of stimuli, but I don't know how to say it because both have previously been equated to feeling) that are not conscious, not felt; that things influence our bodies without us being aware of them, or that we may not be aware of the extent of their influence. I don't know, am I wrong?

      Maybe my confusion is due to the fact that I can't understand 'belief of being conscious' as anything other than 'feeling X'. I don't understand why Dennett would argue with someone that they are conscious of something they claim not to be conscious of, rather than arguing that there are things happening in them of which they could (or should) be conscious.

      Delete
    3. Ok, I get it. Dennett simply denies that the subjects are feeling what they say they are feeling unless the elements causing that feeling are present and exactly corresponding...

      Delete
  13. "The expectation of your expectation is the projection."

    That's how D. Dennett solves the problem. Let me back up.
    D. Dennett says we've got a problem: we experience a world, but our brain is not the world. You cannot point to the brain when you point at anything you experience.

    He says we project the experience onto the world. Feeling is a process of expecting and confirming. This seems like idealism, but it's somehow the opposite, realism. Anyway, an example:

    Color's not in the world. We weigh the activation of cells in our retina to get color. There's also infrared and ultraviolet light we cannot see, because we don't have the kind of apparatus that affords us that perception. All of our perception results from the tidbits of reality that we can pick up. Fine. We know we have feelers.

    But it's more than just having feelers. The feelings are the effect caused by your interpreting of the world. You're "afforded" a certain set of interpretations; things are given; you expect certain things from the world. You pick up a stimulus, and then your brain judges that stimulus. Our feelings are just confirmations of these judgments. Feelings do not cause judgments (this is the whole inversion thing, which I'm still fuzzy on); judgments cause feelings. The judgments go un-felt. If the neurons responsible for sifting through that stimulus have nothing to say, you can conclude the stimulus is something you've felt before. Some stimuli will never change, will never receive an unfavorable judgment, so to speak. Let me try it like this: neurons weigh inputs. Populations of neurons are set to catch certain kinds of fish in their nets. Catching something you didn't expect changes your judgment. Perhaps we feel this most when we have altered "states of consciousness".

    Admittedly, I'm struggling to understand how Dennett explains away feeling by saying we just judge stuff. It doesn't feel like something to judge; that's entirely objective. But then how does the feeling come in? What does confirming a judgment tell me about what some feeling feels like?

    ReplyDelete
    Replies
    1. Three umpires:
      "I call 'em as I see 'em."
      "I call 'em as they are."
      "They aint nuthin' till I call 'em."

      I guess Dennett's the last ump?

      Delete
    2. It feels like something to judge (believe, understand, mean, expect, interpret, etc.).

      And what calls the shots is your brain, not "your judgement." You feel (hence judge, believe, etc.) what your brain generates (but we don't know how or why it generates feeling).

      Delete
  14. “As I like to put it, we are robots made of robots–we’re each composed of some few trillion robotic cells, each one as mindless as the molecules they’re composed of, but working together in a gigantic team that creates all the action that occurs in a conscious agent.”

    This sentence brought to mind precisely the “system argument” against Searle’s Chinese Room Argument, which suggests that the parts of the system together are intelligent (and can understand Chinese). This reply was refuted because of the fact that if the non-Chinese speaker memorized the keys he would still not actually understand Chinese.
    After stewing on this for a while, it struck me that this is precisely what Turing proposes in his ‘version’ of the question: “How could we make a robot that had thoughts, that learned from “experience” (interacting with the world) and used what it learned the way we can do?” This rephrasing of the question essentially tackles the “system argument” against the Chinese Room Argument, because it bridges the gap between this idea of parts being combined into a collection of parts, and parts being assembled into some kind of whole (in the case of humans we are assembled into a whole, in the case of machines they are presumably assembled into a collection of parts).

    ReplyDelete
    Replies
    1. Julia, I'm not sure what you mean:

      Dan Dennett thinks minds are made of mindless (i.e., feelingless) parts. We all agree we're made of feelingless cells, and yet we feel. But it's not clear how Dan explains how and why we feel. (It's not clear he even agrees that we feel! He thinks we're as mindless as our parts -- we can just do a lot more.)

      So maybe this is a bit like the System Reply to Searle's Argument, which is that the Chinese Room understands Chinese even though Searle doesn't, because Searle is only part of the Chinese Room.

      But Searle quickly fixed that problem, by memorizing the T2-passing computer programme, thereby becoming the whole System -- and still not understanding Chinese. So he solved the System problem and refuted computationalism, rather than just showing that the mind is made up of mindless parts (which most people would agree was true).

      Besides, I think Dan Dennett might well have argued that the System Reply was true, even of Searle, once he was executing the memorized T2 programme: in other words, Dan would say that Chinese really was being understood by a "System," but that English-understanding Searle was still only part of that Chinese-understanding System. The reason is that, for Dan -- although he would not put it that way -- understanding is just something you do, not something you also feel. Indeed, according to Dan, what we call "feeling" is just a form of doing: "believing," and believing that you feel.

      Delete
  15. "When I put up Turing’s proposal just now, if you felt a little twinge, a little shock, a sense that your pocket had just been picked, you know the feeling too. I call it the Zombic Hunch (Dennett, forthcoming). I feel it, but I don’t credit it. I figure that Turing’s genius permitted him to see that we can leap over the Zombic Hunch. We can come to see it, in the end, as a misleader, a roadblock to understanding. We’ve learned to dismiss other such intuitions in the past–the obstacles that so long prevented us from seeing the Earth as revolving around the sun, or seeing that living things were composed of non-living matter."

    I disagree with Dennett's assertion that we may leap over the Zombic Hunch.

    Before I get into why, I think it's important to talk about degrees and types of evidence. I believe that before we can discuss many of the topics discussed in the paper, we must be rigorous in categorizing the different types of evidence discussed. Throughout the reading I encountered a variety of distinctions between differing kinds of evidence:

    Type 1:

    1st person access to feeling. As described by Descartes, we can be sure that it /feels like/ X /when/ we feel X. This seems to be a very strong form of evidence, in that we can be sure of it and we can investigate it (for example, through meditation or other forms of first-person observation). When discussing this 1st form of evidence, feelings are always true in that a person can be sure a feeling they feel exists momentarily.

    However, the scope of this evidence is limited as we often cannot reliably use this information to make inferences about other feelings or the world around us. Furthermore, it is impossible to prove to anyone except ourselves that this evidence exists so the scientific utility of this evidence may be limited. There is also no way to keep a record of feelings since they exist for only a moment in time and many feelings may occur at once such that any type of record would involve reflection.

    Dennett also discusses 'true' and 'false' feelings. This seems to be in reference to whether or not a feeling is congruent with internal or external truths (in fact, I would prefer to call these congruent or incongruent feelings). For example, if an agent remembers that they felt sad two moments ago (a 1st form of evidence), this feeling is true (i.e., congruent) if two moments ago they /actually/ felt sad. Likewise, a person may feel that there is an elf sitting on their desk in front of them, but this feeling will be false (i.e., incongruent) unless an elf is actually sitting on their desk in front of them.

    As far as I can tell, testing the congruence of a feeling is impossible. In order to test the congruence of a feeling we would have to transmit our feelings to other beings who could instantaneously verify whether or not these feelings are congruent with their own feelings. That is not possible, so we must rely on the 2nd type of evidence.

    Type 2:

    Behaviors that suggest feeling (but are not feeling). Since it is impossible to know whether behaviors (including spoken word) reflect the existence of and describe an actual feeling, the most a first or third person account can describe is what a participant seems to feel like based on the evidence available to the investigator. Behaviors may suggest the existence of particular feelings, though the resolution will necessarily be blurry and potentially incongruent with reality.

    It is possible to test the congruence of Type 2 evidence. For example, we can compare statements about feelings with our own feelings. We can also compare actions and statements with the actions and statements of other beings. If a set of Type 2 evidence, let's call this X, is frequently congruent with our Type 1 evidence and other Type 2 evidence sets, it is often safe to work with X as though it were true.

    ReplyDelete
    Replies
    1. Type 3:

      Evidence external to a human being. For example, the data used in many 'hard sciences' like physics or chemistry. Note that scientific accounts and interpretations of Type 3 evidence are in fact Type 2 evidence as they are assertions about what the author claims to feel after taking into account Type 3 data.

      ----------

      A robot that passes the T3 test displays Type 2 evidence in a manner indistinguishable from a human. Whether or not that robot also has access to Type 1 evidence is the other minds problem.

      Until this point in scientific history, we have employed Type 2 & 3 evidence rather than Type 1, and I would say it has served us well. Unless we develop a method for transmitting feelings between people, all attempts to deal with or consider the Type 1 evidence of other beings will be flawed and pseudoscientific at best. In light of this, we are left with a few options:

      1. Develop tools and techniques for investigating feeling in the first person
      2. When it comes to dealing with others, work only with Type 2 & 3 evidence

      Neither of these, of course, involves 'jumping over' the other minds problem, nor do they resort to behaviorism (since Type 2 evidence can include estimations and guesses about feeling and thinking). They instead work around the other minds problem, acknowledging that it cannot be solved (at least not in the foreseeable future).

      The 'zombic hunch' is not a roadblock or a misleading feeling, although it says nothing about the other minds problem (which I believe we should remain agnostic towards). Is the other minds problem, as we envision it now, an important problem of cognitive science or does it just feel like an important problem? What if it is impossible for a robot to pass T3 without feeling? If it is an important problem, can we solve or jump over it? We don't know, and we likely won't know for a very long time.

      Instead, we should consider the evidence in a different way. The 'zombic hunch' is a set of Type 2 evidence that we can discuss and compare to Type 2 & 3, and our own Type 1, evidence. The 'zombic hunch' is evidence regarding what potentially feeling beings might feel about feelings.

      I believe that if we are ever to get at solving the hard problem of consciousness or the other minds problem, cognitive scientists should work to improve their ability to investigate their own Type 1 evidence, while producing and considering Type 2 evidence as Type 2 evidence and nothing more. If one scientist puts forward some Type 2 evidence, and many scientists use well-established methods to test it against their own Type 1 evidence and various sets of Type 2 & 3 evidence, we can say it's more likely to be true than we previously thought. In the absence of a solution to the other minds problem or not, that's good enough for me.

      Delete
    2. Orthodoxology

      Ethan, I think you might be overcomplicating things with the 3 types of evidence (though there is a "3" involved here).

      For our model-feeling let’s pick “looks red.”

      It feels like something to view something that looks red.

      The “heterophenomenologist” can first check whether the candidate says “looks red” when viewing things that are red (as it is called by most viewers).

      Then the heterophenomenologist can check the brain activity (including internal computations, if need be) and identify what’s going on in the brain of the candidate when he says “looks red.”

      For Dan Dennett, that’s all there is: (I) the input patterns, (O) the output patterns (doing capacity, T2/T3), and (P) the internal processes in the brain (T4/T5). There’s nothing else. So there’s no “hard problem” of explaining how and why it “feels like something” to see red. And there’s not even an other-minds problem, because with its 3 kinds of data (IOP), heterophenomenology can do a pretty good job of mind-reading too (predicting and explaining what people will do and say, hence what they "see").

      What about the “zombie hunch”?

      What zombie hunch? There’s nothing other than I, O and P!

      So either everyone is a zombie or there are no zombies. (Talking about “zombies” is like talking about “layleks”: no negative instances.)

      So what do I mean when I say "It feels like something to see red"?

      I just mean that I'm in the P state that occurs when I view a red I. (It would be silly of me to deny it, when I'm in that state, but it does seem a bit long winded to say "it feels like something to see red" rather than just "red looks like this -- and green looks like that.")

      Ditto for "I feel tired." That's P, the state I'm in when I haven't slept in a long time. (Check it out on your heterophenomenological correlation database.) Ditto for "I feel a migraine" or "I feel I've understood the 'hard problem.'"

      Why do people make a big (behavioral (O)) fuss about some P-states (as in "I'm feeling an excruciating pain!")? Well, it means your body has been injured and you've evolved or learned that it has good effects to vocalize and agitate vigorously for help (except if you're a member of a small, vulnerable, non-social species, in which case it's better not to vocalize or agitate or show any signs of being injured).

      So, for Dan Dennett, talk about feelings is either shorthand for the above IOP conditions or it is about fiction.

      I would, though, like to recommend a slight change in heterophenomenologists' vocabulary. They should call it "heterodoxology," since they believe that phenomenology is not about feelings but about beliefs (and beliefs about feelings) -- and, needless to say, believing doesn't feel like something: believing is just something we do....

      Delete
  16. "People undoubtedly do believe that they have mental images, pains, perceptual experiences, and all the rest, and these facts–the facts about what people believe, and report when they express their beliefs–are phenomena any scientific theory of the mind must account for."

    Dennett describes heterophenomenology as the 3rd person, empirical test of consciousness. He attempts to argue against the view that people can have first-hand knowledge about their conscious experiences, and that those who don't believe empirical data is sufficient to understand consciousness are misguided. While he attempts to use neurological and psychological studies and methodologies to his advantage, I think the empirical data we are able to collect at this point is not sufficient to make such strong claims about consciousness. Dennett makes the analogy of the "hunch": in the past, we've empirically demonstrated that our hunches about given phenomena were wrong (such as the sun revolving around the earth, or that we have a perfect field of vision). To me it seems superfluous to bring in such examples, as they are more fixed, stable phenomena. The whole difficulty with consciousness is that it is highly variant and unpredictable across a population; many people characterize their sense of consciousness differently. I find so many people are inclined to be on the A or the B team instead of just seeing where further developments in cognitive science lead us. Personally I'm on the fence: I'm inclined to believe that consciousness is rooted in physical, empirical phenomena, but I'm not opposed to believing that there is an intangible quality to consciousness that can only be understood from the first-person perspective.

    ReplyDelete
  17. Dennett argues that heterophenomenology is the objective way to study subjective experience. With his method, every observable behavioural output of subjective experience is measured and catalogued.

    I think he's correct that recording everything is the closest you could ever come to getting a picture of subjective experience, but my zombic hunch still tells me something is being left out. Dennett would have me disregard my hunch about feelings, they either can be observed, or don't exist. There is also the possibility however, that science is incomplete, and cannot measure things that certainly exist.

    These two models correspond to the formal conception of Completeness and Consistency. Completeness says: everything true within a system can be deduced in the system. Consistency says: everything that can be deduced within a system is true. With his incompleteness proof, Gödel showed that any formal system that includes arithmetic cannot be both consistent and complete. Despite its name, the proof does not say "Math IS incomplete"; it says "Math is either incomplete or inconsistent", and it happens that an inconsistent math is not at all useful as a model for studying the world, so we take incompleteness as the more useful explanation.
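    In standard logical notation (where ⊢φ means "φ is deducible in the system" and ⊨φ means "φ is true"), the two properties contrasted above can be written as follows. (Strictly speaking, logicians call the second direction "soundness" rather than "consistency", but I keep the label used above.)

```latex
% Completeness: every truth of the system is deducible in it.
\text{Completeness:}\qquad \models \varphi \;\Longrightarrow\; \vdash \varphi

% Consistency (soundness, strictly speaking): every deduction is true.
\text{Consistency:}\qquad \vdash \varphi \;\Longrightarrow\; \models \varphi
```

    Gödel's result is then that a formal system containing arithmetic cannot have both properties at once: there will be some φ for which one of the two implications fails.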

    In the Dennett-Chalmers debate, these two mutually exclusive propositions come up in a similar way. For "the B team", 3rd person science is incomplete: we feel, but we can never show we feel, so this true thing is undeducible. For "the A team", 3rd person science is inconsistent, because our feeling that we feel is in fact false, although it is arrived at by observation (albeit introspection) like the rest of 3rd person science.

    With this conception, it's clear to me that the B team model is the more useful of the two. If "feelings" are a false belief, then Cogito is a mess and nothing makes any sense to me. So I'm on the B team. While I do feel that heterophenomenology is maximally inclusive, I don't think it'll ever tell us anything about how/why we feel. If causal mechanisms of feeling were observable, we wouldn't be stuck in this quandary in the first place.

    Applying the logic of Gödel's formal proofs to a less formal argument worries me; there must be pitfalls I haven't considered. But I'm submitting this in the hope that it's thought-provoking, even if it's not logically compelling.

    ReplyDelete
  18. This week’s article made my head spin in circles a little bit, as I tried to decide what team I was on. Without a doubt, heterophenomenology doesn’t bring us any closer to solving consciousness, or solving the Hard Problem. It reminds me of the week when we talked about Fodor and how brain imaging and looking at structures of the brain that are more or less active during certain behaviours doesn’t bring us any closer to solving how and why we cognize. Dennett’s description of heterophenomenology is just that if a robot can do everything we can do, it is conscious! But that certainly isn’t the case. Even if a T3 robot were to pass the Turing Test, we could never say with 100% certainty that it was thinking. So obviously, I’m a Team B-er when it comes to the fact that heterophenomenology is leaving out feeling. However, I would have to say that if an explainable mechanism existed for consciousness, then it would have to be proven by heterophenomenology. Although it seems unlikely that we would be able to figure out how and why we feel, I’m hesitant to say that it is impossible (however, without much reason, other than that many things that were thought to be impossible in the past became possible)…so I’m still holding out a little bit for heterophenomenology.

    ReplyDelete
  19. ''when we assess the attributions of belief relied upon by experimenters (in preparing and debriefing subjects, for instance) we use precisely the principles of the intentional stance to settle what it is reasonable to postulate regarding the subjects’ beliefs and desires. Now Chalmers has objected (in the debate) that this “behavioristic” treatment of belief is itself question-begging against an alternative vision of belief in which, for instance, “having a phenomenological belief doesn’t involve just a pattern of responses, but often requires having certain experiences.” (personal correspondence, 2/19/01). ''

    If I correctly understood, Dennett’s Intentional Stance theory states that 1) the subject is rational, 2) it has some beliefs caused by its environment, 3) it has some desires caused by its environment, 4) its beliefs will determine how it will fulfill its desires. On this account, in an experimental setting where we know the desire of the subject, the examination of his actions will inform us on what his beliefs are. Chalmers’s criticism of this is that it is question-begging, in the sense that it says the individual had a certain belief because it acted the corresponding way, and acted the corresponding way because it had that belief. I agree with him that this does not solve the question, and although it may inform us on what the belief is, it cannot inform us on why the individual has it. The latter must most likely be explained by the previous experiences of that person. It seems so obvious to me that I find it kind of useless saying that people’s beliefs (or impressions/conscious perceptions) are based not only on the present situation they are exposed to but can also be shaped by previous experience (cognitive penetrability?). But this is not understood by Dennett, as he says that it is an ‘imponderable issue’ on which we had better be neutral, where he erroneously makes unconscious beliefs and experience-altered beliefs mutually exclusive.

    ReplyDelete
  20. I kept thinking about sleepwalking while pondering the hypothetical Zombie Twins. Sleepwalkers are molecularly identical to real human beings, and they are fully capable of performing functionally indistinguishable actions from their awake selves, although they are limited to a handful of actions (e.g. walking, sitting up, grabbing and hitting things). I'd go out on a limb and say that the neurological and neurochemical processes underlying the sleepwalkers' actions are very similar to when they are performing such actions awake. Although this certainly does not qualify them as being functionally identical to any awake person, I still find it interesting to think about how sleepwalkers are able to carry out a number of simple motions under a low level of consciousness. I wouldn't argue that sleepwalkers are completely unconscious during their nocturnal activities, but it seems to be a much lower level of consciousness compared to when they are wide awake. How does this lower level of consciousness (or, feeling what you are doing) affect the actions carried out by sleepwalkers? Maybe I am too optimistic to say that studies like this could help us get some sense of the why aspect of the hard problem. Why do we feel when we do something? Of course it is unrealistic to observe and examine the states of feeling when someone is sleepwalking -- it's hard enough, if not entirely impossible, to do that with fully conscious subjects -- so all of this is just hypothetical.

    ReplyDelete
    Replies
    1. Regarding the article on the LSE blog:
      In this day and age, we have moved well past the argument that "animals don't have feelings". Numerous studies have proven and demonstrated that humans are not the only sentient beings in this world. People are aware of this, and yet they still treat animals like inanimate objects with no feeling or pain. Why?
      Harnad perfectly sums up the problems about animal cruelty we face today in his discussion of the movie AI: Spielberg's AI: Another Cuddly No-Brainer (http://cogprints.org/5331/1/ai.html).
      "Racism (and, for that matter, speciesism, and terrestrialism) is simply our readiness to hurt or ignore the feelings of feeling creatures because we think that, owing to some difference between them and us, their feelings do not matter."
      In a way, this is similar to the other minds problem. We give other humans the benefit of the doubt and think of them as having the same feelings that we do (so that we consider other humans as equal beings to ourselves), even though we don't exactly know that other humans have the same feelings, whereas we are not willing to give animals the same benefit of the doubt and consider them equal. Just because they look and behave differently, we are content with the false belief that hurting them is okay. Speciesism is the belief that even if animals do feel, their pain and suffering does not matter because they are inferior (or just different) beings. I feel that the advances made in cognitive science, especially those regarding the hard problem, might be of tremendous help in demonstrating -- empirically, not just by inference from physiological data -- that animals are sentient beings exactly the same as we are.

      Delete
  21. I was satisfied by Harnad's "Animal pain and human pleasure: ethical dilemmas outside the classroom" paper. I've been a vegetarian for twelve years, and this paper is pretty much a logical, well-organized representation of what I've been trying to explain to people for years. Assuming that beings other than ourselves can feel, we have no reason to believe that animals feel any less than other humans do. As explained in the text, we all have nervous systems (brains that react in similar ways to the same stimuli) and similar observable behaviour (which fits into the category of what could reasonably be understood as purposeful attempts to survive). Harnad mentions that if there were no such thing as feeling then there would be no such thing as morality; explained another way, what is considered the ethical treatment of something is based on whether or not that being can feel. To protect one kind of feeling being with rights and condemn the other to be used in any way that may benefit humans does not seem ethically or logically sound.

    ReplyDelete
  22. I found the paper The Fantasy of First-Person Science by Dennett really hard to understand, and although I did not think I understood what he was trying to say, I still could not agree with his point. Dennett seems to say that there is no difference between first person and third person in terms of thoughts and experience, because as long as we build a robot that can do what we can do, questions regarding cognition will be completely solved, even the problem of feeling.

    I could not accept this, especially the feeling part. When discussing feeling, Dennett uses the notion of the Zombic Hunch. What he is suggesting is that we have feelings and we think we have feelings, while the zombie has no consciousness yet thinks it has feelings too. It seems that feelings, in Dennett's sense - although he does not give an explicit definition - are similar to some kind of beliefs that people hold, not a real "feeling". Well, I do not think feelings are beliefs, and I do not really think the zombie thinks it has feelings - it might have beliefs, but not feelings, because it is not conscious. For example, I understand the word "apple", or the French word for apple, which is "pomme", and I feel something when I know that pomme is apple, and I do not get the same feeling when I do not understand the word "pomme". "Pomme" will be just nonsense to me; it will be just a string of symbols and will not mean anything, but when I know that pomme means apple, it carries meaning and I feel something different when I understand "pomme", and I do not think it is some kind of belief I hold - the feeling is right there.

    At the same time, I do not think building a robot that can do what we do simply solves all the problems. I think it cannot solve the problem of feeling. Feeling is special because it does not require words or even movements to exist. Although we can express feelings with words, feelings exist before we actually describe them. Then, where does our feeling come from? I think it is a similar question to the symbol grounding problem. We use words that we already know to make definitions of other words, but where do these words come from? I think it is essential to think about why feeling exists and where does it come from.

    ReplyDelete
    Replies
    1. Jie, T3 grounding solves the symbol grounding problem, but it neither guarantees nor explains feeling. And it feels like something to believe (or mean or understand) something.

      Delete
  23. According to Dennett, heterophenomenology 'investigates to see what could explain the existence of those beliefs.'

    Dennett affirms that there are no such things as feelings, but only beliefs. I understood that heterophenomenology predicts subjects' verbal reports, but does it also try to confirm or reject subjects' beliefs by comparing their verbal reports with their reports' neural and behavioral correlates? If so, I have a hard time understanding why Dennett would still claim there is no feeling if he sees that a subject's belief is actually supported by neural data. I could admit that a feeling (even though I am not convinced about that) that does not have any neural correlate is a belief (a false positive); however, it seems harder to deny someone feels when neural activation constitutes a physical sign of feeling. But I think I am still confused about the definition of feelings.

    ReplyDelete
  24. A lot of smart things have been said already and on the one hand I wish I had commented earlier but on the other I wish I had more time to digest and go through both the article and the comments a couple more times before class tomorrow, but I feel like I need to leave something here at least for now:
    First of all, we will definitely need to clarify exactly what heterophenomenology is in class tomorrow. In an earlier comment, Professor Harnad wrote that "heterophenomenology is just predicting verbalizations from their neural and behavioral correlates" but elsewhere defined it as "introspect + T2 + T3 + T4" - I'm a little confused because these seem like semi-conflicting definitions. Introspection is something I and only I am capable of for myself, whereas with predictions anyone who has the proper measurement tools could do it; I'm not the only one with access to my neural activity (assuming you have an EEG/MRI machine) and behavior. So that's something I need answers to.
    Second, the term 'heterophenomenology' itself is very confusing for me, especially as someone taking a "regular" phenomenology class this semester. So for anyone in that class in particular, I was wondering what you think Heidegger would have to say about this new type of phenomenology - I know he puts Dasein on a pedestal, and talks a lot about experience and letting phenomena 'approach' you, so I *think* he would be on the B team, but I am not sure. It makes sense though: if Dennett presents heterophenomenology as an alternative to regular phenomenology, of course Heidegger would not be a fan.
    Anyways, the last thing I wanted to bring up was that like many of us I tend to learn better when I have an example to use, so almost as soon as I started getting to the part of the paper about qualia, the first thought that came to mind was the recent debate (the one that almost broke the internet) about the famous dress - is it black and blue or white and gold? The world was basically split down the middle on this issue; with each side of course feeling very strongly that they were right. This might be a useful example to use in class when we get into debate on the issue of 'first-person science.' On a more tangential thought-experimental note, what would happen if there were two robots that could pass T-2 through T-5, and one claimed it saw white and gold and the other claimed it saw black and blue? Would the robots get as upset as humans did, in seeing their claim so adamantly disputed by the other? I think this may not be the worst 'test' for feelings - a robot may perceive a color, but if it doesn't *feel* that it is perceiving that color, then it has no reason to care if another robot disputes it; but if it really does feel and believe it sees that color, wouldn't it naturally get upset if it was told that it was wrong?

    ReplyDelete
    Replies
    1. "In an earlier comment, Professor Harnad wrote that "heterophenomenology is just predicting verbalizations from their neural and behavioral correlates" but elsewhere defined it as "introspect + T2 + T3 + T4" - I'm a little confused because these seem like semi-conflicting definitions. Introspection is something I and only I am capable of for myself, whereas with predictions anyone who has the proper measurement tools could do it; I'm not the only one with access to my neural activity (assuming you have an EEG/MRI machine) and behavior. So that's something I need answers to. "

      Maybe this was already cleared up in class, but I don't see how these definitions are conflicting. Maybe introspection needs to be considered a part of behaviour, since heterophenomenological experiments look at self-reported feelings/thoughts/processes alongside their neural and physiological correlates. Also, T3 and T4 would correspond to behavioural/neural processes as well!

      "On a more tangential thought-experimental note, what would happen if there were two robots that could pass T-2 through T-5, and one claimed it saw white and gold and the other claimed it saw black and blue? Would the robots get as upset as humans did, in seeing their claim so adamantly disputed by the other? I think this may not be the worst 'test' for feelings - a robot may perceive a color, but if it doesn't *feel* that it is perceiving that color, then it has no reason to care if another robot disputes it; but if it really does feel and believe it sees that color, wouldn't it naturally get upset if it was told that it was wrong?"

      I reckon you could program robots to get upset, in the same way that we get defensive when our beliefs are challenged. I have a question for you, Esther: why would a robot care about anything (why would it feel in the first place, let alone feel enough to care about something)?

  25. Ever since the idea was introduced early on in the course, I have not been able to shake it from my head: "Why feel at all?". Once feeling is accepted as existent, this question becomes the interesting part of the "hard problem". From an evolutionary perspective, Dennett's heterophenomenology (a representation of our perceived "world" based on our beliefs as well as our physical state) is all that would be required to propagate genes forward - these entirely define our actions, and so only these qualities are selected for. This, combined with the fact that feeling seems to be mixed into everything that we do (it even feels like something to not feel something), without a specific locus in the brain, suggests to me another "System Reply", where feeling is no more than a spontaneous ("epi")phenomenon which occurs when specific kinds of "subsystems" interact in specific ways. The question then becomes: "T3 or T4?" Do we need functional equivalence or full systemic equivalence? I would suggest that because evolution selects only for function, function is all that is required, but perhaps there is something "magic" about our specific design. Either way, the path forward is clear: T3 or T4 can be developed from a heterophenomenological standpoint, since whatever feeling is, either it has a functional basis, and will then require integration into the models, or it does not, in which case the "Hard problem" becomes the "easiest problem", and feeling is inherent in some level of structured functionality.

  26. Suspending the fact that I don't believe the so-called Zombic Hunch is something to be leapt over, I am unconvinced that Dennett's heterophenomenology is an appealing way by which to do so, particularly given his explanation as follows: "Now faced with these failures of overlap–people who believe they are conscious of more than is in fact going on in them, and people who do not believe they are conscious of things that are in fact going on in them–heterophenomenology maintains a nice neutrality". Besides the unintentional insinuation that there are two types of people with regard to explaining an individual's experience, I fail to see the purported lack of overlap between the two—much less that there are two such things, and the neutrality heterophenomenology claims to exercise between them. To divide the former along the lines he has drawn: people believe some things and do not believe others, and people have things going on in them that they are or are not conscious of. Assuming no one disputes the fact that people have things going on in them, the latter part of that second condition seems to be the exact same condition as the first (if "belief" is taken to be the conscious aspect of experience—or what it feels like to feel). Basically: some things are felt and not others. However, if one insists on dividing those psychological things into what apparently really happens or not, I fail to see both how this pertains to or allows a leap over the Zombic Hunch, and how, having arbitrarily earmarked feeling by psychology or biology, this is a neutral judgment.

    He continues: "it characterizes their beliefs, their heterophenomenological world, without passing judgment, and then investigates to see what could explain the existence of those beliefs." To reiterate: beliefs, or feelings, are being classified—judged, really—by some corresponding psychological mechanism invoked by the heterophenomenologist to explain their origin—and thereby tied to an existing psychology, which is by no means finished fact—when by their own standards, beliefs can really only be separated from things we don't believe, or feelings from what we don't feel. How is examining what we don't feel at all indicative of why the other things are felt? "Often, indeed typically or normally, the existence of a belief is explained by confirming that it is a true belief provoked by the normal operation of the relevant sensory, perceptual, or introspective systems. Less often, beliefs can be seen to be true only under some arguable metaphorical interpretation–the subject claims to have manipulated a mental image, and we’ve found a quasi-­imagistic process in his brain that can support that claim, if it is interpreted metaphorically. Less often still, the existence of beliefs is explainable by showing how they are illusory byproducts of the brain’s activities: it only seems to subjects that they are reliving an experience they’ve experienced before (déjà vu)." Again, the truth or falsity of these beliefs is assessed on a scale to which it is not provably causally related (co-occurrence, or whatever human mental biases we may have, is not grounds for doing so). The explanation for the existence of those beliefs—which are all true with respect to feeling; inquiry into what we don't believe would be a markedly separate endeavour—remains profoundly unsatisfied by Dennett's heterophenomenology.
