Saturday 11 January 2014

10b. Harnad, S. (unpublished) On Dennett on Consciousness: The Mind/Body Problem is the Feeling/Function Problem

The mind/body problem is the feeling/function problem (Harnad 2001). The only way to "solve" it is to provide a causal/functional explanation of how and why we feel...

52 comments:

  1. This comment has been removed by the author.

    1. This comment has been removed by the author.

    2. This comment has been removed by the author.

    3. Beyond Beliefs

      "Belief" is another weasel word.

      Beliefs are only beliefs if and when they are being believed ("online"). And then it feels like something to believe them. Otherwise they are just ("offline") functional states, stored data, potentials, not beliefs.

      So it does not help to say that "feeling is just a belief."

      A belief is just a belief while it's being felt. Otherwise it is just an unfelt functional state, stored data, a potential, not a belief. And a non-feeling system -- like a teacup, toaster, computer, or a feelingless robot that only has functional states, stored data, and potentials -- has no "beliefs."

      Put otherwise, the (Cartesian) belief that I am feeling is a feeling.

      (Reflect on it, Marc; you are obviously in the thrall of the "Team A" view, but you need to defend it, not just repeat it... Saying everything is just beliefs neither solves nor eliminates the hard problem.)

      Free will: It feels like I do things because I feel like it. It feels like my feeling is causal. Maybe it's not. That's (a big) part of the hard problem (of explaining causally how and why I feel). But it's certainly not a solution to it.

      The hard problem is about feeling, not about aboutness. (No contradiction.) Or, if you like, "aboutness" has two components, grounding and feeling. Grounding neither guarantees nor explains feeling.

      Zombies are impossible, like a square circle? Would you mind running that proof by me again? I got it for the square circle, but not for the impossibility of zombies. (If your proof works, it will also be a solution to the hard problem of how and why organisms feel. But I'm afraid it will instead just be a ritual reiteration of the belief that "T3 capacity is (by definition) feeling" and "feeling is just belief"...)

      The shorter your reply, the better its chance of being right. I'll settle for the proof that there can't be zombies...

    4. Marc, what happened? You removed your comment!

    5. You only feel like I commented! ;)

      I commented last night. Woke up this morning and started reading Doing/Feeling, and realized I should give you the benefit of the doubt that you'd make a case for there being an inherent difference. I figured my comment might be missing the gist of your argument, so I removed it and told myself I'd re-think it once I'd read that paper.

      If you want, I can simply repost my comment for the sake of clarity and exchange. I have it saved.

    6. Only re-post it if you still mean it! Otherwise post what you think now. (But not long!)

    7. Okay, well I finished reading all the papers, and to be quite frank, my position hasn’t changed (I can repost my comment, but it's long-winded, so I’ll continue from your replies unless you insist). I’ll elucidate what I feel differs in our understandings of the issues. I’ll try to keep it brief and to the point.

      “Reflect on it, Marc; you are obviously in the thrall of the "Team A" view, but you need to defend it, not just repeat it... Saying everything is just beliefs neither solves nor eliminates the hard problem.”

      I’m not claiming (and I think Team A feels the same, from what I’ve read) that the hard problem is solved or eliminated. What I’m claiming is that the Hard Problem is really encompassed by all the Easy Problems; they aren’t distinct. I also assume the number of so-called Easy Problems is practically infinite, depending on what level of explanation and predictability one feels adequate for applying the word “solved”. What I would definitely argue is that some of the Easy Problems which have partial solutions have made inroads on the Hard Problem.

      If Consciousness is the Hard Problem, a more appropriate title for Dennett’s “Consciousness Explained” would be something like “Consciousness Distilled” (showing why the Easy Problems are all that’s left). Take the visual system and cell assemblies. The structure of higher-order computations travelling down the ventral “what” pathway helps us understand how we see, and that helps us understand why we experience what we experience. Or take the distribution of cones and rods in the retina and the distinct ways in which we see in the dark vs. daylight. These are functional explanations that impinge on what type of experience is to be had.

      I understand you’re leaning on the bigger question, which asks “Why is there anyone there to experience it? Why isn’t it simply being done, rather than done and felt?”. I’ll get to that.

      “Zombies are impossible, like a square circle? Would you mind running that proof by me again? I got it for the square circle, but not for the impossibility of zombies.”

      Zombies are indistinguishable from us in every regard
      Humans feel
      Zombies don’t feel
      Thus Zombies are distinguishable from us in some regard
      → CONTRADICTION

      When thinking of a hypothetical zombie, I have a feeling Team B (and Team C) have a tendency to be thinking about separate zombies while believing they’re thinking about one and the same zombie. How can feeling have no bearing on the causal process? Some people have made it their life’s purpose to uncover why and how we feel. I feel like when people posit zombies that are doing what we are presently doing, they zoom in to the lower levels, looking at the cold non-thinking/non-conscious machinery, ignoring the higher level of representation where thinking and feeling occurs. People used to and still do this with animals, seeing them as automata.

      Any emergent property/phenomenon can be mistreated this way. Is that dog playing fetch? I can zoom in to every cell and all I see are electrochemical reactions. I can zoom in further and further, and eventually we’re in the quantum foam with no semblance of dogs or fetching in sight, or even electrochemical reactions. It’s all a matter of whether we’re looking at the right level, and I believe that the level at which we categorize the world, the one whose categories are realest to us, includes [people], [emotions], [chairs], [freewill], [feelings], [predators], [I], [intentions], etc…

      In the process of refining our categories, we abstract potential members and they become real to us. Why wouldn’t we abstract ourselves and what we feel?

    8. FEEL FREE TO SKIP THIS ELABORATION

      I think getting back to zombies would be useful here. If aliens were to come down and visit a world inhabited by these hypothetical human-like zombies, what would the aliens tell each other about what the zombies are doing (the aliens could be zombies too; it makes no difference, and that’s my point)? They would probably start off as behaviorists, making sense of basic inputs and outputs. In their attempt to reverse-engineer, wouldn’t they notice that a multitude of different inputs seem to be treated similarly, as if the zombies are putting things into categories? It seems obvious to me that the levels of description have different explanatory purposes, and a zombie’s behavior would best be explained at multiple levels. They might, in principle, be able to explain everything from the quantum level alone with no gaps needing to be filled, but I’d assume evolved organisms would most likely use simpler heuristics that let them act and react quickly (leaving room for error).

      Wouldn’t it be simpler for the aliens to ascribe intentional states to make sense of these zombies (at least at first, in the process of reverse-engineering)? Are we not in the exact same situation amongst ourselves? And you may ask, why bother knowing about that level of description? And I’d say that’s the level our evolution allowed for in terms of communication through language. (Can’t we see beyond that level now? Well, we know about our blind spot, but our brain still fills it in for us. We can’t help it. We are stuck at our brain’s storytelling level because we are part of the story!) Do we need to know about molecules and their non-spiraling property to make sense of spiraling hurricanes? If we want a full causal explanation with no gaps, sure. But is that an affordable strategy? Chairs and hurricanes don’t exist more than feelings. It’s just that we have a better understanding of the relation between chairs/hurricanes, our cognition, and what chairs/hurricanes are made from. Feelings are confusing, but I see them as just another category. I say we try understanding how we do what we do (including believing we feel) before assuming feeling and doing are intrinsically distinct phenomena.

      At the end of it, it seems like you’re asking “Why is anyone home, rather than it only seeming like there is someone home?”, and my reply would be: “Yes, it seems like there’s someone home, and that’s good enough for someone to be home, if we’re the ones to whom it seems that way.”

      AAAI PAPER

      “The research of Libet (1985) and others on the "readiness potential," a brain process that precedes voluntary movement, suggests that that process begins before the subject feels the intention to move.”

      This makes sense if the purpose of language is to communicate between each other. We’re storytellers, or rather humans tell each other stories in which they are the central characters. Intention is granted retrospectively.

      “And mentalistic interpretation will merely cover up the impoverished level of today's performance-capacity modeling.”

      I find this passage revealing because I agree with you that present computers and robots that try to emulate cognition are mostly a smoke-and-mirrors show. This only tells me that the type of computation and sensorimotor grounding necessary to create (not just emulate) human-like performance capacity must be different. For one, I think the serial nature of our computers can’t do justice to the parallel computations occurring within our heads that give rise to a seemingly (another ad hoc interpretation, I believe) serial channel of thought. The huge gap shouldn’t be reason for pessimism, though. It should be reason to believe different approaches are worth considering. Human cognition isn’t impressive in its raw computational power, but rather in its strangely flexible nature. I think solving the SGP, and how semantics can be approximated syntactically, will put another chink in the armor of those who believe there is a Hard Problem.

    9. Marc, T3 and T4 (which is all there is to "heterophenomenology") can predict and explain -- given that you feel -- why you feel this rather than that (mental weather-casting and "mind-reading"), but it cannot explain how and why you feel anything at all. (And increasing the quantity and the fine tuning still does not get around the problem that the underlying question is being begged.)

      You got the zombie definition wrong. They are T5-indistinguishable from us but they do not feel. (That's just an assumption, like the assumption that zombies exist. They're defined that way. And there is no way you could tell whether they did or didn't feel [if they existed], because of the other-minds problem. Hence no contradiction.) (My advice is still the same, Marc: think longer, write shorter!)

      So, no, zombies are not self-contradictory. But that doesn't mean they are possible. The hard problem, however, depends in no way on whether zombies are possible or impossible. So forget about zombies.

      "Levels" do not explain how or why we feel, and "emergence" merely re-states the problem, rather than solving it.

    10. "Marc, T3 and T4 (which is all there is to "heterophenomenology") can predict and explain -- given that you feel -- why you feel this rather than that (mental weather-casting and "mind-reading"), but it cannot explain how and why you feel anything at all."

      I'm claiming that every solution we find to an Easy Problem changes the Hard Problem, because we reinterpret what we are as we learn about the stuff that makes us do what we do. I don't think anyone is claiming the Hard Problem is solved (am I mistaken?).

      "They are T5-indistinguishable from us but they do not feel."

      I may not know how "feelings" arise, but my underlying assumption is that they're a product of our biology, so "T5-indistinguishable with no feeling" seems a silly concept in itself. This doesn't explain in any way how "feelings" come to be, but I don't need to know how lift works to know that two identical planes will either both fly or both not fly. I guess what I'm asserting is that I doubt my intuitions more than the tenets of physicalism.

      '"Levels" do not explain how or why we feel, and "emergence" merely re-states the problem, rather than solving it.'

      Levels help us understand how or why we feel by allowing us to avoid category mistakes (so yes, it re-states the problem, which to me is making inroads). Emergence doesn't solve it, but it allows one to move the question to the appropriate context, namely the world of linguistic categories. Again... not claiming it solves it. If we take the other-minds problem as an absolute explanatory barrier, sure, it's insoluble, but I'd rather ignore it and see what the inductive process says. Before evolution or machines came to the table, the animate problem seemed insoluble too. We were stuck describing behaviors through agency. I don't see why we should remove ourselves and our feelings from the physicalist framework just because it feels like we are outside it. I've felt/believed many things that I turned out to be wrong about (like this, maybe!).

    11. Marc, the life/mind analogy fails because there's nothing left of life to explain once you've explained all its doings (no "vital force" or "élan vital", and no need for one), but there is something left of mind once you've explained all its doings: feeling.

    12. I'm more than willing to give in to that. I just hold out hope that some counter-intuitive mechanism could make us understand why feelings can be part of the causal framework (or an illusion of sorts, just as élan vital was an illusion). If we had a concrete understanding of how meaningful symbols dance in our heads and how they became meaningful in the process of development (and/or evolution), then I think our conception of ourselves and everything we experience would change, along with the Hard Problem. The Hard Problem would inevitably remain, but so does wonder about the products of evolution, long after one understands how natural selection is a mechanism for complex "design".

      One thing that confuses me about our exchange on this topic (and in your reply to Dennett) is whether or not you're assuming Team A claims the Hard Problem is solved. My interpretation is that they don't see the Hard Problem as separate from the Easy Problems... so they claim it doesn't actually exist in the form it's generally presented (whether your take or Chalmers'). Is there a reason I should doubt that?

    13. As I understand it, the "A Team" is supposed to believe that (1) feelings are just beliefs, (2) the "zombie hunch" is wrong because there are no feelings for zombies to not have (just beliefs), (3) there is no "hard problem" because there are no feelings to explain (just beliefs), and (4) beliefs will be fully explained by "heterophenomenology" (input patterns, output patterns, internal processing mechanisms/correlates).

      According to the "B Team," (a) organisms feel, (b) they don't just falsely believe they feel, (c) "heterophenomenology" cannot explain how and why organisms feel rather than just do, and (d) it feels like something to believe something.

    14. "That's just an assumption, like the assumption that zombies exist. They're defined that way. And there is no way you could tell whether they did or didn't feel"

      I think what everybody agrees on is that if you have a brain, you must have the CAPABILITY to feel. And if feeling is just the result of some neurons in the brain firing, we should be able to tell whether zombies feel by disabling the feeling neurons. To make a zombie, you either disable the mechanism that invokes feeling, or you disable feeling altogether. In the first case, we can be sure that zombies can still feel if you directly poke their brain; they are just not feeling in the ordinary way, from the input of the environment (such as seeing), like we do. On the other hand, to completely disable a zombie's ability to feel, you have to disable the part of the brain that produces feeling. Then we know for sure that it doesn't feel.

      The only case in which we can't tell whether it feels is if we do not know where feelings are located in the brain. But if that's the case, we cannot make zombies.

  2. Harnad makes Dennett’s “heterophenomenology” project clear in this paper. Basically, what Dennett sets out to do is correlate feelings (e.g., feeling hot, sad, like I saw a red square) with neurological/physical/behavioural phenomena that a scientist can measure and observe (e.g., when I feel like I am seeing the colour red, this bit of my brain lights up in an MRI machine). As Harnad explains, the best that heterophenomenology can ever do is get really good at mind-reading human beings. Maybe one day it will be so good that I will be able to tell, just by looking at your brain scan, whether you are feeling tired.

    However, Harnad also points out the limits of heterophenomenology: it will never provide a causal explanation of how and why organisms feel. Correlation, yes! Causation, no. Knowing that the left side of my brain lights up when I feel like I am seeing the colour red doesn’t tell me how or why it feels like something to see the colour red. In short, Dennett’s project won’t be able to answer what is known as the “Hard Problem.” It won’t be able to explain why I am not a zombie.

    Moreover, Harnad doesn’t think we will ever be able to answer the Hard Problem. It is, in his words, “insoluble.” In fact, this problem demonstrates that we can’t provide a causal explanation for everything.

    I am sympathetic to Harnad’s argument that the Hard Problem is insoluble—as a lover of phenomenology (not the really weird Dennett version but the Husserl/Heidegger/Gadamer stuff), I think science should be modest about what it can accomplish. But it isn’t clear to me reading this paper why Harnad thinks that why and how organisms feel is a question that science can’t answer.

    1. There is a potential explanation for the causal role of feeling: Psychokinesis (mind over matter, feeling as an independent 5th force in the universe). Trouble is that all evidence suggests that this is false. Hence there are not enough causal degrees of freedom left, once T3 explains all doings, to explain feeling.

      But anyone who thinks they have a causal, functional explanation for how and why organisms feel, rather than just do, is encouraged to post it!

    2. "hence there are not enough causal degrees of freedom left": what does this mean?

    3. Causal Degrees of Freedom

      "Not enough causal degrees of freedom left" means: once you've reverse-engineered and explained everything that a system can do -- and by "do" I mean every observable thing it, or any part of it "does," every observable structural and functional state, right up to T4 synthetic neural function, or, if you insist, T5 molecular function -- then you've explained everything you can explain (since feeling is not doing, and hence not observable, because of the other-minds problem).

      But it's even worse than that: As I mentioned in another commentary, even if a god came and told you that among your T3s, T4s, and T5s there were some zombies, and the god told you which were the zombies, and even told you which was the T3 (or T4 or T5) widget that turned the feeling on or off (but otherwise left performance capacity intact and Turing-indistinguishable), you still would not be able to explain how -- and especially why -- the widget caused feeling. It would still be a mysterious component that, when present, resulted in feeling, but otherwise left capacity unaltered. That means that both the feeling T3 (or T4 or T5) and the zombie T3 (or T4 or T5) would say "I feel tired," but only the feeling one would actually feel tired (or feel anything). (So only a god could tell them apart, not a cognitive scientist.)

      That's why the "why" question is the hardest one: Because, causally speaking, it's clear that for evolution as well as for function it's organisms' performance capacity that matters (for their success, survival, reproduction). And even with the god-given other-minds periscope, there would be no difference between the feeling TX and the zombie TX other than the presence or absence of feeling. You still wouldn't have a clue of a clue of what the feeling is for, what causal function it's performing, other than just being there!

      If, instead, the feeling widget was not an on/off zombie widget, independent of performance capacity, but it was somehow inextricable from some specific capacities -- i.e., if unless the feeling switch is on, those capacities don't work -- then the hard job would be explaining why those capacities cannot be generated without feeling.

      I don't have a proof that some sort of plausible causal explanation could not turn out to be possible, but I do know that all the explanations of the actual or potential functional role of feeling so far can easily be shown to fail: just-so stories in which feeling is still completely superfluous and is being injected by fiat, begging the question.

      That's why I said if anyone thinks they have a causal, functional explanation for how and why organisms feel, rather than just do, they are encouraged to post it!

      Then I will try to show how the feeling always turns out to be superfluous to the function, except if we stamp our feet and insist that the function in question is precisely what feeling is: detected tissue damage is what pain is; attended optical input is what seeing is; and so on. Existing robots already show this can't be true. At T3 scale, there's no reason to expect that it will be any different.

  3. This is more of a question than a comment (because I feel like the entirety of Prof's commentary can be summarized as: "you aren't addressing the hard problem of how/why we have felt states" -- which echoes what we saw in previous lectures about functional imaging, mirror neurons, and evolutionary psychology):

    I am not perfectly understanding the "zombie hunch". From what I understand, it's that you could have two perfectly identical systems, with only one being capable of feeling. And then we have to explain how/why one would have felt states? I am not sure why someone would argue in the first place that two identical systems could differ as to their "felt experiences"...

    1. The zombie hunch is that something could pass T3 without feeling. The anti-zombie hunch is that something could not pass T3 without feeling. The trouble with both hunches is that they don't explain how or why...

  4. In sum, Harnad states that Dennett has simply focused on the wrong aspect of the question, or the wrong question altogether in The Fantasy of First-Person Science. I agree with Harnad's evaluation, and I don't have too much more to add to it.

    A few of my observations, which occurred while I was reading it:
    Dennett and Harnad respectively state:
    "How does it work? We start with recorded raw data. Among these are the vocal sounds people make (what they say, in other words), but to these verbal reports must be added all the other manifestations of belief, conviction, expectation, fear, loathing, disgust, etc., including any and all internal conditions (e.g. brain activities, hormonal diffusion, heart rate changes, etc.) detectable by objective means.
    Sounds like the familiar disjoint -- yet (inexplicably) correlated -- function/feeling family: Behavior, brain function, and all manner of structural and functional substrate on one side, and what they feel like on the other. Now the picture is a "hetero" one alright. There's both "kinds" of stuff, and they are 100% correlated."

    Dennett's heterophenomenology seems to rely on correlation. As we discussed in the course, in the Neuroscience week, this provides neither the mechanisms (easy problem) nor an answer to the hard "why" question. Even with perfect, ideal correlations, the mechanism will not be had, for the many reasons we discussed earlier in the course. It would allow prediction, but not explanation. And even if the mechanism were to be had, somehow uncovered, the hard question would still be unaddressed.

    Harnad makes very similar points later on, and reiterates them quite a few times. Dennett is simply not directly addressing the hard problem. As Harnad states "We are not interested in whether your toothache was real or psychosomatic, or even if your tooth was hallucinated, nor in the conditions under which these various things may or may not happen or be predicted. We are interested in how/why they feel like anything at all."

  5. I frankly found this reading frustrating. What I am struggling with most is Harnad's criticism because, although it is valid, I do not see a suggestion of how in fact to tackle the 'how and why' of our feeling anything at all.
    It is not the 'right' question to ask whether a zombie is feeling or not; instead, the question must be: if not, why not?
    I agree with Harnad, but I am frustrated because I truly do not see what direction we are left to go in for understanding how and why we feel. We might come up with evolutionarily adaptive explanations for why we feel, but why are the mechanisms actually the ones that neuroscience (inadequately) tries to explain? Although interesting, I find it awfully overwhelming and daunting to realize that many of the routes of exploration in understanding how and why we feel have been the wrong ones.
    The closest I got to having a semblance of an idea for a new way of answering the question is the passage in which Harnad corrects Turing, wishing he had said:
    "I don't think we can make any functional inroads on feelings, so let's forget about them and focus on performance capacity, trusting that, if they do have any functional role, feelings will kick in at some point in the performance capacity hierarchy, and if they don't they won't, but we can't hope to be any the wiser either way".
    Does this suggest feeling as a potential 'byproduct' (for lack of a better word) of computation? If we continue working on designing a computer that will produce more and more of the same behavioural outputs, do we just hope that the more evolved the computer becomes, the more similar it will be to us, and the more of a chance it has at feeling?

    1. If CogSci does only what it can do -- which is to try to reverse-engineer all of our doing-capacity -- it may be that, as a matter of fact, feeling will arise, somehow, because that's the only way to generate those capacities. The trouble is that we still won't know whether and when this happens, let alone how or why...

      Yes, the hard problem is frustrating. After all, feeling is surely our most important property and is in many ways the only thing that matters in the universe.

      But maybe the only consolation is that there is still a colossal amount to do reverse-engineering our doing-capacity, and that T3 (or T4 or T5) will turn out to be a demanding enough test so that only a feeling system could ever pass it: that there can't be Turing-scale zombies, even though we will never be able to explain why.

  6. Harnad feels that all that Dennett is doing with his heterophenomenology is to gather data about both sides of the function/feeling gap without offering any explanation or mechanism that could bridge them. For Harnad, the important question that a science of consciousness should answer is this: “How or why is there feeling?”

    Prof. Harnad has many times claimed in class that he is not concerned with metaphysics, something he leaves to the philosophers. However, insofar as he has confidently made truth statements (“x is true, y is false”), we can claim that he is operating under some metaphysical assumptions, i.e. some beliefs as to the kinds of things that can be said to exist and what can truthfully be said about them. Of course, these assumptions will influence the kind of explanations he will find acceptable (or even intelligible). What I’m trying to say is this: Harnad’s question is a metaphysical question, i.e. a question that cannot be answered within the domain of physics. Why? Because physics takes perception for granted: it is conditioned on the existence of an observer (or, if you prefer, a feeler).

    In other words, to ask “How or why is there feeling?” ends up meaning the same as “How or why can there be science?”. If this analysis is wrong, then we at least need an explanation as to why the feeling question is not a metaphysical question. If we cannot provide such an explanation, then (1) we will never know what an answer to the feeling question looks like, and (2) we will never know if it’s even consistent to ask it.

    1. Keven, "Does feeling exist?" is a metaphysical (ontic) question. The answer is "Yes," courtesy of the Cogito.

      But "How and why do organisms feel?" is not an ontic question (about what exists) but an epistemic question (about what we can know).

      I also don't see why unfeeling machines could not do the measurements and even induce or deduce the theories of science. No magic in measurement, induction or deduction. The only things those unfeeling science-learning machines would lack would be an understanding of the meaning of their measurements and theories. It would all just be squiggles and squoggles, as in the Chinese Room. And even if the scientist machine was a grounded T3 robot, it would not understand anything it said or did unless T3 generates meaning.

      (But let's not make too much of a cult of science...)

  7. "we are robots made of robots; we're each composed of some few trillion robotic cells, each one as mindless as the molecules they're composed of, but working together in a gigantic team that creates all the action that occurs in a conscious agent."

    Systems Reply, right?

    1. That was my first thought as well; however, this statement doesn't even seem to credit the system with having feeling. Here, it seems that action and behaviour are all that count. The Systems Reply at least admits that there is feeling existing in the system.

  8. "Hand-waving -- emergence, giant cooperative entities consisting of dumb homunculi that "add up" to feeling agents..."

    What about information integration theory? Or is that what you're talking about here?

  9. "David Chalmers is the captain of the B team, (along with Nagel, Searle, Fodor, Levine, Pinker, Harnad and many others)"

    Great to see Prof. Harnad on a team with S. Pinker.

    ReplyDelete
  10. I have an idea about the 'why' of feeling, and though it is not a direct response to Dennett, this seems like an ok place to bring it up:

    A common first response to 'why do we feel?' is a behaviourist-type answer: "We feel pain so that we know not to do something; we feel reward so that we do something again". The reaction is often "Yes, but a reward could just be an increase in the probability of repeating an action, totally unaccompanied by feeling". So a feeling system is pitted against a probabilistic system.

    My argument is:
    1. Saying we could run on probabilistic rewards/punishments doesn't mean that we don't run on a feeling system. There could be multiple weakly-equivalent systems that do what we do.
    2. We are not probabilistic systems. Evidence suggests we don't have randomness generators in our heads running a reward/punishment system for everything we do. Instead, we make decisions based on meaning. (Eg: when we choose chocolate over vanilla, it's because we like it more, and to like it more does not just mean to choose it more often. We choose chocolate because there is a meaningful difference in the way we experience chocolate and vanilla - the feeling of one is preferable to the other.)

    I don't know of a system that could do what humans do without appealing to either meaning (a preference for certain feelings) or probability, but perhaps there are others. For now though, accepting that those are the only two possibilities, doesn't it give a "Why?" for feeling? Non-probabilistic systems need to feel in order to make decisions. "How?" is as yet unaccounted for.

    ReplyDelete
    Replies
    1. Jacob, I'm not sure why you pit feeling against behaviorism (which never even explained doing) and "probability."

      Pit it instead against whatever causal mechanism it takes to pass T3 (or T4).

      The hard problem is to explain how and why that T3/T4 mechanism feels (if it feels). And if it doesn't feel, then explain how and why organisms with the very same doing (i.e., behavioural) capacity do feel.

      (It feels like something to mean, believe or understand something. So appealing to meaning does not help. And reward feels like something, but "reinforcement" just means a state that the robot is wired to do things to enter: it need not be a felt state.)

      Delete
  11. “Consciousness, being half-epistemic, like thought, is equivocal. This is just about feelings. Aboutness has nothing to do with it. It's just the how/why of feeling that the A Team (and everyone else who has a go) invariably leaves out. To not leave it out would be to answer the simple question: "How and why do T3 robots like ourselves feel?" Why don't they just go about their Turing business (including the emailing you and I are doing right now) zombily? Why feelingly?”

    This may be much too philosophical or general for a proper response, but reading this I just couldn’t help but wonder: is this sense of “feeling” really such an important human quality? When Harnad asks “Why do T3 robots like ourselves feel? Why don’t they just go about their Turing business zombily?”, it brings to mind the image of millions of people around the world going about their daily business ‘zombily’. Indeed, there are millions upon millions of people (or ‘T3 robots’) living on our planet right now who are not really feeling, but rather just going through the motions of their lives, going about their ‘Turing’ business. In fact, there are multi-billion-dollar drug markets specifically dedicated to creating pharmaceuticals that let you feel less and go about more of your Turing business “zombily”. How, then, is this whole idea of “feeling” seen as so uniquely and innately human, and as defining what makes a machine thoughtful, when it’s something we humans are so ready to depart from in so many ways?

    ReplyDelete
    Replies
    1. "If you prick us, do we not bleed?"

      Julia, the "hard problem" is not about feeling this rather than that, or about feeling more rather than less. It's about explaining how and why we feel at all.

      Living on the planet today are not just 7.5 billion feeling T5 robots like us, but countless other feeling ("sentient") organisms. They all bleed if you prick them, and they all feel the pain if you hurt them. And that's why feeling matters. (That some of them try to numb themselves with drugs just underscores the importance of feeling; and although they may be more numb, they are not zombies, as long as they are still alive and not comatose: they still feel.)

      When you contemplate whether feeling is "really such an important human (!) quality," that's what you have to keep in mind. Not abstract arguments in philosophy class about "qualia" or 1st vs 3rd person states...

      Delete
    2. This is an interesting point.

      First though, I don't think that "going about their daily business ‘zombily’" means that people are not feeling. It may just mean that they feel stressed, bored, etc. (i.e., they feel differently), not that they are not feeling. They are doing, and doing, for humans, is presumably inextricably tied to feeling (let's leave the other-minds problem aside for now).

      As for your question about feeling as a human quality: yes, I think it is. "Suffering as the human condition" can be seen as a weasel-phrase for feeling. Feeling is such an integral part of our daily lives and our motivations. I agree with Prof. Harnad that it extends to other organisms as well, so other cognitive capacities must add together to make us human. Feeling is not uniquely and innately human; rather, the ability to express it the way we do (through language) and to communicate about it (again through language) is.

      Delete
    3. I like Harnad’s point here about not keeping this so narrowly focused on “humans” while ignoring other animals. I’m certainly flawed in the way I’m often quick to draw a line between humans and nonhuman animals, but in terms of feeling, there’s really not much of a line at all. In fact, ‘feeling’ may be one of those genuinely animal qualities about humans, which may also explain why many people are eager to overcome certain aspects of feeling in order to be more “civilized”. After thinking about it, it really does seem that most nonhuman animals do at least as much ‘feeling’ as humans do, if not more, because they don’t seem to have the same desire to repress certain aspects of their ‘feeling’ as we do.

      Delete
  12. The Dennett paper has to be one of the least kid-sibly things I have read in this course so far. It wasn't exactly filled with terminology or concepts that are difficult to understand, but I struggled to grasp how it relates to what we study in CatComCon, or cognitive science in general. To me, the study of heterophenomenology and the methods it utilizes seemed more like psychology and neuroscience than cognitive science. Combining first-person and third-person evidence is, in my opinion, only introspecting on the by-products of feeling. Feelings, unlike the proverbial "belief" and "thought", are usually -- if not always -- ineffable. We often find ourselves at a loss for words when trying to describe instances of feeling, which results in the ostensive (and circular) conclusion that "it feels like something to feel/think/understand/be ___ (insert emotional state)". Even the feeling of pain -- which appears to be more "real" than some other feelings -- is hard to describe; otherwise doctors wouldn't have to ask us to choose from a list of different types of pain, or to rate the level of pain on a scale from one to ten. I don't doubt for a second that the methods used in heterophenomenology could be tremendously useful in studying, for example, the dichotomy between what we believe and what we perceive -- or, as Dennett called them, false-positive and false-negative beliefs -- but I don't think they would prove helpful in the study of cognition and consciousness. Taking the shortcut of examining the verbalizable by-products and correlated functions of feeling is too simplistic an idea to work in terms of explaining consciousness.

    Reading the Harnad paper sure helped put my frustrations into words. (Although I personally don't think that Dennett was interested in discussing the why aspect of feeling at all, even though it is part of the hard problem.)
    "Never mind "qualia." Just call them feelings. I can misremember, I can misdescribe, but whatever I felt, I felt. Whatever that feeling felt-like (not how I remember or describe it, but how it felt at the time) is what we are talking about here, and not even how it felt, but that it felt like anything at all."
    This sums up my doubts about the Dennett paper perfectly. When we talk about consciousness or feelings, it doesn't matter if they are illusions or false beliefs. Whatever I feel at a specific moment is true/real at that specific moment; all that matters is the first-person experience. The moment we take third-person evidence into consideration, we are no longer studying feelings.

    ReplyDelete
    Replies
    1. Alice, "heterophenomenology" is just input + T2/T3 + T4/T5. That is cognitive science (which includes psychology and neuroscience). It will eventually explain how and why organisms can do what they can do. Dan Dennett is right that it will also allow some mind-reading, and prediction and explanation of what people will do (and say). But the question is whether it can explain how and why organisms feel rather than just do.

      Delete
  13. I read Dennett and was briefly concerned I’d been misunderstanding the hard problem for the entire semester, so getting to the following passage was a relief.

    “I can tell you that until you explain why and how a pinch hurts, the game's not won.” (Harnad)

    This is what I was really hung up on at the beginning of the course when we were reading Pylyshyn and Turing and trying to wrap our heads around computationalism. Sure, we can compute answers to questions about story books, but how can we explain the way our stomach drops when we get an email saying a loved one is sick or injured? How do we explain what it feels like to cry or to laugh? I don’t have an answer. But it’s nice to be at the week where we’re firmly acknowledging that there isn’t an answer yet. The question of how and why we feel may not be unanswerable (And hopefully it isn’t!) but as of now it is unanswered. Dennett’s heterophenomenology misses the point. Getting all the third person correlates of feeling is not going to allow you to determine how and why cognitive beings feel. Furthermore, the third person ‘signs’ of feeling are a far cry from feeling that feeling yourself (and I’m not sure I agree that we will one day be able to 100% mind read based on the functional correlates of feeling as is suggested in this paper but that’s beside the point). You can make assumptions about how someone else might be feeling at a certain time but you can’t really know unless you are that person. I don’t know exactly where Dennett is trying to go with heterophenomenology but I know that I don’t like it and I know that it definitely is not going to help us figure out how and why we feel.

    A final key thing that this paper touches on is that a molecular mechanism for what causes feelings to occur is only correlative. It is not the same thing as explaining how/why it feels like something to do the things we do.

    ReplyDelete
  14. One of the articles posted at the top of this page makes a case for the claustrum, the thin neuronal sheet underlying the cortex, as the seat of consciousness in the brain. The article describes how Salvia, a psychedelic drug, binds to and activates receptors in a way that inhibits the claustrum overall. The central point of this article is:

    “If a region central to the integration of consciously represented information is disturbed in its function, we would expect fundamental disturbances in the conscious experience. The core of a person’s consciousness seems to be altered by Salvia divinorum, rather than merely some distortions of vision or audition.”

    This article shows more than just correlation between brain structure and function. In this case, the inhibition of the claustrum results in subjective accounts of distorted consciousness.

    I’d like to use this example to make sure that I understand heterophenomenology and to examine just what weight this finding carries.

    This is a case of heterophenomenology in the sense that the researchers look at introspective accounts from Salvia users + neuroanatomical and neurochemical information to get a full picture of what is going on in the human. By combining all of this information, researchers get a fuller picture compared to subjective reports alone.

    Harnad, you would still say that this evidence does no more than illustrate the problem: it still does not show any mechanism. As much as I reject this article as a gross oversimplification, I do still see that there is some sort of causal relationship at play. The chemical action at receptors in the claustrum does cause a change in the experience of consciousness. Sure, we cannot jump to the conclusion that consciousness is controlled/produced by the claustrum, but isn't this some sort of mechanism underlying one aspect of feeling? This seems to be a small piece of evidence that a reverse-engineered T5 robot with the same types of receptors and properties as the claustrum (and all other identical brain structures and chemicals) would feel. I am of course open to being challenged on this. It is one of the first concrete examples that I've tried to apply these lines of thinking to, and thus I'm still finding my footing.

    ReplyDelete
    Replies
    1. Lila, you don't have to turn to the claustrum or Salvia to show that the brain causes feeling: Just seeing red vs green is enough, along with color receptors, opponent processes, etc.

      No one (except maybe a psychokinetic dualist) doubts that the visual system causes what it feels like to see, somehow. And you can give all the details of what causes us to see red rather than green -- not just to detect or say the difference between red and green, but to see it (i.e., to feel what it feels like to see it).

      But what that causal mechanism does not explain is why it is not enough to just detect and say the difference (which is all that Dan Dennett actually believes happens!) but instead actually feel something while you're doing that. On the face of it, without further explanation, the actual felt seeing (which the brain undoubtedly causes) is causally superfluous.

      That's the hard problem. Not the correlations, from which we are of course right to assume causation.

      Delete
    2. Yes, this is all MUCH clearer in my mind after our discussion last class. Thanks!

      Delete
  15. "How/why does it feel like something to have (or to be!) certain functional powers? Although that sounds superficially like asking 'How/why does gravity pull?' it isn't, because pulling is gravity, but feeling is not doing. (It's function/function in the first case, feeling/function in the second.)"

    I suppose this is sort of my difficulty here: I would suggest that a pull is something that can be experienced and observed (like a heterophenomenologically observable property), and gravity is something we have defined as the predictable "force" behind that pull in certain situations, though we cannot directly observe it. Why can we not say the same of feeling -- that behaviour is the observable side of the property we call feeling? In that case it actually has an advantage over gravity: we cannot experience gravity directly, but we do experience feeling. This is where I feel the question becomes interesting: why do we even experience feeling if it is only an underlying principle for "doing"?

    ReplyDelete
  16. The Harnad article takes on Dennett's in a dialogue. The main focus of the comments is to say that heterophenomenology does not address the 'hard problem': it does not give us a causal/functional explanation of feeling. The author insists on this point very clearly, maybe even a little too much, as the majority of the comments only repeat this missing element. The heterophenomenological method described by Dennett seeks to explain the existence of our 'beliefs' -- not so much the fact that we have beliefs (feelings) as how particular beliefs follow from objective data ('cognitive mechanisms', 'direct evidence') based on their correlation. In this way it claims to help resolve the challenge posed by Turing, but in order to reverse-engineer feeling it would have to attempt a causal mechanism for it, which it fails to do. One reason may be the great vocabulary confusion that Harnad rightly points out and that can lead to contradictions (as in the zombie puzzle).
    However, I did not understand the end of the article and am still puzzled, because it seems as if Dennett (erroneously) presents subjects' beliefs as something they do -- as something that has nothing to do with the fact that they feel. If beliefs are something we do, then it could be possible for the heterophenomenological method to acquire data on the basis of which it could offer a hypothetical functional explanation...

    ReplyDelete
  17. In the article "The Mind/Body Problem is the Feeling/Function Problem," Harnad examines the paper by Dennett and disagrees with him completely. Considering what Harnad says, I believe the biggest mistake that Dennett makes in his argument is where he focuses the problem.

    Dennett is always trying to talk about experiences, in a sense quite different from feelings. A zombie could have "experiences" and thoughts in that functional sense, but it is impossible for it to have feelings; and feelings are not beliefs, nor the same as beliefs about feelings.

    I believe that Dennett does not recognize that the hard problem is to explain why people feel -- not some specific kind of feeling, but feeling at all. The heterophenomenology that he proposes may help predict what feelings a person might have, but it cannot help explain why and how people have feelings.

    Also, Harnad seems to hold the view that causal mechanisms will fail to explain how and why we have feelings. I still find this hard to understand...

    ReplyDelete
  18. Regarding Dennett's use of belief (and his ensuing quest to use it as the substrate of heterophenomenology, and thus in explaining cognition):

    "First, that "belief" is a weasel-word. This is controversial and (based on the resistance I have encountered over the years) probably original with me: "I believe that X" is no different from a sentence on paper or on screen, or implemented dynamically as a computer state -- unless there is something it feels-like to have that belief. If you don't feel, "you" [I hesitate to use the animate 2nd person to refer to a Zombie, I should really say "it"] don't have beliefs, "you" merely have (meaningfully interpretable) internal sentences (or internal states that are interpretable as sentences are)"

    I would add -- if only for the purpose of laying out the particular mental gymnastics I had to go through throughout Dennett's arguments -- that what separates a belief from a(n internal) sentence is indeed feeling: that feeling would be the feeling of truth, or lack thereof, which would really be a truth in and of itself. This is precisely why Dennett's continued insistence that the non-feeling psychological be a valid standard by which to judge and extrapolate the origins of feeling (or, as he would say, of everything we believe) is unsatisfying and perplexing. Indeed, "belief" is a perfectly adequate substrate for heterophenomenology and what it does, if not exactly what it aspires to do: determined to outline the causality of one phenomenon by correlation with another, and to judge the validity of the former by the latter, it cannot do anything but presume the "truth" (or lack thereof) of what we feel to be true -- since it certainly can't quash the very existence of those same feelings. And that, I feel, is the crux of why "belief" and Dennett's heterophenomenology fail entirely to get to the root of cognition.

    ReplyDelete
  19. In his article "Scientists discover the on-off switch for human consciousness deep within the brain", Sebastian Anthony explains that electrically stimulating the claustrum shuts off feeling/self-awareness/consciousness. Like the idea of online/offline discussed in class, it seems that the claustrum is what allows there even to be an online. The claustrum is depicted as the organizer/boss/mediator of feeling.
    Aside from the limitations that Anthony mentions in the article (a single-person sample with an already lesioned hippocampus), Harnad argues in his response (Claustrum Nostrum) that stimulating the claustrum arrests not only consciousness but also functioning.
    In this sense, they break the brain's circuit by disrupting a connection, and of course perturb the system. Then, they leap from merely disconnecting a wire to calling it a switch. Needless to say, it doesn't tell us much about what the claustrum does, much less about how and why we feel.

    ReplyDelete
  20. My answer to the question from the previous session remains an even 15. There is not a bee nor flower which could draw a different answer from me. I apologize for the forwardness, but in my belief system it is not valid for one to use their dietary needs in order to argue a point within the "realm" of "cognitive science". Many ontological systems exist. It is for this reason that we aim to achieve a passable TT. But what if you have taken many tests over time that essentially add up to the answer 15? If I have completed enough testing, then I feel comfortable saying that there is not enough evidence to suggest any "orgasm" is worth "any" life. But "since" it is "up" to me, I can choose. And I choose: 15.

    ReplyDelete
  21. Well, if Harnad makes one thing very clear in “Consciousness: The Mind/Body Problem is the Feeling/Function Problem,” it is that what’s at issue is really FEELING.
    This at least shed some light on what heterophenomenology is really trying to do, and on what it is not actually answering; the first reading had me very confused. As Harnad points out, the hard problem (how and why do we feel?) is insoluble, at least as of now. While thinking about it, it occurred to me that there is an even bigger question that hasn’t come up yet in the course (though I don’t know if we really want to go there): how and why do we exist at all? I think Prof. Harnad would probably dismiss this question and just take it that we know we exist. I would love to have your input on this.
    To add, my mom sent me this TED talk the other day, and I think is relevant to the subject of this week: http://www.ted.com/talks/david_eagleman_can_we_create_new_senses_for_humans?utm_campaign=ios-share&utm_medium=social&source=facebook&utm_source=facebook&fb_ref=Default
    The talk, titled “Can we create new senses for humans?”, by David Eagleman, is very interesting. His original project is about allowing people with hearing or vision disabilities to feel (mainly through the skin on the back) what they are being told, or to feel what is in front of them through a sensory vest; the idea is to do “sensory substitution”. By the end of the talk he discusses “sensory addition” to expand the human “umwelt”. For example, he talks about pilots being able to feel a whole plane, which, if I understand correctly, would let them assess the state of the plane faster than by trying to find, among the billions of buttons in the cockpit, what is wrong with it. I found it extremely interesting because it is true that other than doing, we just feel. He says at the end of the video: “there is a difference between accessing big data and experiencing it,” which really just means there is a difference between interacting with data and feeling it. I can see; I can access all of those sensory inputs; T3 can also do that; but I also can feel what it feels like to see. I wonder if these sensory-addition models could help in trying to answer the question: why do we feel?

    ReplyDelete
  22. Both the article claiming the drug Salvia may shut off consciousness by acting on a class of receptors found in the claustrum and the article claiming that stimulating the claustrum in an epilepsy patient shut off her consciousness exemplify the problem of using consciousness as a weasel word for feeling.

    Neither article explicitly defines what is meant by consciousness and, especially in the epilepsy article, it is unclear whether they are simply trying to say not-awake as opposed to unfeeling when they say unconscious.

    I think the evidence that the claustrum actually causes us to feel is pretty meagre. The authors describe how being on Salvia supposedly differs from being on LSD: they say that Salvia users often believe they are in an environment different from the space they are actually in, and believe they are interacting with fairies, dead people, etc. This supposedly represents a more fundamental disturbance of consciousness than what is experienced on other drugs. But doesn’t it still feel like something to think you’re somewhere else? Doesn’t it feel like something to believe you’re interacting with a fairy? (We were also given anecdotal evidence in class that you do in fact feel while on Salvia.)

    While I found these articles interesting to read, I think that even if feeling is found to be the product of a specific brain region like the claustrum or cingulate cortex or pulvinar nuclei, this isn’t going to help us solve the hard problem. Knowing that my claustrum lights up when I raise my finger isn’t going to tell me why it felt as though my feeling that I wanted to raise my finger CAUSED my finger to move, when logically I know that it had to be the four fundamental forces that caused the movement. It’s also not going to tell me why it is adaptive for it to feel like something to look at a coloured wall or to understand that the claustrum is part of the brain.

    ReplyDelete
  23. Harnad’s paper confirmed my inquiries and problems with Dennett’s paper. His main point is: without feeling there is nothing. Heterophenomenology does not consider feeling, so consequently it is useless. It can take someone’s experience, have them recount it, and have someone else categorize all the information. However, heterophenomenology cannot tell us “what” the person felt, nor can it tell us “why” the person felt what they felt. So once again (as I mentioned in my previous skywriting), the hard problem is still left unanswered and we have no solution to it. Even the Turing Test cannot provide a solution to the hard problem, because “until further notice, the only one who can actually feel the feelings themselves is the party of the first part, the feeler”. There is simply no objective, measurable quality of feeling. Therefore, we cannot fully understand cognition.

    ReplyDelete