Saturday 11 January 2014

11a. Clark, A. & Chalmers, D. (1998) The Extended Mind.

Clark, A. & Chalmers, D. (1998) The Extended Mind. Analysis 58(1): 7-19.



Where does the mind stop and the rest of the world begin? The question invites two standard replies. Some accept the demarcations of skin and skull, and say that what is outside the body is outside the mind. Others are impressed by arguments suggesting that the meaning of our words "just ain't in the head", and hold that this externalism about meaning carries over into an externalism about mind. We propose to pursue a third position. We advocate a very different sort of externalism: an active externalism, based on the active role of the environment in driving cognitive processes.

65 comments:

  1. “What about socially extended cognition? Could my mental states be partly constituted by the states of other thinkers? We see no reason why not, in principle”

    Socially extended cognition is of course something that can be noticed; just look in any classroom. When we are on our computers, we have an electronic-social extended cognition. We are in a social situation, and something is happening in our brains that makes us feel. We have mental states that are electronic-socially induced. I do not see the necessity of saying “in principle”. It seems clear to me that shared consciousness occurs all the time, but that is not a way out of the problem. The issue at hand is how to explain how and why this could be possible. Philosophy is a useful tool for setting up thought experiments, but cognitive neuroscience is the space where experimental evidence will be found.

    Replies
    1. “What about socially extended cognition? Could my mental states be partly constituted by the states of other thinkers? We see no reason why not, in principle”

      To me there seems to be a difference between shared consciousness and influencing another’s mental state. Please correct me if I am wrong, but I interpret this quote from Clark and Chalmers as saying that mental states can be made up of the states of another. Alternatively, it is possible for specific mental states not to exist in some people but to exist in others. Clark and Chalmers give the example of John Wooden, who cannot remember the names of other people. His wife remembers the names of people Wooden meets and “[serves] as his memory bank”. Influencing a mental state, on the other hand, seems to mean altering an already existing mental state in a person rather than having that mental state exist in another person altogether.

      Mental states being constituted by the states of others seems more like filling in the blanks in a paragraph of words. Influencing seems to be something different; it seems to be more like changing the words slightly.

    2. I think what is meant by socially-extended cognition is that we rely on other people (others' cognition) in order to do certain things. In the case of John Wooden, he relies on his wife in order to remember. I do not see this as "influencing" each other's mental states. "Influencing" is just a floppy way of saying (what I think Clark and Chalmers are trying to say): certain things I do, for instance remembering, rely on others' cognition. I'll say more in my own post about what I actually think of the thesis, but as far as socially-extended cognition goes, I agree with Robyn. It does not exist merely "in principle". If we are willing to say that the demarcation between skin and skull is arbitrary, and we are willing to say that we rely on objects, tools, computers, and the environment to 'cognize', then why stop there? It would be just as arbitrary to say that we rely on all of those things and not other people.

    3. I completely agree with you that in the case of John Wooden, he is not being influenced. Since he is relying on his wife to remember the names of other people, he is not being influenced but relying on her mental states. In my comment, I was trying to highlight the contrast between relying on other people’s mental states and “influencing” another’s mental state. I agree with you that these two things are very different, but I don’t think I put that across very clearly.

    4. Where Am I?

      A mental state is a mental state because it is a felt state. If it's not felt, it's not a mental state.

      Knowing something is a mental state. It feels like something to be in that state. If you rely on what others know and tell you, or on what you read in a book, that's input to your mental state, but is it part of your mental state? Are those others (or their mental states) part of your mental state? Is the book part of your mental state? When you are looking at the moon, is the moon part of your mental state? (I didn't ask whether what the moon looks (feels) like to you is part of your mental state. Of course it is. I asked whether the moon itself was.)

      Harder question: When I ask you who your 3rd grade school teacher was, and it takes a while to remember it was Penny Ellis, is what is going on before you remember it was Penny Ellis part of your mental state? (It's going on in your head, but you're not feeling it.) There's no doubt it's part of your brain state; but there are lots of unfelt things going on in your brain all the time (e.g., your vegetative functions). Are they part of your mental state?

    5. A mental state is a mental state because it is a felt state. If it's not felt, it's not a mental state.

      Knowing something is a mental state. It feels like something to be in that state. If you rely on what others know and tell you, or on what you read in a book, that's input to your mental state, but is it part of your mental state? Are those others (or their mental states) part of your mental state? Is the book part of your mental state? When you are looking at the moon, is the moon part of your mental state? (I didn't ask whether what the moon looks (feels) like to you is part of your mental state. Of course it is. I asked whether the moon itself was.)

      Harder question: When I ask you who your 3rd grade school teacher was, and it takes a while to remember it was Penny Ellis, is what is going on before you remember it was Penny Ellis part of your mental state? (It's going on in your head, but you're not feeling it.) There's no doubt it's part of your brain state; but there are lots of unfelt things going on in your brain all the time (e.g., your vegetative functions). Are they part of your mental state?

      I think knowing something is a belief. Feeling something is a mental state. Having a belief seems like having a notebook. But remembering provides us with much more than a belief. Chalmers says, "In both cases info is reliably there when needed." Only the functionalist interested solely in outputs and inputs can find solace in this answer. Chalmers is wrong. Consider the Inga/Otto example. Having info in a book, which affords one access to the proposition that something is located somewhere, is different from remembering where something is located. It feels like something to remember. There's a feeling of space. It looks like something to follow directions to a familiar location. You can travel there in your head. Not so with written directions. The sense of confidence one has with a GPS compared to just having written directions (not a map) attests to the power of a feeling. You can know about a location without the address. Chalmers wants to compare having directions and knowing the directions to the Twin-Earth-water-XYZ example. Water on Earth and on Twin Earth feels the same, whether the molecular structure is H2O or XYZ. Not so if all you have are the directions and no map.

      The professor wanted to know whether vegetative functions going on in the brain are mental states. A mental state is only a state of which you yourself are aware. You gotta feel it. But what do we call the unfelt feelings we can feel, but just aren't feeling at the moment? Consider pants. Until someone draws your attention to the fact, do you feel your pants? (Consider a college degree. Do you feel the effect of a college degree until you get out of college? It's kind of like pants.) The visual system lets us feel the whole picture, but we can also let go of that feeling and appreciate the individual parts which constitute that picture. The ventral stream of the visual system, which culminates in feeling the image of someone's face, is constituted by many neurons which respond to less complicated stimuli. Orientation-specific line cells fire and we feel just the lines on our retina. I think that feeling is there for us. But because neurons that fire together wire together, that feeling instantaneously turns into something else.

      Consider the word noodle. The feeling elicited by the word nude is also present (activated?), but then inhibited by the neurons which fire upon taking the input of the phonemes which follow. We're trying to look at the brain as some static chunk of meat, but it's dynamic and constantly moving.

      Vegetative functions of the brain can be felt because every single neuron contributes to feeling. It is not the case that we reach consciousness only after a long train of "mindless" neurons. Every single neuron that fires does feeling.

    6. The other minds problem comes up for me here. We don't know whether others are feeling, or what they are actually feeling. From personal experience, interpretations are important and shape beliefs; people feel differently and therefore have different mental states as well. So I think it's possible that our mental states are socially influenced, but not that they are based in the mental states of others (or at least not entirely based in others), since we don't necessarily have access to that information. So feeling must be based in something else, too.
      So is the moon itself part of my mental state? I say no - my interpretation (the way my eyes are processing the light, for example) of it is.

      What needs to happen in order to be feeling something? Another way of saying, what processes lead to a belief becoming conscious? Back to the hard problems.

    7. Alex,

      It feels like something to see that the cat is on the mat.

      It feels like something to say (and mean) that the cat is on the mat.

      It feels like something to understand that the cat is on the mat.

      It feels like something to think or want or expect that the cat is on the mat.

      It feels like something to believe that the cat is on the mat.

      It feels like something to "know" that the cat is on the mat (though knowing is usually just confidently (and with reasons and evidence) believing something that also happens true -- real "knowing" only rises to the level of certainty with the Cogito and to the level of necessity with the law of noncontradiction).

      C & C's "extension" just pertains to "offline" (unfelt, i.e., "zombie) data or processes -- regardless of whether they are located and taking place in the brain or in google-space. They are potential input to a felt state. (Or, if in the brain, they are perhaps also potential causal components of a future felt state.)

      The only aspect of this that is like H2O vs. XYZ is that, being unfelt, it makes no difference whether zombie data or processes are located and occur outside the brain, or inside it, but "offline."

      You are right, however, that the unfelt brain processes preceding or underlying felt brain processes can be very interactive, passing in and out of the actual felt state. (And all this is very vague!) But google data and processes never become components of the felt state, just inputs to it.

  2. This paper reminded me a great deal of Searle’s Chinese Room and the Systems Reply. I think that an extension of Chalmers’ experiment would argue that Searle does understand Chinese, given that his rulebook (internal or external) is very much like Otto’s notebook in that it provides the necessary information to achieve the desired output (though Otto at least knows what he is trying to do with the information; but we can imagine Searle using a phone to order from a restaurant, in Chinese, from the confines of his room, receiving the desired order, and still not having a clue what he told the receptionist at the restaurant). And, though Chalmers lists a great many reasons why Otto and Inga are not different, he doesn’t satisfactorily address the fact that, as we have talked about in class, it “feels like” something to hold a belief, and although we may have a hard time determining the status of an offline belief, we do know that consulting a dictionary to check the definition of a word we have forgotten is qualitatively different from remembering the word ‘on our own’. I don’t have any problem allowing the external environment into our cognitive processes, but I do think that this poses an obvious difference that Chalmers has not addressed. Modules plugged into our brain may or may not be part of our consciousness – it all depends on what it feels like to access them.

    Replies
    1. I agree; I too was left wondering about the intricate relationship between "mind" and "feeling" (which is most likely due to the fact that, in this class, we're being prompted to conceptualize in terms of "feelings" instead of any other possible weasel word).

      Talking about extended cognitive processes is one thing, but talking about an extended mind is much more complicated - again with the easy problem and the hard problem! While Chalmers argues that the mind is extended outside the body, I don't see why I couldn't counter-argue that while it "feels" like your capacities are extended to the outside world, your feelings are still internally generated. Because if we follow Chalmers's reasoning, we can ask whether feelings could be "extended" to the outside world, or reside outside the body. As if feelings could be "out there", ready for you to pick them up.

      Personally, I think that this is just playing with words. Yes, there is environmental "stuff" happening that constrains cognition, and yes, this may create "out-of-body feelings". For instance, people from collectivist cultures have a sense of self that contains other people ("I am a father" or "I am X's friend", as opposed to an individualistic framing like "I am a student"), so they may "feel" as if their own self is extended into the outside world within other people. But to me, these are still feelings generated by the brain; they are still "internal".

      (By the way, the Prof elaborates on your point about Searle's CRA in his video. He said that Systems Reply people would argue that the room "understands" Chinese, and that it is also an extension of Searle's mind. So there is some understanding in an extended "somewhere". That's what I understood from it.)

    2. "though Chalmers lists a great deal of reasons why Otto and Inga are not different, he doesn’t satisfactorily address the fact that, as we have talked about in class, it “feels like” something to hold a belief, and although we may have a hard time determining the status of an offline belief, we do know that consulting a dictionary to check the definition of a word we have forgotten is qualitatively different from remembering the word ‘on our own’."

      I do sort of agree with the point you are trying to get at, Dia. I do not really see how it directly counters what Clark and Chalmers are saying, though. There are a few rebuttals they might have...
      The extended-mind thesis does not argue that ALL cognition is extended, only that sometimes our cognition incorporates things other than our brains. So yes, it does feel like something to remember a word 'on our own'. However, it also feels like something to understand something only as a result of a dictionary, another person, or an experience (sounds to me like this thesis could be applicable to the symbol-grounding problem, maybe?). Or it can feel like something to forget (unextended cognition) and then something else to remember because my friend told me (extended). The part of your issue that I agree with and cannot resolve is: is the feeling extended? Or was the process of remembering and arriving at the feeling of knowing/understanding part of the 'computation'?
      Again, I just keep coming back to the idea that not all cognition is necessarily extended, just some and sometimes.

    3. Hyperextension

      Suppose I think "the cat is on the mat." It feels like something to think that thought, so that is a mental state.

      Now suppose I was made to think that because I saw that the cat was on the mat. Or someone told me that the cat was on the mat. Or I read that the cat was on the mat. Or I remembered that the cat was on the mat.

      Same state: I'm thinking that the cat is on the mat. Different ways I got into that state (saw something, was told something, read something, remembered something).

      No problem so far.

      Now the question: that ongoing, online state of thinking that the cat is on the mat: what is plausibly part of the physical implementation of that mental state? the cat? the mat? the one who told me? the book I read it in? the part of my brain that activated remembering that the cat is on the mat?

      You don't want either the input to the mental state (which could be looking at the moon) or the referent of the mental state (which could also be the moon) to be part of the mental state. You just want that to be the parts of your brain that are making you feel what you're feeling right now: the parts that, if you turned them off or removed them, would make what you are feeling when you are thinking "the cat is on the mat" disappear.

      The retina is part of the brain. So the cat's and mat's shadows on the retina could be part of the mental state of seeing that the cat is on the mat.

      But, skipping over the cat, the mat, the friend and the book -- none of which are part of the mental state of thinking that the cat is on the mat -- what about remembering that the cat is on the mat? While the memory is being felt online, any ongoing activity in your brain might potentially be part of that mental state (though most of it isn't). But was the offline data in your brain -- the data that (like the friend or the book, or, for that matter, the cat and the mat) provided the input for your thought that the cat is on the mat -- part of the physical implementation of the mental state of thinking that the cat is on the mat?

      Because whatever is not part of the physical implementation of that mental state is no "extension" of your mind (even when it's in your brain, let alone outside your brain)...

      As for the Systems Reply: if it's overextending a mental state to include in its physical implementation either its input or its referent, it's overextending the very idea of "mental" or "mind" to apply it to states that someone does not even feel (such as Searle's alleged understanding of Chinese).

  3. “Perhaps the intuition that Otto's is not a true belief comes from a residual feeling that the only true beliefs are occurrent beliefs. If we take this feeling seriously, Inga's belief will be ruled out too, as will many beliefs that we attribute in everyday life.”

    I do think it’s worth distinguishing internal information from external information, as well as internal occurrent beliefs from stored beliefs/knowledge. Noticing the commonalities is also useful, but why push so hard for this over-simplification? It seems like cognition and the brain architecture that allows it to work aren’t as amenable to Occam’s Razor as most other phenomena. It is, after all, not a rule but a suggestion. I think it’s worth noticing the difference and, yes, ruling out Inga’s long-term memory stores as beliefs. As far as I can tell, what I feel are my occurrent beliefs, nothing else.

    “To consistently resist this conclusion, we would have to shrink the self into a mere bundle of occurrent states, severely threatening its deep psychological continuity. Far better to take the broader view, and see agents themselves as spread into the world.”

    I’m okay with the “mere bundle”, because as far as I can tell, that is more or less what I actually am. Psychological continuity will occur either way, since the context of information retrieval between any two organisms is much more different than that between me-now and me just-before-now. In fact, the two sources of information which dictate my next occurrent belief, internal activated symbols and external sensory cues, are consistent across time. Physical space and conceptual space tend to have a certain order (even though we’re not sure how concepts are ordered to behave the way they do).

    Even though I disagree with the gist of the paper, that there’s no inherent difference between external and internal cognitive tools, I agree that the commonalities shouldn’t be ignored in our attempt to make sense of cognition (but neither should the differences!).

    “Perhaps there are other cases where evolution has found it advantageous to exploit the possibility of the environment being in the cognitive loop. If so, then external coupling is part of the truly basic package of cognitive resources that we bring to bear on the world.”

    This is where I think they’ve got a great point. When it comes to evolution, this is most certainly the case for animals adapting to and forming niches. Beyond that, I think there’s a deeper idea that can be uncovered through this line of thinking. Not only do organisms use their environment as part of the tool kit which allows them to survive and spread their genes, but they can use their own bodily reactions as sources of information (since their own bodies are the most consistent part of their environment!). As far as I can tell, bodily reactions (which are unconscious reflexes) are the only available precursors to a nervous system. I believe this is the only reasonable avenue towards explaining sensations (or feelings, if you prefer).

    As the psychologist Nicholas Humphrey explains: “Both sensations and bodily actions (i) belong to the subject, (ii) implicate part of his body, (iii) are present tense, (iv) have a qualitative modality, and (v) have properties that are phenomenally immediate”. (Humphrey, Soul Dust 47)
    It could very well be that in the process of evolution, bodily reactions were highly informative cues for representing what’s out there beyond the confines of our selves. Monitoring our own bodily responses could have evolved into monitoring our responses “in secret”, meaning internally. In principle, natural selection could simply do some tidying up by eliminating the outward response. In a certain sense, responses became privatized within our brains.

    Replies
    1. Unfelt bodily responses or felt ones? If felt, that begs the question; if unfelt, it doesn't even touch the question. (In the body need not be in the mind; but can in the mind be other than in the body?)

      Harnad, S. (2000) Correlation vs. Causality: How/Why the Mind/Body Problem Is Hard. [Commentary on Humphrey, N. "How to Solve the Mind-Body Problem"] Journal of Consciousness Studies 7(4): 54-61. http://cogprints.org/1617/

  4. “Some find this sort of externalism unpalatable. One reason may be that many identify the cognitive with the conscious, and it seems far from plausible that consciousness extends outside the head in these cases. But not every cognitive process, at least on standard usage, is a conscious process.”

    This is the view supported by Dror & Harnad, except for the last sentence. Indeed, they would deny that nonconscious functions or states are mental or cognitive: “Otherwise, breathing and balance are unconscious and automatic - we might call them “vegetative” rather than cognitive functions.” Since they are not starting with a shared definition of cognition, Clark & Chalmers and Dror & Harnad speak right past each other. I will leave my criticism of Dror & Harnad for the next post and focus here on why externalism makes sense.

    First of all, I would like to argue that “intentionality” is not as weaselly a word as Prof. Harnad would have it. Intentionality is not a synonym of consciousness but a way of talking about it that emphasizes one of its aspects. (Homo sapiens is not a weasel word for human being but a way of warning you that I will be looking at human beings in the context of their evolutionary history.) Intentionality comes from the Latin verb “intendere”, which means “to direct towards”, the image being that of an arrow pointed at an aim. What’s the point? The point is that consciousness is known only because of the things it aims at, not because of what is doing the aiming. When I see something, I am not thinking about my eyes; I know that I’m seeing because of what I see, not because of my eyes or brain or whatever.

    Again what’s the point? One way of conceiving of a cognitive agent would be to say that it is whatever is doing the aiming. We can think of cognition as everything that goes on that allows that aiming to happen. And if we identify a system that aims at the world in some way, then we have a cognitive agent.

    Perhaps this “aiming” business is too abstract. The pipe underneath my kitchen sink is leaking. I need a plumber. The plumber knows about pipes, and he has a wrench. The plumber without a wrench is not a plumber but a dude who knows about pipes. Me with a wrench is just me with a wrench. What is “aimed at” is the leaking pipe, and what is doing the aiming (in this case, the “fixing”) is the plumber (the system consisting of a dude who knows about pipes and who has a wrench). Is this a Systems Reply? No. Searle’s Chinese was useless: he couldn’t use it to order a salad. It could not serve any purpose whatsoever. The plumber system can fix pipes.

    This does not mean that Jerry, the dude that knows about pipes, is an incomplete cognitive system on his own. It just means that the cognitive system is codetermined by the cognitive function, or problem to solve, or what is “aimed at”.

    “One might argue that what keeps real cognition processes in the head is the requirement that cognitive processes be portable. [...] On this view, the trouble with coupled systems is that they are too easily decoupled.”

    I can let go of a kidney, a lung, a pint of blood, even limbs and parts of the brain, and remain alive and kicking. Ultimately, the attempt to draw a line between easily and hardly decoupled organs is arbitrary.

    It turns out that Dror & Harnad are not unsympathetic to this picture of externality. They just insist on their terminology: everything I have described is not cognitive but sensorimotor…

  5. From what I understood in the reading and the YouTube lecture, the extended mind thesis puts forth the idea that cognition is not restricted only to what occurs inside our skulls, but can also incorporate other things (e.g., the notebook example given in the paper) that serve as reminders or facilitators of thought. Where does one draw the line of what is considered part of the extended mind? The paper includes criteria such as portability and reliable access, which are both demonstrated to have been met in the notebook example, where the notebook acts as an extended mind for Otto. At some point in the paper, the authors mention ‘beliefs’, which I don’t quite understand.

    “The moral is that when it comes to belief, there is nothing sacred about skull and skin. What makes some information count as a belief is the role it plays, and there is no reason why the relevant role can be played only from inside the body.”

    From this quote, I understand it as saying that belief can arise from anything and does not need to come from the mind. Otto has the belief that the MoMA is on 53rd street based upon what is written in his notebook. But, at that point, doesn’t what he read then become integrated into the mind, and thus the belief originates from the mind? I don’t see how a belief can be a belief if there is not a mind to believe in it.

    Replies
    1. I think the fact that he needs the notebook in order to form the belief is part of it. He definitely needs a mind to have the belief, but the belief doesn't have to originate in his internal mind. Since he uses it, and it provides the right information and is readily available, the notebook is part of his mind. So the beliefs are already his.

      But even if that were true, there is still something fundamentally different about the way Otto believes and the way Inga believes. Clark & Chalmers shot down the difference in the source of the belief, treating the two as equivalent because of the extended mind. But if we see belief as a feeling, then the better question is: can we have feeling in the extended mind?

    2. Although I am very satisfied by the idea that the environment is part of our mind, I was also having trouble understanding to what extent the outside world is part of our cognitive process. I found it hard to wrap my head around the distinction made between "epistemic action" (any action in the world that contributes to our cognitive experience, for example: looking at a word in a book, recognizing the word and deciding to explain its meaning to someone else) and "pragmatic action" (the meaning of which I am still unsure of; I'm assuming any action in the world that does not contribute to our cognitive process. But wouldn't opening a book, though perhaps seemingly pragmatic, be epistemic, since it eventually leads to the recognition of the word?). I also believe that the criteria Vivian mentioned (portability and reliable access) are quite arbitrary. If we are going to accept the theory that the environment is part of our cognition, then I think it is logical to automatically accept that all aspects of the environment that we are experiencing are part of our "cognitive process" at all times, though perhaps largely unconsciously and indirectly.

      “The moral is that when it comes to belief, there is nothing sacred about skull and skin. What makes some information count as a belief is the role it plays, and there is no reason why the relevant role can be played only from inside the body.”

      I agree with what Jocelyn said. There is no doubt that there must be a mind to experience the belief. I think that statement is an attempt to reiterate the idea that one component of a cognitive process, such as the experience of making a decision, depends completely on other components of the cognitive process, such as the recognition of whether or not a fridge can fit into a microwave. We can arguably recognize that a fridge cannot fit into a microwave in two ways: 1) we can retrieve memories of the sizes of both these objects stored in our brain, or 2) we can attempt to put a fridge in a microwave in the environment. Since both of these processes lead to the same conclusion (the decision, or belief, that we should not put fridges into microwaves), both of these processes, actions inside the brain or outside the brain, are equally fundamental to the larger cognitive process. Thus the "relevant role" the authors speak of in the quote is not played exclusively by our bodies (brain, skin, skull), and thus the brain is not any more sacred when it comes to consciousness than the act of moving that fridge.

      Jocelyn's question ("But if we see belief as a feeling then the better question is, can we have feeling in the extended mind?") is really interesting. If cognition is feeling, then I suppose feeling is part of the extended mind? Although it's tricky when we think about things like pain which feel so contained in our individual bodies, yet are so dependent on external stimuli.

    3. I don't really understand what Chalmers really means by "extended mind", maybe because I don't really believe it exists. Obviously, the environment plays an integral role in shaping how we cognize, but I don’t think that there is “cognition in the environment” as the article seems to suggest. If Otto’s notebook is an extended memory (or all his memory, since he cannot remember anything), maybe he does have that belief, but does Otto believe that he has that belief if he does not feel it? I would think not. I don’t even think that it could really be described as “belief” if “belief is something that you believe to be true”. What’s in the notebook is just information, accessible to Otto…but it is not his own belief until he accesses it and chooses whether or not to believe it is true…and that feels like something. If he has no memory capacity, then after he is finished believing, it goes back to residing in his notebook. Similarly, I feel like a belief requires history (like Chalmers mentions), but a belief that is not being accessed, as it sits and collects dust in our memory, is not cognizing! Only believing, in the present, is cognizing.

    4. This comment has been removed by the author.

    5. Angela, I agree with your first point: if you consider the extended mind to be solely outside the physical delimitations of the brain, then I don’t think that an “extended mind” exists. I agree that we have inputs from the environment that influence and interact constantly with our mind, but I do not think that a notebook can be part of our mind. Really, as Harnad would say, we should stop using weasel words and just talk about feeling. The mind and our consciousness all mean one thing: feeling. So, as Harnad states, anything that is not felt is not part of the mind. It seems to me, then, that anything that would be part of the extended mind would be something unfelt. The example used in class was that it feels like something to remember Penny Ellis’s name, but the process by which we remember Penny Ellis’s name doesn’t feel like anything. It just “pops” into our head. Therefore, if the extended mind were to exist, the process used to remember Penny Ellis’s name would be part of C&C’s extended mind. So in this case, the extended mind seems to be within the physically limited brain, because it seems to me that all the processes by which we form a feeling should be within the brain, but this is still unknown.
      Vivian, you state: “But, at that point, doesn’t what he read then become integrated into the mind, and thus the belief originates from the mind? I don’t see how a belief can be a belief if there is not a mind to believe in it.” The way I see it, belief might be just another weasel word for feeling; therefore the mind and beliefs are the same thing, and so if you have a belief, it is in your mind.

  6. In Clark and Chalmers’ article, they advocate that cognitive processes are not solely based in the mind. Essentially, they are arguing for “an active externalism, based on the active role of the environment in driving cognitive processes.” They use two different cases of people and their memory capacities to further their central claim. The example is that one person (Otto) uses a notepad and the other (Inga) just her mind. Clark and Chalmers believe this to be the same thing because both people are able to remember facts, such as where a museum is located, except that one person has the memory outside the skull and skin. In the case of Otto, he can be regarded as an extended system because he is still considered a cognitive agent; he is just able to rely on external resources.
    I agree that all cognitive agents require an external system to develop. For example, a human that does not receive external input will not develop properly - the brain is plastic, and therefore shaping cognitive abilities requires input in order for the brain to function properly. There is no case in which a cognitive being has no interaction with the environment, and therefore we must conclude that it is necessary and that the mind requires it.
    However, I do not agree that the two cognitive beings they use as examples are the same. It feels like something to have a belief. Otto is missing the felt state that corresponds to believing. Even though he is able to do everything that Inga can, the two processes they use cannot be equated - one involves a state of feeling and the other does not. Although Otto and Inga’s ‘doings’ are the same, their ‘feelings’ are not, and feelings are what I believe make cognitive beings unique.

    Replies
    1. Hi Danielle,
      I don't think that you can argue their feelings are different. Yes, it feels physically different to retrieve a piece of information from your brain vs. from a piece of paper, but once you have retrieved that information and have that moment of 'ah, that's where the museum is located', I would say that they both have the same feeling of knowing that piece of information at that point. Until they were given a reason to need to know where the museum was, that belief was offline for both of them; in Inga's case it was somewhere in her long-term memory and in Otto's case it was in his notebook, but once that information is recalled to active consciousness, it's the same state of knowing where the museum is for both of them. Of course, you could argue that the feeling of knowing where the museum is is slightly different in the sense that Inga might think, "I feel that I know where the museum is because I recalled it from my memory because I memorized it from that time I went several years ago" whereas Otto might be thinking "I feel that I know where the museum is because I looked it up in my notebook and gosh it's a good thing I have this notebook otherwise I'd be lost", but I think you can certainly break those thoughts up into pieces, and the piece of pure knowing where the museum is is the same for both.

      I think where Clark and Chalmers' argument breaks down is precisely at this point of feeling - you can certainly outsource many cognitive functions to external technical devices, e.g. notebooks, calculators, etc. - but the capacity to feel is one that I don't think you can outsource, which is why an extended mind is not possible.

  7. I love the extended mind thesis because it makes sense and feels intuitively true. Yet it really just addresses the questions of "where is the mind" and "what is the mind" without targeting how and why we have a mind. So I am torn, because I find it alluring and yet I do not find it useful.
    Why is our mind a constitution of our brain and the environment? How do certain objects, tools, devices, and people become a part of our mind? How would it be evolutionarily advantageous for the computer in front of me to become incorporated into my cognition? Do I learn better, more, and faster if I can utilize my environment?

    Also, as far as a computational model goes, could we not just consider others, objects, computers, tools, devices as part of the symbols that are manipulated to arrive at a mental state? And if we consider it that way, then are we not learning anything new from the extended mind thesis?

    Replies
    1. I completely agree with you; I found the extended mind thesis very convincing, but it still doesn't address the hard problem at all. The things in the environment we use do not feel. Only what our minds do, do we feel.

      I think the value of the extended mind thesis, as you asked in the previous paragraph, is that it is generally more efficacious to use the environment. Using a calculator or abacus could relieve some load on working memory and allow your attention to drift elsewhere. In previous classes we have talked about the use of language in facilitating learning--it is sometimes safer to learn through instruction than through experience. I guess for the same reason, it is more advantageous to incorporate the computer into cognition--more memory space, information more available and faster to retrieve. It obviously helps the mind, but I am not sure that is enough to make it PART of the mind.

    2. Stephanie,

      I think the Extended Mind Thesis does have some consequences, not necessarily for solving the how and the why, but for the way we approach both problems. If we extend the notion of cognition to a coupled process of internal and external events, then a cognitive scientist might approach the process of reverse-engineering a cognitive system in an entirely different way. Their focus might turn towards the construction of mechanisms that allow an organism to take advantage of the external resources that might extend one's cognitive scope. If an extended cognition, one that utilizes human language and transmits it between cultural mediums (such as the internet, books, cell phones, etc…), is advantageous for our species as a whole, then it might serve as an altruistic trait selected in order to preserve the existence of the genes that make us cognitive beings in the first place. Here, extended cognition would serve an altruistic purpose benefitting all humans, but would inherently support the selfish existence of a gene within our species.

      Also, the mention of the computational nature of externalized cognition is really interesting to me. Although devices that aid in sensory pathologies, such as cochlear implants, hearing aids, subcutaneous implants, etc…, are all mechanical devices that approximate dynamic events, they serve very well in transmitting sensorimotor events to a cognizer. The same has held true in every instance when we consider the plausibility of a cognitive T3 robot. This transformation of digital to analogue seems perfectly plausible in helping a cognizer derive meaning from sensorimotor events. I don’t see, then, why externalized cognition need be purely symbol manipulation. Externalized cognitive mechanisms seem capable of grounding just as well as a human brain can.

      Jocelyn,

      How any form of externalized cognition, beyond an organism’s mind, can feel is equally baffling to me. It seems to bring up even larger philosophical problems, which I might address in another commentary :P (time is always a factor).

  8. “Once we recognize the crucial role of the environment in constraining the evolution and development of cognition, we see that extended cognition is a core cognitive process, not an add-on extra.”

    Extended cognition is not an extra add-on for Clark and Chalmers because it plays an active role in constructing a given person’s cognition at any one point in time. In the paper they anticipate the sort of objection Dror and Harnad would raise: “some find this sort of externalism unpalatable. One reason may be that they identify the cognitive with the conscious, and it seems far from plausible that consciousness extends outside the head in these cases”. What is different for Clark and Chalmers is that they include all cognitive processes, not just the conscious ones. There are two examples I want to bring up: one that highlights this idea, and one that has detrimental consequences.

    Highlight
    Their proposition accounts for all environmental things having an effect on cognition and therefore being a part of cognition. For example, if someone were to add information to a Wikipedia page, that information would be considered part of a cognitive system: 1. the person who wrote it and the piece that they wrote; 2. any person who is using that information at any given point in time, insofar as it has an effect on their cognition; and 3. 1 and 2 together, creating a sort of causal cognitive system. By considering cognition as distributed in this way, it becomes evident that the mind is neither an independent construction nor an actor independent from other things and cognitions.

    Consequence
    Their proposition cannot distinguish the boundaries of cognitive doings from cognitive feelings, therefore posing a problematic idea of responsibility. If we were to continue the example, let’s say something extremely offensive was written on a given Wikipedia page. Who would then take responsibility? Would it be the cognitive system, so the individual who had that idea and the wiki page it was written on? Or would we just assign the responsibility to the person who wrote that page? Only the ‘feeling’ individual would be held accountable for such a thing, and not the rest of the cognitive system which led to that output.

    Overall, I see that both have strong and weak points. But the basic disagreement is just how much of our cognitive processes should be considered distributed. Clark and Chalmers consider the mind as an extended process that seeps into and out of other cognitive technology, whereas Dror and Harnad consider the mind as an internal process only felt by the feeler. I can understand both points; it's just that the former is a metaphorical one whereas the latter is a literal one.

  9. First I'll comment on some of the discussions from the previous class, and afterwards I'll comment on the paper discussing the extended mind.
    ____
    The position of not eating animals/animal products, with certain exceptions, has always perplexed me. Perhaps this is because my understanding of the topic is still shallow?

    But if I understood the argument correctly, knowledge of the fishes' life cycle and the fact that they are sustainably caught is sufficient to justify eating them. Assuming the reason one does not eat other animals is ethical concern, I can't see this knowledge resolving the ethical issues at all. Unless the only opposition is exclusively to factory-farmed or otherwise domesticated animals being killed? In that case, using the same argument, could we not justify eating any wild-caught and sustainably hunted animals? This is not to attack the student who discussed this; I just simply don't find the logic compelling.
    ____
    On the topic of trolley problems, I heard a very interesting variation in my neuroethics class.

    The standard trolley problem involves a trolley heading toward 5 people on the track, who will be killed. The individual has the option of flipping a switch and directing the trolley down another track, which would kill one person but save the other 5. Most of my class stated they would switch the track.

    The variation is: 5 people need organs to save their lives, otherwise they will die. You are a surgeon performing elective surgery on a patient who has signed up for organ donation. You could euthanize this patient (and no one would ever find out) and have the organs to save the other 5 lives. With a few other details, the situation is very analogous (1 life versus 5), but most students in my class stated they wouldn't make this choice.

    This, in relation to what we discussed in class on Wednesday, makes me consider the effects of framing on feeling. The way we feel determines the decision we make. Feeling, as Prof. Harnad argued, is utterly important. Oftentimes our logic and reason are simply post-hoc rationalizations of our impulses.
    ____
    Sleep cycles: Regarding dreaming in non-REM sleep, I found a few papers presenting evidence for it.

    "Dream experiences were reported for 51.2% of the REM naps and 17.9% of the NREM naps."

    The dreams between the two states can be distinguished in other ways as well. Content tends to be different for example.

    http://www.journalsleep.org/Articles/270805.pdf
    ___
    Finally getting to the reading.

    The question the extended mind argument comes down to is what constitutes "mind." The fundamental question seems to be: Does something having an impact on your mind imply that it is part of your mind?

    The authors, Clark and Chalmers, seem to think so.

    I disagree. Having a notebook as a memory aid is exactly that - a memory AID. It's not memory itself. I think there is a requirement of internalization for memory/mind/all related concepts.

    " Where does the mind stop and the rest of the world begin? The question invites two standard replies. Some accept the demarcations of skin and skull, and say that what is outside the body is outside the mind. Others are impressed by arguments suggesting that the meaning of our words "just ain't in the head", and hold that this externalism about meaning carries over into an externalism about mind. We propose to pursue a third position. We advocate a very different sort of externalism: an active externalism, based on the active role of the environment in driving cognitive processes."

    I think their definition is overly encompassing. Anything can drive and influence cognitive processes. From other individuals, to books, to data accessed over the cloud, everything has the potential to alter your mind. I don't see the benefit of this theory. It does not seem to be more parsimonious or to have more explanatory power than alternative non-extended theories of mind. I simply don't see the need for this concept (though I could easily just be missing something).

    Replies
    1. I'll take a shot at replying to the first discussion topic you commented on. As a caveat, I don't eat fish, and I don't recommend eating fish, but I am aware of some of the reasons why some people continue to eat fish even when they are aware of the ethical issues involved. I also think it's important for veg*ns to consider these sorts of things carefully and compassionately, even though the act of eating animals is typically uncompassionate.

      i) It feels good and they can't be bothered
      I don't have a lot of respect for this argument, and it basically devolves into a very typical debate you might hear between a meat eater and any sort of veg*n.

      ii) They're concerned about the environmental reasons, and not the suffering of animals
      I still think this is flawed since, quite frankly, our oceans and water sources do not need more strain as it is (and we could reduce that strain by making better choices). To those who are interested in more information, it's not hard to look up.

      iii) They associate the consumption of fish with a cultural practice
      This is where things get more complex (at least in my mind), and the topic was touched upon briefly by Robyn in discussion. As a general rule, I think it's unethical for people to rely on this argument, because in a certain sense we determine how culture evolves and (I believe) we should be making active efforts to reduce violence of all kinds. I would say food is a very central component of many (if not most or all) cultures, and the cultural malleability of food choice is perhaps exactly why I consider it important to actively make changes.

      The one exception to this rule that I have found is cultures that are dealing with certain types of oppression or genocide (e.g., indigenous groups across this continent right now and in the past). In these instances, maintaining cultural identity becomes a much bigger issue. Advocating that they forego these traditions, especially as settlers, creates a scenario where people must fight to maintain their culture in order for their communities, children, and loved ones to survive or avoid inflicted suffering. Part of this, believe it or not, may include eating fish, especially since food, and the rituals humans create around food, do bring communities together.

      Throwing veganism at these communities would be nothing short of divisive and violent. This is particularly problematic when settlers put forward veganism as the best path. While I agree veganism is an ethical way to lead one's life, and I would recommend it to most people, one of the strongest arguments I've found against veganism is that the movement is highly privileged and driven primarily by white settlers who impose their views upon others rigidly and without regard for socioeconomic status, colonialism, or health (more on that later; TL;DR: the health reasons don't apply to most people either).

      In my opinion thus far, I would like to see colonialism discussed more within a vegan context. In particular, I think we need to acknowledge the impact of colonialism on land use and mistreatment of human and non-human animals. Indigenous peoples have been raising these issues for centuries, so it's quite hypocritical for vegan settlers to turn around and criticize them on a dime. Additionally, the attitude that they should adopt such practices can unfortunately lead to rigidity of thought and an inability to recognize where vegan thinking could be improved.

    2. iv) The societal structures they have access to facilitate fish eating but not veganism
      Or as I like to call it, "Whole Foods doesn't build in my neighborhood". The first thing to realize is that a healthy vegan diet can be maintained for the same amount of money or less than what it would cost to eat a low-cost diet that includes fish, meat, eggs, dairy, etc. The second thing to realize is that low socioeconomic status is correlated with a lot more than one's food budget: stress, learned helplessness, education; the list really just goes on. For these reasons, it's typically much more challenging for a person of low socioeconomic status to eat a vegan diet. In my opinion thus far, the solution to this is two-fold: as a society, we should consistently aim to improve the quality of life of those with low socioeconomic status. At the same time, there are choices that people of low socioeconomic status can make to reduce the suffering of animals, and I hope they make them.

      v) Creating strict rules around food has a direct and negative impact on their mental or physical health
      As an anecdote, my mother is diabetic and, though she has tried eating vegan, she is allergic to many vegan foods. When she eats the vegan foods she is not allergic to, including fruits and even many vegetables, her blood sugar spikes through the roof. Eating vegan poses a serious health risk for her and imposes a psychological tax on those of us who love her. For some people, eating meat is actually a matter of survival, and I think this becomes a much different scenario.

      Another less frequently talked about (and often stigmatized) issue surrounding veganism is that not everyone is able to easily adopt many rules around food without it directly affecting their psychological state. For many people with histories of disordered eating, this might not be a good idea. I do think that this can be a matter of survival for some people. As such, I think it is better not to pass judgement on others for this and at least give them the benefit of the doubt in the event that they don't divulge this information immediately. I've also met some people who will deny being vegan because they feel it binds them into holding certain rules around food and they would prefer to think of it differently. As far as I'm concerned, in that instance it doesn't even matter: they're not eating animals and that's generally a good thing.

    3. This comment has been removed by the author.

    4. Ethan, thanks for your response. I am genuinely curious about this.

      So to respond point by point.
      i) I agree. I don't see why this argument (it tastes good) wouldn't be applied to other types of meat. I don't see anything unique about salmon.
      ii) I also agree, and this is why I made the point about eating other sustainably hunted wild animals. Once again, nothing unique to the salmon.
      iii) To touch on the cultural aspects, I see what you are getting at. But it seems to me that just about anyone could justify eating meat based on this rationale. Just about every culture has some aspects of eating meat embedded within it, although perhaps not as "ritualistically"? I realize there is a distinction because some cultures are oppressed, and there are more complex issues in their communities. But ultimately I don't really accept this argument. If one believes in veganism/vegetarianism of any sort because they feel it is wrong to kill animals, I don't think making allowances for cultural reasons is logical (although it may be ethical).
      iv) This point makes sense to me. Because of societal structures, there are inherent limitations to food choices.
      v) This is interesting as well. I think eating meat in this scenario seems justifiable.


      So to sum up, I see why some people may choose veganism/vegetarianism, and it seems logical to me. However, to reiterate my point of confusion, it does not seem logical to avoid eating animals because you want to avoid killing or hurting them, but then to eat salmon. I don't see anything unique about salmon that would put them under different rules, if that makes sense.

      Delete
  10. “Even if one were to make the portability criterion pivotal, active externalism would not be undermined. Counting on our fingers has already been let in the door, for example, and it is easy to push things further. Think of the old image of the engineer with a slide rule hanging from his belt wherever he goes. What if people always carried a pocket calculator, or had them implanted? The real moral of the portability intuition is that for coupled systems to be relevant to the core of cognition, reliable coupling is required. It happens that most reliable coupling takes place within the brain, but there can easily be reliable coupling with the environment as well. If the resources of my calculator or my Filofax are always there when I need them, then they are coupled with me as reliably as we need. In effect, they are part of the basic package of cognitive resources that I bring to bear on the everyday world. These systems cannot be impugned simply on the basis of the danger of discrete damage, loss, or malfunction, or because of any occasional decoupling: the biological brain is in similar danger, and occasionally loses capacities temporarily in episodes of sleep, intoxication, and emotion. If the relevant capacities are generally there when they are required, this is coupling enough.”

    This paragraph, especially the comment about how the brain is subject to occasional decoupling, instantly made me wonder how Clark and Chalmers would account for people with certain mental illnesses. When they say “If the relevant capacities are generally there when they are required, this is coupling enough”, I picture an individual with a serious mental illness such as schizophrenia or bipolar disorder. Surely individuals suffering from such severe mental illnesses do not always have the relevant capacities when they are required. However, does this change how we would describe their consciousness in terms of extended cognition and active externalism? Surely their own brain could be described as improperly coupled to them, even though it is internal. Based on this, do we extrapolate from our understanding of external coupling to include improper internal coupling? For example, if a patient with severe and persistent schizophrenia were to possess an electronic calculator and always have it with them, would we say that this calculator is more a part of their cognition and mind than their brain is? Surely a calculator would be more reliable in its function than a brain with severe schizophrenia, but it seems absurd to say that the calculator is more coupled to the individual’s consciousness and cognition than their own brain. I wonder then, how would Clark and Chalmers explain the mental coupling of this individual?

    ReplyDelete
  11. Clark and Chalmers argue that epistemic actions (which are actions that serve to aid cognitive processes) should be given credit for participating in our cognition. Basically they are saying that cognition extends beyond the bounds of the brain and body. I’m failing to see how sensorimotor input that comes from shifting a block around in a Tetris game or checking your notebook to see what street a museum is on is different from any other sensorimotor input. To me, retrieving a memory via unconscious processes that we don’t fully understand is fundamentally different from flipping through your notebook to find a piece of information or punching the square root of 81 into your calculator because it might be a little faster than retrieving the memory. We understand how a calculator works. We understand how a notebook works. We don’t understand how memory retrieval works. The problem of cognitive science is figuring out how and why we end up in mental states. I’m not seeing what the point is of including calculators and notebooks and all kinds of other technology under the umbrella of cognition when it’s unlikely that they are going to be relevant to answering the how and why questions.

    “The external features in a coupled system play an ineliminable role - if we retain internal structure but change the external features, behavior may change completely.”

    I disagree with this. It seems that in most of the examples that Clark and Chalmers put forth, the technology that they claim is part of the coupled system is really doing something that the human cognizer could do alone. The difference is speed. Of course a machine that is designed specifically to rotate shapes is going to be able to do so faster than a human can rotate mental representations of shapes. The machine has one job while our cognitive machinery has countless other jobs it needs to be able to do. I think that what is really going on is that we are relying on faster machines (and more skilled humans) to perform, for the sake of efficiency, certain functions that we are capable of performing ourselves. It sounds pretty adaptive to me. …And it doesn’t sound like a reason to claim that my mind extends into my notebook or cell phone or close friends.

    ReplyDelete
    Replies
    1. Bailey, I agree with you. Clark and Chalmers seem to be talking about various cognitive add-ons--things that may make cognition easier, but that are not part of the core of cognition itself. When I read their paper, I thought of a car with a Thule container on top. The container provides extra storage space--great if there's a lot to carry, but not necessary, and removable whenever I want. Tools like a partner or a notebook may make cognizing easier, especially by providing cues for easier memory recall; however, they are not NECESSARY for cognition, and are therefore merely add-ons and not what we should be concerned about when studying the mind.

      Delete
    2. But in the case where Otto “has no belief about the matter until he consults his notebook”, Otto believes that the address of the museum is 53rd Street because he read the notebook. In this case, the external instrument explains how Otto was able to have this belief. The notebook here becomes necessary, and it seems that referring to the notebook is necessary to explain how Otto was able to do what he did.

      Delete

  12. Chalmers’ extended mind thesis takes a very fruitful conversation about how technology synergizes with cognition and pushes it to uncomfortable extremes. Right off the bat, the implications of extended minds bring forth an awkward question: where do they end? Sure, a pen and paper seems like a natural extender of our cognitive abilities, and if mental states can really be distributed, it would be the gold standard. But when you allow one in, you open Pandora’s box: every item that contains information is now a contender for a mental state. For example, does the fact that I ordered Boustan’s while writing this make me a Lebanese chef? I certainly don’t know the recipe, but I can get the pita wrap as easily as Chef Deck himself, only I get it by inputting my order into the phone while he gets it by making it with his hands. The phone is now a cognitive AND sensorimotor extensor. I am the Lebanese chef. I am the Calculator. I am the Google. I am the Universe and we are all made of star-stuff, man. If you support the extended mind and don’t believe in making new-agey pseudoscience platitudes, you need to have a concrete boundary between what is mind and what is environment.


    The example may be a bit of a stretch, but it shows how arbitrary any distinction between the cognizer and her environment becomes the second you attribute ANY mental states to the environment. Dror and Harnad covered the issue of arbitrariness well enough though, so I’ll focus on the other flaw: cognitive technology needs interpretation to be meaningful. All it changes is the type of information a cognizer must process. To be fair, this is not to say that objects cannot process information. But even IF all mental states resulted from information processing, it does not follow that all information processing results in mental states.
    “By embracing an active externalism, we allow a more natural explanation of all sorts of actions. One can explain my choice of words in Scrabble, for example, as the outcome of an extended cognitive process involving the rearrangement of tiles on my tray. Of course, one could always try to explain my action in terms of internal processes and a long series of "inputs" and "actions", but this explanation would be needlessly complex. If an isomorphic process were going on in the head, we would feel no urge to characterize it in this cumbersome way.[*] In a very real sense, the re-arrangement of tiles on the tray is not part of action; it is part of thought. “
    All of Chalmers’ examples can be characterized as an object performing a computation that can be interpreted as something meaningful by a person. The tiles aren’t doing any thinking; they just allow the Scrabble player to offload their working memory by taking the place of the mental images of letters that they would have to use in the tiles’ absence. Without a literate Scrabble player to interpret the tiles, however, these tiles are just inked pieces of wood. I would consider the cognition here to be the way the cognizer manipulates and interprets these symbols, not the symbols themselves.

    Ultimately all these problems arise from Chalmers’ claim that “not every cognitive process, at least on standard usage, is a conscious process”. Without the consciousness requirement, all information-bearing objects can form part of a cognitive process, and we are back to the new age. Funnily enough, Chalmers also entertains panpsychism, the view that all information-bearing systems are conscious, so a sufficiently prodded Chalmers could always bite the bullet and claim panpsychism to save his extended mind.

    ReplyDelete
  13. “If so, then external coupling is part of the truly basic package of cognitive resources that we bring to bear on the world. [...] Think of a group of people brainstorming around a table, or a philosopher who thinks best by writing, developing her ideas as she goes.”

    In this example, what Clark and Chalmers fail to address is the separation between output, re-input, and output again. When a person writes down her ideas, she is doing two different things. Firstly, she is freeing up space in whatever part of the cognitive process plays with things (call it online processing, working memory, what you will). Everyone has felt this limitation on cognitive load, probably when attempting to solve a complex problem or make a list in their head. Secondly, she is providing the opportunity to re-input the information by re-reading her own thoughts. This separation, where her written thoughts are perceived as sensory input and processed, gives the brain the opportunity to look at the idea again. The paper is not part of her mind, nor an external extension of her thoughts. The paper is the destination of her output, which then may or may not be re-input into the system. This process is what I understand external coupling to be. The ability to couple is indeed part of the mind, as it is one of the cognitive processes we can undertake. However, the ability to couple is the ability to produce output, re-integrate that output as new input, and potentially gain something from this cycle. That cycle is separate from the physical paper itself. Since all the cognizing occurs “within the brain”, I don’t see how any of this can count as extended mind.
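    To make the picture above concrete, here is a toy sketch (an illustration only; the names refine and paper are invented here and come from nowhere in Clark and Chalmers): the paper only ever stores output, and every step of actual processing happens inside the agent.

      # Toy rendering of the output / re-input cycle described above.
      def refine(idea):
          """Stand-in for whatever internal processing improves an idea."""
          return idea + " (revised)"

      paper = []                       # external medium: it stores, but never processes

      idea = "first draft of the argument"
      for _ in range(3):
          paper.append(idea)           # output: the thought is written down
          perceived = paper[-1]        # re-input: the written thought is read again
          idea = refine(perceived)     # the cognizing itself happens back in the head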

    ReplyDelete
  14. For the most part, Chalmers’ extended mind thesis really resonates with me. But, as others have questioned before me, the notion of active externalism makes me wonder whether the entire system feels (is conscious), or whether it is my mind merely using the environment for its own personal ends. When Chalmers addresses the notion of a coupled system, he seems to address what that system can do, namely “govern behavior in the same sort of way that cognition usually does” inside a brain. He then notes that “if we remove the external component, the system’s behavioral competence will drop, just as it would if we removed part of its brain”. In this way, with regards to what a cognitive system can do, it becomes clearer that cognition might exist beyond the head, and moreover, in less complex forms than the brain.

    However, it’s really the notion of “behavioral competence” which makes me wonder whether an extended mind is necessary in order to understand the mechanisms underlying cognition. For example, if we were to ask how many neurons are needed in order for an individual to be conscious, and, one by one, stripped neurons from the brain to figure it out, we probably wouldn’t learn anything about a threshold for the spontaneous occurrence of consciousness. Since the beginning of this course, it has always been the organization of a system, rather than the number of its parts, that governs cognitive phenomena. Couldn’t we then say that cognition, and the consciousness that seems bound to a cognitive system, must at least depend on some sort of baseline organization? In this case, wouldn’t all external phenomena only add an efficiency component, without contributing to the inherent mechanism of cognition itself? Yes, even if the environment has the potential to contribute to our cognitive experience, what cognitive experience can it explain that we cannot examine and describe in the human directly?

    ReplyDelete
    Replies
    1. I completely agree, Adam. I think that this paper aims more to characterize cognition - saying this is what is and is not cognition. For example, Chalmers' talk of beliefs is all about testing the line between what is and isn't belief. In that sense, I think that Chalmers would actually agree that the environment is just functioning as a helper to cognition, but is doing so in a way that, for him, simply is cognition. However, if we want to get to the essence of how our cognitive system functions, we wouldn't start by studying notebooks and Scrabble tiles. We would definitely start with the core organization of the mind/brain.

      Delete
  15. “A person sits in front of a similar computer screen, but this time can choose either to physically rotate the image on the screen, by pressing a rotate button, or to mentally rotate the image as before” (p. 1)

    Chalmers proposes “active externalism”, the idea that processes in the environment can be integrated with human cognition. Chalmers provides examples like the one above to illustrate the role of technology in cognition. If I understand correctly, this means that Chalmers would argue, for instance, that using an iPhone consistently to look up directions can be considered an extension of the user’s cognitive power. Or that a cashier who uses a calculator to give out change is seamlessly reliant on this device to perform a cognitive process. Chalmers also gives the example of using Scrabble tiles as a tool to rearrange letters and make a move.

    He describes all of this saying, “By embracing an active externalism, we allow a more natural explanation of all sorts of actions.” (p. 3)

    In what way is active externalism more natural? It certainly accounts for the way that humans feel intuitively about their tools. There is a feeling associated with reaching for the external device, entering input, and re-integrating output into our own mental processes. However, this is the feeling of using a tool, not of remembering or rotating or trying to recall the exact combination of Scrabble tiles you have in your hand. All of these devices help externalize cumbersome processes. This is a natural tendency, so it is nice to have an account of it in cognitive science, but is it cognition? To me, this is more specifically tool use and not a natural extension of all aspects of cognition.

    It seems like you can only get on board with active externalism if you are a computationalist. In this case, the implementation-independent nature of cognition could be easily lifted from the mind and manifested in any external device. Accordingly, reaching for a calculator would in fact be no different from performing a function in the head. Both have the same input-output relationship, and some would demand that both even have the same underlying algorithm. This doesn’t leave room for feeling at all, or for explanations of how or why feeling exists. As mentioned above, one would have to rest strongly on the Systems Reply and believe that feeling lies in the system as a whole, so that it is ok that the direction search on the iPhone doesn’t specifically feel like anything to the mind until it is reintegrated as information.

    ReplyDelete
    Replies
    1. I think specifically, "tool-using" is only perceptual - which is one of the arguments Chalmers refutes on p. 8, "Otto has access to the relevant information only by perception". Again, this demands a systems reply.

      Delete
  16. "We submit that to explain things this way is to take one step too many. It is pointlessly complex, in the same way that it would be pointlessly complex to explain Inga's actions in terms of beliefs about her memory. The notebook is a constant for Otto, in the same way that memory is a constant for Inga; to point to it in every belief/desire explanation would be redundant. In an explanation, simplicity is power."

    I am not sure I agree with the argument. Unlike Inga and her memory, over which she has no control, Otto has control over the notebook. He knows that the notebook can be relied on in a way that Inga cannot rely on her memory. While I use my memory when nothing else is available, I trust it less than something like notes I took at the moment of learning the information. I guess I am not convinced that Otto's notebook is totally analogous to Inga's memory.

    ReplyDelete
    Replies
    1. I agree with your sentiment here, and I would add that Chalmers' explanation is not as simple as he makes it seem. Yes, perhaps it is simpler to see Otto's notebook as part of his mind, but that is only in the distinct case of Otto. If we were to consider people generally, everybody's minds would be constantly fluctuating and diverse in complex ways: the minds of those with an internet connection could no longer be simply compared to minds without one. The conception of cognition as inside the skull allows us to generalize across humans, who we can assume have similar brains (and similar minds). To me it's far simpler to say that there are diverse tools available to a cognitive ability that is common across brains.

      Delete
    2. ab, that's exactly what I thought WRT that particular quote. Our memory is unreliable. It is prone to the "Seven Sins of Memory" (absent-mindedness, transience, blocking, bias, persistence, misattribution and suggestibility). Our memory of something can be omitted, distorted and interfered with (and we wouldn't even be aware of it!), whereas what's written in a notebook is always a truthful record of the information. However, the authors later talk about how information in the notebook can be easily tampered with and/or go missing, which is to say that what is written in the notebook isn't completely reliable either.
      In my opinion, the distinct difference between Inga's memory and Otto's notebook is the fact that retrieving something from Inga's memory can sometimes be more difficult than looking up something in Otto's notebook. Even if the memory is present, Inga's retrieval attempt may not always be successful, whereas Otto's lookup (given enough time) would always be successful.
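      The contrast here can be put in toy form (an illustration only, with invented names, and assuming for the sake of the sketch that biological retrieval sometimes fails while a notebook lookup, given enough time, does not): both stores map the same cue to the same answer; only the retrieval step differs in reliability.

        import random

        internal_memory = {"museum": "53rd Street"}   # Inga's biological store
        notebook        = {"museum": "53rd Street"}   # Otto's external store

        def inga_recall(cue):
            # Retrieval can fail even when the trace is there (blocking, tip of the tongue...)
            if random.random() < 0.2:
                return None
            return internal_memory.get(cue)

        def otto_lookup(cue):
            # Given enough time to flip through the notebook, the lookup succeeds
            return notebook.get(cue)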

      Delete
    3. OK, although there are clear differences between the notebook analogy and retrieving something from memory, in terms of ease and reliability of retrieval, I don't think these are the fundamental differences that make the notebook any less a part of the mind than our memory. These reasons remind me of what the Systems Repliers were saying about Searle’s Chinese Room Argument – that there was no way he could do the computations quickly enough. Imagine if our memories were 100% infallible, or at least as infallible as the facts written in the notebook. The fundamental thing is that there is no feeling in the notebook; there are just facts in it. In your mind, you also have these “facts” (or beliefs, since they may or may not be true) in your memory, waiting to be retrieved. As they lie dormant in your brain, or in the notebook, there is no feeling going on…there is no cognizing. When Otto reads the information, it then FEELS like something to believe the museum is on 53rd Street…but there is none of that feeling in his notebook…that’s just happening in his head…in his skull. There is no cognizing beyond what is happening there. Obviously, writing things down in the notebook is a tool to help him remember, and cognitive tools can help make cognizing easier and more efficient, just like search engines and the internet have made it much less necessary to rely solely on our memories…it reduces the “cognitive load” that our brains have to endure…but it just doesn’t make sense to say that cognizing goes on outside the brain…because there are no felt states outside the brain.

      Delete
  17. Chalmers and Clark argue that cognition isn’t just what happens in our heads. They write that “[i]f, as we confront some task, a part of the world functions as a process which, were it done in the head, we would have no hesitation in recognizing as part of a cognitive process, then that part of the world is part of the cognitive process.” According to this argument, pen-and-paper multiplication is a cognitive process (because we could do it entirely mentally, drawing out the numbers in our mind), rotating a shape in Tetris using a keyboard is a cognitive process (because you could rotate it in your mind), and rearranging Scrabble tiles to help you find the right word would be too (you could rearrange them in your mind as well). Looking up your friend’s address in an address book is also cognitive, and is akin to “looking through” your memory to remember where she lives.

    In this account, looking up the address in your address book “is not perceptual at all…it is more akin to the information flow within the brain.” But this is a strange thing to say. It feels different to remember that my friend lives at 32 Clarke Ave than to not remember and to look it up in my notebook. I would never confuse the two situations! This is what Chalmers and Clark call the “perceptual phenomenology” of looking something up in a notebook (I feel like I am perceiving the address, rather than remembering it). Chalmers and Clark think this difference is irrelevant, but I am not convinced. While a notebook can play a “biological role of memory,” it isn’t memory. It feels different to remember something than to read something. Is this difference really that “shallow”?

    ReplyDelete
  18. “More interestingly, one might argue that what keeps real cognition processes in the head is the requirement that cognitive processes be portable. Here, we are moved by a vision of what might be called the Naked Mind: a package of resources and operations we can always bring to bear on a cognitive task, regardless of the local environment. On this view, the trouble with coupled systems is that they are too easily decoupled. The true cognitive processes are those that lie at the constant core of the system; anything else is an add-on extra.”

    After reading this, I can’t help but wonder: how do we decide what the “add-ons” are versus what is part of the true cognitive processes? As an example of somewhere this isn’t clear, let’s look at a person who has been diagnosed with ADHD and has been given a prescription for Adderall (amphetamine and dextroamphetamine) to be taken every day. This individual becomes much more focussed and capable in school and work, and also heavily reliant on the medication. If they are taking these pills every day, and if these pills change and enhance their cognitive processes, is this a part of them? Sure, these cognitive processes can be “easily decoupled” with the cessation of Adderall treatment, but they also “lie at the constant core of the system”. Then are they “true cognitive processes”, or are they “coupled systems”? This same question is raised by any drug dependency or regular drug use, as well as any other habitual patterns of consumption or use. When we rely so heavily on our coupled systems that the line between external systems and internal systems is blurred, how do we determine what qualifies as our “true cognitive processes”?
    On the other hand, this statement also reminds me of the Chinese Room Argument and the Systems Reply objection to it, where it’s suggested that even if the non-Chinese-speaking individual were to memorize the translation codes he would still not understand Chinese. In this case the translation codes are an external resource that becomes coupled to the individual. This argument suggests that without him actually knowing/comprehending/understanding Chinese and the meaning of the translations, these codes are not actually a part of him. However, since the codes are memorized, they are not easily decoupled, and hence are part of the true cognitive processes lying at the constant core of the system. So perhaps in this sense Clark and Chalmers would suggest that the individual actually does functionally know Chinese?

    ReplyDelete
  19. “In these cases, the human organism is linked with an external entity in a two-way interaction, creating a coupled system that can be seen as a cognitive system in its own right. All the components in the system play an active causal role, and they jointly govern behavior in the same sort of way that cognition usually does.”
    I find the notion described above intuitive, although I also think that the external-entity component of the cognitive system is not separate, but rather a part of the cognitive processes “going on inside our heads.” I do agree, however, that these components come together causally to produce behaviour, but I do not think that there is a need to separate the external entity from the human organism; I think they are joined in the brain’s cognitive faculties. Thus, I do not think that “removing the external component” would cause behavioural competence to drop; it may just work in a different fashion.
    “But not every cognitive process, at least on standard usage, is a conscious process. It is widely accepted that all sorts of processes beyond the borders of consciousness play a crucial role in cognitive processing: in the retrieval of memories, linguistic processes, and skill acquisition, for example.”
    Clark and Chalmers make a convincing point here. It is true that not all cognitive processes are conscious. Memory consolidation is perhaps the clearest example of a cognitive process that is unconscious. Although cognitive (and functional), we are unaware that when we are reading a list of words that this information may (or may not) be consolidated. Thus, it seems plausible that the internal and external cognitive processes are dependent on one another and interact to create cognition. It is also clear to me that cognition is dependent on some kind of external input; I do not think that one can engage in certain cognitive processes (like language acquisition) without exposure to specific environmental stimuli.
    “It may be that language evolved, in part, to enable such extensions of our cognitive resources within actively coupled systems.”
    I like that Clark and Chalmers connected the coupled system to language, because I think that language evolved in part to link the internal and external cognitive processes. Language is a cognition component that allows one to interact with and describe the environment with other humans present; at the same time, the speech organs, which receive signals from the brain, produce language. Thus, I think that the concept of active externalism greatly aligns with language specifically.

    ReplyDelete
  20. For me, what's interesting with regards to extended cognition is the interaction between a person's mind and external things capable of taking over a subset of function, not whether we actually define these things as being part of the mind or not (the issue of defining the scope of "the mind"). For example, what are the behavioral implications of using these extensions, in what ways do we become more vulnerable when these tools are removed ("unreliable coupling"), does that matter, etc.

    The thesis of extended cognition seems generally right to me, as we are capable of using, and do use, the environment to represent things. As well as our internal mind, the environment enables us to do what we do because it has certain affordances, and thus it has a place in explaining how we do what we do (cognition). But I am inclined to say that the "mind" is that thing which we carry with us everywhere--it's the thing that explains our ~basic~ capacities or types of basic abilities, instances of which consist in doing more complex actions that our environment/external objects and devices play a role in.

    Counting on our fingers (an example he brings up in the TED talk) is an instance of using an object to supplement the mind's capacity, but I am not inclined to call my fingers part of my mind. I'm not sure I have good reasons for thinking this. In the instance of a blind person using a program to describe scenes to them, I am more likely to call that an extension of mind, perhaps because it replaces a capacity that is not there (not just supplementing existing capacity).

    "If the rotation in case (3) is cognitive, by what right do we count case (2) as fundamentally different?"

    My intuition about solving this problem is to deny that (3) is cognitive (i.e. to deny that things that take part in epistemic actions are part of the mind). Our cognitive processes are what allow us to do long division on a piece of paper, the doing of which augments our cognitive capacity. I don't deny that we use things in our environment to extend our capacity, but I don't believe this necessarily makes them "a part of" the mind.

    "We submit that to explain things this way is to take one step too many. It is pointlessly complex, in the same way that it would be pointlessly complex to explain Inga's actions in terms of beliefs about her memory."

    This doesn't seem to be a valid reason to posit that Otto's notebook is a part of his mind. Giving an account of Otto's action that mentions him looking in his notebook (rather than something like "he consulted his extended memory") is more truthful and more complete.

    ReplyDelete
  21. This comment has been removed by the author.

    ReplyDelete
  22. "Where does the mind stop and the rest of the world begin?"

    The problem is where we draw the line between our cognition and the tools that help us to cognize. One of the examples given in the paper is the notebook that could replace our memory.

    Since external tools affect our cognition greatly, it seems that they should be included as part of our cognition. And if so, our mind would not be limited to our brain.

    It all depends on how you define cognition. If you define cognition as anything that enables us to do what we can do, then of course the tools we use should be included in our cognition. But Dror and Harnad's paper argues that cognition must come with mental states, which must be felt. In that case, tools like the notebook or the World Wide Web should not be considered part of our cognition, because they are not part of the mechanism that makes us feel. Even things such as memories, which are in our brain, shouldn't be considered part of cognition if they are not involved in the mechanism that makes us feel. So our cognition must be limited to some restricted area of our brain.

    ReplyDelete
  23. “In an unusually interdependent couple, it is entirely possible that one partner's beliefs will play the same sort of role for the other as the notebook plays for Otto.”

    I have a habit of finishing other people’s sentences. I like to think this is a good thing, since it means I’m so immersed in their dialogue that I can predict what is coming next, but I know it irks some people, especially when we simultaneously come to different conclusions. Three years as the girlfriend of a boy with a brilliant mind but a less than impressive memory meant we resembled the Wooden couple mentioned in the article. He would introduce an idea, express himself until he couldn’t pull up the right words, then I would prompt him with my best guess. This happened so frequently that neither of us was conscious of it – I had become an extra bank of knowledge for him that dug up its own answers. Just send enough identifying information (“That nasal sounding instrument that has a solo at the start of Rhapsody in Blue?”) and I would work out the corresponding word (clarinet). This, I argue, is an example of an extended mind.

    The article was written in 1998 so the authors were not able to comment on smart phones, but I bet they would have a thing or two to say. Give anyone a smart phone with a data plan and watch them become one. Rapidly retrieving definitions, facts, maps, names of songs currently playing with a few quick finger swipes (or a verbal command in the case of Apple’s Siri), and voila, access to what other people have written online about the subject. In a way, the Internet is the biggest book of doodles to ever exist – we use it to send messages, show art, therapeutically put our thoughts somewhere, and recommend music, with billions of contributors. Maybe that isn't extended cognition, but being able to instantly share thoughts with someone on the other side of the world with some quick keyboard tapping is very efficient interpersonal connection.

    ReplyDelete
  24. First off, I would say that Chalmers raises some interesting and important points for discussion. I do think that the model of active externalism, and indeed distributed cognition in general, is in some ways a more robust model of cognition in that it accounts for more events and scenarios that affect our capacity to feel the way we feel. However, the relevance of this line of thought depends on just that: in order for something to be considered distributed cognition, it depends on a being with the capacity to feel, and the importance of distributed cognition as a model of cognition is limited to instances in which the external world affects what feelings we generate (or are able to generate). For this reason, I question how relevant it is for getting at how and why we feel, though it is certainly relevant when considering how it is we do what we do.

    "How much cognition is present in these cases? We suggest that all three cases are similar. Case (3) with the neural implant seems clearly to be on a par with case (1). And case (2) with the rotation button displays the same sort of computational structure as case (3), although it is distributed across agent and computer instead of internalized within the agent. If the rotation in case (3) is cognitive, by what right do we count case (2) as fundamentally different? We cannot simply point to the skin/skull boundary as justification, since the legitimacy of that boundary is precisely what is at issue. But nothing else seems different."

    Unfortunately, I didn't find this to be a strong criticism of the regular model of cognition we've been discussing all semester. Under the model we (or at least I) have been using, the answer is very clear: the neural implant in (3) is not a part of cognition any more than the screen in (1) and (2) -- it just provides a different set of information to the user. The skin/skull boundary holds here because we can assume a neural implant has no capacity to feel and it only acts as an interface through which information can be displayed to the brain (except in this case it's displayed directly into the cerebrum instead of through the eye and retina).

    As discussed extensively by this point in the class, information is simply squiggles and squaggles if there is no cognizer present to interpret and extract meaning from them. As with Searle's Chinese Room argument, in all three of these scenarios the person is the only one manipulating information based on its meaning and, in this sense, the person is the one cognizing while all the other components (including the implant) are simply computing. I believe it is a fallacy to conflate the actions and communication mechanisms (e.g., the internet, writing in a notebook) of humans with the manipulation of information based on feeling.

    Upon further reflection, I think I'd also be alright with excluding all of computation from cognition (even computation in the brain) since, as far as I can tell, the manipulation of symbols based on shape can be reasoned about independently from the manipulation of information based on meaning. Perhaps we should adopt a definition of cognition such that it begins at the point where meaning is extracted and ends at the point where that meaning is no longer felt. Everything else is computation, and we already have a term for that.

    ReplyDelete
  25. Regarding the paper The Extended Mind, I think the part where I disagree with Clark and Chalmers most is the definition of belief. I think what they mean by "belief" is more like memory. Inga remembers which street the museum is on and Otto can look up which street the museum is on in the notebook, and the notebook, I think, is just a paper version of the "memory" that a modern computer has (see, it is called "memory" rather than "belief").

    I agree with Clark and Chalmers that the memory part in the brain and the notes in Otto's notebook do not really differ in terms of getting information and taking action. Otto, in this case, is similar to a T3 robot that can do what Inga can do - find where the museum is and go to that museum - but, as we discussed before, a T3 robot cannot feel, so the problem is that finding out where the museum is and going to that museum actually do not require feeling - it is just a doing process, and a T3 robot is able to do that. That is the point on which I do not agree with Clark and Chalmers. The finding and going to the museum that Otto is able to do is only doing and is only about memory, but accessing memory is not believing. Believing, in my opinion, is not only about doing. When I believe something, there is also feeling, because believing is regarding some proposition to be true, and it should be a subjective process, not an objective one. It is possible that, for the same proposition, different people have different opinions; but memory is always there - as long as it is the same memory, i.e. the same notes in the same notebook in Otto's case, no matter who looks up the information, it is going to be the same information. That's why I think believing has feeling inside it. Inga believes that the museum is on 53rd Street and she feels something, but I do not think Otto will have the same feeling as Inga does, or that Otto will be able to have that feeling at all. Thus, I think Clark and Chalmers are mixing memory and belief up.

    I cannot really comment on other parts of the paper, especially Clark and Chalmers' explanations regarding some objections to the differences between Inga and Otto, because they have used the wrong definitions and it is somewhat pointless to discuss their explanations. However, I do think that the way information is accessed, the reliability of the storage place, and other such things are not the key issues here. They are only minor points and should not be what we worry about.

    ReplyDelete
  26. "More interestingly, one might argue that what keeps real cognition processes in the head is the requirement that cognitive processes be portable. Here, we are moved by a vision of what might be called the Naked Mind: a package of resources and operations we can always bring to bear on a cognitive task, regardless of the local environment. "
    Here, the authors bring up one of the essential properties of cognitive processes: portability. I feel that this is somewhat similar to what we call the implementation-independence of computational processes. Being portable, or implementation-independent, means that as long as the core causal algorithms are intact, the same cognitive/computational processes can be carried out in different environments/with different hardware. We can always mentally add and subtract numbers without any add-ons (pen and paper, calculator, etc.), whereas a calculator can always be counted on to calculate as long as its processor chip is still intact, even if you take everything else away.
    The authors continue: "On this view, the trouble with coupled systems is that they are too easily decoupled. The true cognitive processes are those that lie at the constant core of the system; anything else is an add-on extra." When we talk about cognitive processes, the easily decoupled add-ons are merely extensions of our cognition and definitely not a part of the core system. A calculator, despite the fact that it greatly speeds up our calculations, is not an undetachable module of the core system. Without a calculator, we might take an excruciatingly long time to mentally calculate certain things and still end up making a lot of mistakes, but we can still do it.

    I wonder, however, about calculations that are more difficult than adding and subtracting. What about calculating the square root of a number? What about solving a logarithmic function? We know how it's done. This information is encoded in our portable core system, but it's highly improbable (though not impossible, as we have seen in numerous cases of mathematically gifted individuals) that we would be able to mentally calculate, for example, the square root of 538 to 5 significant digits. How do we address problems like this? In this case, a calculator is not simply a detachable add-on anymore. What takes about 5 seconds with a calculator may take a prohibitively long time without one. Intuitively I would say this still does not qualify the calculator as a part of our core system in the cognitive process of square-root calculation, but if we exclude the calculator, there won't be any successful output of such cognitive processes. I wonder, then, how essential does an add-on have to be so that we consider it a part of the core system?
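    For concreteness, here is roughly the kind of procedure the add-on runs when it hands back the square root of 538 to 5 significant digits (a sketch of Newton's method, not of any particular calculator's chip; sqrt_newton is an invented name):

      def sqrt_newton(n, tol=1e-10):
          """Approximate the square root of n by repeatedly averaging a guess with n/guess."""
          guess = n / 2.0
          while abs(guess * guess - n) > tol:
              guess = (guess + n / guess) / 2.0
          return guess

      print(round(sqrt_newton(538), 3))   # 23.195, i.e. five significant digits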

    ReplyDelete
    Replies
    1. What about math geniuses who can calculate (or know) the square root of a number within seconds? Isn't that the same thing as what the calculator is doing? Humans have different skills, and we are not all the same...some people have difficulty adding two numbers together. Is calculating the square root of a number essential to our survival? If it were, maybe we would all be able to calculate square roots better. The "essential"-ness is kind of arbitrary...and kind of subjective. I find it hard to live without my computer, but the computer is not an extension of my mind. I guess the answer to "how essential does an add-on have to be"...well, essential enough that it isn't an add-on...and that it feels like something to do it.

      Delete
    2. Alice's comment got me thinking about the necessity of add-ons such as calculators. And I agree that doing complex mathematical calculations would take years without a calculator, but calculators still don't cognize. Mental states are a part of cognition, as to have a mental state, one must feel the way it feels to be thinking. Calculators don't feel this way, so they do not cognize. Instead, they compute. Because of this, I don't see calculators as a part of the core cognitive system, even if we would literally be unable to cognize certain things (e.g. complex calculations) without them.

      Delete
    3. Your comments helped me put my thoughts (or…feelings) into words. I now realize that asking "how essential does an add-on have to be so that we consider it part of the core system" is just begging the question. If a calculator started off as an add-on, which is to say that it's not, according to Clark and Chalmers, some kind of portable cognitive process, and it doesn't feel anything while doing the calculations, then it's not a cognizer. Add-on and core system are fundamentally different categories, and they don't fall on a continuous spectrum. No matter how essential or efficacious an add-on is, that doesn't qualify it as a cognizer, because a calculator is, and always will be, an unfeeling extension of our cognitive processes. The question that I asked in the original comment is presupposing that somehow an add-on can become a core system, without having any felt states, just because it's "essential".

      Delete
  27. I’m not quite sure what the significance of externalism is—perhaps I don’t completely understand the reading. I agree with other commenters that active externalism is an intuitive theory of the mind. Humans use external objects to help them cognize—doing maths on a piece of paper is a lot easier than doing it in your head. I’m not sure I agree with Clark & Chalmers’ statement that the coupled process of person + pen and paper constitutes a cognitive system. The pen and paper independently cannot cognize, while the person can.
    It also seems fairly intuitive to me that external processes can affect/influence cognition. However, I’m not sure how the mind can be extended to incorporate pens and paper and books. If the mind is the thing doing the cognition, I see the aforementioned objects as tools that can shape the mind, or that can express what the mind thinks, but not as the things doing the cognition. I understand that Clark & Chalmers aren’t proposing that pens and paper cognize; I’m just not sure what they’re getting at beyond what I think is fairly intuitive.
    The question that Clark & Chalmers allude to, and that Harnad states in a comment, is: what constitutes a physical implementation of a mental state? A mental state encompasses the ability to feel the way a thinking thing feels when it thinks. When a person is currently in a mental state (online), the parts of their brain that are making them feel the way a thinking thing feels are the physical implementation of that state. Harnad asks if whatever provided the input for a thought is a part of the physical implementation of the mental state of the thought.
    I guess the answer to that is no—inputs can be provided in different ways, but can result in the same thought. How one gets the input doesn’t affect them feeling like a thinking thing feels.

    ReplyDelete
  28. “In both cases the information is reliably there when needed, available to consciousness and available to guide action, in just the way we expect a belief to be”
    As I understand it, Chalmers and Clark claim that we have a sort of “extended cognition” based on the relationship we have with our surrounding world, which they decide to call “active externalism”. Their main argument in support of this is that the information is readily available when it is needed, and that there is no difference between Inga’s access to her memory and Otto’s access to his notebook. And all of this seems alright to me, but, even though they do address cases in which Inga could be intoxicated or have had her brain tampered with (the equivalent of Otto losing his notebook, for example), what about memory phenomena like forgetting or the tip-of-the-tongue phenomenon? Our brains are not computers that store memory files to be retrieved when needed. I think Chalmers et al. try to account for this with their “active externalism” idea, but I don’t know if they account for it properly.
    “In each of these cases, the major burden of the coupling between agents is carried by language”
    I wonder if Chalmers et al. would claim that memory phenomena occur because, perhaps, what occurs in our brains when thinking is not language as we understand it orally or in writing, but rather a different language of thought.

    ReplyDelete
    Replies
    1. To address the tip-of-the-tongue phenomenon: just because something cannot be recalled immediately does not mean the information is not there. I agree that the way it was worded makes it seem like all the information we have is at our beck and call, but I do not believe that is what Chalmers et al. meant. Information we need can generally be recalled at will (of course with exceptions like the one mentioned; we are not computers, which usually function seamlessly). The interesting thing is that if we can consider Otto's notebook part of his cognition, then so is my laptop. I type up ideas and make notes on my laptop as he uses his notebook, but now I really do have information at my beck and call.

      Delete
  29. It may be a question of semantics, but I find this interesting: when asked what I am doing while chopping up carrots to make a salad, I will reply "cutting carrots". While my ability to interface with the carrots without the use of an external tool (such as a knife) does not include cutting, I consider myself to be cutting the carrots. One could suggest that I also consider myself to be cutting with a knife, but then again, I also hold with my hands or think with my brain. When we write things down, we often cite the reason for doing so as "so that I can remember it later". I am doing the remembering, but the process is mediated through the book. Later, when I try to access the information and remember upon reading the notebook, that feels like something distinct from remembering in the normal way. Remembering is certainly part of cognition, and it is mediated through the book; the entire system that consists of me and the book does, in fact, feel something different from the system that does not include the book at all. Of course the feeling does not extend to the book itself, but this is not a necessary condition for elements that we are ready to admit give rise to cognition, like the amygdala or the claustrum: certainly they do not cognize by themselves without the presence of a fully cognizing being. Emergence and the Systems Reply seem to be threads that run deep into the roots of cognitive science - in the end, we are all made up of exclusively non-cognitive matter, so the question becomes where to draw the line between what order of organization is required for cognition, and how far it can be extended. As for external cognition itself, it does seem far-fetched, but I do not believe that it can be ruled out quite yet.

    ReplyDelete
  30. This is a fairly unfamiliar topic to me still so I haven't really come to a full conclusion as to what I think of the idea of extended cognition.
    Though I do find it strange. The idea that things in our environment are part of our consciousness and actually affect cognitive processes, and that our minds do not stop at the brain but extend outside of us, is not immediately clear. From what I understand, the authors believe that our actions are part of our thoughts and that explaining our actions with inputs and outputs is needlessly complex. I don't see how it is needlessly complex, and an explanation would have been nice.

    I must say, the mention of science fiction is not a good way to argue a point. The authors apparently believe that we will one day be able to add storage (like some sort of external hard-drive) or some sort of specific knowledge into our brains. I think that is ludicrous and there was no reason to mention it.

    The main idea I get from this article is that the authors believe that mental states like experiences, beliefs, emotions, desires, etc. can be significantly affected by external factors/features. More specifically, the idea that our beliefs drive our actions is one that is debatable. Perhaps we only believe that our thoughts drive our actions and really it is the other way around.
    In the case of Otto, the imaginary Alzheimer's patient with a notebook of the information he gathers, the authors claim that the notebook is analogous to memory (unaffected by Alzheimer's). So if Otto's notebook is analogous to memory, and the claim is that the notebook is part of cognition - i.e. something that is a felt/mental state - then memory is a felt state. But what does it feel like to have this capacity we call memory? I know that I can access information and that feels like something, but my memory itself isn't something I think I feel...
    So how does it feel for him? Does he feel that the information in the notebook is a belief of his? Better yet, what is the mental state, the felt state, of Otto as he reads from his notebook? The information he gains probably feels like something, but where the information came from doesn't really matter. The information is part of the input but the physical notebook certainly is not.
    I am quite happy to maintain that cognition, the mind, mental states, whatever you want to call it, is entirely internal and limited to the brain (and maybe the body but I'm not really sure yet).

    ReplyDelete
  31. “What, finally, of the self? Does the extended mind imply an extended self? It seems so. Most of us already accept that the self outstrips the boundaries of consciousness; my dispositional beliefs, for example, constitute in some deep sense part of who I am. If so, then these boundaries may also fall beyond the skin. The information in Otto's notebook, for example, is a central part of his identity as a cognitive agent.”

    I still find it difficult to accept that the information in Otto's notebook should be regarded as part of his belief system, and therefore part of his mind as well. That his notebook exists, or that it contains accurate information, is certainly a valid belief. But just the same as for someone who consults a GPS or the internet for information, the belief in that information's validity comes as a consequence of believing that the medium itself is a valid source of that information. It seems that the belief in the validity of that source is really what grants the notebook license as part of cognition, rather than the belief in the information itself. Thus, rather than the postulation that cognition extends to every individual belief in the notebook, it would really be a function of the singular, skin-and-skull belief that the notebook is an accurate receptacle for what had previously been cognised within that skin and skull.

    ReplyDelete