Saturday 11 January 2014

11b. Dror, I. & Harnad, S. (2009) Offloading Cognition onto Cognitive Technology

Dror, I. & Harnad, S. (2009) Offloading Cognition onto Cognitive Technology. In Dror & Harnad (Eds.): Cognition Distributed: How Cognitive Technology Extends Our Minds. Amsterdam: John Benjamins.



"Cognizing" (e.g., thinking, understanding, and knowing) is a mental state. Systems without mental states, such as cognitive technology, can sometimes contribute to human cognition, but that does not make them cognizers. Cognizers can offload some of their cognitive functions onto cognitive technology, thereby extending their performance capacity beyond the limits of their own brain power. Language itself is a form of cognitive technology that allows cognizers to offload some of their cognitive functions onto the brains of other cognizers. Language also extends cognizers' individual and joint performance powers, distributing the load through interactive and collaborative cognition. Reading, writing, print, telecommunications and computing further extend cognizers' capacities. And now the web, with its network of cognizers, digital databases and software agents, all accessible anytime, anywhere, has become our “Cognitive Commons,” in which distributed cognizers and cognitive technology can interoperate globally with a speed, scope and degree of interactivity inconceivable through local individual cognition alone. And as with language, the cognitive tool par excellence, such technological changes are not merely instrumental and quantitative: they can have profound effects on how we think and encode information, on how we communicate with one another, on our mental states, and on our very nature. 

74 comments:

  1. "What difference does it make if the database in which the datum is stored, outside your awareness, is in your brain, or on the shelf of a library, or in someone else’s brain?"

    It may be helpful to speak of beliefs not just in the binary “X believes Y” but also in terms of certainty and source. Take, for example, the belief that there is an art museum on 53rd street, as in Chalmers’ paper. We can imagine multiple ways of accessing this information: (A) “just knowing” that it’s on 53rd street; (B) not remembering where it was, but then waking up and remembering “ah yes, it’s on 53rd street, right by the bakery”; (C) having the memory artificially implanted in one’s mind, complete with the subjective experience of walking by the museum; (D) looking at one’s own notebook; (E) asking a friend; and (F) looking it up on Google. In all cases, we could say “I believe that the museum is on 53rd street.” But these are NOT equivalent end states.

    Regardless of whether the museum is actually on 53rd or 51st street, if you were to probe cases A-C, all of which got the belief ‘on their own’ (“why do you believe it’s on 53rd? And not 51st?”), you will receive an answer that confirms it: “I know because I saw the museum there” (even if they didn’t actually see the museum there, because they merely dreamt it or because it was artificially implanted). Depending on how vivid the experience is to them, they might doubt their own memories – but this is a second factor, independent of which scenario across A to C we are looking at. If we assume equal certainty, it would be very hard to convince these cases that there was no museum on 53rd.

    In cases D-F, where the information was acquired externally at some point (it can be in the distant past and then retrieved via memory; it doesn’t matter), the answer might range from “everyone knows there’s a museum on 53rd street!” to “I got the information from a reputable source.” Would it not be a great deal easier to convince your subject that there was a conspiracy or a widespread misbelief regarding the location of the museum for D-F than A-C? For example, for case D, that someone tampered with the notebook and that it was not actually one’s past self that wrote down the address? Intuitively, we know it would be. People tend to believe their own grounded experiences more than referred knowledge, no matter how reputable (see: moon landing skeptics). So answering the question of “what difference does it make” might require characterizing beliefs in different ways.

    Replies
    1. I say "may" because I may also have totally missed the point.

    2. I think certainty is a red herring here (since you only have certainty for mathematical proofs and Descartes' Cogito): the rest is just beliefs, some true, some not, some based on strong evidence, some based on weak; but just beliefs.

      An "occurrent" (i.e., online) belief is a mental state (a state of mind), and the question of whether there can be an "extended mind" is about whether the mental state of believing can be wider than someone's head.

  2. The whole talk of augmented reality made me think of this awesome XKCD https://xkcd.com/941/

    “Let us defer reply until we consider a few more cases, noting only that this question about whether perception/cognition is just (i) internal and local or (ii) internal/external and distributed is similar to the question of whether meaning is narrow or wide.”

    Taking into account how external things impact our doing is obviously necessary to make sense of what, how and why we do what we do. However, ignoring the divide between ourselves and the world has its limits. What allows us to do what we do is, in some respects, our bodies (the design evolution has provided) more than the environment in which we do it. We could say that we run thanks to our bodies (legs and nervous system included), but we could also say we run thanks to gravity and the ground we use to run on. There is an important line to be drawn between our bodies and the world, and that division can be exemplified by the R&D work needed to build the former. Gravity and the ground are ubiquitous on Earth, while complex human organisms are unlikely amalgamations of molecules. Our particular linguistic categories are wholly dependent on the environment, but in a completely different environment the same human would still make/refine another set of categories. To do what we do, we need the world as much as we need our biological makeup, but let’s not forget where the magic is happening (or at least the confusing part of it all).

    “There can be living organisms that have no mental states and there can be nonliving systems that do have mental states.”

    I don’t know about that 2nd option. Level of description matters (Marc says again). We shouldn’t assign the property of living to Gaia only because she’s made of living things. Gaia is all of nature, so Gaia is nature, and nature is made up of living things but is not itself a living thing. Are we one big cell because we’re made of cells? Do RGB televisions not display yellow? Properties are emergent; we can’t just jump across levels willy-nilly.

    “Cognitive technology will do likewise, but instead of affecting our muscles it will affect our brain development, organization and capacities.”

    I think this point is worth taking in; there are some real-world consequences worth acknowledging. Even though information is at our fingertips, the nature of the internet (which allows us to seek out whatever information confirms our biases) is such that misinformation can propagate just as quickly as information. This could possibly create more social division than cohesion (I don’t really know which way it’s swinging).

    The influence of our changing culture and social infrastructure can have surprising effects. Reading Ian Gold’s book “Suspicious Minds: How Culture Shapes Madness” really drove the point home for me. He presents strong evidence that the rate of schizophrenia increases with city size (pp. 131-133). I can’t help thinking this relates to our stepping out of our cognitive niche, in which our tribe can only have so many members before social cohesion breaks down (Dunbar’s number).

  3. Dror & Harnad argue that to define cognition simply in terms of functionality would make it completely arbitrary how we go about identifying cognitive agents in the world and would tell us nothing about what we really want to know, i.e. how we have mental, conscious, feeling states. According to this view, we should restrict the boundary of the cognitive agent to the thing that is responsible for generating this capacity for feeling, namely, the nervous system.

    This insistence on the nervous system, and the consequent insistence on the human mind or a human-like mind, is arbitrary and contrived. The fact of the matter is that how consciousness comes about, and whether it only comes out of a nervous system, is not at all settled. Yes, a human mind needs a brain, but that’s all that can be said. There may be other kinds of minds that could have other kinds of migraines. How do we identify them?

    Dror and Harnad will say there is no way to tell, because of the problem of other minds. Fair enough, but what is in a mind does not stay there. Minds leave traces in the world. If we can identify stable structures in nature that are created by other stable structures, then we have a basis for calling the former a product and the latter a mind. A walking stick can be an object for a single human being; the Hoover dam can only be the product of a society. A hole in the ground is an object of the groundhog’s mind; an anthill is an object of the ant colony’s.

    If you take the problem of other minds to be doubt about the existence of other minds, then this is empty solipsism: it makes no sense. You did not invent language, so the thoughts you are having now are borrowed from a culture that existed before you did: the content of your mind presupposes the existence of other minds. The problem of other minds, if there is one, is that you are chained to your point of view: nobody can feel what you feel in exactly the same way you feel it. But existence is not private at all.

    Replies

    1. Kevan: While I agree that we shouldn’t be “brain chauvinists”, I think that Dror & Harnad are merely arguing that we should be “feeling chauvinists”. The confusion stems from the vague definition we all have of a mind. We know that we have it, and that it feels like something to have it. Harnad and Dror go from here to say that all mental states are felt states, and all cognitive states are mental states. No mention is made of a mind needing a nervous system, and this view is made explicit when they say that a TT passing robot would most likely have mental states. If I understand correctly, however, this would imply that our TT passing robot must have feelings.

      So if you take them at their word and believe that cognitive states = mental states = felt states, the dams and anthills you mention must be seen as the work of many minds, not of a “hivemind”, since you can't attribute unified feelings to societies and colonies. To say that a dam must be a product of society’s mind because it is a product of society’s actions is to mistake collaboration for fusion. It should (much more tediously) be viewed as a composite of the mental work of the politicians, architects, engineers, and construction workers who built it. Society is admittedly a much more convenient creator to attribute the construction to, and at some levels of analysis may be the appropriate one, but that is not enough to warrant thinking of it as a mind in its own right.

    2. Nicolas, thanks for your reply.

      You are right, the nervous system is not explicitly mentioned but if you look at points 15-17 of the summary, it strongly suggests some kind of nervous system chauvinism: "the functional mechanism of that altered mental state is still just proximal--skin and in", "only their sensorimotor input and output contact points with our bodies are part of our cognitive state", "Maybe parts of our brain... but that would just be a widened, spatially distributed body." Because most of our experience with feeling is "skin and in", insisting on equating cognitive states with felt states will beg the question. When you use a stick to check how muddy the ground is, do you feel it at the end of the stick or at the contact point of your hand? I think this is a trickier question than it looks.

      Nevertheless, the TT example is indeed evidence that they do not necessarily mean biological stuff. My problem remains that even in the TT case, we only recognize a mind that is human-like. You are right that the definition of mind is problematic, but it is false to say that the only thing we know about it is our own experience of it. We recognize minds at work in the behaviour of animals very different from ourselves.

      "...you can't attribute unified feelings to societies..."
      To be sure, I did not claim that the feeling of a complex system is anything like that of the individual agents. So it is not about fusion. I consider collaboration or communication as a coordination device that allows the system to interact with the environment. I do argue that we infer (actually, perceive) minds from their behaviour and that some behaviours are only describable as that of complex systems of agents.

      Perhaps attributing feelings or consciousness to large systems is unwarranted at this point. But it is just as unwarranted to rule it out.

      I am very much influenced by Francisco Varela's thinking on this and I think he makes a very good case for it here:
      https://www.youtube.com/watch?v=wh7_rxARhJc

    3. Thank you for that video. I found it to be the best explanation of emergence I've heard so far, and his views on evolution are really beautiful.

      I admit I was being a little bit Cartesian when characterizing the mind the way I did, but I wasn't trying to imply that it is ALL we know of a mind, just that it is among the things we can be sure of.

      Ultimately the question of whether minds can be extended is answered by your definition of a mind, and I still think that "does it look like it feels" is a better criterion for determining whether something has a mind than "does it look like it is intelligent". After all, things like Deep Blue can be attributed some amount of intelligence with zero amount of mind (though I agree with Varela that deduction is just the icing on the cake in terms of intelligence).

      I guess you're right to say that it is unwarranted to make a judgement either way on whether large systems can feel, but I am skeptical about whether it is even possible to come up with an answer to that. I'm not just referring to the other-minds problem, but to the possibility that we may only be complex enough to conceptualize feelings as complex as the ones we feel ourselves.
      Just as it is impossible for neurons to know that the brain they are in is conscious, it is impossible for the French to know if France is conscious.

      Unwarranted as it may be, my hunch is still that France isn't conscious. It just seems like there is something special about the speed × connectivity present in things like neurons that would break down in something as large and heterogeneous as a society.

    4. Wide Bodies

      "Emergence" is seductive, but completely non-explanatory, and question-begging.

      The mind/body problem ("hard problem") is the feeling/doing problem.

      Moving "emerges" from the doings of the horse's body.

      Heating "emerges" from the doings of molecules.

      (Living, too, "emerges" from the doings of molecules.)

      But in each of these cases the "emergence" is just of doings, emerging from doings. No problem. We can explain how and why.

      But that doesn't seem to work for feelings (which are not doings) "emerging" from the doings of the brain (or anything).

      No doubt feelings do "emerge" from the doings of the brain.

      Trouble is that no one can explain how or why -- whereas it is "easy" to explain how moving, heating and living "emerges."

      Just saying that feeling "emerges" does not explain it: it's the thing that needs to be explained.

      So far this has nothing particular to do with the question of whether there can be an "extended mind" -- a feeling state that is wider than a feeling brain/body (or wider than the innards of a feeling T3 robot).

      And, again, it's not the other-minds problem; it's the problem of explaining how and why any feeling system feels, rather than just does.

      I agree, though, that the best we can do is assume it does feel, if it behaves (does) indistinguishably from those that do feel. That's just the TT.

      Now, is there anything wider than a brain (or a T3 robot) that can pass T3 (or even come anywhere close)?

      (I'm not a big fan of "autopoiesis" either -- but that's just "Stevan Says"...)

      Harnad, S. (2007) Maturana's Autopoietic Hermeneutics Versus Turing's Causal Methodology for Explaining Cognition. (Reply to A. Kravchenko (2007) Whence the autonomy? A comment on Harnad and Dror (2006).) Pragmatics and Cognition.

    5. Professor Harnad,

      I have a question concerning this quote from your response above:

      "No doubt feelings do "emerge" from the doings of the brain. Trouble is that no one can explain how or why -- whereas it is "easy" to explain how moving, heating and living "emerges." Just saying that feeling "emerges" does not explain it: it's the thing that needs to be explained."

      For the "doings", there is always a clear measurement that can be recorded in order to establish the hows and whys regarding the cause and effect of our brains (their performance capacity). Moreover, I see how it would be quite difficult to ignore the infallible truth of the Cogito - it clearly feels like something to feel. My question then is: although we can clearly say that the feeling emerges, are the hows and whys of "feeling" unanswerable because "feeling", unlike "doing", cannot ever be measured?

    6. Heterophenomenology would be my go-to response, but I guess, based on our discussion last week, it's still really only a measure of performance capacity, and a correlational (not causative) measure at that.

    7. Nicolas: I agree with you that feeling is more important than intelligence, but at this point I can hardly tell the difference anymore. Why does something look like it feels? Because it avoids what can destroy it (pain) and it goes after what allows it to remain a something (pleasure). Surely, something that does this is not moving randomly, so it is showing a measure of intelligence. Yes, deduction is the icing on the cake, but there HAS to be a cake: from an enaction (Varela's theory) point of view, Deep Blue has as much intelligence as it has mind, i.e. none.

      Pr. Harnad: I do think that the notion of autopoiesis goes a long way in making sense of feeling. I am no expert on the matter but I will venture to write my take on it.

      One reason why autopoiesis might not be straightforward at first is that it requires letting go of a big assumption, namely the assumption that there is a meaningful world prior to there being things to give it meaning. Autopoiesis describes a system that creates itself, and that has itself as an end (read survival), but that also, correlatively, creates a meaningful world for itself. The world of mountains, and blue skies, and rocks, and the planets, the three dimensions, and even time itself... are things that we make up in virtue of being autopoietic systems. A different kind of system may not perceive a three-dimensional world, nor time in the same way.

      Right here, we have a big advantage, because we can talk about meaning from the point of view of the system (feeling is first-personal). We are not stuck in the third-person, objectivist perspective so revered in science but which cannot make sense of feeling on its own.

      In this picture, feeling is the world that the system is creating, or the story it is telling itself about its relationship with its surroundings. But this relationship with the world is observable from outside as well: we call it behaviour. In other words, feeling is the first-person perspective on life, and behaviour is its third-person perspective. They are perfectly correlated, but they look nothing alike.

      Asking the "how or why" of feeling in terms of a mechanism does not make sense, because it would require that we use concepts that precede feeling logically. But nothing precedes feeling logically (arguably, not even logic). The existence of the world and its relative stability, and all of its regularities, are all counterparts of the existence and doings of our bodies.

      The closest to a "how or why" that we can get is "How or why does feeling change the way it does?" Autopoiesis can make sense of this: life is fragile. An autopoietic system is in a state of precarious equilibrium, like someone walking on a slackline, and like the latter, the system needs to constantly move to remain in equilibrium. Whatever brings the system ever so slightly away from its equilibrium will be felt, made sense of and reacted to according to its existential significance: approach (feels good) or move away (... or not). Not only that, but the system does not feel anything if it does not move to explore differences (e.g. if you cannot move your eyes everything goes black). Therefore, because of reaction and because of the movements necessary for perception, feeling is also a doing. But feeling is the doing of a living, fragile thing engaged in the project of staying alive. If the TT robot is such a thing (regardless of the stuff it's made of), then it's definitely wrong to kick it.

      Is autopoiesis question-begging, a mere hermeneutical exercise? It certainly is circular, but it covers its ground by making circularity the name of the game. It is a misrepresentation of emergence to use the heat-and-molecules example: that is a linear, reductive emergence. A better term for what Varela is doing would be "co-emergence of self and world".

  4. Dror & Harnad’s article elaborated on what I was trying to articulate in my commentary on Chalmers’ Extended Mind. While I gave a very specific example of “out-of-body feelings”* with the Asian person, whose sense of self is integrated into other people, Dror & Harnad go further by stating that NOTHING is in causal isolation.

    *I’m using feelings here since from what I gathered from Dror & Harnad’s argument, having a mind implies that these mental states are felt too

    Meaning that most of our cognitive processes imply an interaction with the outside world. In fact, I was surprised that Dror & Harnad relied extensively on cognition/mind being extended without ever using the word “interaction”. Wouldn’t an extended cognition also be an interactive cognition?

    However, I was not entirely convinced by the distributed life argument. I have trouble conceiving of Gaia as a living organism. Of course, I do concede that Gaia is made of countless “lives”, just as the human body is made of cells that are autonomous entities. However, there is an incredibly complex orchestration of cell functions that makes us alive. Some do function X; others never do X but do Y; all of this being orchestrated for further functions. Every cell (well, most of them) has its own function and place and, without them, the organism cannot function properly.

    So where would that orchestrated action be for Gaia? Each species functions with the specific goal of “staying alive”, and I don’t see how this contributes further to Gaia’s existence. Cells make us alive so we can adapt, reproduce and evolve. If humans have this goal to survive and adapt, does Gaia really have this same goal of evolving over time? To me, Gaia isn’t an organism, and I’m sure (and I hope) that someone will find counterarguments to my claim.

    Replies
    1. I agree with all your points here. But D & H don't say otherwise...

  5. "But is a toaster really autonomous? Don’t we have to build it, and repair it, and plug
    it in, and put in the bread, and set the level, etc.? Are the toaster and bread and
    ourselves just part of a still wider distributed system, the one with the real
    autonomy, while the toaster and the bread are merely “slave” systems, with no
    autonomy of their own?"
    Of course the toaster is not autonomous and of course we are not entirely autonomous. I think this really attacks a major flaw in the extended-mind thesis. Just because we rely on things other than our brains in order to arrive at particular mental states, it does not mean that those things we rely on are truly a part of the mental state. And mental states are what we want to understand in cognitive science, not the tools that help us to arrive at them.
    Harnad explains that we have been trying to understand mental states all along, not extended cognition. We know that cognition is extended in the sense that we rely on all sorts of things to function.
    Could we make an analogy with food? The internal state that signals satiation requires food (but what the food is does not matter), so that the appropriate hunger hormones are signalled to the brain. In that sense satiation requires food, but satiation or the feeling of being full is not comprised of food, it is something that arises as a result of eating but is something else entirely. Moreover, by examining the food we will learn nothing about what exactly it feels like to be full, and why or how it feels that way.

    In short, we use things other than our brains to cognize, but only our brains have mental states. Knowing that we use things to cognize does not tell us anything more about those mental states, and provides no additional insight into building a feeling robot.

    Replies
    1. “In short, we use things other than our brains to cognize, but only our brains have mental states.”

      Is there a difference between using the input from outside sources to cognize and relying on others’ mental states to do the cognizing for us? I completely agree with you when you said that “just because we rely on things other than our brains in order to arrive at particular mental states, it does not mean that those things we rely on are truly a part of the mental state”. Cognition does rely on input. Otherwise, there would be nothing to cognize; it is why T3 is the correct level. In addition, we can rely on other cognizers to off-load some of the work.

      However, I do not think that cognition completely relies on things other than our brains. Off-loading the work to another cognizer does not mean that we do not have the capacity to do the work ourselves. Rather, it could be for other reasons, such as simply choosing not to do it at the moment. Just because a child uses a calculator to find the product of 7x9, it does not mean that the child does not have the capacity to do it, and it does not mean that the child will always use the calculator to find that product.

    2. Stephanie, I like your explanation and metaphor a lot. And Reginald, your comment brought me back to this quote from the paper:

      "Although sensorimotor and cognitive technology can undeniably extend our bodies' sensorimotor and cognitive performance powers in the outside world, only their sensorimotor input and output contact points with our bodies are part of our cognitive mental state"

      Dror and Harnad limit cognition to the brain and only allow the interface between technology and the mind to be considered cognitive. In your calculator example, the process of punching in numbers is cognitive, however the computation occurring within the device is not cognitive. Then, the internalization of the final answer is cognitive as the child glances at the calculator, sees a number, and does the next step of the math problem. Studying the calculator will tell us nothing about cognition. Only studying the “skin and in” as well as the point of interaction between cognitive technology and mind will tell us anything.

      Accordingly, the feeling that comes from using cognitive technology is the feeling of being able to do more, but NOT the feeling of actually doing. When you use a calculator, you don’t feel yourself doing the mental math. All you feel in that case is correctly identifying calculator keys and making a well-formed equation. Yes, the outcome of having the final product feels the same in our mind whether we used a calculator or not; however, doing a math problem in your head and doing it on a calculator are two entirely different feelings.

      Likewise, Harnad and Dror’s statement, “Retrieving a word from memory or retrieving a word via a Google search feels much the same to us” does not resonate with me. The only way that the two feel the same to me is the final outcome of having the word. A human mind is aware that the “underlying functional mechanism” of the two processes are different. One feels like something and occurs in the human, the other doesn’t feel like anything and occurs in a machine.

    3. Felt and Unfelt Doings

      Lila, remember the 3rd-grade school-teacher/Penny-Ellis example? There's no question but that once you recall that it was Penny Ellis, that state -- of believing that it was Penny Ellis -- feels like something, and that's a mental state.

      But what about the time between when you were asked and when you remember Penny Ellis's name or face? That was the time in which the exercise showed that introspection does not reveal how you do anything -- even something as simple as the name of your 3rd grade school teacher: Your brain just hands the answer to you on a platter (and you have no idea how).

      Well what's the difference between when it's your brain handing you the answer on a platter and Google handing you the answer on a platter?

      Suppose that right after you were asked the question, when your brain was "searching its database" to find the answer, you got distracted, and never got to the point where you remembered her name. Did you feel the "searching in the database"? You certainly felt that you had just been asked the question. And you felt what it feels like to be trying to remember who it was, but not yet having succeeded. So if at that point your brain found the answer and was just about to make you feel it, and then you got distracted, and never felt the answer, would you still say that what had been going on in your brain while it was hunting for the answer was part of a felt state, the way that Google hunting for the answer was not part of a felt state?

      Or were they not both just parts of unfelt states? Not that you were not feeling something while your brain was hunting for the answer. But you weren't feeling the answer (P.E.). Nor were you feeling what your brain was doing in order to hunt for the answer. (Or if you think you were feeling what your brain was doing to find the answer, what was it? That's the blank that introspection always draws, and that's why it always has to punt to cognitive science to hunt for the answer -- not the answer to Penny-Ellis, but the answer to the question of how your brain found the answer, Penny Ellis. What your brain did to find P.E. was done "offline"; what you actually felt was "online," in "real time" -- though of course felt time is just virtual time...)

      Note that there are some things you do in your head that are felt states. If you do mental multiplication of 3-digit numbers, or mental long division, you really do do in your head what you would do on a piece of paper. When you do it on paper, the paper isn't part of your mental state. But when you do it in your head, you're like Searle after he has memorized the TT-passing programme. What you feel you're doing is part of your felt state.

  6. “In both cases, being an organism was conflated, animistically, with having a mind. This is an error; living and feeling are not necessarily the same thing. There can be living organisms that have no mental states and there can be nonliving systems that do have mental states.”

    How can we know that there are living organisms with no mental states? Does this not relate to the other-minds problem, where we don’t know if something feels and I can only be certain that I feel? Since feeling is tied in with consciousness, and a living organism must be conscious, then where is the separation between living and feeling (unless living organisms are not necessarily conscious)? I find it a little difficult to state that those that don’t act like us (e.g. trees) don’t feel (maybe it’s because I like “The Giving Tree” too much), since that seemingly contradicts the other-minds problem.

    Furthermore, I’m not sure I see the point in this extended minds discussion. It may be because I have not fully understood the paper, but to me, the mind is whatever is inside your skull. Neither reminders in a notebook nor information found on Wikipedia is part of any extended mind – it is just some method of communicating information that we have established, that has been effective within our society. It is information like we get from the environment, but with a more established and more effective way to transmit that information from individual to individual. While an individual can learn about the rules of basketball by receiving an oral explanation from a referee, one could also go online and find the information on Google. I don’t see how that’s part of an extended mind – we are merely transmitting information in a different format and making it more available. What about when bees do the hive dance to show fellow bees where to find flowers? If they were humans who performed such rituals, humans would find a way to write that down in some sort of documentation and allow access to others (via the internet). I see how it is our brains that do the thinking, making us cognizers, but anything written down, I think, loses that property of cognition and just becomes shared knowledge being passed on from one individual to the next, so that others do not have to discover each thing through personal experience.

    Replies
    1. It's very unlikely that plants (or the individual living cells and organs of your body) feel. They have no nervous systems to feel with.

      Of course, the other-minds problem cuts both ways: you can never be sure anyone or anything (other than yourself) feels -- and you can never be sure anything doesn't feel.

      That's why we are stuck with relying on the TT!

      I agree that books themselves are not part of my felt state (though I can feel what it's like to read a book). But neither are someone else's words, spoken to me, part of my felt state (though I can feel what it's like to hear the words). They are, however, the output of someone else's felt state.

    2. I agree with you that it seems contradictory to affirm that there can be living organisms with no mental states if we cannot even know whether another living being has mental states, because of the other-minds problem. However, maybe we can affirm certain living things do not have mental states because if they did we would expect them to have certain behaviors. For example, I can affirm that a robot does not have mental states if it does not pass the TT, because if the robot had mental states I would expect it to be able to pass the TT. Maybe we could have the same kind of reasoning with living things…

    3. (*I agree with you Vivian. I did not see Prof Harnad had posted a comment between the time I wrote mine and the time I posted it)

    4. This comment has been removed by the author.

    5. In response to Professor Harnad: I'm not sure why you think books are not part of your mental state. If you're feeling that you are reading a book, that experience of the book necessarily becomes your current experience of consciousness. That book that you sense through sight and touch creates a mental representation of itself, a completely new and unique experience. Perhaps you mean that it is the physical book itself that is not part of the experience of feeling? Do you believe that there is a discrete cutoff point between the objects in the environment that trigger cognitive processes of feeling, and the experience itself?

      In response to Marion: Are you saying that if we think it is (arguably) reasonable to assume that a robot that does not pass the Turing Test does not have a mental state, then technically we could say the same for living organisms such as plants? Since both fail to behave the way we would in response to a specific stimulus, then we could arguably conclude that they are not experiencing what we would in that situation?

      Since we're all talking extensively about the fact that there is literally no possible scientific way to prove that others feel or don't, I'd just like to say that I think we do to a certain extent live our lives under the assumption that if someone is laughing, then they feel the way we feel when we're laughing. I feel that many people believe strongly in neuroimaging studies that conclude, say, that if a certain brain area is active in someone while they say they feel something on their arm, and that is the brain area that is active when others say they feel something on their arm, then that person really is feeling something on their arm. As Professor Harnad just mentioned, it is unlikely that plants feel because they do not have nervous systems (this is a good representation of our tendency to conclude that our experience of feeling must be correlated with other things about us, like our nervous systems). I think that, after taking this class, we can all grasp the possibility that rocks, which lack nervous systems (as we understand them), are just as likely to feel as your best friend, your mom, a Turing machine or your pet rabbit. As Professor Harnad continues to ask us: Would you kick a robot? Why or why not? I believe it is in our nature, as social animals, to project our experience of feeling onto others, and to a large extent it is essential to our mental health to feel that we are not alone in the experience of consciousness.


    6. Answer to MASR:
      Question to Prof. Harnad: I think it is precisely the fact that I sense the book through sight and touch, and that my brain can take that input in, process it, and give me the experience/feeling of looking at and touching that specific book, that explains why the physical book itself is not part of the feeling. As long as the book is out of my sight and my touch, I don't know how to cognize about this book; once I see and touch the book, I know what it feels like to see it and touch it, and I could go further and memorize how the book looks and feels, and “record” that feeling. And then I could bring back that feeling of how it felt to see and touch the book, even without the book having to be physically present. So I would think that's where the cut-off point is: it can exist physically, but as long as there is no feeler to feel it, no brain and no mind to process it, there is no mental state regarding its existence and hence no feeling.

      Question to Marion: Yes, I think that’s exactly the case here. If I cannot correlate what others are doing, with what I would do in their situation, then I’m prone to conclude that they are not like me. I know I have a mental state and I feel, if others don’t respond the way I do, then they probably don’t have a mental state and don’t feel like I do. I jerk back my hand when I get burned, plants don’t jerk back if a branch catches fire, but they do start burning, the way I do (tissue damage) because we’re both living organisms. I jerk back, because I feel pain, because I have a nervous system. Plants don’t have a nervous system, hence they are not experiencing/feeling what I experience/feel when I get burned.

      “I believe it is in our nature, as social animals, to project our experience of feeling onto others and to a large extent it is essential to our mental health to feel that we are not alone in the experience of consciousness.”
      I agree with you completely here, and I think that is the reason we would not kick Ethan.

    7. Professor Harnad, you stated: “Of course, the other-minds problem cuts both ways: you can never be sure anyone or anything (other than yourself) feels -- and you can never be sure anything doesn't feel. That's why we are stuck with relying on the TT!” I understand, through the other-minds problem, why you can never be sure that anything doesn’t feel, yet if we take a thing, such as a plant, there is no way for it to even accomplish anything close to the TT; it therefore seems redundant to rely on the TT in this specific case to assess feeling, and the same goes for a rock, for example. This might be why we readily jump to the conclusion that these things don’t feel.

      MASR, I completely agree with Andrea and Harnad here: “I think it is precisely the fact that I sense the book through sight and touch, and that my brain can take that input in, process it, and give me the experience/feeling of looking at and touching that specific book, that explains why the physical book itself is not part of the feeling.” The main thing here is input. Whether it is from a book, a person, or the Internet, you are getting input from the environment to a felt state. The same goes for the “offline” retrieval of Penny Ellis: it remains an input into a felt state; we are completely unaware of how it gives us the feeling of remembering the name of Ethan’s 3rd-grade teacher, but we do feel it. (Felt states and mental states being the same thing, this is why books are not part of our mental states.)

  7. “only mental states are cognitive states, that cognition is only narrow, and that the only place it is “distributed” is within a single cognizer’s brain”

    For Dror and Harnad (2009), cognitive function and cognitive state are two different things. Cognitive functions are unconscious processes that help produce certain doings (ex: driving a car) and help produce cognitive states, which are conscious (ex: knowing you are driving a car). Cognitive functions are not limited to the body; they can take the form of many things, such as the ones mentioned: certain brain functions, language, reading, writing, telecommunications, computing, the web. The argument given is that although these sorts of cognitive functions help create a mental state, they are not a mental state themselves. The argument focuses on the idea that cognitive states are mental states, mental states are felt states, and felt states are only felt by the feeler as an individual. Therefore a given cognitive state cannot be something distributed amongst other people or things; each individual can only have their own cognitive state of a shared thing/event/experience/etc.
    In this way cognitive functions are not equated with the cognizers who are feeling cognitive states. This is a rebuttal to Clark and Chalmers’ idea that the mind extends into the world; rather, Dror and Harnad argue that the world extends into the mind.

    After these two papers I still can’t come to a happy, settled conclusion. The two positions seem joined like a Möbius strip. I agree with Dror and Harnad that it is only the cognizer who is feeling at any given time and there is nobody else feeling that. But I agree with Clark and Chalmers that an ‘imprinted mind’ (so not the mind as a feeler, but the mind that is shared through another form, e.g. through language) is able to be distributed and shared, and will let others get a sense of that person’s mind through their ‘imprinted mind’ at any given moment.
    So if we take the distributed migraine example: yes, there will be two different felt states of the migraine, one in each head of the Siamese twins, and each is its own cognizer in that sense, but the twins are capable of passing on their cognitive state, not as the exact felt state of the feeler, but as an ‘imprinted mind’ state from which the other can assume a certain feeling. So if one twin tells the other “I’m getting pinching pain in my temples”, and the other says “I’m getting pressure everywhere including my eyes”, they can easily imagine the feeling without needing to actually feel it, but just imagine what each feels like. This is a form of distributing the mind, although it isn’t the actual feeler’s mind that’s being shared, but rather some kind of replica of it to help the other feel what they are feeling.

    Replies
    1. If we agree that a cognitive state = a mental state = a felt state, then a "mind" is just a "feeler." What the feeler feels is part of the feeler's mind, what it does not feel is not (there's no such thing as unfelt feeling), even if it's going on in the feeler's head, and even if it's going on in the feeler's head while the feeler is feeling (but feeling something else). For example, vegetative states are not felt states, hence not mental states, hence not part of the mind (although you can consciously take over breathing, in which case it becomes a felt -- i.e., mental -- state).

      When another feeler tells me what they feel, and I imagine the feeling -- even if that makes me feel the same way, it's not the same feeling: that's the other feeler's feeling, the one they are feeling, and this is my feeling, the one I'm feeling. There is no "extended" feeler, or "extended" feeling. Just the outputs of one as the inputs to the other.

  8. “It is still cognizers who cognize — the tool-users, not the tools”

    This reminded me of Amber Case’s TED Talk (https://www.ted.com/talks/amber_case_we_are_all_cyborgs_now#t-445835) on cyborg anthropology. One of the last points she makes is that: “It’s not that machines are taking over, it’s that they’re helping us to be more human- Helping us to connect with each other… And really it ends up being more human than technology because we’re co-creating each other all the time.”
    In this way it is the tool-users who are making use of the tools in relation to themselves rather than the other way around. From her talk we can deduce that cognitive technology is not actually the cognizer; instead it is some sort of vortex that connects and aids the cognition of someone at one given point in time to another. I think this idea of hers shows very well Dror and Harnad’s idea of the Cognitive Commons: where “cognizers and cognitive technology can interact globally with a speed, scope and degree of interactivity that yield performance powers inconceivable with unaided individual cognition alone.”
    Furthermore, the comment made about humans and technology co-creating each other reminded me of the idea in the paper of how “cognitive technology is not just something used by cognizers, but a functional part of the cognitive states themselves”. In this sense both cognitive processes (doings of a human and doings of a piece of technology) are working together at a level of function where each is playing a role to the cognitive state of an individual.

  9. “So what difference does it make if you recall it through an unconscious retrieval state in your brain, or by Googling it (again relying on a state in some remote computer and database of which you are not conscious)?”

    I think one of the most important distinctions between an unconscious retrieval state in your brain and Googling it is the difference in the time and the effort to perform these actions. Obviously with offloading brainwork, it is easier and faster to perform multiple tasks at once.

    For example, if I wanted to figure out which mushrooms were safe to eat and what was the safest way to cook them, I could potentially perform all of these tasks on my own. However, I could also ask other people to do these tasks for me and I would simply receive all of the information. The latter option would be much faster because I could ask several others to perform these tasks simultaneously, collapsing the timeframe necessary to acquire this information. This does not mean that I do not have the capacity to perform these tasks, it simply means that the latter option is much faster.

    The same can be said about Googling information and trying to remember the information on my own. If I really wanted to, I could sit down and try to remember the specific fact that I need to retrieve. Let’s say that this task would take me several minutes. After those several minutes, I would have the information that I needed. If I Googled the information, it would only take me a few seconds. The end products of either procedure would be the same, I just chose a different, faster route to get to this end product.

    Replies
    1. But the question is whether fast-googling makes Google part of my mind. And if so, then how fast? Is slow-googling not part of my mind?

    2. In response to Reginald: “This does not mean that I do not have the capacity to perform these tasks, it simply means that the latter option is much faster.”

      Just because the option to offload allows for the function to be performed faster does not mean that it should be included as part of the mind. The ability to perform the function of offloading is part of cognizing, but the search retrieval of the computer program is not. The difference between you and the computer is that while you both have the capacity to perform this one instance of search-retrieval, you also have the capacity to do other things, including making the decision and performing the action of offloading. This distinction also answers Professor Harnad’s follow-up question. No, I do not think that either fast or slow googling is part of your mind. Speed does not a cognitive state make. Although we can choose to offload to Google, Google cannot choose to offload to us. It does not have this capacity because it is merely a tool to be used. We do not think that a hammer is part of our strength because it allows us to force a nail through wood; we understand that the hammer is a tool to channel and multiply force onto the nail. Google is the same thing. It is a tool that allows us to channel and multiply the information we access, but it is not part of our own intelligence or mind.

  10. I think this article by Dror and Harnad effectively dealt with a lot of the topics raised in Clark and Chalmers' "The Extended Mind" article.

    Firstly, the migraine test:
    "The migraine is merely our stand-in for the capacity to feel anything at all -- in other words, for being conscious"

    This addresses the questions raised of the super-organisms, such as of ant-colonies, and even of all life on earth. Earth (and its related life forms) cannot be considered as a conscious entity without the "emergent" capacity to feel (a migraine or anything else).

    "But the fact that there is no such thing as an absolutely isolated local entity or state is not what we mean when we ask whether cognitive states are narrow or wide. Otherwise, the state of a toaster, toasting bread, is wide too, and includes not only the toaster and the bread, but also the events transpiring on faraway Alpha Centauri.
    But, leaving aside the physics and metaphysics of wide causality and action-at-a-distance: what about just the toaster and the bread? Does the 'state' of a toaster, toasting bread, include the bread, being toasted? It seems obvious that this distinction, too, is arbitrary, hence trivial: We can include the toaster in a distributed hybrid state and call that a state of the toaster, or a state of toasting. Or we can say that the toaster does what it does, and the bread gets done to it whatever is done to it, but we will consider their states as distinct, acting upon one another (more the toaster acting on the toast than vice versa, unless the toast catches fire) but not a joint, distributed state worth speaking of as such, in useful discussions of either toasters or bread, and their respective functional states and properties."
    I think this is a very important point. Things are inevitably affected by other things, minds included. As I stated in my comment on the Clark and Chalmers article, this doesn’t mean that these affectors are extensions of our mind. The distinction Dror and Harnad make later about tools is very relevant. Intuitively, I'm much more inclined to agree with this article as opposed to The Extended Mind. It simply makes more sense.

  11. “An affirmative answer to the question of whether, if the parts of my brain that control the left and right sides of my body could be moved out of my brain and two miles apart, while still being able to remotely coordinate my walking, does not address the question of whether cars or calculators are a part of my mind.”

    This point really resonated with me. A T3 robot that passes the Turing test might have parts outside of its immediate physicality that communicate with it wirelessly and help it to be indistinguishable in performance capacity from a human. However, the parts doing the wireless communication would actually just be extensions of the robot’s 'body.' This is why ‘distributed cognition’ is a misleading phrase. Based on my understanding it is really about questioning whether or not the mind ends where the body ends and has nothing to do with how spread out the components of the cognitive system are.

    This article seems to claim that mental states are felt states and that it is unlikely that felt states extend beyond the bounds of the body and into the calculator. This meshes well with my inner grandmother objector’s horror at the idea of giving my calculator credit for being part of my cognition. But we know that many aspects of cognition are not felt. If we were consciously aware of everything that went on between the input and the output, there would be no need to try to reverse engineer cognition. This seems to imply that there is a slim possibility that the calculator could be part of the functional mechanism of cognition.

    What it comes down to for me is the question of why we would bother including the whole calculator in the functional mechanism when we could just include the output that causes us to punch things into the calculator and the input we get from the calculator and come to the same endpoint. A calculator can give me sensorimotor input that contributes to my mental state but I don’t think it will ever be part of my cognitive system.

    Replies
    1. Good points, but the brain has both felt and unfelt states. What is the difference between the external activity of the calculator and an unfelt state in the brain?

    2. "What is the difference between the external activity of the calculator and an unfelt state in the brain?"

      I don't think there is a difference (within the context of this discussion) between the two when we are considering cognitive states. Both can also act as inputs to felt states in the brain, which influence and inform feeling but don't feel (in the sense that is possible with neural circuitry, like we do) themselves. It's difficult to explain why I think this. I think that given brain plasticity (i.e. that the brain is able to adapt and change after damage) and that functionality is localized in different places of the brain for different people (say right vs left hemisphere language lateralization) I can accept that people have cognitive states in different ways. And in that sense, a calculator can compute just as our brains do.

      Another thing that comes to mind is the presumable uniqueness of our feelings, given our past experiences and the context we are feeling in. If we consider these factors as contingent on processes outside our ears, I don't find it difficult to make the leap to think a calculator or another "external" device could be part of our cognitive systems, if these devices are aiding in some form of cognition. However I don't think this is true for feeling.

    3. I agree with Vivienne that the external activity of a calculator and the unfelt states of the brain are the same in that they both can act as inputs to felt states in the brain. However, a calculator is a tool whose use we actively control, and unfelt cognitive states are not. It feels different to think "I'm going to punch square root 81 into my calculator to get a quick answer" than it does to pull that information from memory. There's no equivalent of punching buttons on a calculator or hitting keys on a keyboard for an unfelt cognitive state. Unfelt cognitive activity is organically the product of our mind, but there's no distinct procedure that we actively need to follow in order to get the result we're looking for. We don't need to do anything besides think to get unfelt processes to generate a felt state. We could have no control of our limbs and still rely on our unfelt processes. Cognitive technology just consists of helpful tools that come and go in our lives. These tools require us to actively MAKE THEM part of the system.

  12. “(19) We are not aware of the generating mechanism underlying our cognitive
    capacity, however, only of its outcome: Hence retrieving a word from memory
    or retrieving a word via a Google search feels much the same to us.”

    I disagree here with Dror and Harnad’s position that retrieving a word from memory feels the same as retrieving a word from Google. I appreciate that the readiness, abundance, and accessibility of cognitive technologies mean that we take them for granted in our everyday usage, but I don’t agree with the idea that we don’t still feel a strong distinction between ourselves and the distributed cognitive technologies we use. I would agree that we rely heavily on them, but I am certainly always aware of the boundary between what I really know and what I know how to find on the internet. I also have a tendency to rebel against cognitive technologies that are distributed separately from my body, because I feel some sort of ownership and protectiveness over my intellectual integrity, and once in a while I become very aware, and very resentful, of my utter dependence on cognitive technologies. This rebellion manifests itself as my choosing not to look up a word on Google when I know it’s somewhere in my brain: instead of taking 20 seconds to look it up online, I will stew on it for hours in the hope that it comes to me (which it often does). Sure, this is less efficient and probably a waste of time, but I often feel a strong resistance against my dependency on technology, and against how intertwined with my mind and cognition computers have become. I feel a conscious awareness of the “generating mechanism underlying [my] cognitive capacity”, and this makes retrieving a word from memory feel much more real than retrieving it from Google, even if the outcome is the same.

    Replies
    1. What about things you do interactively with cognitive technology: Which part is mental and which part is not?

    2. To Julia:
      I think a problem with your stance could be that you do not really know what it's like to retrieve that word. Eventually, yes, it pops into your head, but in such cases it's not really through thinking of all the related words you can and eventually reaching that one. It just appears in your mind the way Penny Ellis did in Ethan's. It is in that respect that it feels the same. The process of your mind retrieving that word remains a mystery, which is why it's similar to Googling something: we don't really get the mechanics behind it--it just happens. I do understand your reluctance to accept such a concept. I had the same reaction, because it seems like such a foreign system cannot do things the way we can.

  13. "We (or rather, our mind-reading mirror neurons) insisted, in the case of the Siamese twins with only one body, that even if Biology were to tell us that they were one single organism, they would still be two distinct cognizers, if they had two distinct minds: They would not have one, shared mind, even though they did have one, shared body."

    Earlier in the article, Dror & Harnad stated that it is ambiguous whether or not a person suffering from multiple personality disorder (MPD) has multiple minds. Can the case of the Siamese twins be extended to MPD? There are clearly two minds within this person; however, they can't both be 'online' at the same time. Because one personality cannot recall personal information beyond its own, these are clearly two different individuals--two different minds. In fact, their behaviors are completely different too, in accordance with the change in identity. Now, on to the migraine test. Each identity, presumably, can experience a migraine. But the multiple personalities cannot experience migraines at the same time--does that change anything about the fact that they're different minds? If personality X experiences a migraine and then personality Y takes over, does Y still have a migraine? If it does, is it the same migraine?


    "We know what it’s like to have a mind, because we each have one. The rest is our “mirror neurons,” detecting when another mind is in a mental state like our own, because it is doing something like what we would be doing in that mental state."

    Extending upon this: if we have a T3 robot that can also 'mind-read' our behavior, can we take this as evidence of its having a mind? Does something need to have a mind to be able to predict what mental state others are in?

    Replies
    1. Multiple personality (if it's really two minds in the same brain [at different times]) does not touch on the extended-mind question: they're two minds, two feelers, not an extended feeler, any more than the Siamese twins are.

      Ordinary mind-reading capacity (which all of us have) is already part of T3.

    2. "Extending upon this. If we have a T3 robot that can also 'mind-read' through our behavior, can we attribute this as to it having a mind? Does something need to have a mind to be able to predict what mental state others are in?"

      If we are saying that "mind reading" occurs by looking at behaviour and making assumptions based on behaviour alone, then no, this doesn't require a mind. This type of mind-reading is programmable and is part of T3. An example that comes to mind is how Facebook algorithms can "mind read" and display content on a user's newsfeed based on past activity (behaviour) such as likes, shares, etc. This is a type of "mind reading" that doesn't require a mind.
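
      A minimal sketch of this kind of behaviour-based "mind reading" (the data and function here are hypothetical, invented purely for illustration; real newsfeed algorithms are far more elaborate): it predicts what a user will like from past behaviour alone, with no mental states anywhere in the loop.

      # Toy "mind reader": rank candidate posts by overlap with the
      # topics this user has liked before -- pure computation over doings.
      from collections import Counter

      def predict_interests(past_likes, candidates, top_n=2):
          """Score candidates by how often the user liked their topics."""
          liked = Counter(t for post in past_likes for t in post["topics"])
          ranked = sorted(candidates,
                          key=lambda post: sum(liked[t] for t in post["topics"]),
                          reverse=True)
          return ranked[:top_n]

      past_likes = [{"id": 1, "topics": ["cats", "memes"]},
                    {"id": 2, "topics": ["cats", "science"]}]
      candidates = [{"id": 3, "topics": ["cats"]},
                    {"id": 4, "topics": ["politics"]},
                    {"id": 5, "topics": ["science", "memes"]}]
      print([p["id"] for p in predict_interests(past_likes, candidates)])  # [3, 5]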

  14. “That is in fact the (narrow) meaning of “cognition”: the kinds of things that I and other living organisms can do, using our minds.”

    Dror and Harnad explain how cognitive technology has allowed cognizers to offload certain functions so that their own brains and bodies do not have to produce them. However, defining cognition as doing things using the mind seems a bit vague. For example, say we were able to produce a programmable chip that is inserted into the brain of a cognizer. The chip allows the cognizer to implement certain functions offline (not using their brain), but they are still aware of how it functions and are conscious of the processes it uses. They are still using their mind to control and activate the chip, but it is storing information that they do not need their brain to store. Would this still be considered cognition? Furthering this thought experiment, say this chip was also able to produce the feeling of remembering information in the brain. It should then be considered part of the mind, as it produces feelings and is able to do things that “I and other living organisms can do”, but it is not part of the functional brain and was not created through biological processes. I am skeptical about calling this cognition, as it is an external resource used to simulate cognition. Just as we say that a simulation mechanism is not the real thing, I don’t believe we can say that something that is able to simulate cognition in the mind is really cognition.

    Replies
    1. Danielle, since your "chip" is just sci-fi, you can give it whatever properties you like. If it is a causal component of the state that generates a feeling (rather than just an input to it), then it is part of your mind; if not then not (whether or not it's in your head).

  15. This debate reminds me of similar questions about identity. For example, if you take a ship and replace one part of it every time something breaks, does it remain the same ship even as you replace each and every part? And if you take all the old broken parts, repair them, and build a second ship, which one holds the identity of the first ship?

    In this debate, we are trying to pinpoint the locus of cognition, much like trying to pinpoint the identity of the ship. Are a pencil and paper part of the cognizing mind or just tools being used by the mind? We know felt cognition (thought) is part of our mind, but unfelt cognition could be in the brain or it could be in a calculator, so where do we draw the line?

    In both these debates, I think it's important to remember the story of Funes, who teaches us that all our categorizations are useful approximations, not exact measurements. With the ships, identity is a category that fails to be useful after so many repairs, but it was an approximation all along, so we shouldn't be troubled by its deterioration. With cognition, we are just choosing the boundaries of our category. Should it be just the brain? Should it be anything that contributes to performance capacity? We should remember that our definition will ultimately be an approximation, and that any answer will fail to capture every situation that implicates cognition.

    Replies
    1. Jacob, I like the Ship of Theseus problem in this context, because although it is good at pumping our intuitions about the fuzziness and approximateness (as you rightly point out, along with your Funesean point) of our metaphysical notions of identity ("what makes an individual thing that thing rather than some other thing?") when the thing is an object or even some (other) person, it fails (or, alternatively, it never gets off the ground) for our own feeling of personal identity:

      (1) Fails: Suppose that here you have me, thinking a thought ("the cat is on the mat," or, better, "Cogito Ergo Sum") and over there is a T3 robot that (for the sake of argument -- and so as not to collapse this boringly into the usual other-minds problem yet again) is certified by the gods to be nonconscious: an unfeeling T3.

      Now we start the Ship of Theseus exercise, gradually swapping parts of me with parts of the T3 Zombie, molecule by molecule. I am standing here, saying and feeling "Cogito Ergo Sum," and T3 is standing there, saying (but not feeling) "Cogito Ergo Sum," as the molecule-by-molecule swap goes on. Now (unless some silly and arbitrary interaction happens), at some definite point I will no longer feel that I am still here, but I will feel that I am suddenly over there (where the T3 Zombie had been), saying and feeling "Cogito Ergo Sum," otherwise exactly as before. The lights will have gone off in my former body, and they will have come on in my new body.

      No problem. It's still just me, with some swapped parts but -- and this is crucial for the extended-mind question -- as long as the right components are there to generate the felt "Cogito Ergo Sum" (and it includes my feeling that it's me, and my capacity to retrieve enough of my memories to keep me from freaking out), the only difference I will feel is that I am standing there, instead of here, where I was a second ago.

      Nothing fuzzy, approximate, or Funesean.

      (2) Never Gets off the Ground: The other variant is that we do not worry about my long-term memories, nor about what happens or happened an instant later or earlier, and the whole molecule swap operates on regenerating an instantaneous "Cogito Ergo Sum" feeling, like one of Funes's unique instants. There too, the change in spatial location will take place at a particular moment in time, but the question of personal identity will not be at stake, because although a felt state will have migrated from here to there, that felt state will not be me, either here or there. It will just be a point-like feeling state, perhaps persisting, statically, or repeating itself every instant as if for the first time, first at the original location and then at the new one.

      Still nothing fuzzy or approximate, but very Funesean...

      It does remind us, though, that in the special case of our own feeling of personal identity, much of it depends on the feeling of continuity (which could of course be completely illusory, with what is actually being felt changing radically from instant to instant, but always coupled with a calm sense of being oneself, as one (feels as if one) had been a second ago...)

      But no support for an extended or distributed mind or feeling state with either (1) or (2).

    2. Hi Professor Harnad and Jacob;
      I talked to you about this 'Cognizer of Theseus' idea after class, and I remember thinking that if this were an ethically acceptable experiment to do, then it would supposedly be possible to use it to localize feeling/consciousness. I wanted to be a little nitpicky and deconstruct the argument further to make sure I understand what it can and cannot teach us. First of all, if we were making the swap, wouldn't we want it to be with a T5, rather than a T3? I'm not sure exactly what we would be swapping if not exact molecules for molecules. Second, I agree that at some point I will definitely stop feeling that I am 'here,' where I started, but where is the guarantee that I will at the same instant feel myself instead being over 'there'? What if my ontological being-as-a-cognizer disappears from my body but does not reappear in the T5 zombie? Would this mean we had not perfected the T5 zombie to begin with? I know you say that as part of this experiment all "the right components are there to generate" feeling, but if they are all there and the robot is NOT feeling (since the gods said so), and all we're doing is swapping molecules, then what is the thing that will cause that robot to feel after a certain number of swaps? Or rather, what reason do we have to believe that it will feel after a certain number of swaps if it didn't feel before? With the Ship of Theseus example, both ships are capable of doing everything that ships can do, which is why swapping is not a problem in terms of functionality. Can this same analogy truly be applied to a human/T5 or human/T3 pair when they have this fundamentally distinct difference?

  16. “(7) The only kind of “technology” that might really turn out to be intrinsically
    cognitive, rather than just being a tool used by cognizers, would be a robot that
    could pass the Turing Test (TT) -- because such a TT-scale robot would almost
    certainly have mental states, and hence it would be a cognizer in its own right.”

    I’m a bit confused by Dror and Harnad's suggestion that “such a TT-scale robot would almost certainly have mental states”, because I feel like this isn’t something we inherently take for granted as part of the Turing Test. If a computer can pass the Turing Test, does this mean it has mental states? Surely a computer, even if it is described as “thinking”, doesn’t experience the same kinds of emotions and mental states as humans (who are largely at the mercy of their neurochemistry) do. I may be mistaken here in my definition of “mental states”, because I automatically group emotions under the heading of “mental states”, but I think when discussing humans it’s impossible to discuss mental states without including emotions, and it therefore becomes hard to draw direct parallels between human mental states and Turing machine mental states.
    This also seems to harken back to the problem of thinking versus feeling, and in this case I think a distinction between the two, and a clear definition of what falls under the umbrella of “mental states”, might be helpful? I’m left a bit confused on this.

    Replies
    1. ‘’I’m a bit confused by Dror and Harnad's suggestion that “such a TT-scale robot would almost certainly have mental states”, because I feel like this isn’t something we inherently take for granted as part of the Turing Test.’’

      The Turing Test does not test whether a robot has mental states; it tests whether the robot can behave the way humans do. However, it is very likely that a TT-scale robot would need to have mental states in order to pass the TT.


    2. ‘’Surely a computer, even if it is described as “thinking”, doesn’t experience the same kinds of emotions and mental states as humans (who are largely at the mercy of their neurochemistry) do.’’

      But can you say that a computer is thinking if it does not have mental states the way humans do? That would mean that there is another kind of thinking, different from having mental states.

    3. Julia, if a system is feeling anything at all, it has a mind: That's what having a mind means.

    4. ‘’I’m a bit confused by Dror and Harnad's suggestion that “such a TT-scale robot would almost certainly have mental states”, because I feel like this isn’t something we inherently take for granted as part of the Turing Test.’’

      I agree with what Marion said above. If a robot is able to pass the TT (behave indistinguishably from a real feeling and cognizing human for a lifetime), then, by the same inference we make for other minds, it is very likely that this robot has felt states. If something walks, talks, and does everything else like us, then we had better assume that it has feelings like us. If you really think about it, we are taking for granted the belief that other people have feelings too. There is no way that you, Julia, can know for sure that I, Alice, am a feeling human being. There's no guarantee that I am not a Zombie infiltrating this skywriting blog, but you assume that my feelings are the same as yours, so that we can share our thoughts and feelings. Bottom line: the TT-passing robot might not have the exact same causal mechanisms that we do, and it might not share 100% the same felt states that we have, but this robot must have some kind of feelings in order to have passed the TT in the first place.

  17. “(1) The notion of the ‘extended mind’ -- with mental states (i.e., felt states) ‘distributed’ beyond the narrow bounds of the body -- is not only wildly improbable (as improbable as the notion that the US government can have a distributed migraine headache) but arbitrary.”
    I wanted to comment on this first point in the Introductory Overview because I found it to be important. The “extended mind” concept discussed in the Clark and Chalmers article is vastly improbable, and I would also go as far as to say that it is impossible. I do not think that one can cognize or have mental states outside of the bounds of the brain, let alone the body. It does not make sense and is quite an arbitrary and ambiguous suggestion if one agrees with the idea that cognition exists in and is produced by the brain.
    “Let us first agree that not everything a human being can do is cognitive. Breathing, for example, except in some special cases, is not cognitive; neither is balance, again, except in some special cases.”
    I do not necessarily agree with the idea that breathing would not be considered a cognitive process, as I think that anything that is under the control of the brain is cognitive. I do not believe that something has to be conscious to be cognitive, and it has been shown that a lot of our cognitive processes go on without us knowing that they are happening. Thus, I would not equate cognition with awareness or consciousness; just because something is unconscious and automatic does not mean that it is “vegetative” instead of cognitive. Important functions, such as breathing and the automatic physical reflexes, are still under cognitive control whether we are aware of them or not. Also, we cannot attribute the origin of these functions to any other organ than the brain.

    Replies
    1. I definitely agree with your last sentence that the origin is the brain, but I think everyone believes that without a doubt. At the beginning I was very reluctant to entertain the concept that extended cognition is possible: how can something that my mind is doing occur anywhere else but inside my skull? I think this is challenged, though, by the example of Otto using his notebook, and by our use of other media to help us cognize. If Otto is relying on his notebook to do things he would otherwise have been capable of doing himself, why isn't it considered cognition? The notebook has instructions and contact information. Strange a concept as it is, it makes a lot of sense that we offload our abilities onto other technologies.


  18. “A nonarbitrary way to resolve this is to accept that only mental states are cognitive states, that cognition is only narrow, and that the only place it is ‘distributed’ is within a single cognizer’s brain”

    This sets practical, location-based limits on our search for cognition. By narrowly defining cognition, we actually have parameters within which to explore. I don’t completely understand how this is considered “non-arbitrary”, though. We don’t yet know what or where cognition is, so it does seem arbitrary to limit it to the brain. In a robot that we reverse-engineered, I see how these limits are non-arbitrary: we would know which parts were intended to create cognition and implement its algorithms.

  19. “ …all just parts of wide, distributed, disembodied cognitive states, taking place here, there, or everywhere: cognizing, with no cognizer (rather like a distributed life, with no organism living it; or a distributed migraine, with no one experiencing it)? Isn’t cognition with no cognizer cognizing it like a feeling with no feeler feeling it?”
    Feeling with no feeler feeling it: to address this you’d have to solve the hard problem well before getting into the possibility of distributed cognition, or a host-less cognitive state. This is impossible. But there’s something poetic about the idea. So, putting aside distributed cognition -- that is, if we consider cognitive technology and software agents to be merely I/O tools that enrich our own cognitive states -- we might ask whether there is anything to be said about the disembodied datum regarding its cognitive potential. Or, more interestingly, its potential for feeling?
    For instance, if we have a fleet of people entering data for a call center, which is shared over a common server and manipulated and interpreted by a specific program, is there anything cognitive about this newly interpreted data before a cognizing being (in this case possibly the company CEO) sees it? The program has done all that it can do -- it has reached the same conclusion as a genius (with a lot of time on her hands) would reach were she to take on the job -- the only difference being that the genius feels what it is like to reach this conclusion and have the results in front of her.
    If we only accept cognition in the narrow sense of being within the skull, then there has been no cognizing up until the CEO checks the output statistics. But until then this output still has the potential to be cognized and felt. To the same extent, the whole process of computing that result holds a potential to be cognized and felt. The difference is that the potential accompanying the former will be realized (by the CEO as soon as he gets around to it), whereas the latter will not, even though it is just as valid a candidate for realization.
    So maybe the question should have been: “Is there anything MORE cognitive about the newly interpreted data than about the process of interpreting it?” Feeling (or rather potential feeling) with no feeler feeling it, as opposed to potential feeling with a potential feeler to feel it.
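
    To make the scenario concrete, here is a toy sketch of the call-center pipeline (the data, names and numbers are all invented for illustration): the program computes its "conclusion," but on the narrow view nothing has been cognized or felt until the CEO actually reads the report.

    # Many agents enter call records; one program aggregates them into a report.
    # The report is computed and stored -- but not yet cognized by anyone.
    from statistics import mean

    call_records = [
        {"agent": "A", "duration_min": 4.0, "resolved": True},
        {"agent": "B", "duration_min": 9.5, "resolved": False},
        {"agent": "A", "duration_min": 6.0, "resolved": True},
    ]

    def summarize(records):
        """Produce the summary that the CEO will eventually read."""
        return {
            "calls": len(records),
            "avg_duration_min": round(mean(r["duration_min"] for r in records), 1),
            "resolution_rate": sum(r["resolved"] for r in records) / len(records),
        }

    report = summarize(call_records)  # computed, but still uncognized
    print(report)                     # felt only once a cognizer reads the output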

  20. “If the mechanism that generates mental states and bodily performance capacity could be more widely distributed in space, and still be integrated somehow so as to generate coordinated mental states and bodily function, then that too would be widely distributed cognition, but that would also be a widely distributed body.”

    It is this claim which, for me at least, really settles the distinctions made by Dror and Harnad between distributed cognition, distributed consciousness, and cognitive technology. At first I wondered why, for example, a calculator, computer, phone, or car could not contribute to the more generalized mental state of a coupled system: a mental state beyond that which I attribute to my own body. But whatever speculation leads me in that direction, the decisive point is that these technological devices, whether or not fully understood in their design, never satisfactorily incorporate mechanisms that generate mental states, nor performance capacities beyond those of which the human cognizer is already capable. A cognizer can think, perform math problems (if provided with the proper rules), and move/vocalize in order to present its thoughts, but even if these capacities are distributed, as opposed to centralized in the brain, they still function under the control of one body and one mind. Thus if distributed cognition remains confined to some autonomous system, then, although the cognitive mechanisms might be distributed, the combined functionality of that distributed action will only produce one conscious mind. When we consider the addition of cognitive technology to an existing cognizer, it seems like we are merely extending the reach of the cognizer’s body by means of enhanced input and output capacities. The power, efficacy, and statistically “better” performance that cognitive technology offers a cognizer does not seem to provide any novel feeling that the cognizer could not derive on its own. In this way, it seems as though an autonomous cognizing system might distribute its cognitive ‘hubs’ perfectly well, but can only really have one mental state. To hope that any larger system, established by the incorporation of multiple autonomous cognizers and additional cognitive technologies, can achieve its own mental state is to delve into the insoluble other-minds problem.

    Replies
    1. Mechanisms have states. Some of their states generate unfelt doings, some of them generate felt doings. The mechanism will include components whose activity is part of a given state, and components whose activity is not part of that state.

      The "extended mind" question is just the question of whether things like books -- or other people's words, or even other people's heads or their feelings -- can be the components of the state that generates a feeler's felt doings.

      C & C say yes, D & H say no, but what do you think (and why?)?

      (There is no controversy about the fact that books, etc. can be sensory inputs to the mechanism that generates the felt doings, but that does not make them a component of that mechanism.)

  21. "But is what these (software) systems are doing (whether they are local pieces of hardware or distributed digital data and the software agents programmed to process them) cognizing, or just something that is similar to what ordinarily requires cognizing to do?"

    Distributed cognition is an intriguing question. If individual cognizers come together, does that make them one big cognizing entity? Furthermore, if non-cognizers (software) form a system that consumes input and produces output among its parts, doing what cognizers can do, does that make the system a cognizer?

    At first glance, I thought the answer would be obvious: the group or system is of course not cognizing. In fact, we should use another word, such as "meta-cognizing", to describe this illusion of a big cognizing entity, because under the hood each individual in the group is just contributing its share of "cognition", and this aggregation of "cognition" is what we see from outside the system.

    However, it does not seem so obvious to me now. Dror and Harnad define "cognizing" as a "mental state", which is a felt state. We all know what a felt state is: it is something that we can feel. But the problem is, how could we know whether another entity can feel? The mirror neurons in our brains give us the ability to feel what others are feeling. But that is just our own impression of what others are feeling. So in essence this is the other-minds problem.

    Furthermore, this makes me question the essence of what we define as feeling. After all, our ability to feel is just a product of neurons firing in our brains. Because of the other-minds problem, we cannot know whether machines could develop the ability to feel as well, through electric circuits.

    Thus, I think answering any of the above questions would be impossible.

    Replies
    1. This is not particularly about the other-minds problem: it's about whether the components of the mechanism that generates a felt state can be wider than the head of the feeler, including things like books, the words of other feelers, or even the heads or feelings of other feelers.

  22. " "Cognition" -- if it is simply defined as (2a) the ability to do the kinds of
    things that cognizers like us can do, plus (2b) the underlying functional
    mechanisms for doing it -- can be arbitrarily defined to be as wide or as
    narrow as we like, and absolutely nothing is at stake."

    After reading the other article, and now this one, I'm beginning to seriously question why we consider neural computation, and most bodily mechanisms, to be an aspect of cognition at all, except in the precise instances where these mechanisms interact with a cognizer's ability to handle information based on meaning. If my brain rotates an object somewhere using pure computation, does this even count as cognition? Why should it? I don't think it should, except at the interface between my feelings and the neural representation of the object.

    Perhaps the term cognition is defined as above simply as a matter of convenience: we can easily study everything except the hard problem and the other-minds problem, so we have included those other things in cognition. I'm beginning to think it might be more practical and reasonable to define these "other things" (things that are neither the hard problem nor the other-minds problem) as external to cognition entirely. We can examine biological computation, and the mechanisms by which we do what we do, without conflating them with cognition. If we ever get to the point where this becomes an issue, we will likely be right at the edge of the hard problem and much better positioned to answer the questions of cognitive science.

    Replies
    1. I'm not sure I quite grasp what you're saying. So you're saying cognition is performance capacity plus the causal mechanism underlying it (from the article). But I'm not sure why this would lead you to believe that mental rotation wouldn't count as cognition. Mental rotation of an object meets criterion (2a) -- it's part of our "abili[ties] to do the kinds of things cognizers can do" -- and (2b) -- there are mechanisms that underlie our capacity to perform mental rotation (we just don't know what they are). From taking psychology classes, mental rotation is something that is very essential to human cognition, especially in terms of spatial working memory.

      Ethan, I think the thing I'm most confused about is: in your opinion, what kinds of things should be considered cognition?

  23. Harnad and Dror raise a lot of interesting questions in their paper, but the one that struck me the most was this: are “calculators a part of my mind”? Or, in more general terms: is cognitive technology a part of my “cognitive state”?

    Cognitive technology is stuff that does things that we can (in principle) do with our minds. Some examples include calculators (since we can do addition, subtraction, multiplication, etc., in our heads), pen and paper (they can function rather like memory: we can store important things there, like addresses) and books (they store a whole bunch of useful info, so also like memory). The people who like the extended-mind hypothesis think that this stuff (books, address books, calculators, etc.) is part of our minds: that what a calculator does is cognition, and is moreover part of my cognition, because I am the one using it!

    Harnad and Dror are not that convinced. They think cognition ends where the body of a cognizing organism ends (this definitely includes humans and animals, and possibly some robots of the future, but I am not going to get into that now). They write that although “cognitive technology can undeniably extend our bodies’ cognitive performance powers in the outside world, only their sensorimotor input and output contact points with our bodies are part of our cognitive (=mental) state, not the parts that extend beyond.” Basically, this means cognitive technology allows us to do more awesome stuff than we could have otherwise (it is pretty damn practical that astrophysicists can use fancy calculators to figure stuff out instead of doing everything in their heads), but cognitive technology isn’t part of our minds. So the stuff happening in the calculator that the astrophysicist looks at isn’t cognition.

    Why do they think this? Because cognitive states are mental states and it feels like something to have a mental state. For something to count as cognition, the person doing it needs to be conscious of doing it (though not necessarily conscious of how they do it). “The essential thing for having a mind is being able to feel what it is like to have and execute the capacity.” When I translate a sentence from English to French in my mind, I am conscious that I can translate the sentence and that I am translating the sentence (though I don’t know how I am doing it…). In contrast, when I ask Google Translate to translate a sentence from French to English, I don’t feel like I am doing the translating. In fact, I know I am not. So that cognitive technology isn’t part of my mind.

    Replies
    1. Hi Jessica,

      Quick Q a la B&T: Active Externalism = "being-in-the-world?"


  24. This reading articulates a lot of what I was trying to say in my responses to the previous reading. I now realise that cognition can encompass unfelt states -- that feeling is, in fact, just a part of cognition, and many cognitive processes, such as recalling memories, are unfelt.
    I’m curious—can one still define being a thinking thing, as feeling the way it feels to have some felt cognitive processes and some unfelt ones? (That being said, I’m still confused about the relationship between thinking and cognition).
    I want to say that the difference between a calculator and an unfelt state of mind is that the mind has the capacity to feel, while the calculator doesn’t. This feels incomplete, however, and I’m not sure how the question could be fully addressed.

    Replies
    1. "(That being said, I’m still confused about the relationship between thinking and cognition)"

      I think thinking (heh) is a part of cognition. Feeling is also a distinct part.

      "I want to say that the difference between a calculator and an unfelt state of mind is that the mind has the capacity to feel, while the calculator doesn’t. This feels incomplete, however, and I’m not sure how the question could be fully addressed."
      How I think of the mind/calculator question, which might add some completeness, is that there are plenty of differences between a calculator and an unfelt state of mind, ranging from mechanism of action to content to capacity to timeframe and inputs or outputs. However, the important thing to consider is the feeling generated in the moment. So those differences don't really matter; what matters is that these unfelt cognitive processes or cognitive technologies are implicated in feeling, but aren't feelings themselves.

  25. I am content to agree entirely with points 2, 4, 6-19.

    The general idea of distributed cognition is new to me and I'm not ready to agree with it. How could it be that cognizing includes external objects? How could we comfortably say that everything around us is part of our cognitive abilities? We don't know how we perceive things, how we remember things, how we register sounds. If I see a chair, the chair's being a chair doesn't do anything, but recognizing it as a chair feels like something, even though I have no clue how I did it. The chair's being there does not affect my feeling if I choose not to pay it any attention.
    Sure, we interact with non-cognizing and cognizing beings, but with cognizing ones we are just sharing information. Their cognizing, though, is not part of mine. I have become more and more certain that the mind, cognition, is limited to the physical body of the cognizer. Still, this so-called "cognitive technology" has affected and extended the power of cognition by increasing the amount of collaboration that happens between cognizers.

    I was stopped by one thing, though. I understand that there can be living organisms with no mental states, but how can there be nonliving systems with mental states? If having something mental means having a mind, then how can you have a mind when you are nonliving? If we are to say that it is a robot that passes T3, then that is a dynamical system, and therefore an organism with the property of life... right? I guess then we must be arguing about whether the nonliving can have mental states and a mind. And that's just something people will always debate, and perhaps may never prove, because they don't want to know that the answer is no.
    Perhaps I'm missing something.

    Replies
    1. Hi Alisa,
      I'm interested in one sentence you said, how you're convinced that "the mind, cognition, is limited to the physical body of the cognizer." This is something that I think will be ultimately my major takeaway from this class, because I too wonder about this a lot - obviously, there is nothing else with the same physical dimensions in the world as humans that can also think, feel, categorize, etc. as we do (if you're not a computationalist). But if the only difference is a matter of physical properties, then shouldn't we be able to do reverse-engineering and build or at least theoretically figure out what is the physical structure or structures interacting in our brains that impart us with these abilities? Is the 'stuff' that makes up the body the same 'stuff' that constitutes thought/feeling? If it were I'm confident that we would eventually be able to localize it, yet despite the many advances of [neuro]science this possibility seems increasingly unlikely.

  26. I agree mostly with what Dror and Harnad say in their paper Offloading Cognition onto Cognitive Technology (if I understood correctly), so I will first give a basic summary here (to check whether I have anything wrong).

    First of all, Dror and Harnad compare what a brain can do with mental states, so basically brain states include mental states: among all the things that we can do, some are felt and some are not, and those that are not felt, like breathing and balancing, are not mental states. Within mental states, although we feel when we do, it might still be necessary to separate feeling from doing when we talk about a distributed body or distributed cognition. When I say doing here, it does not include feeling (that is, doing here is unfelt). Doing can be distributed, as I commented on the Clark and Chalmers paper. It is possible for Otto to find out where the museum is and go to that museum just as Inga can. Also, it is possible for a T3 robot to do what we can do. It is possible for Searle to be given Chinese inputs and give back Chinese outputs using some English rules even though he does not understand Chinese. Information located in the brain and information located in a notebook are really no different. Therefore, doing is distributed, because you and I can do the same thing, as long as we are given the necessary ability or materials. However, what is not distributed is feeling. This is also what is missing in Clark and Chalmers' argument. Feeling is only in the cognizer's own head. I feel pain; although I may look like I am in pain, you still cannot feel the same pain as I do. I feel a sore throat; although you may have felt a sore throat before as well, the feelings we have are different, and because of the other-minds problem, we actually do not know how others feel; we can guess, based on our similar experiences, but we can never know their true feelings.

    Also, the superordinate system Gaia attracts my attention and reminds me of the discussion regarding the robot Ethan that we had in class. Why don't we kick Ethan? We said: because he might feel pain. Why don't we kick a dog? Because it might feel pain. We do not really think Gaia is a cognizer, because it is hard for us (or at least for me) to think that the earth feels pain when I kick it. However, although it is hard to imagine, it is also impossible for us to disprove. Maybe it does feel pain, and we just do not know it.

    One thing I am a little confused about is the difference between consciousness and feeling. Both words appear frequently, but I can hardly see the difference. It seems that when we do something, we are conscious that we are doing it, but sometimes we are not conscious of how we are able to do it. To me, this is the same thing as feeling, I suppose: we feel something when we do something, but we do not feel the brain process that enables us to do that thing.

  27. I have to admit that although much of the argumentation for distributed cognitive states makes sense, logically, to me—including other statements which follow from it, for example: “If spoken language widened our cognitive powers biologically, didn’t reading and writing widen them technologically in much the same way?”—I still have trouble including these more distal technologies that, under these arguments, ought to constitute part of this distributed cognition, as follows: “Computers, distributed digital databases and automated algorithms have augmented both the speed and the computing power of our brains, and that newfound speed and power is capable of inducing changes in our mental self-image not unlike the ones that sensorimotor technology can induce in our body image: If being deprived of one’s spectacles or one’s automobile feels rather like the loss of eyes or limbs, being deprived of one’s computer or cell-phone feels like the loss of one’s intrinsic communicative capacity.”

    Of course interpretation of the particular technologies featured above with regards to sense of self is subjective, based on utility, preference, and so on: I myself do feel that, without my cellphone, I am lacking communicative capacities that I am generally accustomed to and am incapable of exercising without—but certainly not that something intrinsic is missing, some part of myself is missing. Perhaps this changes in the future (is this whence cyborgs arose?), but it is not the case now. I do see how this analogy collapses when I compare cellphones to writing faculties, or speech, without which I certainly wouldn't feel myself; but these are abilities, capacities afforded to me by my own body, directly: even in writing, the communication is from my hand, to pen and paper, whereas in terms of the cellphone there is certainly a similar sensorimotor interaction, but the phone does tasks of which I am simply incapable, even at my utmost performance capacity. I would not think of myself as having lost the ability to write (and all the cognitive advantages it presents) if a pen were broken, or missing, or if I had broken a hand; if analogous conditions were applied to a computer or cellphone, I would still have the ability to use them—but certainly not do what they could do, or reap the benefits: and therein, I think, lies my difficulty in collapsing the two.

    Replies
    1. "I would not think of myself as having lost the ability to write (and all the cognitive advantages it presents) if a pen were broken, or missing, or if I had broken a hand; if analogous conditions were applied to a computer or cellphone, I would still have the ability to use them—but certainly not do what they could do, or reap the benefits: and therein, I think, lies my difficulty in collapsing the two."

      My difficulty with this question is more related to the hard problem and my own experience of feeling - do I feel through my cell phone? When it's on my desk and I forget about it? No. When I am talking to someone? Yes. But then there is someone on the other side telling me something. I think what I am trying to get at is that the cell phone is not a part of me, like my ears, so I cannot feel through it - I feel independently of it. It's very much a tool that facilitates interactions with people through language, vision, etc. But it does not give any new modality to feel with.

  28. This is what I got from this paper:
    Extended cognition does not exist;
    Because to cognize is to have a mental state
    To have a mental state is to feel. And anything that doesn’t feel, doesn’t have a mental state.
    The ‘other-minds’ problem does not allow us to know for sure that other people feel, and this issue is unsolvable, but based on our capacity to mind-read other organisms we make the assumption that they, like us, feel and are conscious beings -- in other words, that they are cognizers. As Harnad points out, “we mind-read through a combination of having a mind and perceiving its bodily performance correlates in others.”
    The idea that cognitive technology can be part of an “extended mind” that leaves the boundaries of our skull implies that things outside of us somehow cognize in their own right.
    We can only know and measure what we can do, and try to explain how and why; what we can do is called our “performance capacity.”
    The mind issue, which is really the feeling issue, cannot even begin to answer the ‘how’ question, nor the ‘why’; so how could we even raise the possibility that things outside of us have a mind?
    Cognitive technology simply facilitates things that we can already do, making processes faster and constantly influencing us, the way sensorimotor technology does; both of these just “extend our performance capacities”, and cognitive technology additionally allows us to offload some of the burden that cognitive processes involve. So it is not that our “cognitive tools” (e.g., our notebooks) constitute part of our mind, but rather that they influence how our minds work now and how they will continue to work in the future.

  29. Dror and Harnad (2009) offer some answers to the problems raised by Clark and Chalmers about extended cognition. This debate has particular importance with regard to the exponential development of ‘cognitive technology’, which reproduces, accelerates and extends the abilities of our minds. Language somehow belongs among the primitive instances of this category, and one thing it allows us to do is to delegate cognitive work and load. But to address the confusing form this has taken in Clark and Chalmers’ article, it is necessary first to recall that cognition is our ability to do whatever we can do, together with the mechanisms that allow it, and that only felt (mental) states are cognitive. This means that although technology can complement, enhance and distort our sensorimotor and cognitive capacities, it is still not part of our mental states. Rather, only its ‘input and output contact points’ with our bodies are part of our mental states; cognitive technology is ‘cognitive’ only when it is used by a cognizer. We could even say it provokes cognition, but it makes no sense to say that these objects or systems cognize. As Harnad suggests, most of the confusion arises from the misuse of the term cognition, depriving it of its thought, known and felt content.
