Saturday 11 January 2014

(11b. Comment Overflow) (50+)

10 comments:


  1. Another reading which I really dug, but I'm still fuzzy on it.

    Some take-aways: don't conflate inputs and outputs for parts of cognition. When we read, we are providing ourselves with phonetic (or iconic) input. When we count on our fingers, we are using something to count. The counting doesn't go down on our fingers, but in our heads. So abacus is out too.

    Vegetative functions can potentially be felt, and are therefore part of cognition. Depends on whether or not we're aware of the function, or if we feel past these more primordial functions (we don't feel the nerves innervating our arms until we pick up the weights for a few sets, at which point the gamma and alpha motor neurons reveal themselves).
    I think the Gibsonian affordances can be categorized with those things which are "ready-to-hand." A musical instrument affords an expert musician specific behaviors. Technology elicits from us specific behaviors, "projects" if you will. Everything is a project, and when we fail to feel the instrument, the technology, be it the keypad, the mouse, the computer screen, the binding of a book, we merge with the object. But I do not think this means we are thinking alongside these objects, or incorporating them into our brains. They share the content of our mind, but not the form.

    Questions:
    "Widely augmented input, or just input and output to narrow old me?" What does this mean?
    If we feel the change afforded to us by some technology and we make the technology disappear, so that it's "ready-to-hand," have we extended our narrow consciousness?

    "Is language distributed consciousness?" Yeah, is it?

    Still confused on how Epinoetics works. I sort of get epigenetics: histone tails can carry different chemical modifications, and depending on the modification, the chromatin around the histone either loosens or compacts, which controls whether transcription factors can get at the DNA to start making proteins. Instead of keeping every transcription factor necessary for the appropriate production of all the proteins, we can afford to rely on the environment to do work on the histones. In a way, some of the regulation happens from outside the genome, operating on the DNA from beyond. How do we relate this to thought?
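    The analogy above can be made concrete with a toy sketch (my own illustration, not from the article): the gene's "expression" is gated by a histone state that an environmental signal, rather than anything in the DNA itself, can flip. All names here (`Histone`, `transcribe`) are invented for the example.

    ```python
    # Toy model of the offloading analogy: regulation lives outside the genome,
    # the way cognitive technology lives outside the head, yet still determines
    # what gets "expressed".

    class Histone:
        def __init__(self):
            self.acetylated = False  # chromatin compact by default

        def modify(self, signal):
            # An environmental signal acetylates or deacetylates the tail,
            # loosening or compacting the chromatin.
            if signal == "acetylate":
                self.acetylated = True
            elif signal == "deacetylate":
                self.acetylated = False

    def transcribe(histone, gene):
        # Transcription factors can only reach the DNA when the
        # chromatin is open (histone acetylated).
        return f"protein({gene})" if histone.acetylated else None

    h = Histone()
    print(transcribe(h, "geneA"))  # closed chromatin: no product
    h.modify("acetylate")          # the environment does the work
    print(transcribe(h, "geneA"))  # open chromatin: gene expressed
    ```

    The point of the sketch is just that the "decision" to transcribe is not stored with the gene at all; it is carried by a state the environment writes, which is the shape of the offloading claim being discussed.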

  2. "Systems without mental states, such as cognitive technology, can sometimes contribute to human cognition, but that does not make them cognizers."

    From the start, the article makes clear to me some of the uneasiness I had with Chalmers's claim that things we use to augment the mind's cognitive capacities are part of "cognition": external extensions of mind cannot share our mental or felt states.

    "We then do not need the terms "cognitive" and "distributed cognition" at all, and can just talk about relatively complex and wide or narrow functional states, leaving it a coincidence and mystery (at least at this stage) that every single case of what we used to call 'cognitive' also happened to be mental."

    We definitely do define cognitive as mental, and that's why Chalmers has to subtly persuade people to give up this notion, and why we find the notion of the "extended mind" troubling. (In order for the notion to be coherent we have to attribute cognitive states to non-mental things.)

    "(14) Sensorimotor and cognitive technology can thus generate a perceptual change, rather like virtual reality (VR), making us feel a difference in our body image and causal power (perhaps not unlike what the physical metamorphosis from caterpillar to butterfly might feel like, as one sensed one's newfound somatic capacity to fly)."

    Would the difference between this case and VR be that VR is simulated, whereas when using sensorimotor and cognitive technology our capacities are actually being extended (dynamically)?

    "(20) Does the fact that cognizing is a conscious mental state, yet we are unconscious of its underlying functional mechanism, mean that the underlying functional mechanism could include Google, Wikipedia, software agents and other human cognizers' heads after all? That question is left open for the reader."

    If we call cognizing a "mental state", then it functionally cannot include things outside of our heads. As mentioned previously in the article, the inputs or outputs of mental states can "touch" these things, but they are not included within the box.

    "Is anything really autonomous, apart from the universe itself, or God almighty? This is again the question of causal isolation, and maybe we can again finesse it by settling for commonsense approximations[…]"

    I don't have a background in these types of questions, but I know that causality is a generally difficult issue. Is this the best we can do to demarcate things we call functional systems?

    "[…]what makes some of our capacities cognitive rather than vegetative ones is that we are conscious while we are executing them, and it feels like we are causing them to be executed – not necessarily that we are conscious of how they get executed"

    It is interesting that we are relying on consciousness and feeling to determine which functions are cognitive, because ordinarily we wouldn't include an account of feeling in a definition of cognition—we have no idea if feeling helps us do what we do (cognition).

    Generally, I agree with this article in acknowledging the ways in which cognitive technology is used as a tool to expand our capabilities, but that it is hasty and arbitrary to call it part of our mind, as it is not an internal part of our (functional) mental states.

    1. In response to your comment on the first quote you mentioned: when you say that "external extensions of mind cannot share our mental or felt states," I wonder whether that is really what matters when drawing the border between the mind and everything else. Surely a single neuron, just like an external augmenter, cannot share in felt states; still, the neuron is grouped into the mind. The main objection I have to this paper’s argument is that it does not address the seemingly arbitrary choice of the brain, or the skull, or the body as the limit to the distribution of the mind. When Harnad says that “We can jettison the word ‘cognitive’ and ‘distributed cognition’ altogether and just talk about relatively complex and relatively wide or narrow functional states, leaving it a coincidence and mystery that every single case of what we used to call ‘cognitive’ also happened to be mental,” it seems as though we are ignoring the fact that not all cognitive processes are conscious or felt states. So the feeling that all of our mind is within our body, and that external extensions cannot share in our mental states, is not sufficient.

  3. Related to Evan's thoughts on how consciousness is implicated in our definition of cognition, this also came up for me many times while reading and during the class discussions.

    "But surely consciousness itself cannot be the mark of cognition either, because although when we take conscious control of our breathing or our balance that is undoubtedly cognitive, we are not really conscious of how we control breathing or balance. If we suddenly feel we are suffocating or falling over, we “command” our lungs to breathe and our limbs to right themselves, but we are hardly conscious of how our commands are implemented. It is physiologists who must discover how we manage to do those things."

    I agree that consciousness cannot be the only marker of cognition. For mental states (i.e. feeling states), however, I think it could be. Feeling necessitates consciousness - when a person is anaesthetized, they do not feel (or at least they don't remember feeling). But what I have been thinking about is how consciousness and feeling interact. Are feeling and consciousness the same? I am thinking of internal dialogues, and regulation of emotions. How do we consider those in the context of our discussions about mental vs. cognitive processes?

  4. “all instances of biotic systems are distributed subcomponents of yet another individual mega-organism.”

    This idea really struck me. The paper says that the reason we can’t see an entire biological species as a single organism is that we can’t see that group as having a mind. It’s true that it becomes harder for us to attribute cognitive capabilities to a group, and this reminded me of the blind watchmaker. As a species evolves, how is it that only the genes and behaviors that promote reproduction and survival are maintained, while the ones that don’t are lost? What accounts for this selection? Can we attribute a mind to the blind watchmaker? That would make it easier to understand in this respect, as well as making it easier to attribute minds to mega-organisms. There are proposed processes that might explain how this grouping takes place. These ideas are not well accepted by the scientific community, but some people have taken a large interest in them: the torsion and morphic fields of information. The idea is that information about a specific domain, like the information of a certain species, is stored in a sort of cloud called a morphic field; the torsion field, on the other hand, would contain all the information that is out there. These concepts made it a bit easier for me to grasp the connection between subcomponents of a mega-organism, where space would not be a problem.

  5. I was pleased to read that this article discussed some of the issues I had with the Chalmers article with regard to extended cognition. I also questioned the possibility of the internet being able to be part of us the way Otto's notebook was. While never stating it in a kid-sibly way, I believe this article was agreeing with that statement, and for me this is where the problem lies. I'm not sure I can get behind the concept of the internet being part of what we can do. As previously mentioned, the internet contains things I do not understand (articles about quantum physics, for example), but it also contains information I disagree with, like religion-based explanations of evolution. It is in this regard that my laptop cannot be my equivalent of a notebook: it contains information I never put there or necessarily regarded as true. This is a relevant point because Otto did not remember putting down the information he did, but it was in fact him, and so he can trust that the information was true. I did not generate any of the content I have access to. This is also worth discussing because a counterargument could be that only the information we search for is part of our extended cognition. If so, it's nothing like the notebook! The individual pages of the journal are not what Otto relies on; it's the whole thing. I do believe in extended cognition in terms of calculators and language. When we bring in computers and their databases, it seems to go too far.

  6. “Cognitive Technology: Tools R Us? Does this settle the question of distributed cognition, or does it beg it? The case for distributed cognition is based mostly on cognitive technology: the argument is that even something as simple as an external piece of paper with a phone number on it is a piece of cognitive technology -- a peripheral device on which data are stored. If the phone number were encoded inside one’s brain, as a memory, there would be no dispute at all about its being part of the (internally) distributed cognitive state of, say, knowing or finding that phone number. Why, then, would we no longer consider that same datum as part of that distributed cognitive state just because its locus happened to be outside the cognizer’s body?”

    It seems like the term “cognitive technology” is a fancy way of saying “memory aid”. As D & H point out, if a phone number were encoded in one’s brain as memory, there would be no question that the phone number was part of an internal, distributed, cognitive state. I think that asserting that writing down a phone number counts as “cognitive technology” is far-fetched—a person could have some phone number in their head but want to write it down just in case, or it could be a completely novel phone number not stored in their memory. Either way, writing down the phone number serves as a way to reinforce an existing memory or store information not already stored in the brain. What is common in both these situations is that some offline source has to be consulted—the paper. This ties into the article section on “Distributed Databases,” which points out that consulting one’s memory or asking Google both yield the same result: information retrieved from any sort of database of cognitive technology is not the same as a mental state. Our memory is one form of storage and so are pieces of paper and Google. I agree with the authors’ point that calling the unconscious state delivering input “mental” is not accurate.

  7. What distinguishes a cognizer from a technology that has the ability to do certain cognitive tasks is that only the cognizer has a "mental state". Cognitive technology is therefore a means for cognizers to offload cognitive tasks, typically making the cognitive task more efficient or easier. The paper essentially sets up the argument that, although cognitive technology is a highly useful tool for cognizers, to say that cognitive technology has mental/felt states is arbitrary and absurd. Cognitive technology is compared to our sensorimotor apparatus, where each creates a perceptual change that gives us a feeling about our bodies' causal power. The paper argues that ultimately, the causal mechanisms that change our mental/felt state occur internally. So, as much as cognizers rely on and interact with cognitive technology, the mental/felt state exists within the cognizer, and therefore the mind cannot be extended.

    "(20)Does the fact that cognizing is a conscious mental state, yet we are unconscious of its underlying functional mechanism, mean that the underlying functional mechanism could include Google, Wikipedia, software agents and other human cognizers' heads after all?"

    This raises the question of where causal mechanism and autonomy are located. I think the discussion of wide/narrow mental states relates to how we understand the role of causal mechanisms in a given system. At the interface of conscious mental/felt states there is a causal mechanism in place that accounts for them. The locus of this causal mechanism is within the bounds of the human body, and therefore it doesn't make sense to say that mental states can exist externally. I think the fact that cognitive technologies have been shown to shape our sense of cognition makes it easy to want to attribute cognitive causal power to external sources. The problem with having a broad account of cognition is that everything eventually gets conflated and talking about causal mechanisms becomes difficult.

    "cognition is whatever gives cognitive systems the capacity to do what they can do. It is the causal substrate of performance capacity."

    - not all the things human beings can do are cognitive
    - the things that are cognitive are consciously felt/done
    - unconscious/automatic states are said to be "vegetative"

    "To have a mind is to be in a mental state, and a mental state is simply a felt state: To have mind is to feel something – to feel anything at all (e.g., a migraine). (Having a mind, being in a mental state, being conscious, being in a conscious state, feeling, being in a feeling state, feeling anything at all -- all of these are synonymous.)"

    The fact that cognitive technology and external sensorimotor inputs can alter our feelings to such a great extent makes me question the validity of feeling as a guide to what mind is. Maybe feeling only provides a false sense of agency? Perhaps there's no validity to this train of thought, but I'm constantly perplexed at how easy it is for our minds/feelings to be manipulated - it seems to undermine the idea that the causal mechanism for mental states exists within our heads.

    "Can there be distributed cognition beyond the bounds of the body and the brain? In particular, can external cognitive technology serve as a functional part of our cognitive states, rather than just serving as input to and output from them?"

    I think this is the key question. As mentioned in the paper, offloading cognitive burdens onto cognitive technologies is very beneficial for humans. I would say that yes, external cognitive technology does serve as a functional part of our cognitive states; however, that doesn't mean it has to be conflated with mental/felt states.

  8. I initially liked the idea that super organisms could be conscious (Gaia). My justification for the latter was that earth seems to do things with intentionality or purpose. However, after thinking a bit more about it anything can have such a purpose or intentionality. If my phone screen shuts off to conserve power, then it seems as if my phone had the intention of conserving power. But it only seems that way because I am attributing intention to it. On another note however, I can’t imagine Gaia having a migraine. Therefore, it seems as if my first justification for supporting feeling-super-organisms fails. But then again, because of the other minds problem, we cannot conclusively prove that Gaia cannot feel. For all we know, Gaia can feel but in a manner humans cannot understand. (Anyhow, sorry for the sci-fi drift…)

  9. I came to a realization after one of my Skywritings on Clark and Chalmers’ article. So the following might conflict with my previous view.

    To begin, I do not agree with the following: “We must accordingly ask ourselves why we would want to contemplate such arbitrary extensions of what it is to have or to be a mind, hence to be a cognizer and to cognize? Why would it even cross our minds? The answer is again the (insoluble) other-minds problem: Since there is no way of knowing for sure whether any cognizer other than oneself has a mind, there is even less way of knowing whether or not there can be cognizing without a mind, or even of knowing what the actual geographic boundaries of a mind are.”
    I do not think that the extended mind argument arises simply out of the other minds problem. I think it has more to do with the idea that when we use such tools as Otto’s notebook, it too is cognition. I don’t see this being incompatible with ‘cognizing being a mental state’ because I don’t accept the “skin-and-in” view of the mind. The mind may well be extended beyond the brain, so it follows then, that cognition may be extended. Our mind could be composed primarily of our brains, as well as anything else used in cognition. If “cognition is whatever gives cognitive systems the capacity to do what they can do”, and these extended tools help give us the capacity to do what we do, then why should we not consider them to be part of cognition and, hence, the mind?
