Saturday 11 January 2014

(1b. Comment Overflow) (50+)

9 comments:

  1. Harnad (2005) says that "the software that a machine is running is independent of the dynamical physical level", implying that the mind, or our computational states, is independent of the hardware, in this case the body.

    However, there is some sort of interaction between our bodies and our minds that goes both ways. A person who often gets angry or anxious will put a certain strain on his vital organs, like the heart, and even on his immune system. When the mind isn't healthy, this translates into the body through the secretion of hormones and neurotransmitters. Indeed, it has been shown that people who are HIV-seropositive and have a more positive outlook on their illness show a stronger immune system and live longer than people with a negative outlook on their illness.

    On the other hand, a person who eats unhealthily, doesn't exercise, and puts physical strain on his body will also have that reflected in his mind; those computational states will therefore be affected. Indeed, we are what we eat, and our body also needs to be taken care of. One way this overtaxing manifests itself in our brains and minds is through the heart: its rhythm and magnetic field inform the mind of what is happening in the body. The mind can then respond by entering heightened states of anxiety or even depression.

    Replies
    1. It is absolutely true that a raised heartbeat can (implicitly) affect our cognition. However, some cognitive scientists might say that these are dynamics, not part of cognition, and that we don't need to care about them as long as we can figure out how cognition works. The thing is, we probably won't be able to figure out how cognition works until we figure out how the dynamics work. As Prof. Harnad said, a T2 robot won't do; a T3 robot might, if it can receive all the dynamic stimuli that we do. Only after considering every possible dynamic that could interact with and affect our cognition can we start to understand cognition itself. That will probably take a really long time.

  2. According to the author, Pylyshyn affirms that there exist some "informationally encapsulated modules" that are "cognitively impenetrable".

    Trying to verify this affirmation, I was led to the conclusion that, if it is true, the only behaviors that could be triggered by these encapsulated modules would be behaviors like reflexes or breathing. However, we can decide to perform voluntarily the gestures we would make under reflex circumstances. The same applies to breathing. Thus even the modules responsible for reflex behavioral responses are not cognitively impenetrable, as they can be modulated by our "will".

    Another candidate for modularity could be perception modules. It has been said that the persistence of optical illusions can be explained by modularity: I still see the moon as larger at the horizon even though I know it is not larger than the moon at the zenith. However, auditory illusions could be a counterexample. If I am told the words I am supposed to hear in a song, I will probably start hearing them. This shows that cognition can have an influence on perception.

    Therefore, I don't think such modules exist, or at least I don't see examples of behaviors they could be responsible for.

  3. I wanted to comment on behaviorism’s lack of mechanism. I understand that it doesn’t have a mechanism, but it did purport to have a law, or at least some kind of mathematical relationship between inputs, outputs and reinforcement schedules. This allowed it to make predictions about behaviour. The paradigm of a satisfactory explanation in science is Newton’s law of gravitation which is also a mathematical relationship. We can flesh it out and say that two bodies “pull on” or “attract” one another, but is this metaphor enough to describe a mechanism?

    Also, I liked the example of “Who was your 3rd grade teacher?” to show how little insight we have into our own processes, but I would like to point out what I think is a more radical question than “How did you do it?”, namely “How do you know your own answer is right?” We could go and check her school records, but that would not tell us why she thought the answer she gave was right. As with language, computationalism may be operating under unchecked assumptions as to what truth is.

  4. Harnad (2009) explains that, as stated by D. Hebb, the object of cognitive science is to ‘explain what equipment and processes we need in our heads in order to be capable of being shaped by our reward histories into doing what we do’, whereas behaviorism had previously dismissed the importance of internal processes for the explanation of human behavior, focusing on input/output and learning history. Pylyshyn suggests that the mechanism of cognition is entirely computation, so to explain cognitive processes we just have to find the corresponding programs. Harnad says this idea derives from the Church-Turing thesis (that nearly anything can be computationally simulated), but takes it too far, beyond simulation. Harnad finds Pylyshyn’s insistence on the necessity of a functional explanation of cognitive processes interesting, as he does Pylyshyn’s critique of brain imaging for not providing such an explanation where computational theory can. However, this does not mean that neurobiological phenomena have to be ignored or that they are not part of cognition. According to Harnad, there is a way to find a functional explanation for cognition which includes computation but is not limited to it. The author suggests that the root of the problem is what he calls the ‘Symbol Grounding Problem’: we must explain how the symbols of our symbol system (in a human mind or even a computer) can be ‘connected directly and autonomously’ to the world.

  5. This paper discusses introspection, behaviorism, and computation in relation to their attempts at explaining cognition. Introspection - sitting and thinking about how we think - will get us nowhere. We think we know how we get the answer to the question "Who was your third grade teacher?", but in reality we don't. We may say that we just recall the image of the teacher and apply a name, but that doesn't explain how we recall the image. As for behaviorism, it simply dismissed the question of how we learn from and are shaped by these “reward histories” as unnecessary.
    Computation is a proposed answer to cognition which says that cognition is simply symbol manipulation. This, though, does not account for the importance of the meaning of the symbols. As we know from Searle, simply manipulating symbols and giving output according to a set of rules does not mean we understand. So now we have the problem of symbol grounding. Computation needs to be meaningful: symbols need to have meaning to be useful. So how do we connect the symbols of a symbol system to the things they represent in the world? According to Harnad, the way to begin solving this is to scale the Turing Test up to the robot version, because the email version, based on computation alone, was proven inadequate by Searle.

    Replies
    1. Alisa, you talk about the trouble with introspection: "Sitting and thinking about how we think - will get us nowhere. We think we know how we get the answer to the question 'Who was your third grade teacher?' but in reality we don't. Saying that we just recall the image of the teacher and apply a name but we haven't explained how we recall the image."

      I think the key part of what you are saying is that a possible reason introspection does not reveal enough about the processes by which we think, remember, or cognize is that introspection does not allow us to 'see' or 'observe' what is going on in our brains. To me, this implies that there is something going on at a neural, molecular level that we cannot see, which either causes or influences our cognition. Yet just because something anatomical or microscopic plays a role in cognition does not mean a) that it IS cognition, b) that it CAUSES cognition, or c) that knowing, seeing, or understanding the process will tell us anything about how we do what we do when we cognize, or what we do when we cognize.
      So in a way, the insufficiency of introspection is a good point, but I do not see how it directly relates to the potential explanatory power of computation, or to computation at all. It just shows that our first-person perspective, although it tells us more about our own feelings than anyone else can observe about them, is limited in giving us enough information to reverse-engineer cognition. Actually, I think the ability to introspect is something that must itself be incorporated into a successful reverse-engineering.

  6. Is cognition just a form of computation?
    What's cognition? How we do what we do. Therefore, how the brain does what it does.
    What's computation? Manipulating meaningless symbols.
    What does it feel like? Imagery, but that does not explain anything. A homunculus doesn't explain anything either. To discharge the homunculus, invoke the powers of computation.
    "Cognition begins at the level of the virtual machine."
    For cognition to occur, a machine of any sort has to conduct a simulation in which a formal system computes some sort of state. The simulation is removed from the dynamic world. While the inputs may come from the dynamic world, thought comes from the interaction of digital, discrete parts which operate according to rules. The consequent state is wholly a function of how those parts are situated.
    "Chunking"
    A long computation which explains a complicated behavior can be abstracted into a much shorter algorithm. A bad mathematical theorem fails to tell you anything new.
    Computation depends just on what the symbols look like and on the formal structure of the symbol system. For us, symbols (words, for example) fit into sentences not by their shapes but by their meaning. A symbol system's formal structure means that the shapes can be well-formed; the symbols' shapes have a meaning within the system, and you can put them in the wrong order. In English, a formal grammar would have a subject precede a verb. Figuring out which word is the subject and which is the verb is a different problem, and one that computationalism cannot solve, but a well-formed symbol system will be able to compute. And if it can compute, it can compute anything.
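    To make "depends just on what the symbols look like" concrete, here is a minimal sketch (in Python, with rule names of my own invention, not from the reading) of a formal symbol system: rewrite rules that fire purely on the basis of symbol shape, with no reference to what any symbol means.

```python
# Toy formal symbol system: rules match symbol *shapes* only.
# The system neither knows nor cares what "S", "NP", etc. mean.
rules = {
    "S": ["NP", "VP"],     # a well-formed S rewrites to NP VP
    "NP": ["the", "N"],
    "VP": ["V", "NP"],
}

def rewrite(symbols):
    """Apply the matching rule to the leftmost rewritable symbol."""
    for i, s in enumerate(symbols):
        if s in rules:
            return symbols[:i] + rules[s] + symbols[i + 1:]
    return symbols  # nothing left to rewrite

seq = ["S"]
for _ in range(5):
    seq = rewrite(seq)
print(seq)  # -> ['the', 'N', 'V', 'the', 'N']
```

    The point of the sketch: the well-formed output is produced entirely by shape-matching, which is why the further question (which word actually IS a subject, and what it means) is the part computation alone leaves open.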
    The CT thesis states that any digital process can be captured by a formal symbol system. Furthermore, a formal symbol system can digitally approximate any dynamic system. So, since the brain is a dynamic system, whatever it is in fact doing can be captured and simulated by a computer. But that assumes we know the dynamics of the brain.
    A simulation contains all the same ratios and quantities that a real-life event contains. So, no neurons fire in a simulation of the brain, but the simulation can contain the same sort of electro-chemical ratios and quantities of chemicals which ultimately cause the electromagnetic physics of neural firing. But this would just be a movie.
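    The claim that a formal symbol system can digitally approximate any dynamic system can be illustrated with a toy case: approximating the continuous decay dx/dt = -x by discrete steps (Euler's method; the step sizes here are arbitrary choices for illustration). The computation tracks the dynamics as closely as you like, yet nothing in it actually decays, which is the "just a movie" point.

```python
import math

def euler_decay(x0, steps, dt):
    """Discretely approximate dx/dt = -x (exact solution: x0 * e^-t)."""
    x = x0
    for _ in range(steps):
        x += dt * (-x)  # pure symbol shuffling; no physical decay occurs
    return x

exact = math.exp(-1.0)                # true x(1) for x0 = 1
coarse = euler_decay(1.0, 10, 0.1)    # 10 steps of size 0.1
fine = euler_decay(1.0, 1000, 0.001)  # 1000 steps of size 0.001
# Finer steps approximate the real dynamics more closely; either way
# the result is only numbers standing in for the dynamics.
print(abs(fine - exact) < abs(coarse - exact))  # -> True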
    Outstanding questions:
    "What's the right level of functional architecture?" I don't understand this sentence.
    Why is it homuncular to say that visual illusions are cognitively impenetrable?

  7. Behaviorism ignores how the mind works and focuses only on how the environment shapes our behaviors. The problem is, if it walks like a duck and quacks like a duck, it might not be a duck; it could be a robot duck. A TT-passing robot can exhibit the same behavior as a human, but that doesn't mean it possesses the same cognitive mechanisms as a human (Searle's CRA).

    I think computation gave rise to cognitive science: if machines can do what humans can do, then are we machines? More importantly, are we executing algorithms and doing symbol manipulation in our heads when we think? If not, how do we do it? That's cognitive science.

    "Our minds will have to come up with those hypotheses, as in every other scientific field, but it is unlikely that cognition will wear them on its sleeve, so that we can just sit in our armchairs, do the cognizing in question, and simply introspect how it is that we are doing it."

    At its core, cognitive science is trying to figure out what's going on in our heads that enables us to cognize, perceiving the world as we do. And that cannot be answered by introspection, because the thing (the "homunculus") that enables us to introspect (making those hypotheses) is exactly what we are trying to understand. To understand the homunculus, we would need the help of a homunculus inside the homunculus, which leads to an infinite regress. The problem with introspection is that it ignores how things actually work inside our heads and instead relies on a "homunculus" living in our heads that can tell us how we think. The mechanism that gives us the ability to think about how we cognize is the same one that makes us cognize in the first place. And it lives inside our brain.

    The many fields that cognitive science draws on are different approaches to this problem. Computation: let's make a machine that can cognize and study how it does that. Neuroscience: let's look inside the brain and see what happens when we cognize. Linguistics: since language is such an important part of cognition, let's see how language works. And so on.
