Saturday 11 January 2014

(1a. Comment Overflow) (50+)

7 comments:

  1. Pylyshyn says that when building a formal system of the type that might be somehow equivalent to human cognition, "the symbolic expressions have a semantics, that is, they are codes for something, or they mean something. Therefore the transformations of the expressions are designed to coherently maintain this meaning or to ensure that the expressions continue to make sense when semantically interpreted in a consistent way." From my understanding, when we do this, we maintain a correspondence between the objects and a specific ruleset for manipulating their symbolic representations, and so we preserve the meaning of these representations (this is how they come to "represent" something outside of the system).

    Pylyshyn also describes the "3 levels" of computational models, which he claims hold for the mind as well—semantic, symbolic, and physical. My question is, if we have the ruleset for manipulating symbols stored at the symbolic level, is it accessible to our consciousness? Intuition says that we are in some sense conscious of the meanings of our thoughts on the semantic level. Are conscious meanings stored differently than the meanings that are made up of symbol manipulation rulesets? Or is it possible to somehow phenomenologically gain access to these rulesets?

    Further, it is unclear whether this account of meaning holds up if we take a stance of strong equivalence between the mind and some sort of universal computer. If we take this stance, we are claiming that meaning, as instantiated in the human mind, consists of having a certain set of rules guiding the manipulation of a symbol. Common intuitions about what a concept is seem to contradict this definition (but I concede that intuitions gleaned from introspection are not necessarily informative).
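
    To make the first point concrete, here is a minimal Python sketch (my own toy example, not Pylyshyn's) of a symbol system whose purely formal rule preserves its semantic interpretation, in the sense that the transformed expressions "continue to make sense when semantically interpreted in a consistent way":

    def interpret(numeral: str) -> int:
        """Semantic level: a string of '|' strokes is read as the number of strokes."""
        return len(numeral)

    def add_symbols(a: str, b: str) -> str:
        """Symbol level: 'addition' is just concatenation of stroke strings,
        a shape-based rule that never consults what the strings mean."""
        return a + b

    three, four = "|||", "||||"
    result = add_symbols(three, four)

    # The formal rule coherently maintains the meaning: 7 == 3 + 4
    assert interpret(result) == interpret(three) + interpret(four)
    print(result, "->", interpret(result))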

  2. Pylyshyn discusses the issue of control: he writes that "understanding the nature of control was the articulation of the idea of feedback from the environment to be controlled", and that "when the environment is passive then initiative must come from the device."

    I find that this idea contradicts, in some sense, the claim that "cognition is quite literally a species of computing". How can initiative be generated internally, without any external stimulus, if we are talking about computation?

    We can see that control is relayed to different loci in cognition after the introduction of a given stimulus. For example, interpreted inputs in the dorsal and ventral visual pathways are combined into progressively more complex signals, which ultimately give rise to our vision. This processing of information fits the definition of computation as Pylyshyn describes it (see the sketch below).
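
    As a cartoon sketch of that idea in Python (my own illustration, not a model of the actual visual pathways), staged, rule-governed transformations of encoded input that are later combined are exactly the sort of thing Pylyshyn counts as computation:

    def ventral_stream(image: dict) -> str:
        """Toy 'what' pathway: classify the object from a coded feature."""
        return "cup" if image["shape"] == "cylinder" else "unknown"

    def dorsal_stream(image: dict) -> tuple:
        """Toy 'where' pathway: extract a location code."""
        return image["x"], image["y"]

    def combine(what: str, where: tuple) -> dict:
        """Later stage: merge the two partial codes into a richer representation."""
        return {"object": what, "location": where}

    raw = {"shape": "cylinder", "x": 3, "y": 5}
    percept = combine(ventral_stream(raw), dorsal_stream(raw))
    print(percept)  # {'object': 'cup', 'location': (3, 5)}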

    However, cognition also involves feeling, which enables internally generated initiative. For example, a person suffering from anxiety might bring on a panic attack just by thinking about it, without any external stimulus, whereas the same thought would not lead to that reaction in another person. This also demonstrates that cognition can result in two different outcomes in two different people, which wouldn't happen in any other computing device.

    From this point of view, I think cognition might transcend the boundaries of computation, and therefore, can't be a species of computation.

  3. Pylyshyn describes his classical view as having three distinct levels of organization. The first, the top level, is the semantic or knowledge level, at which we explain why we do what we do. Second, we have the symbol level, where the content of knowledge is encoded in symbols. It is here that I take issue with the sentence: "the codes and their structure, as well as the regularities by which they are manipulated, are another level of organization of the system," (p 57). Is this to say that the manipulation of the code is then done at the semantic level? One would expect the algorithms by which we manipulate the symbols at the symbol level to lie somewhere in between our knowledge and the computation that creates that knowledge. The third level is simply the physical level, i.e. the hardware on which the system runs.
    Pylyshyn goes on to discuss an example of a calculator, asking why certain computations take longer to compute than others (p 57-58). In order to answer this question, he tells us that we must “refer to how numbers are symbolically encoded and to what particular sequence of transformations of these symbolic expressions occurs,” (p 58), which I take to mean finding how the numbers are encoded and then finding the particular algorithm that acts on those encoded symbols. According to Pylyshyn, “this is an explanation at the symbol level.” How is it that he can state that symbol manipulations are on another level when defining the system organization of the classical view, but here tell us that the symbol manipulations are in fact on the symbol level? To me this is unclear. While it is easy to see that computation is symbol manipulation, there must be a separate level on which the symbols are stored or encoded and another on which they are manipulated, since the manipulations, what I would call a method in a computer program, are called first and the symbols to be manipulated are called second.
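
    Here is a hedged Python illustration (my own, not from the text) of Pylyshyn's calculator point: which inputs take longer depends on how the numbers are encoded and on which sequence of symbol transformations the algorithm performs, not on the numbers' meanings alone:

    def mult_repeated_addition(a: int, b: int) -> tuple:
        """Multiply by adding a to an accumulator b times; cost grows with b."""
        total, steps = 0, 0
        for _ in range(b):
            total += a
            steps += 1
        return total, steps

    print(mult_repeated_addition(7, 3))     # (21, 3): product 21 after 3 steps
    print(mult_repeated_addition(7, 3000))  # (21000, 3000): the same kind of problem
                                            # semantically, but far more symbol
                                            # manipulations at the symbol level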

    Another interesting thing I found in this article is when Pylyshyn says: “how the human visual system does in fact compute that function is a question whose answer depends on further empirical considerations. Notice, however, that simply knowing some of the properties of the function that the visual system computes allows one to understand why perception is generally veridical even though… we know that the step from activating sensors to perception involves a fallible process,” (p 64). I am concerned with the notion that vision involves a fallible process. He attempts to define this as an “inferencelike process that is insensitive to general knowledge of the world.” If this is true, how does one create an algorithm that can account for this fallibility without running the “danger of simply mimicking fragments of behavior” (p 65)? While I agree that more neurological research is needed to better understand the potential algorithm employed by the visual system, I am not certain this is possible, particularly if there is a set of interacting processes that lead to vision.

    While Pylyshyn convincingly argues that computation is a necessary and useful tool to create and test models of cognition, I am not sure I agree that all that cognition is, is computation. Pylyshyn himself concedes that “it could turn out that certain phenomena might not arise from symbol processing, contrary to earlier assumptions,” (p 86).

  4. At the very end of his text, Pylyshyn (1989) writes that, within the computational realist view: “it would also not be entirely surprising if some of our favorite candidate cognitive phenomena got left out. For example, it could turn out that consciousness is not something that can be given a computational account.”

    Although my knowledge of computation is quite limited, it seems to me that if cognitive science is to understand the “prototypical phenomena of perception, problem solving, reasoning, learning, memory, and so on,” and consciousness is present throughout these phenomena, then it would be essential not to exclude consciousness from the study of cognition. Soon et al. (2008)* empirically showed that the outcome of a decision in a motor task can be encoded in the prefrontal cortex and the parietal cortex a full 10 seconds before the subject becomes aware that the motor decision was made. From this perspective it seems that entering consciousness is a step that the computer would have to take in order to follow the same process as the human brain, and therefore to allow us to understand how the human brain does what it does. In this case, Pylyshyn’s model would need to include consciousness within the computer’s algorithms. It seems a big mistake to me to exclude consciousness (although I do understand the complexity of trying to give consciousness a computational account).

    *Soon, C. S., M. Brass, et al. (2008). Unconscious determinants of free decisions in the human brain. Nature Neuroscience 11(5): 543-545.

    Replies
    1. The Causal Role of Consciousness (Feeling)

      For those who are interested in reading the Soon et al. article on unconscious determinants of decisions, lots of copies are free on the web. (Always check Google Scholar.) Here's one:

      Soon, C. S., M. Brass, et al. (2008). Unconscious determinants of free decisions in the human brain. Nature Neuroscience 11(5): 543-545.

      That research originates from the work of Libet, which we will discuss later in the course. Here's a quick overview:

      Yes, voluntary action seems to be preceded and caused by unconscious brain processes. But that makes the causal role of consciousness even harder to explain. (The "hard problem.") It just confirms that our capacity to do what we can do (the "easy problem" of passing the Turing Test) may be all that cognitive science -- whether computationalist or hybrid computational/dynamic -- can ever hope to explain causally.

      (That's a lot, but it's certainly not everything. And most of us would say that it leaves out the most important thing of all about cognition: that cognition is conscious -- i.e., felt rather than just done: We are not just the survival/reproduction machines implied by Darwinian evolution.)

  5. About time and flow:
    Pylyshyn explains that a computer’s ability to be “programmed to compute any formally specified function” gives it “a maximal plasticity of function”. He believes that early skepticism towards computation as a model of cognition was based on a misunderstanding of this fact:
    “For example, the Gestalt psychologist Wolfgang Kohler (1947) viewed machines as too rigid to serve as models of mental activity. The latter, he claimed, are governed by what he called dynamic factors, [...] as opposed to topographical factors, which are structurally rigid.”

    I do not think that Kohler and Pylyshyn are talking about the same kind of ‘plasticity/rigidity’, and this equivocation allows Pylyshyn to gloss over an important limitation of machines. What Pylyshyn means by ‘plastic’ is merely ‘versatile’: machines can do nearly anything that can be formally described. The rigidity for which Kohler reproaches machines is their inability to flow continuously from one state to another in the way fields do. Indeed, Turing machines are discrete-state machines, and so their processes are inherently choppy. Given high processing power, computers can achieve apparent fluidity, but they do so by running on a virtual discrete time which iterates fast enough to fool us (see the sketch below). If our brains are also running on some virtual discrete time, or if time is discrete, then why is perception (and everything else we do) continuous? If continuity is conceded, is this not fatal to the strong equivalence thesis?
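
    To make the 'virtual discrete time' point concrete, here is a small Python sketch (a toy of my own): a discrete update rule run at a fine enough time step only approximates a continuous process, even though every transition is a discrete jump:

    import math

    def simulate_decay(x0: float, rate: float, t_end: float, dt: float) -> float:
        """Euler steps for dx/dt = -rate * x; each step is a discrete state change."""
        x = x0
        for _ in range(round(t_end / dt)):
            x += dt * (-rate * x)
        return x

    exact = math.exp(-1.0)  # the genuinely continuous solution at t = 1
    for dt in (0.5, 0.1, 0.001):
        print(f"dt={dt}: {simulate_decay(1.0, 1.0, 1.0, dt):.4f} "
              f"(continuous value {exact:.4f})")
    # Finer steps look smoother, but the fluidity is only apparent.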

    That being said, if it is indeed the case that a satisfying explanation of cognition must offer a mechanism (as opposed to, perhaps, some other, non-linear, metaphor), then the best we can do are computer programs (assuming mechanisms must be formally describable).

    About language:
    “In classical symbol systems the meaning of a complex expression depends in a systematic way on the meaning of its parts (or constituents). This is the way ordinary language, formal logic, and even the number system works, and there are good reasons for believing that they must work that way in both practical computing and in modeling cognition.”
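
    To illustrate what that compositionality amounts to, here is a minimal Python sketch (my own, not Pylyshyn's): the meaning (here, the value) of a complex expression is computed systematically from the meanings of its constituents plus the way they are combined:

    def meaning(expr):
        """Evaluate a nested ('op', left, right) expression bottom-up."""
        if isinstance(expr, (int, float)):  # atomic constituent
            return expr
        op, left, right = expr              # complex expression
        left_m, right_m = meaning(left), meaning(right)  # meanings of the parts
        return left_m + right_m if op == "+" else left_m * right_m

    # (2 + 3) * 4: the whole gets its meaning from its parts, recursively.
    print(meaning(("*", ("+", 2, 3), 4)))  # 20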

    If by ordinary language we mean the average utterance, by the average person, then it is a stretch to call it a classical symbol system as described above. Grammaticality is but one of many dimensions that allow an utterance to be understood, and it is often quite unnecessary (think Facebook, Twitter, etc.). So it must be clear that we are dealing with an objectification of language which inflates the importance of its syntax and downplays that of pragmatics.

  6. In his article, Pylyshyn points out how computation can define human cognition. Computers can be programmed to do just about anything, at least at the level of simulation. Because computers are so powerful, the computationalists' view of cognition is that computation = cognition, and that how our cognition works is no different from the intelligent machines we have created, more specifically a TT-passing machine. If a machine passes the Turing Test, it can take input and produce output that is indistinguishable from a human's.

    What's wrong with that? It's one thing to simulate something, and a different thing to be that something: the algorithm that programmers wrote to pass the TT is not the same as the corresponding mechanisms in our brain. This is what Searle demonstrated with his Chinese Room Argument. The reason is that a machine can pass the TT without knowing a thing about what it's doing; it is simply following the rules blindly, with no intelligence involved. To rephrase this, computation alone is nothing but the manipulation of meaningless symbols; no meaning/semantics is happening in the machine (see the sketch below).
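
    A toy Python sketch of 'following the rules blindly' (my own, far simpler than anything that could pass the TT): a rule table maps input strings to output strings, and the program produces sensible-looking replies without anything in it grasping what the symbols mean:

    RULES = {
        "How are you?": "Fine, thanks. And you?",
        "What is 2+2?": "4",
        "Do you understand me?": "Of course I do.",
    }

    def reply(message: str) -> str:
        """Pure symbol manipulation: match the input string, emit the paired one."""
        return RULES.get(message, "Interesting. Tell me more.")

    print(reply("Do you understand me?"))  # a convincing answer, with no
    print(reply("What is 2+2?"))           # semantics anywhere in the system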
