Saturday 11 January 2014

1a. Pylyshyn, Z. (1989) Computation in cognitive science.

Pylyshyn, Z. W. (1989). Computation in cognitive science. In M. I. Posner (Ed.), Foundations of Cognitive Science. Cambridge, MA: MIT Press.
Overview: Nobody doubts that computers have had a profound influence on the study of human cognition. The very existence of a discipline called Cognitive Science is a tribute to this influence. One of the principal characteristics that distinguishes Cognitive Science from more traditional studies of cognition within Psychology is the extent to which it has been influenced by both the ideas and the techniques of computing. It may come as a surprise to the outsider, then, to discover that there is no unanimity within the discipline on either (a) the nature (and in some cases the desirability) of the influence or (b) what computing is -- or at least what its essential character is -- as this pertains to Cognitive Science. In this essay I will attempt to comment on both these questions.


Alternative sources for points on which you find Pylyshyn heavy going. (Remember that you do not need to master the technical details for this seminar, you just have to master the ideas, which are clear and simple.)


Milkowski, M. (2013). Computational Theory of Mind. Internet Encyclopedia of Philosophy.


Pylyshyn, Z. W. (1980). Computation and cognition: Issues in the foundations of cognitive science. Behavioral and Brain Sciences, 3(01), 111-132.

Pylyshyn, Z. W. (1984). Computation and cognition. Cambridge, MA: MIT press.

64 comments:

  1. As the first comment here, I'm not completely sure of the format/length that these comments are supposed to take... But I'll give it a shot.

    Pylyshyn takes the time to point out that this "new and more abstract notion of mechanism" (i.e. focusing on the imitation of certain unobservable internal processes rather than on the imitation of movements) is "completely divorced from the old-style 'mechanical' considerations."

    I find his claim that his own pet theory of cognition as computation is "completely divorced" from the Descartes-esque theory of hydraulics, or from similarly behaviorist theories, far too strong. Whether we reduce cognition to computation or to clockwork, we are left with a model that is able in some way to interact with and interpret the world around it. Moreover, I think of the clockwork-computer distinction as a gradient. The familiar aplysia, for instance, reacts "mechanically" to a stimulus (shrinking away when touched), yet with its handful of neurons it can be argued to be undergoing cognitive processes in an extremely simplified form. Given that hydraulics or computers would prove equally adept at modelling this action from an external input-output standpoint, we cannot cleanly split the two. Scaling up, it is not inconceivable that a properly tuned clockwork automaton would end up being very much like a computer in its range of actions and in its ability to be programmed to react appropriately to input. In short, I do not think Pylyshyn's sharp division from previous work is apt, because both he and his mechanical predecessors sought to model input-output functions with the means available, and the replacement of hydraulics with 1s and 0s does not inherently presuppose some break in continuity.

    Replies
    1. Computation is not clockwork. It's something much more specific, and formal -- as we'll learn from Turing next week.
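      To make "formal" concrete, here is a minimal sketch (not from the readings) of a Turing machine in Python: the rule table cares only about the shapes of the symbols, never about what they mean.

        def run_tm(tape, rules, state="start"):
            tape = list(tape) + ["_"]          # "_" is the blank symbol
            head = 0
            while state != "halt":
                state, write, move = rules[(state, tape[head])]
                tape[head] = write
                head += 1 if move == "R" else -1
            return "".join(tape).rstrip("_")

        # One rule table: invert a binary string (0 -> 1, 1 -> 0), halt on blank.
        rules = {
            ("start", "0"): ("start", "1", "R"),
            ("start", "1"): ("start", "0", "R"),
            ("start", "_"): ("halt",  "_", "R"),
        }
        print(run_tm("10110", rules))          # -> 01001

      Nothing in the machine "knows" that the marks are binary digits; that is the sense in which computation is formal.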

  2. Pylyshyn (1989) raises three major questions with regard to designing new architectures. His third question, “how to withhold and release the making of decisions until the appropriate times” (p. 69), caught my attention while reading. My thoughts below may not correctly capture the article, as I have no computer science background and I found some aspects of it difficult to understand, especially as the article progressed. Pylyshyn states that the third question is especially important for linguistic problems. He discusses how the evaluation of procedures should be withheld in order to successfully identify the referents of an utterance. However, if the aim were to design architectures to model cognition, this may not be the case. In my linguistics classes, we discussed garden path sentences such as “The horse raced past the barn fell”, in which the reader or listener evaluates the sentence with each piece of new information, and then realizes upon reaching the word “fell” that the sentence structure he or she had mentally constructed was incorrect. At this point, he or she will re-evaluate the sentence to interpret it correctly. These garden path sentences arguably show that we do not always “withhold the evaluation of procedures” “until the appropriate context is available”. Although I may have misunderstood his third question, I do not see how this last point is vital in designing new architectures.
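    For readers without a programming background, the "withholding" Pylyshyn has in mind is close to what programmers call lazy (delayed) evaluation. A minimal sketch in Python (an illustration, not Pylyshyn's own example): a referent lookup can be evaluated eagerly, failing if the context is not yet available, or wrapped in a thunk and forced only once the context arrives.

      # Eager: evaluate the referent as soon as the phrase is read.
      def referent_eager(context, phrase):
          return context[phrase]             # raises KeyError if context isn't ready yet

      # Lazy: package the lookup as a thunk; force it only when context is available.
      def referent_lazy(phrase):
          return lambda context: context[phrase]

      pending = referent_lazy("the horse")   # nothing evaluated yet
      context = {"the horse": "the one that was raced past the barn"}
      print(pending(context))                # evaluation released at the right time

    Garden path effects suggest that human parsing is often eager rather than lazy, with reanalysis when the eager guess fails.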

  3. Horst (2009) discusses the scope of the CTM, and outlines some debated positions on whether computational processes of the mind could occur below awareness or not. The author states that "many advocates of CTM apply the theory, not only at the level of explicit judgements and occurrent desires, but also to a broad array of infraconscious states as well". Horst does not seem to provide his own opinion on the matter, but the conscious/unconscious debate may certainly be an interesting one within the field of cognition.

    Although some convincing arguments are brought in to explain dispositional states (via RTM), I wonder how a more diffuse state like depression could be explained from a computational standpoint. Depression could be argued to be a mental state that occurs without any negative emotion-producing inputs. In other words, someone may wake up and still feel miserable, even though there is no "syntactic representation" that would normally cause depressive emotions. Perhaps, then, it is more analogous to an "error in the system"?

    This unconscious/conscious debate also made me wonder whether computation accounts for interaction between the two states. When, for instance, an unconscious state/emotion/thought is suddenly made conscious, a person might modify his/her behaviour accordingly. To give a clearer example, I might suddenly become aware that I am in a nervous state (from my tone and my body posture), and I might try to shift to a more relaxed attitude. Thus, the complexity of the conscious and the unconscious mind would be, for me, a limitation of the computational theory of mind. From there, we would have to wonder whether computers have awareness or not, which is another debate...

    * Also, by unconscious, I do mean "below awareness" (not the Freudian version)

    Replies
    1. The Mark of the Mental is Feeling

      Florence, you've hit on a key point. Lots of things have internal states, including living organisms like humans or snails, man-made devices such as cars or computers, and physical objects, like atoms and solar systems.

      But what makes (some of) those internal states mental states (which means roughly the same thing as cognitive states) is the fact that they are felt states -- which means they are happening inside feeling things. (And, until further notice, that means only certain living organisms -- and not even all of them, since plants are alive but all evidence seems to be that they don't feel.)

      Explaining how (and why) anything at all can feel is the "hard problem" of cognitive science. (The "easy problem" is explaining how and why organisms can do all the things they can do.)

      A solution to the easy problem is not a solution to the hard problem but -- as we'll learn from Turing -- it's the best that cognitive science can ever hope to achieve.

      In general, computation is just something formal, and has next to nothing to do with cognition (except that cognizers and the machines they build are so far the only ones that do computation).

      But when computationalists propose that cognition is computation, then of course the question of feeling arises. Unfelt computation is not cognition. (Searle makes very clever use of that fact.)

      More about all this in the weeks to come.

    2. Thank you for taking the time to answer our comments. I look forward to hearing more about this debate in the weeks to come.

  4. Pylyshyn (1989) argues that in order for a computer to pass the Turing test successfully it must demonstrate the "plasticity of behavior entailed by symbolic systems", and that, at the same time, the computer must be programmed to behave according to a specified function.

    I think this takes away a bit from the idea that computers can imitate intelligence. The human brain can act in more adaptive ways. When a computer encounters a problem that it does not have a program for, what happens? It will either stall or loop, depending on the problem at hand. The brain on the other hand can run scripts that it is not programmed to do, per se. Its plastic behavior will result in novel solutions to never before encountered problems. How can we capture this aspect of cognition with a computer?

    In addition, computers are programmed by humans for each function. How can we capture this mechanism of programming previously non-existing functions? Is it possible to program a machine that can program all other machines in an adaptive manner?

    Replies
    1. Computer programs can be pretty flexible too. There's nothing about computation that suggests it cannot be as flexible as humans can be.

      And as for who wrote the code (if cognition is computation): no reason it could not have been shaped by Darwinian evolution as coded by our DNA...

      (These questions will all come up again in connection with Turing's famous paper.)

    2. To draw on this some more: some human functions may not be reducible to specified functions, for example human error. Cognition is not a mechanical process with distinct steps, so describing it as a form of computation does not seem right. When we make mistakes, we have our own particular version of stalling and looping, as when one types something too quickly and makes a mistake, or is at a loss for words and has to leave a pause. How can we teach a machine how long counts as an uncomfortable lapse? A computer can have plasticity installed to some degree and can exhibit something like entropy in a conversation, but a symbolic system is not necessarily something that can always be reduced to code. Pylyshyn addresses this in his conclusion, but he implies that his hypotheses hold until we are able to account for the natural occurrences mentioned above.

    3. What can pass the TT remains to be seen. And whether a computer alone could do it, or just a hybrid computational/dynamic system, also remains to be seen. But to pass it means to be able to do all those things we can do, indistinguishably from any of us, for a lifetime if need be. And that includes being able to make mistakes (see the "Lady Lovelace objection" in the Turing paper).

  5. Pylyshyn (1989) supports the idea that "cognition is literally a species of computing." I don't see the evidence or logic behind this claim. That an identical outcome may be reached by a certain process clearly does not imply that equivalent outcomes were reached by similar processes. So with regard to the Turing Test, simply because one can (hypothetically) approximate the outcomes of human cognition through computation does not mean that human cognition itself is based on computation. However, that is what I understood (perhaps falsely) Pylyshyn to be equating.

    To reiterate, the outcome for a Turing Test to be successful is for the machine to be indistinguishable from a human, by a human, and the computer would accomplish this through computation. This possibility doesn't necessitate that humans themselves base their cognition on similar computational processes.

    Replies
    1. The Problem of Underdetermination

      Yes, it's logically possible that a system could have capacity completely indistinguishable from human cognitive capacity, yet the underlying causal mechanism generating that capacity would differ from the human one.

      But, by exactly the same token, the "Grand Unified Theory of Everything" that physicists will eventually come up with, and that predicts and explains everything in the universe, could differ from the true causal explanation of the universe.

      But the chances are slim. It's hard enough to come up with one complete causal explanation of everything, let alone two. And in any case, if the theory predicts and explains everything, we can never be any the wiser about whether it is the right one or not.

      This is called "underdetermination," and it applies to any scientific theory. We'll go into it more when we get to the Turing Test. We'll see that computationalism has a more serious problem than underdetermination: the symbol grounding problem...

  6. Pylyshyn (1989) explains that “computers are… sufficiently plastic in their behaviour to match the plasticity of human cognition” (p. 52) and that “[computers] are also the only known mechanism capable of producing behaviour that can be described as knowledge dependent” (p. 52).
    Having no background in computer science, what I got out of his article is that the study of human cognition can be equated with the study of computation. The above quotes helped me grasp the relationship between cognition and computation. At first, I was hesitant to believe that the internal processes of my brain during cognition could be simulated as a computer program. This idea is quite scary to me, as it portrays my brain as simply a system of symbol processing and manipulation. But as I thought about the above quotes in more depth, I began to see things from Pylyshyn’s perspective. To me, it makes sense how a computer scientist could view cognition as a computer program that is manipulated by input. Since human cognition and computers are plastic, they are molded by the knowledge they depend on. Take, for example, my temperament. The physical hardware can be viewed as my DNA or my physical brain. The software, or my cognitive processes, depends on the input I receive, such as the knowledge I gain through my interactions with friends or family. My behaviour, or my output, reflects how my cognition software has been molded by the input and reacts to the new information at hand. Pylyshyn confirms this when he says that “a symbol processing mechanism can produce any arbitrary input/output function” (p. 54). So, I can be programmed to react (produce output) in a certain way to a certain thing based on the input I received.
    But how does the brain associate that input with something? Perhaps I missed this in the paper, but what I am curious to know is: if the brain is equivalent to a computational system of symbol processing and manipulation, how does cognition associate those symbols with meanings? For example, take a simple encounter between my friend and me. My friend gives me a piece of candy. My programming has been molded to associate the piece of candy with something good, and I begin to like my friend because he gave me a piece of candy. My software could simply tell me to like my friend more if I receive a piece of candy from him, but how does cognition/computation recognize the object I have received as candy in the first place? Is my programming simply telling me, “if you receive this particular object, then do this specific action”, or do I understand what the object actually is? Is it a different module of cognition/computation? Or is it even a part of cognition/computation at all?

    Replies
    1. According to computationalism, it is not just that cognition can be simulated by computation: cognition is computation. You are right that the problem for computationalism is how symbols get their meanings. That's the symbol grounding problem.

  7. Addressing Pylyshyn, Z (1989) Computation in cognitive science.

    Caveat: How computation works is almost as confusing to me as cognition itself. Not sure if that’s a +1 for computation and/or a -1 for me.

    “At the most abstract level the class of mechanisms called computers are the only known mechanisms that are sufficiently plastic in their behavior to match the plasticity of human cognition. They are also the only known mechanism capable of producing behavior that can be described as knowledge dependent.” (p.52)

    If we are willing to include implicit knowledge, it seems that there are non-computational mechanisms which are knowledge dependent, the simplest example I can think of being the flush-valve mechanism in one’s toilet. The valve which lets water drain out of the tank is opened by the lever attached to the floating ball, which rises with the water level. Is this feedback loop not knowledge dependent if one allows for implicit knowledge? In more biological terms, wouldn't the reflex arc fit the bill as well? Forgive me if my interpretation of “knowledge” within this context is too broad.

    Moving along, I found the distinction Pylyshyn makes between Weak Equivalence and Strong Equivalence to be insightful. Brains and computers have different functional architectures and, while establishing weak equivalence is useful in its own right (showing how one can engender the same outputs from the same inputs gives us a huge clue as to what the underlying process that brains utilize might consist of), the principle of strong equivalence seems to me to be the necessary parameter for establishing an explanation of human cognition. The same algorithm can be instantiated on different architectures/hardware, but humans are not just humans because of our input/output relation; our input/output relation is also constrained within temporal limitations. We are, in the end, heuristic machines, and so it seems a little strange to ignore the variable of time, whether in relation to the real-time behaviors exhibited by humans or to the long process of “programming/calibration” that infants and children go through to attain what we call normal human cognition.

    It seems to me like the task at hand, namely finding a computational theory of cognition, is even more difficult than what is presented. “The classical view assumes that both computers and minds have three distinct levels of organization.” (p. 57) Can the brain/mind processes truly be split so cleanly?

    “For example, in psychophysics we assume that if a measure (such as a threshold) changes systematically as we change the payoffs (that is, the relative cost of errors of commission and of omission), then the explanation of that change must be given at the knowledge level - in terms of decision theory - rather than in terms of properties of sensors or other mechanisms that are part of the architecture.” (p.79)

    Does top-down modulation that acts directly on primary sensory neurons, or even on second-order neurons at the level of the spinal cord, somewhat complicate this three-level framework? Considering that our species’ brain architecture must have evolved through the process of natural selection, I have a hard time believing the design at hand wouldn't be as messy and possibly counter-intuitive as everything else evolution seems to have produced.


    Replies
    1. The hardware-independence of software

      A reflex loop is not cognition, it is just dynamics -- just as a toilet flusher is not cognition.

      On weak equivalence (same input/output) vs. strong equivalence (same internal "steps" between input and output): When we get to the Turing Test, one question will be whether it requires weak or strong equivalence. (The answer will be that it only requires weak equivalence, but that it must capture dynamics, including timing, too; and dynamics is not computation.)
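      For concreteness, a minimal sketch (mine, not Pylyshyn's) of weak equivalence: two programs that compute the same input/output function by different algorithms, and so are not strongly equivalent.

        # Weak equivalence: same input/output function, different algorithms.
        def sort_by_insertion(xs):
            out = []
            for x in xs:                       # insert each item into place: O(n^2)
                i = 0
                while i < len(out) and out[i] < x:
                    i += 1
                out.insert(i, x)
            return out

        def sort_by_merging(xs):
            if len(xs) <= 1:                   # divide and conquer: O(n log n)
                return list(xs)
            mid = len(xs) // 2
            left, right = sort_by_merging(xs[:mid]), sort_by_merging(xs[mid:])
            out = []
            while left and right:
                out.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
            return out + left + right

        data = [3, 1, 4, 1, 5, 9, 2, 6]
        assert sort_by_insertion(data) == sort_by_merging(data)  # weakly equivalent

      Any test that looks only at inputs and outputs cannot tell the two apart; only internal measures (intermediate states, timing profiles) could.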

      Marr's three levels ("computation"/algorithms/implementation) actually mean behavioral-capacity/software/hardware, and they only make sense if computationalism is true. Otherwise software (algorithms) is not enough: dynamics are needed too, both external input/output dynamics and internal dynamics, e.g., internal rotation of sensory input.

      If computationalism is true, only the software (algorithms) are relevant to explaining cognition; the implementation details (hardware) have to be there, but they are irrelevant. The same algorithm can be implemented in countless different ways. Hence the neural details are necessary, but irrelevant to explaining cognition.

      (PS: "Stevan says": Computationalism is not true.)

  8. “… the issue is whether we can be content to leave it as an implicit assumption—largely conditioned by the functional architecture of currently available computers—or whether we ought to make it explicit and endeavor to bring empirical criteria to bear in constraining it.”
    In this reading by Pylyshyn I could not avoid the reflex of trying to put things in biological terms, but as he clarifies by the end of his chapter, reaching a conclusion of that sort is not among his goals. The aim of hypothesizing an appropriate model for cognition proves complicated; the problem arises, I believe, from the moment we realize, ironically, that the very thing we are trying to explain is being used to uncover the explanation, just as happens in other natural sciences. The intricacies of computing, themselves the result of the workings of the mind, allow us to pose the question of whether a computer model can work at the same level as a human brain, but more importantly, what is this level supposed to be? How do we measure a brain working at 100% and the corresponding cognitive architecture?
    I don’t believe we will ever be content to leave “cognitive architecture” a tacit assumption conditioned by the functional architecture of computers, and we’ll keep striving for equivalents of the how-to and intermediate states of the mind. But in Pylyshyn's framework, reaching empirical criteria to constrain what he calls “cognitive architecture” doesn’t seem (personally) feasible; I cannot help thinking that this would need a biological understanding of the brain that we have yet to achieve.

    Replies
    1. "Cognitive Architecture Meets the Homunculus"

      I don't think you've quite caught what is meant by computation and "cognitive architecture."

      Try some of the other Pylyshyn readings. Computation means formal symbol manipulation. This should become clearer in the Turing and Searle weeks.

      "Cognitive architecture" is Pylyshyn's analogy to when you use, say, a mac, with software that "emulates" Ii.e., makes it look to the user like it's a) PC rather than a mac.

      Pylyshyn applies this analogy to the brain, suggesting that hardware is emulating some kind of user architecture, and that's the "level" at which cognition occurs, not at the bottom binary/digital level.

      (But Pylyshyn's analogy is flawed. Ask me about this in class. The flaw is that we are the users of the macs and the PCs but it is a homunculus fallacy to think of us as the "users" of our brain's "cognitive architecture.")
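      The emulation idea itself is just the familiar computational notion of a virtual machine. A toy sketch in Python (an illustration, not Pylyshyn's): the "user-level" program runs on an emulated stack machine and never sees the Python underneath, just as, on Pylyshyn's analogy, cognition runs at the level of the cognitive architecture and never "sees" the neurons.

        def run_vm(program):
            # A tiny emulated "architecture": a stack machine hosted in Python.
            stack = []
            for op, *args in program:
                if op == "push":
                    stack.append(args[0])
                elif op == "add":
                    b, a = stack.pop(), stack.pop()
                    stack.append(a + b)
                elif op == "print":
                    print(stack.pop())

        # The program is written for the emulated level, not for Python.
        run_vm([("push", 2), ("push", 3), ("add",), ("print",)])   # -> 5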

    2. Thank you very much! I think the clarification helped me work on my other skywriting (hopefully, haha). I will give the other readings a try.

  9. “[Computers'] topographic structure is completely rigid, yet they are capable of maximal plasticity of function.” (p. 54)
    From my understanding (which, I apologize, is limited with regard to computer science), Pylyshyn is arguing the view that cognition is computation. He bases this claim on the notion that the brain is the physical hardware that manipulates input through certain algorithms in order to generate an output. The above statement confuses me, because in order for something to be maximally plastic in its function, the topographical structure should be plastic as well. If one holds that cognition is computation, how can one explain the difference between a brain that is continuously changing and maturing and a computer whose hardware is permanent and rigid? To hold that cognition is computation, I believe one must show how experience can factor into a computer program, because this is what makes cognition so malleable and interesting. Experience has a huge effect on the output/behaviour generated by an intelligent being, and I think it is only once we are able to program a computer to change its basic structure based on experience that we can fully claim that cognition is computation. Until then there will always be a fundamental difference between the hardware of a computer and the hardware in a human, and this is what will cause a difference in output between the two. The points I raise make me believe I am an advocate of strong realism because, as Pylyshyn says, its proponents believe that “a valid cognitive model must execute the same algorithm as that carried out by subjects.” (p. 72) Therefore, for cognition to be represented by computation, I believe the computation should deploy the same algorithms that cognition does and should account for structural plasticity based on experience.
    Perhaps I’ve misunderstood claiming cognition to be computation and equating the two, but I believe that to say that cognition is computation is to say that cognition should be representable by a computer program at some point in time. It seems that Pylyshyn is claiming that what occurs in an intelligent being, i.e. cognition, is something that can be reduced to a computational network. I just do not see this being accurate until a computer is able to change its basic structure, not just its function, continuously as it receives different input.

    Replies
    1. If computationalism is right (cognition = computation) then only the algorithm (software) is relevant to cognition. How it is implemented in hardware is not. And there is already (trivial) software that can change with "experience" (i.e., with new input, new data). But it is not clear (even if computationalism is true) why "strong" equivalence is better than "weak" equivalence, if both can pass the TT. (But don't fret too much about it, because computationalism is not true...) See the T2, T3, T4 hierarchy in reading 2b.
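      For instance (a minimal sketch, not from the readings), even a few lines of Python change their input/output behavior with "experience" while the hardware, and indeed the code, stays fixed:

        from collections import Counter

        class Predictor:
            # A trivially experience-dependent program: what it predicts
            # depends on everything it has observed so far.
            def __init__(self):
                self.seen = Counter()
            def observe(self, word):
                self.seen[word] += 1
            def predict(self):
                return self.seen.most_common(1)[0][0] if self.seen else None

        p = Predictor()
        for w in ["candy", "candy", "spinach"]:
            p.observe(w)
        print(p.predict())   # -> candy: same program, behavior molded by its inputs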

  10. “But whether or not they [psychobiological data] are included has no bearing on the indeterminism arguments because cognitive models are not models of how the brain realizes processes in neural tissue, they are theories that describe cognitive mechanisms that process cognitive representations” (Pylyshyn, 1989).
    Pylyshyn makes a significant point concerning the study of cognitive processes, which is that cognitive mechanisms can and should be studied in their own right, and that we should not simply opt for essentialist data just because they are grounded in biology. He also explains how biological data is subject to similar inaccuracies as psychophysical data. This is an important argument, as research in psychology is often separated based on whether the researchers have taken a top-down or bottom-up approach, which can be equated to what Pylyshyn calls the “high road” and the “low road,” respectively. I think that research in cognitive science would be most effective if these two camps were able to work more harmoniously. It is also significant that the focus not be on aligning the top-down with the bottom-up approach, as this can limit the intellectual creativity of the former; biological data may not necessarily reflect psychophysical data. I also feel that bottom-up researchers can be quite dogmatic when it comes to researching cognitive mechanisms; if a neural correlate has not been discovered, they are skeptical of the validity of that cognitive mechanism. Obviously, this can severely limit research in cognitive science. Also, bottom-up researchers often begin their research with a theoretical assumption in mind that lends itself to top-down research. Thus, the “high road” and “low road” may not be as dichotomous as is frequently imagined, and perhaps uniting the two would be beneficial to the study of cognitive mechanisms.

    Replies
    1. Explain top/down and bottom/up to kid sib...

      If top/down means sensorimotor grounding, then an ungrounded symbol is just a meaningless squiggle without the grounding.

  11. “Substantial components of such phenomena could, for example, require a non-computational explanation, say, in terms of biochemistry or some other science” (Pylyshyn, p. 86)

    The similarity between Pylyshyn and Skinner is striking here. As noted in Harnad’s paper, Skinner “had always dismissed theorizing about how we were able to learn as being either unnecessary or in the province of another discipline (physiology), hence irrelevant.” Skinner deemed the “how” of all mental processes to be unimportant, while Pylyshyn merely downplays the importance of certain mental processes: those that are not computational. Harnad argues that the similarity in this dismissal is not “damning” (because at least Pylyshyn cares about providing a functional explanation of some phenomena), but whether or not it is damning, it is certainly dissatisfying!

    The objective of cognitive science is to explain how and why organisms do what they do. In his paper, Pylyshyn redefines this objective as explaining how and why organisms compute what they compute. But he recognizes that there are things we do that we may not be able to explain by computation, such as “consciousness” and “certain kinds of statistical learning … the effects of moods and emotions …” Shouldn’t cognitive science care about these things too?

    I think Pylyshyn does not want to get involved in “biochemistry or some other science,” even if doing so would help us understand some processes that I would deem cognitive, because “we can no more directly observe a cognitive mechanism by the methods of biology than by the methods of psychophysics.” Pylyshyn writes that “cognitive models are not models of how the brain realizes processes in neural tissue, they are theories that describe cognitive mechanisms that process cognitive representations.” I am not really sure what this means, but I think Pylyshyn is suggesting that biology may not reveal anything about the “underlying process” at stake but merely “the way in which the process is physically instantiated.” In other words (I think), biological explanations do not necessarily provide causal mechanisms for mental phenomena.

    But what if no computational process underlies the physical process (e.g., of consciousness)? In that case, shouldn’t cognitive scientists be satisfied with a physical explanation?

    Replies
    1. Pylyshyn is a computationalist. He thinks something sharply separates "cognition" from other things an organism can do. Let's call those "vegetative" functions (like heart-beat, temperature control, breathing).

      But even within functions that don't seem vegetative at all (like seeing or hearing) Pylyshyn wants to insist that some of it is "cognitive" and some is not.

      And in the end, it's almost circular: "It's cognitive if it's computational (and above the level of our 'cognitive virtual architecture') and not if it's not."

      And he gives "cognitive penetrability" as his test for whether something is cognitive or not: "Can it be changed by something you learn or know? If not, it's not cognitive."

      That's all kind of arbitrary. But it hardly matters, since you need it all to pass the Turing Test. And once you've got a system that can pass it, who's going to quibble about which part is "cognitive" and which part is just "vegetative" -- or, for that matter, whether it's "strongly" or just "weakly" equivalent to what's going on in our heads?

      Pylyshyn's right, though, that if cognition is computation then the hardware is irrelevant.

      I'd add, though, that even if computationalism is wrong, you may not be able to reverse-engineer the brain so as to design a mechanism that can pass the TT -- by just studying the brain.

  12. The cognitive architecture of a system is unique in its form and function through the algorithms it is able to execute. The goal of computation in cognitive science is to explain thought/behaviour at the level of rules and logic (algorithms) which play out in one’s brain (cognitive architecture). Pylyshyn (1989) discusses that one of the methods for testing for strong equivalence (where the human’s and the given system’s behaviour are made to be the same) is cognitive penetrability. To Pylyshyn, in order for a system to allow for strong equivalence it must be cognitively impenetrable, whereby “the input/output behavior of the hypothesized primitive operations of the cognitive architecture must not itself depend on goals and beliefs, [instead it must focus] on conditions that change the organism’s goals and beliefs” (p. 81).
    Since the goal of computational modeling is to understand what rules/algorithms are involved in producing a given thought/behaviour, the idea of cognitive impenetrability is a very important one, because it seeks to get at the most fundamental principles of how things such as goals and beliefs (known as representation-governed processes) arise. In trying to get at the workings of how these representation-governed processes come about, I find it extremely difficult to conceptualise how one would be able to draw the line between the two, and furthermore to assign causality between them. For example, in Sherif’s social conformity study (where participants were asked to judge the distance between two laser points), the majority of participants gave a distance based on what others had said the distance was instead of the correct, measurable distance they could see. Social conformity therefore occurred among the majority of the participants, but not all. How would it be possible to get a computational system to behave in a way that captures this split between participants who did and did not socially conform? Also, I wonder how difficult it would be to remove goals/motivations/beliefs in such a situation, where somebody responded in a way they perhaps wouldn’t have if they were alone.

    Replies
    1. See my reply to Jessica above, about cognitive penetrability. It's not a very reliable guide -- and rather arbitrary.

  13. Pylyshyn, Z (1989) Computation in cognitive science.

    “Regardless of whether one takes the high road or the low road, in cognitive science, one is ultimately interested in whether the computational model is empirically valid – whether it corresponds to human cognitive processes” (p. 79)

    “For the entire system to run, it has to be realized in some physical form. The structure and the principles by which the physical object functions correspond to the physical or biological level” (p. 57)

    It seems to me that it would be a big jump to have a Computational Theory of Mind (CTM) that can be married with the undeniable physical processes that occur during cognitive processes. Brain imaging shows how different brain regions are active during cognitive tasks. A theory of how the "stuff that goes on inside of our heads" works would have to account for the nervous system as well. The CTM on its own seems more like a heuristic approach to explaining cognition. If biochemistry/physiology play a role in consciousness/memory/decision making etc., then perhaps a theory of mind has to explicitly include their roles as well.

    Replies
    1. No doubt the brain (and its owner) can pass the Turing Test. But figuring out how the brain does it, and showing you understand the causal mechanism that generates our Turing capacity by designing a system that can pass the TT, is not so easy (even though it's the "easy" problem). And it's not clear what in brain function is relevant to passing the TT and what is not. See the reading in week 2b for the hierarchy of TTs from T2 to T4.

    2. “A stronger claim might be that the model realizes some particular function using the same method as the person being modeled. The notion of a method is not very well defined or consistently used, even among computer scientists. However, it generally entails something more specific than just input/output equivalence.”

      There seems to be an inherent contradiction in the empirical status of using computation as a model of cognition. In order to understand how we do what we do, cognitive science demands a replication of our functionality. However, the only way to substantiate this replication as a valid model is to prove that its functionality occurs by the same method. But it is seemingly impossible to do this if the method is unknown and, in fact, the subject of investigation. Pylyshyn highlights that strong equivalence may be impossible to achieve and that the measures used in cognitive science are open to interpretation. While the Turing test is sufficient for an input/output proof of equivalence, I would have to be convinced that only one single method for how we do what we do exists. If not, how can solid methodology be put in place for a computational model of cognition?

  14. “Cognitive algorithms, the central concept in computational psychology, are understood to be executed by the cognitive architecture. According to the strong realism view that many of us have advocated, a valid cognitive model must execute the same algorithm as that carried out by subjects. But now it turns out that which algorithms can be carried out in a direct way depends on the functional architecture of the device. Devices with different functional architectures cannot in general directly execute the same algorithms. But typical commercially available computers are likely to have a functional architecture that differs significantly in detail from that of brains. Hence we would expect that in constructing a computer model, the mental architecture will first have to be emulated (that is, itself modeled) before the mental algorithm can be implemented.”


    There are a few problems that I have with computational psychology, which have persisted through many readings over several courses. I agree with the first half of this passage: it is definitely necessary to have the same functional architectures to execute the same algorithm, assuming that there is at least one functional design that corresponds to a specific algorithm. The first problem I have is how reductionist this theory of psychology is; if we reduce our cognitive ability to a set of algorithms, there is a flattening of the full picture. To me, the only thing that could really convince me of computational psychology is empirical evidence. There is so much variation between people and their psychologies that it does not seem plausible that all of us have the same mental architecture and the same capacity for mental algorithms. There is too much behavioural, emotional, and psychological difference between people for me truly to consider the brain to act as a computer. If one chooses to argue something like “we each have similar but slightly different mental architecture and therefore similar but slightly different capacities for mental algorithms”, this reduces the central point of computational psychology to something that could hardly be tested empirically (provided we have enough technological and scientific progress to attempt it in the first place). There are many steps you would have to take in order to gain any experimental data: you would have to define, in theory, an individual mental architecture, prove that it exists as you have defined it, and then there would be the matter of translating all cognitive processes into individual algorithms (they could not be the same across people, given the different mental architectures). It seems like an endless task, and if all of us have a unique mental architecture with a unique capacity to execute algorithms, another problem is presented: how do we have any shared cognitive processes (which we clearly do)?

    Replies
    1. Surely the first job of cognitive science is to explain our generic cognitive capacity -- the one just about all of us share. Explaining individual differences comes later; it's just the fine tuning compared to the "easy" problem of designing a system that can do anything and everything a generic person can do.

  15. This is the first time I’ve been exposed to the theory that “cognition is a species of computing carried out in a particular type of biological mechanism” and my intuition is to be opposed to it for a reason that Pylyshyn sums up nicely when he writes, “The basic source of uneasiness comes from the fact that we do not have the subjective experience that we are manipulating symbols.” However, intuition can’t really be trusted when the question you’re trying to answer has to do with what’s going on in your head when you think. While I agree that the plasticity computation is capable of producing makes it a good candidate for a cognitive mechanism, I don’t think I know enough about it to fully understand whether it would work or not. I guess I’m inclined to agree that computation is a great metaphor for cognition and admit that I found Pylyshyn’s argument that three levels of organization, the knowledge level, symbol level and biological level, explain intelligent human behaviour rather persuasive.
    If the mind is a symbol processing system, are things like the fine-tuning of cognition associated with cognitive development in childhood and the cognitive deterioration associated with dementia accounted for at the symbol level or the biological level (or both)? Is cognitive development akin to acquiring more state of the art hardware to run your software on? Or can the advances made by growing children be explained as the symbol system/software somehow improving itself?

    Replies
    1. Pylyshyn doesn't propose that cognition is computation as a metaphor: He really means it. But pass the Turing Test first, then worry about the fine-tuning (development, aging)!

    2. This post links two ideas that I am having a hard time resolving. If computationalism is the theory that "cognition is a species of computing carried out in a particular type of biological mechanism", and if we manipulate symbols, does this imply that we actively manipulate symbols, i.e., that we are in control of the manipulation? To me, this is the question that tempts me to reject computationalism.
      I'll try to explain better. Let's say that feeling (which is self-awareness) allows us to be in control, and to actively manipulate symbols. This is the ultimate difference for me between my mind and a computer. Unless we can build a computer that decides things for itself, controls things, and manipulates symbols according to its own will (and not according to the program we have written), it will not have feeling. But then I automatically doubt myself and ask: is it possible to have a computer that does in fact "think for itself"? What constitutes "thinking for oneself"? Am I just "weasel-wording" feeling?

    3. These are interesting questions. I think that even if we do build a robot that seems to manipulate symbols according to its own will, we won't know for sure that it has feeling, because of the other-minds problem (but if it's a Turing-Test-passing T3 or above robot, we presumably wouldn't have any more reason to doubt that it feels than we do to doubt that other humans feel). I'm curious as to whether a robot that is programmed to be capable of learning could be considered to be acting according to its own will. To me, learning constitutes thinking for oneself, but the fact that it would be the product of a program makes me question that a little.

  16. One point, in particular, that interests me about Pylyshyn’s computational argument is how he relates computer language and human language. For instance, he writes that the semantic level of organization explains “… why people, or appropriately programmed computers, do certain things by saying what they know and what their goals are and by showing that these are connected in certain meaningful or even rational ways.” This makes sense in programming, where the logic and specificity of arguments reign supreme: there is no semantic ambiguity in “math.pi” in Python because it will return the value of pi and nothing else. With regard to humans, however, I question the degree to which computing principles apply to the language module within cognition.

    In particular, human language offers an example of saying things that are not connected in a meaningful way. One example that comes to mind is Grice’s maxims for conversation (quantity, quality, relation, manner) and how humans can also flout these maxims. Pylyshyn argues that we need a “knowledge level” to convey goals and beliefs so that his concept of cognitive penetrability can then go on to explain behavior; however, a flouted maxim means that a speaker violated the “knowledge level.” I wonder how effectively a computational model could account for language processing when faced with neither meaningful nor rational input.

    Replies
    1. Let's first see whether a computer program can pass the TT, and then worry about flouted Gricean maxims...

  17. I am not sure I fully understand the proposed relationship between the levels of organization of the Classical view.

    "the classical view assumes that both computers and minds have at least the following three distinct levels of organization: 1. The semantic level (or knowledge) level. At this level we explain why people, or appropriately programmed computers, do certain things by saying what they know and what their goals are and by showing that these are connected in certain meaningful or even rational ways.
    2. The symbol level The semantic content of knowledge and goals is assumed to be encoded by symbolic expressions. Such structured ex­pressions have parts, each of which also encodes some semantic content. The codes and their structure, as well as the regularities by which they are manipulated, are another level of organization of the system. 3. The physical (or biological) level. For the entire system to run, it has to be realized in some physical form. The structure and the principles by which the physical object functions correspond to the physical or the biological level." (p.57)


    For instance, when it is said that "To determine whether certain empirical evidence favors certain hypothesized architectural properties of the cognitive system, the natural question to ask is whether the evidence is compatible with some other, different, architectural properties. One way to do this is to see whether the empirical phenomena in question can be systematically altered by changing subjects' goals or beliefs. If they can, then this suggests that the phenomena tell us not about the architecture but rather about some representation-governed process--something that in other words would remain true even if the architecture were different from that hypothesized." (p. 81)

    Couldn't one argue that any stimulus leading to a change in state must have been perceived (i.e., if the physical architecture did not allow for the perception of the stimulus then no causal change would have been possible), and thus that the successful perception of the stimulus (e.g., as determined by the change in output following a belief induction) informs us about the architectural capacity to detect and process the stimulus in question?

    Would it then suggest that the levels of organization are hierarchically organized, such that lower levels (e.g., physical) necessarily influence higher levels (e.g., the ability to perceive symbols)?

    Replies
    1. These are Marr's levels, and as I suggested in class, there are problems with them.

      "Level 1" is not "knowledge" level. (Kid sib has as little idea of what "knowledge" means as what concept or idea or thinking or representation means: He asked you about cogsci in order to try to find out.) Level 1 is all things that people can do.

      "Level 2," is the software or algorithm level (if cognition is computation), and software is just meaningless symbols -- squiggles and squoggles. The symbols may be semantically interpretable, but that interpretation is in the head of the interpreter, not in the symbols. (That's the symbol grounding problem.)

      "Level 3" is the hardware level. Software has to have hardware in order to run. But the details of the hardware are irrelevant. Lots of different hardwares could run the same software.

      But if computationalism is wrong, then all bets are off, and these levels are not relevant (or relevant only in a limited way).

      No, not every stimulus that changes our state is perceived (felt). In fact most aren't felt. And we have lots of states, not all of them cognitive (some are vegetative). But this applies to both kinds. (Besides, how and why (some) internal states are felt states is the "hard" problem -- way beyond Pylyshyn.)

      Probably the only way to ground symbols is bottom-up.
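      A tiny illustration (mine, not from the reading) of why the interpretation of "Level 2" symbols is in the head of the interpreter: one and the same formal operation supports more than one semantics.

        def combine(x, y):
            # Pure symbol manipulation: exclusive-or on two bits.
            return x ^ y

        # Interpretation 1: truth values ("p or q but not both").
        print(combine(True, False))   # -> True

        # Interpretation 2: addition modulo 2.
        print(combine(1, 1))          # -> 0

        # Same syntax, two semantics: the meaning is not in the squiggles.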

  18. Pylyshyn (1989) comments that “although in a sense we all have behaviour, not all behaviour is of the same kind from the point of view of theory construction” (p. 82). This distinction about behaviour does not apply only from the perspective of theory construction. Two individuals do not necessarily follow the same process to achieve the same outputs. The biochemical changes within the brain may be similar, even to the point of being identical, but arguably these changes are more related to the architecture of the structure. Similarly, computers follow the same kinds of mechanical changes regardless of the type of program being run. Although it would be nice to achieve a universal understanding of cognition in the same way we have universal understandings of other aspects of the body (e.g., the cardiovascular system), cognition may not function so identically across individuals. I am by no means suggesting that we should stop trying to understand how cognition occurs in a universal sense, since the obvious alternative of understanding cognition on an individual level is practically impossible. However, there may be no universal cognition as specific as we would like there to be.

    Replies
    1. Explain generic human cognitive capacity first, then worry about individual differences. The "easy" problem is already hard enough without that.

  19. When describing the high and low road approaches of computational methods in cognitive science, Pylyshyn highlights Marr’s three levels of cognitive processing as “the level of the computation, the level of the algorithm, and the level of the mechanism” and goes on to highlight Marr’s point of view that “if one begins by hypothesizing a particular algorithm… without first understanding exactly what the algorithm is supposed to be computing, one runs the danger of simply mimicking fragments of behavior without understanding its principles”. Furthermore, when discussing his own empirical basis for computational models, Pylyshyn indicates that “a [strong] claim might be that [a] model realizes some particular function using the same method as the person being modeled” and that “if the computational system is to be viewed as a model for the cognitive process… it must correspond to the mental process in more detail than is implied by weak equivalence”.
    It seems that Pylyshyn himself takes a high road approach in modeling cognition in a computational framework, adhering, first, to the classical framework described by semantics, symbols and the physical level, and progressing with more rigor towards a mechanistic approach grounded in strong equivalence. It seems, however, that in achieving strong equivalence (a model in which two systems abide by the same function, program and computing language) some degree of mimicry occurs, insofar as the model is only ever as cognitively aware as the system being modeled. In my most humble, and not entirely informed, opinion, computational modeling seems to be a subset of one's total applied “cognitive power” and exists as a part of a more complex whole.
    Although computation seems to have dynamic properties, allowing for immense plasticity in its function, it seems to lack the conscious (for lack of a better word) anticipation of its inputs that would let it interact with a dynamic environment. If our dependence on environmental cues and conditions is necessarily the case, then the question of “how we do what we do” must depend on this interaction.
    According to Pylyshyn’s notion of strong equivalence, correspondence between computational and cognitive systems must reside within the same functional architecture (or language), such as a cognitive architecture for example. This scaffold for a mechanistic understanding of cognition seems extremely theoretical. I do not entirely understand how a cognitive architecture can be defined for a model system if that model is not presently a cognitive being that exists in a physical (perhaps living) vessel. Because the functional architecture, to the extent of my knowledge, seems to exist in the abstract, I am not convinced that it is capable of mechanistically predicting a cognitive behavior or action.
    I also wonder about unconscious behavior, and functional brain imaging of activity that occurs below conscious awareness. If behavior occurs in the presence of unconscious stimuli, and computation is the prerequisite for that behavior, then does computation code for more than just conscious aspects of cognition?

  20. “Knowledge is encoded by a system of symbolic codes, which themselves are physically realized, and that it is the physical properties of the codes that cause the behaviours in question.” p. 61

    The above quote from Pylyshyn’s article comes from his reference to the Language of Thought. This selection helped me grasp more of what Pylyshyn meant by “cognition is computation”. From my understanding, he is saying that the system behind cognition has different levels. There is a “knowledge”/semantic level, and all the knowledge that we, humans, have is represented by symbols. By manipulating and combining the symbols in our heads(?), we can account for all human behaviour. The interesting thing about the knowledge level is its malleability. For example, information from the environment can change what we know. But would this create new symbols or change the meaning of the symbols that already exist? The idea of cognition as computation leads me to wonder further: if a robot could be built that was the same on all levels, including knowledge, could every single behaviour of a particular human be predicted?

    While I do think that it makes sense that computation could very well be part of cognition, there still seems to be something missing. Who or what controls all these different levels? Who/what is responsible for determining what goes into one’s knowledge level and what does not? (I guess, in other words, this goes back to “how do we know what we know?”) Furthermore, what is responsible for combining all the different symbols together to trigger the correct output (what runs the computation)?

    I was a little bit confused as to what Pylyshyn meant by “cognitive architecture”. Were my above questions related to what he meant? I had thought of cognitive architecture as the hardwired part of cognition that couldn’t be changed (the machinery and the operator of the machinery), and computation would be, I suppose, what the software of a computer does.

  21. Pylyshyn (1989) describes that “The possibility of a computer being able to successfully pass [what] has become known as the Turing test is based entirely on the recognition of the plasticity of behaviour entailed by symbolic systems, which can be programmed to behave according to any finitely specifiable function” (p. 55).
    However, this raises questions about our understanding of finiteness, and about whether the number of possible combinations and permutations of human nature is finite. Are human emotions/thoughts/words finite, or infinite? To accept the premise that a computer could ever pass the Turing test appears to require accepting that human capacities are finite. If not, then it seems it would be possible to identify any computer as a computer (rather than a human) with adequate time and insight.
    This seems to contradict many common notions of human nature, which pose such claims as ‘every person is unique’; if every human is unique, it should logically follow that as an infinite number of genetically unique humans are born, they cannot all still be characterized and predicted in terms of finite bounds. This would suggest that there is in fact not limited, finite possibilities for what/who/how a human can be, but rather infinite bounds on human functions.
    Perhaps some day we could create a computer that systemizes enough randomness to make up for this gap of the unimaginable, but I still have difficulty imagining this.

    ReplyDelete
  22. “In psychology there is a great deal of interest in very different control schemes – one that might change psychologists' thinking about the range of possibilities for converting thoughts into actions.” (Pylyshyn 1989)

    It is interesting that cognitive psychology already shares some control-flow terminology with computer science, such as loops and parallel processing. It is equally tempting to draw comparisons between hierarchies of program subroutines and hierarchies of neural networks. Though the psychological work that uses the former terminology concerns very conscious, deliberate states of mind, and the latter is an admittedly uninformed comparison, both at least illustrate how intuitive it is to bring control issues into cognitive science.
    Research into control flow deals largely with efficiency and with maximizing processing speed. It goes without saying that cognitive models should also take processing efficiency into account, seeing as the brain, if taken to be a computer, is more powerful than any other. It seems far-fetched that a processor of this speed could operate purely serially. But if our cognitive subroutines are in fact projecting to multiple other modules depending on the environment, this should undoubtedly impact our effort to shape a standard model of thought.

    ReplyDelete
  23. In "Computing in Cognitive Science", Zenon Pylyshyn tries to find some connections between computation and cognition. He compares some characteristics of computation with those of cognition, and he suggests that cognition is some kind of computation which takes in an input and gives out an output through some kind of function (designation). To be frank, Pylyshyn clearly explains the similarities between computation and cognition, and from his article, I can see why he believes cognition is some kind of computation. However, some questions remain there while I was reading.

    From my personal point of view, cognition and computation overlap, but cognition is not only computation; to be clearer, computation cannot explain everything about cognition, because scholars have found that formalized systems have limitations. According to Gödel's First Incompleteness Theorem, if a system is formalized and powerful enough (enough to express basic arithmetic), there exists some statement that is true but cannot be proved within the system. To put it differently, a powerful formalized system, like a computer, cannot perform every possible task. Therefore, I would not think it possible for computation to account for everything about cognition, because it seems that not everything a human brain is able to do can be performed by a formalized system.

    This idea is also present in the limitations of the Turing machine. On page 73, Pylyshyn says that "[the Turing] machine is universal, in the sense that it can be programmed to compute any computable function". I am not quite sure what Pylyshyn means by "computable function", but I do know that there is a problem that is undecidable for Turing machines: the Halting Problem. The Halting Problem is the problem of determining whether a given program will finish running or run forever, and Turing proved that it is undecidable, meaning there is no general algorithm that solves it for all programs. I believe this means that the Turing machine cannot do everything we might want it to do.
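    For readers unfamiliar with the result, here is the standard textbook sketch of Turing's diagonal argument, written as Python code (my gloss, not Pylyshyn's):

        # Assume, for contradiction, that a total, always-correct halts()
        # existed. This is a sketch of why it cannot.

        def halts(program, argument):
            # Hypothetical oracle: True iff program(argument) would halt.
            raise NotImplementedError("no such total, correct function exists")

        def trouble(program):
            # Do the opposite of whatever the oracle predicts about
            # program applied to itself.
            if halts(program, program):
                while True:        # loop forever if the oracle says "halts"
                    pass
            return "halted"        # halt if the oracle says "loops"

        # Does trouble(trouble) halt? If halts(trouble, trouble) returned
        # True, trouble(trouble) would loop; if False, it would halt.
        # Either answer is wrong, so no correct halts() can exist.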

    I am not sure whether I am on the right track in looking at computation and cognition, or maybe my definition of computation differs from Pylyshyn's, but these are the two main problems or questions I have regarding his article. To be honest, it is hard for me to accept Pylyshyn's analogy between computation and cognition, because I cannot stop thinking that computers cannot do everything, and thus that computation cannot account for all of our thoughts. I do believe some part of cognition is similar to computation, but I think there is more to it than that.

    ReplyDelete
  24. Pylyshyn briefly discusses “one of the most influential champions of the high road (of computational methodologies)”, David Marr, in his paper. “Marr went even further to advocate that one should not worry about developing a system that exhibits the performance in question until one has at least attempted to develop a theory of the task ...” (pg. 65). Marr may well have taken an extreme position in the debate between the high and low roads of computational methodology, as we have seen over the years that the most (probably the only) efficient way of testing reverse-engineered theories — with help from the neuroscience, psychology, and biology fields of research — is to develop, in parallel, forward-engineered machines. This is where artificial intelligence comes into play: it is the perfect testing ground for such primitive theories of cognitive science. But Marr does have a point when he argues that “if one begins by hypothesizing a particular algorithm used by an organism without first understanding exactly what the algorithm is supposed to be computing, one runs the danger of simply mimicking fragments of behavior without understanding its principles or the goals that the behaviour is satisfying.” This seems to sum up perfectly where behaviourism went wrong: Skinnerian scholars, while gladly embracing the triumph of their reward/punishment feedback theory, were somewhat reluctant to delve into the underlying mechanism — a cause-effect device — that produces this input/output function.

    ReplyDelete
  25. Pylyshyn (1989) provides a description of the classical view of computing and cognition. Simply put, he says that computers can possess cognition since they are able to compute. The author then elaborates on the cognitive architecture, which is organized in three levels: the semantic level, the symbolic level, and the physical or biological level. Although I partially agree with Pylyshyn’s statement, “For the entire system to run, it has to be realized in some physical form”, it makes me wonder about an emotional level. This is an important aspect that seems to be missing. Don’t our emotions have an influence on our cognition? Our ability to control our emotions and to express them appropriately in response to certain stimuli has yet to be explained. This is where computers and human beings differ; if this emotional factor were required, computers would no longer qualify as possessing cognition. Furthermore, the processing of sensory information has a significant impact on the output. For instance, if there is a threatening stimulus in the environment, our brain initiates a fear response (emotional), which is imperative to survival. All in all, the three-level organization that is said to define the cognitive architecture does not seem sufficient when considering our emotional capacities.

    ReplyDelete
  26. In Pylyshyn's very thorough review of computationalism, what really struck me was how easily the frameworks used to study the theory could be imported into cognitive science at large, once you brush off some of the more superficial computationalist claims.

    For example, Pylyshyn makes heavy use of what he calls functional, or cognitive, architecture, defined as the "level at which data structures of the model are semantically interpreted, with the semantic domain being the cognitive one".
    Later on he clarifies this definition by holding that a functional architecture is what determines the possible range of algorithms that can be implemented within a system, as it is what determines the system's primitive operators for symbol manipulation.

    If we strip the idea of a functional architecture of its computational "primitive operator" roots and look at it in the weaker form of "a set of properties that determines the range of operations (digital or analog, computational or otherwise) a system can perform", we then have a very robust framework from which to analyze cognition. A recurring issue for me with Pylyshyn's paper is that he takes these abstract frameworks and then presupposes computationalist principles in his attempt to define them. There is, however, nothing theoretical stopping a functional architecture from defining the range of analog or non-computational operations a system can perform.

    ReplyDelete
  27. "computers are the only known mechanism that are sufficiently plastic in their behavior to match the plasticity of human cognition.... this extreme plasticity in behavior is one oft he reasons why computers have from the very beginning been viewed as artifacts that might be capable of exhibiting intelligence."
    The various references in this paper to the idea of 'plasticity' resonated with me because of a guest lecturer who came to my BASC 201 class last semester. Any other students who took that class with me surely remember Professor Tobias Rees, who came in to discuss his anthropological investigation into the development, in France, of plasticity as a neurobiological concept. He stressed that until very recently plasticity was not in any neuroscience textbooks, but now, because of the work of biologists and neuroscientists like Alain Prochiantz, it has come to be one of the defining features of the brain as we know it. It is interesting that people like Turing were already thinking of the power of plasticity as it related to machines as early as 1950 - several decades before the parallel development in neuroscience. I just thought it was interesting to note how the two disciplines of neuroscience and computer science seem to evolve in a similar direction, and to wonder to what extent we can further integrate them into the unified field of cognitive science, especially if we are to accept the idea of the brain literally as a computer.

    ReplyDelete
  28. Pylyshyn (1989)
    "If the knowledge-level description is correct, then we have to explain how it is possible for a physical system, like a human being, to behave in ways that correspond to the knowledge-level principles while at the same time being governed by physical laws."(p. 61)

    Looking at cognition through the framework of computation is a useful measure of knowledge-based behaviours; however, the link between computation and more subtle internal processes (such as beliefs, goals, etc.) is not as obvious. Pylyshyn's focus is on setting up the parameters of what would constitute a viable cognitive system based on computational analysis. While I see the relevance of looking at cognition as a "species of computing", there seems to be an explanatory gap between our understanding of the physical laws governing an organism and its behaviour. I have a hard time wrapping my head around this problem, as it seems to lead either to causal overdetermination (my sense of agency causing my behavior and the physical laws causing my behavior) or to oversimplification, by saying that everything happens within the realm of the physical. Cognition as a species of computation is useful in helping us understand the mechanisms behind many of our behaviors, yet I'm ultimately left unsure whether complex and plastic computational systems will be able to demonstrate the more nuanced aspects of human mentality.

    ReplyDelete
  29. "We can no more directly observe a cognitive mechanism by the methods of biology than we can by the methods of psychophysics (or for that matter by introspection)". Although Pylyshyn (1989) affirms with this statement the limited utility of these methods for assessing cognitive models, I believe this is as true for them as for any method: cognition can be understood only in and through the interaction of various perspectives. In fact, in my opinion these different methods correspond more or less to the different levels of organization he sketches in the classical computational view: the "semantic/knowledge level, symbol level and physical/biological level". Although this schema is open to question, it seems to me an interesting skeleton to build on for the understanding of cognition in particular and, by expanding its categories, of the world in general; it is the concurrence of these levels that allows for suitable explanations. The observation of a similar organization in computers is indeed fertile ground to draw upon; however, the validity of the correspondence between the highest levels of a computer and of a human brain is very limited, as the latter cannot be restricted to the (known) possibilities of the former. Among other things, I think what distinguishes them is that in a human being the communication between the different levels is more finely tuned, meaning that more reciprocal influence and interaction occur.

    ReplyDelete
  30. To explain computers, Pylyshyn draws upon Newell (1980): "An interesting insight into one characteristic that is essential for a device to be universal or programmable. For a mechanism to be universal, its inputs must be partitioned into two distinct components, one of which is assigned a privileged interpretation as instructions or as a specification of some particular input/output function, and the other of which is treated as the proper input to that function. Such a partition is essential for defining a universal Turing machine. Thus there can only be arbitrary plasticity of behaviour if some of the inputs and outputs of the system are interpreted (or, as Newell puts it, if they have the power to designate something extrinsic)."

    While the partition between two intrinsic components is said to be a compulsory feature of a Turing machine, it would seem that the plasticity of behaviour deemed so essential for a universal mechanism also relies on a partition between that unit and the external world. I initially found it difficult to conceive of the degree, or even the existence, of intelligence without that second partition. However, drawing an analogy to biological phenomena such as delusions or hallucinations—and presuming, in these cases, that intelligence is preserved—I took such errors of interpretation to be attributable to a flaw within the interpretation component itself, or to the ascription of proper functioning to an input unit that is, in fact, not functioning properly; the partition between the input unit and all things extrinsic need not be invoked. That said, given Pylyshyn's warning that the computers we use may be insufficient models for considering intelligence in universal mechanisms, I am still curious about, and grappling with, how non-biological systems could develop awareness of their physical levels.
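    As a minimal illustration of Newell's partition (my own toy example in Python, not his formalism): the same fixed mechanism computes different input/output functions depending on which part of its input is read as instructions and which as the proper input.

        # One input stream is interpreted as a program, the other as data.
        def universal(program, data):
            # Interpret a tiny instruction list over a single accumulator.
            acc = data
            for op, arg in program:
                if op == "add":
                    acc += arg
                elif op == "mul":
                    acc *= arg
            return acc

        double_plus_one = [("mul", 2), ("add", 1)]
        triple = [("mul", 3)]

        print(universal(double_plus_one, 5))  # 11: behaves as x -> 2x + 1
        print(universal(triple, 5))           # 15: same device, new function

    Everything "universal" about the machine lies in that partition: change the program component and the very same device realizes a different function.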

    ReplyDelete
  31. According to Pylyshyn, Kohler stated that the plasticity of the human mind was "governed by what [Kohler] called dynamic factors - an example of which are self-distributing field effects, such as the effects that cause magnetic fields to be redistributed...as opposed to topographical factors, which are structurally rigid."
    I'm not too sure I understand how dynamic factors and topographical factors work, and I'm not too sure how the topographical biological processes of neurons (which are themselves plastic) relate to the workings of the mind. Pylyshyn states that the question of cognitive science has moved towards bridging the gap between internal processes and instances of behavior; but how does one bridge the gap between the processes of neurons and the processes of the mind? Looked at the other way, how can the mind, which is dynamic, have an effect on the human body, which is topographical?

    ReplyDelete
  32. In this paper, Pylyshyn argues that cognition is a form of computation. The classical view of cognition and computation assumes that both minds and computers have three levels of organization: semantic, symbolic, and physical. It is on the symbol level that I find a problem. Minds and computers are similar in many ways, but in order for Pylyshyn to argue that cognition is computing, he needs to make a parallel between the two possible manners of encoding. Computers use symbols, so he says that that’s what minds use too. Symbols are the mind’s way of expressing itself on paper (or, say, on a computer), but that does not mean symbols exist in the mind. At the beginning of this paper, Pylyshyn states that computers have had a profound influence on the study of cognition. Had cognition been studied only in the absence of computers, I doubt that cognitive scientists would posit that the mind comprises a symbol system.

    ReplyDelete
  33. “The commitment to the construction of a model meeting the sufficiency condition, that is, one that actually generates token behaviours, forces one to control the problem of how and under what conditions the internal representations and the rules are invoked in the course of generating actions” (Pylyshyn 1989, p. 66)

    My curiosity lies with the original source of control in a computer, and with the nature of its usage/fluidity as it travels throughout the system. Say we have a hierarchical control structure in which control is sent directly to a specific subroutine, and control is then sent back to the original source with the message that the goal has been completed. Control is made out to seem like a magical baton that is passed to whichever subroutine is most qualified for the job. What is it about this magical baton that provides a given routine with the coveted control?

    If control is seen to be a limited resource, then what decides how much of this resource exists – is it like electricity, where each subroutine has a certain capacity before it can't handle any more current? What about the physical reality of these systems – does the same account apply to both metal circuit boards and biological tissue? Is the routine with the greatest capacity the one that holds executive control? Or is there some little man up there who can only do one thing at a time, so that wherever his attention lies is where the control is?

    PS: My knowledge of computer science is sparse.
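    Still, here is a rough sketch, as I understand it, of what the baton amounts to in a conventional computer: control is just "which code the processor is currently executing", and the call stack records where control must return. Nothing substantive is handed over.

        # Control passing via ordinary calls and returns: calling pushes a
        # return address onto the stack; returning pops it, sending control
        # back to the caller.
        def subroutine_b():
            print("control is in B")   # B runs only while it holds control
            return "done B"            # returning pops the stack

        def subroutine_a():
            print("control is in A")
            result = subroutine_b()    # calling pushes A's return address
            print("control returned to A with:", result)

        subroutine_a()
        print("control returned to top level")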

    ReplyDelete
  34. Pylyshyn, in an attempt to explain cognition as a computational process, investigates the notion of the cognitive primitive, the basic unit of computation within a functional architecture, which exists as some combination of the symbol and knowledge levels of the classical model. The notion that there can exist (strong) equivalence between a computational algorithm and cognition hinges on these primitives as defining features of the cognitive architecture. Perhaps it is a category error to try to compare the implications of these primitives with neurological evidence, but since they lie (at least) at the symbol level, they must be, in some way, determined by the physics of the neuron. As such, I will try to illustrate two ways in which I believe the cognitive primitive can be reconciled with neuronal data.

    First, Pylyshyn is careful to assert that cognitive primitives may be different from computational primitives, and that no assumption should be made that the two are equivalent. This reminded me of a notion that occurred to me earlier in the paper - that the "distinction... between processor and memory" made in classical computing need not apply to the brain, where "neurons that fire together [processing], wire together [memory]" (the primitive processing and storage operations happening at the symbol level are effectively entwined). Because we do not have to assume that cognitive primitives obey the same rules as computational primitives, we can posit a cognitive architecture in which processing and memory are not as distinct as they are in computer architecture, allowing us to maintain the cognitive architecture as a model.
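    A toy sketch of that entwining (my own, not Pylyshyn's), using the standard Hebbian update rule: the same weight that shapes the processing of the current input also stores the trace of past co-activation.

        import random

        eta = 0.1   # learning rate (arbitrary, for illustration)
        w = 0.0     # a single synapse between neuron x and neuron y

        for _ in range(100):
            x = random.choice([0, 1])   # presynaptic activity
            y = x                       # assume perfectly correlated firing
            w += eta * x * y            # Hebbian update: dw = eta * x * y

        # The weight now encodes the firing history (memory), and the same
        # weight shapes future processing: y_estimate = w * x.
        print(w)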

    Pylyshyn's restriction on the cognitive primitive - that it may not be subject to (top-down) cognitive penetration - seems to me hard to reconcile with our understanding that many neurons receive top-down inputs from higher brain areas. How can the (symbol-level) primitives that arise from the physical neurons entirely ignore these top-down modulatory effects on the computational behaviour of the neuron? Luckily, removing this restriction does not damage the computationalist perspective. In programming, primitive operators can sometimes be applied to non-primitive data; and given another biological factor - development - the bootstrapping problem implied by this cognitive penetration can be resolved. If, in development, top-down processes are hard-wired to initial states before computation begins, then they could modulate the physical level, and therefore the symbol level, and eventually themselves, through these cognitively penetrable primitives.

    ReplyDelete
  35. Pylyshyn crafts a strong argument for computation as an aspect of cognition throughout this paper. Certainly cognitive architecture may serve as a useful way to consider questions in cognitive science. Likewise, computational models of cognition can be useful tools to understand such cognitive architecture. His methods for assessing strong equivalence are particularly intriguing, though I do wonder if all aspects of cognition may be broken down into constituent "basic representational states".

    "It would be both surprising and troublesome if too many of what we pretheoretically took to be clear cases of cognition ended up being omitted in the process. But it would also not be entirely surprising if some of our favorite candidate cognitive phenomena go left out. For example, it could turn out that consciousness is not something that can be given a computational account." (p. 86)

    Unfortunately, rather than addressing the limited role of computation in cognition, Pylyshyn seems to suggest that unless a phenomenon can be given a computational account, it should not be considered part of cognition. This is where my opinion diverges from Pylyshyn's. I do not believe he gives proper justification for adopting this pruned definition of cognition. If Pylyshyn had declared that only the species of bird that can't fly count as real birds, this would not convince me either. If Pylyshyn wishes to exclude consciousness and other phenomena from the definition of cognition, he ought to provide stronger reasoning for such omissions. Until then, it makes little sense to adapt our definition of cognition simply because Pylyshyn would like us to.

    In other words, though Pylyshyn did not succeed in convincing me that all cognition can be given a computational account, he did provide a reasonable and well-justified way to think about computation as an aspect of cognition.

    ReplyDelete
  36. Pylyshyn's view that it is important to consider the functional architecture of an operating system is an interesting addition to his argument that computing systems must emulate the functions of our own human physiological processes. Evolutionarily speaking, it is assumed that our body's systems have evolved, albeit randomly, to satisfy certain needs and to function more adaptively in the environment. However, I am not sure this comparison is strong enough to show why understanding how something works is fundamental to understanding cognition and computation.

    ReplyDelete
  37. This comment has been removed by the author.

    ReplyDelete
  38. "The possibility of a computer being able to successfully pass what [has] become known as the Turing test is based entirely on the recognition of the plasticity of behavior entailed by symbolic systems, which can be programmed to behave according to any finitely specifiable function (71)."

    Language is recursive. We can take a finite number of sounds and convey an infinite number of thoughts. The Turing machine can meet us halfway. For any question we pose to the automaton, it will have the symbols required to respond appropriately. But you cannot program an infinite number of responses. At some point, the automaton will fail to respond to my question. Then again, Canadian life expectancy is eighty-one and a quarter years, so a computer may have enough programmed responses after all.
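    To make the recursion point concrete, here is a small illustration (mine, not from the paper) of how a finite rule set generates an unbounded set of sentences, so that responses need not be stored one by one:

        import random

        # A tiny recursive grammar: the rule for S mentions S itself.
        grammar = {
            "S": [["NP", "VP"], ["S", "and", "S"]],   # second option recurses
            "NP": [["the cat"], ["the robot"]],
            "VP": [["sleeps"], ["computes"]],
        }

        def generate(symbol="S", depth=0):
            if symbol not in grammar:       # terminal word: emit it
                return [symbol]
            # bias away from recursion as depth grows so generation halts
            options = grammar[symbol] if depth < 3 else grammar[symbol][:1]
            out = []
            for part in random.choice(options):
                out.extend(generate(part, depth + 1))
            return out

        print(" ".join(generate()))  # finite rules, unboundedly many outputs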
    Designation is how we solve the symbol-grounding problem. It sounds like to have things designate extrinsic objects means to ground the symbols in the computer. Designation suggests some sort of intentionality, which is often taken as the hallmark of mind.

    This designation business, is he trying to broach the symbol grounding problem? One does not simply broach the symbol-grounding problem.

    Manipulating symbols and churning out responses seems to lack reflection. Even if computation did a good job of explaining how humans manipulate symbols, it fails to explain how we consider our own behavior, or our own motives.

    Taking our cognition for granted is dangerous. Computers do not stop; they just operate. A lot of humans do that too. They are irresponsible.

    Can a computer be responsible?

    ReplyDelete
    Replies
    1. I guess what a computationalist would argue is that machines can learn, just as humans do. So if some of your questions are hard to answer, which would most likely be the case for a human as well, the machine would just learn the answer, as a human would.

      So if a machine has the ability to learn, which is one of the cognitive abilities of living creatures, it is not unreasonable to think that it could have all the other cognitive abilities as well. The Symbol Grounding Problem, however, still stands in the way between human cognition and computation: machines simply cannot get semantics without sensory mechanisms to interact with the world as we do. We ground our symbols by seeing, hearing, and smelling; that is how we know we understand anything. Only when equipped with these abilities could a machine get meanings at all. What will happen then, we don't know. Maybe machines will gain all the cognitive abilities and take over the world.

      Delete
  39. After going back to the beginning of this course and re-reading Pylyshyn, along with many of my colleagues' comments, I found myself trying to build a full picture in my mind of why cognitive science came into being and why computers make valuable cognitive models. With these thoughts came one massive question: why do we need other models of cognition at all? While I understand that one of the main ideas behind cognitive science is that machines can teach us about our own brains and consciousness, it seems that most of the readings we did for this class had the similar purpose of exploring the similarities between computers and human cognition and deciding what separates them. While this is surely interesting, it doesn't actually seem to broach the subject of computers teaching us things about our consciousness at all. While I have learned a significant amount about the similarities and differences between computers and brains in my four years of cognitive science, I'm still not sure I've really learned anything about the human brain from examining and understanding computers. Maybe this is something I've missed, or maybe it's more of a future goal of cognitive science?

    ReplyDelete