Saturday 11 January 2014

1b. Harnad, S. (2009) Cohabitation: Computation at 70, Cognition at 20

Harnad, S. (2009) Cohabitation: Computation at 70, Cognition at 20. In Dedrick, D. (Ed.), Cognition, Computation, and Pylyshyn. MIT Press.



Zenon Pylyshyn cast cognition's lot with computation, stretching the Church/Turing Thesis to its limit: We had no idea how the mind did anything, whereas we knew computation could do just about everything. Doing it with images would be like doing it with mirrors, and little men in mirrors. So why not do it all with symbols and rules instead? Everything worthy of the name "cognition," anyway; not what was too thick for cognition to penetrate. It might even solve the mind/body problem if the soul, like software, were independent of its physical incarnation. It looked like we had the architecture of cognition virtually licked. Even neural nets could be either simulated or subsumed. But then came Searle, with his sino-spoiler thought experiment, showing that cognition cannot be all computation (though not, as Searle thought, that it cannot be computation at all). So if cognition has to be hybrid sensorimotor/symbolic, it turns out we've all just been haggling over the price, instead of delivering the goods, as Turing had originally proposed 5 decades earlier.

48 comments:

  1. Can we really make the jump, as proposed by Searle in his thought experiment regarding the running of the Turing Test in Chinese, that "since Searle himself is the entire computational system, there is no place else the understanding could be. So it’s not there"?

    I would argue that there may, in fact, be understanding in this case - the understanding is simply external to Searle, in the sense that the correct input-output formulation is apparently able to mirror all cognitive requisites (permeability, logical follow-through, and the rest of it). I would think of Searle's actions as the actions of a small network of neurons; say, one that fires when it identifies a cat. The network receives inputs from the host of categorization rules built up ("learned") over the course of one's life, and outputs to whatever action is required once this cat is identified. The network has an understanding of what the appropriate response to this stimulus is, but does not have an understanding of what the stimulus itself is. In much the same way, Searle is a subnetwork of understanding; he would understand what the appropriate Chinese output is, but it is the sum of Searle and the massive encyclopedia of rules (whose writing presumably had to involve someone who actually understands Chinese, at some point) which produces the "understanding" we seek. So it is too bold a claim to say that there is no understanding because the subnetwork or subcomponent we are currently looking at does not have understanding of the entire concept, because there is no subnetwork which on its own has understanding of an entire concept; there are almost always other networks, rules, beliefs, or knowledge implicated in "understanding", and we cannot fault one component of the network for not having explicit access to all of them.
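    To make the "correct input-output without understanding in any one component" picture concrete, here is a minimal sketch in Python (the symbol strings and rules are invented purely for illustration, not taken from Searle or Harnad): a lookup-table "rulebook" pairs input strings with output strings by shape alone, and the executor never consults any meanings.

```python
# Toy Chinese-Room-style rulebook: purely syntactic string-to-string pairings.
# The symbol strings and rules below are invented for illustration only.
RULEBOOK = {
    "你好吗": "我很好，谢谢",      # paired by shape; no meanings are consulted
    "猫在哪里": "猫在垫子上",
}

def execute_rules(input_symbols: str) -> str:
    """Return whatever output the rulebook dictates for this input shape.
    The executor (Searle's role) only matches shapes; nothing in here
    'understands' Chinese, yet the output can look perfectly competent."""
    return RULEBOOK.get(input_symbols, "请再说一遍")  # default: "please say it again"

print(execute_rules("你好吗"))
```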

    Replies
    1. Wait for Week 3 to discuss Searle!

      Hi Dia, thanks for being the first to skywrite!

      Well you've managed to raise most of the themes in the course in one comment! We'll deal with Searle in two weeks (after Turing). For now, let me just ask: if Searle does not understand Chinese in the Chinese room, who is the one that is understanding, and where is he or she?

      And doesn't it feel like something to understand something? Who is feeling that feeling in this case? And where are they? (The last week of the course will be about whether there can be such a thing as "distributed cognition.")

      So let's defer discussion of Searle's Chinese Room for two weeks and focus for now on Pylyshyn, computation, and computationalism.

      (Btw I recognized you as the only Dia in the class, but for others, please make sure I know who you are, for credit for the skywritings.)

  2. “Vocabulary learning – learning to call things by their names – already exceeds the scope of behaviorism, because naming is not mere rote association: Things are not stimuli, they are categories. Naming things is naming kinds (such as birds and chairs), not just associating responses to unique, identically recurring individual stimuli, as in paired associate learning.” (p.3)

    I agree with the statement above, as behaviorism is not able to explain how it is that we learn to categorize objects. When we encounter a random dog on the street, we identify it as a dog; it doesn’t need to be the exact same size or colour as any other dog we have seen, but we will immediately recognize it as such. Some sort of cognition/computation needs to occur – something behaviorism fails to account for. We are not born with these categories – in a study by Keil (1986), preschool children were shown an image of a raccoon that was then surgically altered to look and smell identical to a skunk, and were asked whether the new picture (skunk) showed a raccoon or a skunk. Preschool children would wrongly categorize the animal as a skunk, despite having previously stated it was a raccoon. Although this experiment is used to show that certain objects have an ‘essence’ to them that cannot be altered, I think it also demonstrates that categories are not something we are born with; they are something we learn as a result of the language and culture in our environment. When I visualize categorization, I immediately imagine filing cabinets with folders (a filing cabinet for “animals” with a folder for “dog”), and a homunculus who appropriately finds the right category for each instance – but then this comes down to the homunculus problem discussed in the paper – what is going on inside the little man’s head? How do we categorize so efficiently, and, often, accurately?

    Replies
    1. Explaining categorization capacity

      The challenge is to provide an internal, causal mechanism that can learn to categorize things correctly ("do the right thing with the right kind of thing"). Then you've explained categorization.

      The problem is not just learned vs innate categories, because even for innate categories you need a causal mechanism that can do it.

      And Keil's experiment is more about what we believe the essence of a thing is than about what is the mechanism underlying the capacity to categorize (or learn to categorize) it.

  3. Harnad (2005) explains that “by reporting our introspection of what we are seeing and feeling while we are coming up with the right answer we may (or may not) be correctly reporting the decorative accompaniments or correlates of our cognitive functions -- but we are not explaining the functions themselves.” (p. 4)

    The above quote perfectly explains to me why cognition is more than just introspection. Please correct me if I am wrong, and I apologize if I am simplifying the argument too much, but it seems to me that what our mind comes up with in terms of mental imagery to explain how we come up with things (e.g. remembering the face of your third grade teacher to remember his/her name) is simply a by-product of our cognitive processes. In a way this reminds me of how a computer works. By clicking on an image file, the image will pop up on the screen. The computer monitor shows the result of the processes going on within the hardware/software and does not show the actual processes involved in the opening and retrieval of that image file. So, clicking on the image file does not explain how the image was actually retrieved.

    “But if symbols have meanings, yet their meanings are not in the symbol system itself, what is the connection between the symbols and what they mean?” (p. 7)

    This above quote is asking the same question that I was asking in my previous post, and the later sections in the paper go more in depth into the problem of symbol grounding. Searle’s Chinese Room argument raises some good questions and says that computation is insufficient to explain cognition. If cognition did run exactly like computation, then cognition would simply react to stimuli without attaching meaning to them. I believe cognition can be partially explained by computation. But it seems to me that there is something more. Is attaching meaning to symbols still part of cognition/computation? Or is it possible that there is a separate system responsible for attaching meaning that is working in tandem with computation/cognition?

    In this paper, the section “Is Computation the Answer?” says that vocabulary learning is not based on rote memorization, but on identifying and categorizing objects. Stimuli need to be processed, have their features extracted, and have those features learned (for future categorization). But going back to Pylyshyn’s (1989) paper, couldn’t it also be argued that processing stimuli, extracting their features, and learning those features is simply modifying the rules that are used to categorize objects? Pylyshyn (1989) explained that computers are plastic. So, I am assuming that Pylyshyn (1989) would argue that learning vocabulary and categorizing objects is based on ever-changing rules in the cognitive system. So I guess the question should be: how does our cognitive system modify these rules?

    Replies
    1. Meaning, Doing and Feeling

      Good comment, Reginald.

      The punch line will be that some of cognition is surely computation (the brain would be foolish not to use the power of computation) but cognition cannot be just computation.

      Sensorimotor processes are not computation: Photons hitting the retina are not computations. Nor is movement. Yet they will turn out to be part of the mechanism for grounding symbols (giving words meanings). And how meaning is embodied and generated in the brain is very closely related to how the brain identifies or learns categories.

      But there is still a huge gap between doing and feeling.

      If someone tells me "the cat is on the mat," I may be able to identify a cat as a "cat", and identify a mat as a "mat", and I may be able to say whether it is true or false that the cat is on the mat. But naming, and stating true or false things, or even picking up the cat and stroking it are all just things I do, or can do. Is the meaning of words just the things you do and can do with words and the things they stand for? Is meaning just know-how (and its underlying mechanism)?

      For the Turing Test -- and the so-called "easy problem" of cognitive science -- that's all there is to meaning.

      But there's more to meaning than that. If you know what something means, that's not just all the things you can do with the words and the things they stand for. It also feels like something to understand what words mean.

      But explaining how and why it feels like something to understand what words mean is the "hard problem" of cognitive science. (So even solving the symbol grounding problem will not fully explain meaning. In other words, meaning is not just grounding either. We are not just brilliant, moving, talking Zombies with a lot of know-how.)

  4. Harnad (2005) raised an interesting question about the Turing Test (TT) when he stated that cognitive scientists should aim at "the full robotic version of the TT, in which the symbolic capacities are grounded in sensorimotor capacities and the robot itself can mediate the connection [...] between its internal symbols and the external things its symbols are interpretable as being about".

    This passage reminded me of a TED talk by Daniel Wolpert (you can watch it here https://www.youtube.com/watch?v=7s0CpRfyYp8; it's a very fascinating topic), who argued in a very convincing fashion that the ONLY reason why humans have brains is to produce complex movements. As the neuroscientist explained, movement is the only way in which we can act upon and interact with our environment (whether through speech, facial expressions conveying emotions, etc.).

    However, as Wolpert demonstrated, robots with agile movements and precise gestures are much harder to create than those with solely brute computational power. As humans, we can move, but we also possess a self-awareness that the movement may not be perfect; that way, we can adjust our posture accordingly. This made me wonder whether computers have an awareness of their own limitations.

    From my understanding of the process of computation, a computer possesses an algorithm to act upon symbolic input, and may learn from past experiences and thus change its formulation accordingly (my knowledge of computation is very limited, so forgive my potentially nonsensical utterances). However, if the computer were to accomplish a task with very limited knowledge, would it be able to recognize its own limitation? Could a computer make an "estimation" of the degree of uncertainty and adjust accordingly? Thus, I would agree with Harnad (2005) that the true TT would be one that incorporates movement.

    Replies
    1. Doing vs Feeling

      Yes, a lot of cognition is "movement." So is a lot of categorization: "Doing -- and being able to learn to do -- the right thing with the right kind of thing." Doing is, after all, moving.

      So, yes, it's a huge challenge to design a robot that can do all the things we can do. And computation alone is not enough.

      And that means the robotic version of the Turing Test, not just the original email (or "chatbot") version.

      But don't fall back on the homunculus! The causal mechanism for our cognitive capacity (whether computational or hybrid dynamic/computational) has to generate (and hence explain) all of our plasticity and flexibility. You can't get that by supposing there's a "self-awareness" that is doing some of the work: that capacity has to be explained too.

      The only part left out is the part that has to be left out, because we have no idea how to explain it, namely, how and why that "self-aware" Turing-Test-passing robot (or ourselves) can feel rather than just do.

  5. Harnad (2005) talks about the "mediated symbol-grounding" of objects described by symbols of language onto their actual physical instantiations (p7). The brain is able to make this link from the symbol to its referent.

    This makes me think further about this mapping. Taken outside the visual symbol system of language, how is it that the brain maps sounds to symbols and then to their referents? How does it account for the fact that a designated sound will not be the same every time (as in the case of accents)? Some sort of computation could account for this phenomenon: the parameters for each designated sound could be widened to allow for different accents and pitches. But for an ambiguous sound, what determines whether it sounds more like one word than another? What picks and chooses the "heard" word?
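    One simple way to picture "widened parameters" is a nearest-prototype classifier: each word has a stored acoustic prototype, and an ambiguous or accented sound is assigned to whichever prototype it lands closest to. This is only a sketch of the idea, not a model of real speech perception; the two "features" and all the numbers are invented.

```python
import math

# Invented 2-D "acoustic features" standing in for whatever invariants
# real speech perception actually uses (these numbers are made up).
PROTOTYPES = {
    "bat": (1.0, 0.3),
    "pat": (1.4, 0.3),
    "bad": (1.0, 0.7),
}

def hear(sound):
    """Pick the 'heard' word: the prototype nearest to the incoming sound.
    Accent or pitch variation just shifts the point; the nearest match still
    wins, which is one way the category 'parameters' can be tolerant."""
    return min(PROTOTYPES, key=lambda word: math.dist(sound, PROTOTYPES[word]))

print(hear((1.1, 0.35)))  # an off-target token near "bat" is still heard as "bat"
```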

    Replies
    1. How the brain recognizes sound categories -- including speech-sound categories -- is one problem.

      How the brain understands the meaning of words is another problem.

      Both are categorization problems.

  6. I agree with Harnad that “cognition cannot be all computation,” and it is very clear that there is not one single thing or place in our head that permits us “to do what we can do.”

    Departing from this physiological sense that there is not one place in the brain where "the soul sits," where the homunculus lives, or where understanding and planning specifically occur is perhaps the reason why computation is a good approximate model of what cognition is, because parallel things (if I'm allowed to say so of computation) can occur at the same time, leading to specific outputs. I think this supports the idea of why neuroscience had to be admitted into the realm of cognitive science, even if an explanation of this sort is not aspired to.

    Considering, as Harnad says, that "mental states" are actually "computational states," but that this does not resolve the problem between computational and dynamical processes and takes us back to the intervening internal process, I get the sense that what we may think of, or feel as, an "intervening internal process" (to which we are blind) is really the sum of the process, the whole computation itself, accounted for. But then again, we would have the problem of attributing the solution of this "sum" to something or to someone; or could we take it as such?

    Replies
    1. Computation is not parallel but serial.

      According to computationalism, the process intervening between input and output is computation.

  7. I agree wholeheartedly with the conclusion reached in the "Newton Still Available" section. Having said that, disagreements are much more interesting and I found a couple along the way that might engender further discussion.

    (I split my comment into 2 since I apparently reached a character limit. But hey, participation points! Am I right?)

    Behaviorism Begged the Question.

    “Beware of the easy answers: rote memorization and association.... Surely we have not pre-memorized every possible sum, product and difference?”

    Surely, we haven’t, but doesn’t “the fact that our brains keep unfailingly delivering our answers to us on a platter” at least help clue us in to the idea that some cognitive phenomena might be largely explained by associations in long-term memory, while other cognitive phenomena call out for a different mechanism or set of mechanisms?

    I’m reminded here of Daniel Kahneman’s distinction between System 1 and System 2, which he characterizes as Fast Thinking and Slow Thinking respectively. When asked to subtract 2 from 7, the answer “5” simply pops into our heads, and this could happen while multitasking. While if you were asked to divide 182 by 14 (or any other reasonably difficult non-memorized association), the answer wouldn’t be delivered on a platter and multitasking would render Thinking Slow impossible. Granted, these systems aren’t independent, but I think more headway can be made using “easy answers” when it comes to explaining one side of this dual process, namely the unconscious automatic implicit fast thinking of System 1.
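    The contrast can be caricatured in code: a memorized association is a direct table lookup, while a non-memorized problem has to be ground out step by step. A toy sketch (the "memorized" table is obviously invented and tiny):

```python
# "System 1": a few pre-memorized arithmetic facts, answered by direct lookup.
MEMORIZED = {("7", "-", "2"): 5, ("2", "+", "2"): 4}

def fast_answer(a, op, b):
    """Delivered 'on a platter' if the fact is stored, otherwise not at all."""
    return MEMORIZED.get((a, op, b))

# "System 2": an explicit, effortful algorithm (repeated subtraction) for division.
def slow_divide(dividend, divisor):
    quotient = 0
    while dividend >= divisor:   # step-by-step procedure, no stored answer
        dividend -= divisor
        quotient += 1
    return quotient

print(fast_answer("7", "-", "2"))   # 5, retrieved
print(slow_divide(182, 14))         # 13, computed
```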


    Is Computation the Answer?

    “To learn to name kinds you first need to learn to identify them, to categorize them (Harnad 1996; 2005).”

    Couldn't categories initially be shaped by paired-associate learning between the first member and the actual name of the category? This, to me, makes sense. Children will learn “ball” and then might call the Sun a “ball”, but through negative reinforcement will stop overextending the category to all round-shaped things.

    “The stimuli need to be processed; the invariant features of the kind must be somehow extracted from the irrelevant variation, and they must be learned, so that future stimuli originating from things of the same kind can be recognized and identified as such, and not confused with stimuli originating from things of a different kind.”

    Do not multiple associations of different stimuli to the same category name do the needed work of strengthening the associations between the category name and the invariant features of the category members, while simultaneously inhibiting associations between the category name and the variant features of the category members? To continue with the category [ball] (let's assume the first member was a basketball), receiving negative reinforcement when calling the Sun a “ball” and receiving positive reinforcement when calling a soccer ball a “ball” will strengthen the association of the sound “ball” with the invariant features of being spherical and small enough to roll, while inhibiting variant features such as color or texture (assuming the basketballs and soccer balls in question have distinct enough colors and textures). Also, isn't the category (or any category) actually less sharply defined in cognition than we’re making it out to be? The Sun could still be called “a ball of gas” and nobody would flinch, as if the variant features, which are normally inhibited, were excited through context. If I asked most people what the invariant features of their category [ball] are, I bet they’d mention its spherical nature and its ability to be played with, but American footballs aren't spherical, while buckyballs cannot be played with. (See sub comment to continue)

    Replies
    1. (Continued)

      This might not necessarily make the problem of categorization any easier (I personally think it does), but it seems to allow for a larger role to be played by cell assemblies/Hebbian learning in category formation and categorization of things which can be seen as two sides of the same process of gradual association of category members to their respective category names.
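      A minimal sketch of the mechanism described above (the feature names and learning rate are invented, and real Hebbian/cell-assembly learning is of course far richer): each labelling episode with feedback nudges the weights between the label "ball" and the features of the thing, so features shared across praised examples accumulate strength while one-off variant features stay weak or get inhibited.

```python
from collections import defaultdict

weights = defaultdict(float)   # association strength: feature -> the label "ball"
RATE = 0.5                     # invented learning rate

def learn(features, reinforced):
    """Strengthen feature-label links on positive feedback, weaken them on negative."""
    for f in features:
        weights[f] += RATE if reinforced else -RATE

learn({"spherical", "rollable", "orange"}, True)             # basketball: praised
learn({"spherical", "rollable", "black_and_white"}, True)    # soccer ball: praised
learn({"spherical", "rollable", "yellow", "fuzzy"}, True)    # tennis ball: praised
learn({"spherical", "huge", "bright"}, False)                # the Sun called "ball": corrected

print(dict(weights))
# Shared (invariant) features like "rollable" end up with the most weight;
# one-off colours and textures stay weak, and "huge"/"bright" go negative.
```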


      Introspection Will Not Tell Us.

      “It is now history how Zenon opened our eyes and minds to these cognitive blind spots and to how they help non-explanations masquerade as explanations. First, he pointed out that the trouble with "picture in the mind" "just-so" stories is that they simply defer our explanatory debt: How did our brains find the right picture? And how did they identify whom it was a picture of? By reporting our introspection of what we are seeing and feeling while we are coming up with the right answer we may (or may not) be correctly reporting the decorative accompaniments or correlates of our cognitive functions -- but we are not explaining the functions themselves. Who found the picture? Who looked at it? Who recognized it? And how? I first asked how I do it, what is going on in my head; and the reply was just that a little man in my head (the homunculus) does it for me. But then what is going on in the little man's head?”

      If we can agree that positing “a little man in my head” incites an infinite regress, shouldn't we assume that questions such as “how did I recognize” are really shorthand for “how did my brain recognize”? Is this not similar to how we tend to speak of evolution by natural selection of genes within a gene pool? We use layman's language which cuts to the point, such as “beavers’ teeth grew to adapt to the thicker wood in the area” rather than “the genes within the gene pool which happen to positively influence the growth of the beaver’s teeth happen to have been selected through generations...and so on”.


      Turing Sets the Agenda.

      “So Searle simply proposes to conduct the TT in Chinese (which he doesn't understand) and he proposes that he himself should become the implementing hardware himself, by memorizing all the symbol manipulation rules and executing them himself, on all the email inputs, generating all the email outputs. Searle's very simple point is that he could do this all without understanding a single word of Chinese. And since Searle himself is the entire computational system, there is no place else the understanding could be. So it's not there.”

      Are we not leaving the pieces of paper with their undoubtedly uncountable number of rules out of the picture? It seems to me that, to run the thought experiment honestly, we can’t assume a human could actually explicitly memorize that many rules. Either Searle, who only understands English, can’t, and we simply assume the rules are external to him (on pieces of paper, in books, etc.), or we assume Searle can, but then Searle wouldn’t be human. Either way, it seems to me the crux of the thought experiment breaks down. In the latter case, there’s a lack of appeal to intuition since the hypothetical Searle has supra-human cognitive abilities, while in the former, Searle himself is not the entire computational system.

  8. Harnad argues that, to provide a viable explanation of what goes on in our minds, cognitive scientists should “scal[e] up the Turing Test, for all of our behavioural capacities.” The purely computation/email version of the TT won’t suffice, we need a “full robotic version of the TT, in which the symbolic capacities are grounded in sensorimotor capacities of the robot itself.”

    I think Harnad believes that the email version of the TT will not suffice because, contra Pylyshyn, he believes that cognition is not merely computation – there is a sensorimotor aspect (and perhaps other aspects?) of cognition too.

    In acknowledging the importance of sensorimotor capacities for cognition, Harnad perhaps moves cognitive science forward by moving it backwards. Pylyshyn notes that early efforts to imitate life were focused “primarily on the imitation of movements,” whereas the most recent efforts focus “on the imitation of certain unobservable internal processes.” Harnad claims that the unobservable internal processes aren’t the only ones that matter.

    But is imitation enough to provide an explanation? Would a full robotic version of the TT explain “how and why do organisms do what they do”? The robotic TT could provide a causal mechanism for behaviour, which insulates it from the criticism that Harnad levels at behaviourism. But Pylyshyn notes that there is a distinction between explanation and imitation, between emulating an algorithm and direct execution. He highlights that we must use “computational models as part of an explanation, rather than merely to mimic some performance.” Perhaps the robotic TT would provide one causal mechanism to explain behaviour, but how will we know if it is the one that explains the behaviour of organisms?

    Replies
    1. If my understanding is correct, in order to build the full robotic version of the TT one would not only have to know the series of input/output associations allowing imitation, but would also require a full understanding of the causal structure that enables humans to experience the wide range of unobservable states that may in turn affect behaviour. This full robotic version would thus not only be imitating behaviour but also computing non-observable states.

      On that note, while a full robotic version of the TT would most certainly be a better test of our understanding of cognition, I wonder how feasible it would be without prior theorizing and modeling of smaller-scale computations (e.g., building a system that can emulate responses to a discourse, achieve categorical perception, or carry out cross-situational learning). Is the argument that the TT will only be the tip of the iceberg, or is it intended to discourage smaller-scale attempts in favor of larger-scale ones?

  9. “But it [mediated symbol-grounding] won’t do for cognitive science, which must also explain what is going on in the head of the user; it doesn’t work for the same reason that homuncular explanations do not work in cognitive explanation, leading instead to an endless homuncular regress” (Harnad, 2009).
    Harnad illustrates how mediated symbol-grounding is inadequate for explaining cognition, as it does not explain what is really going on inside the brain of a person; it is vastly similar to the idea of the homunculus, which also fails to explain what is really going on. I think it is crucial for cognitive science to refrain from evasive conclusions such as the homunculus, as they do not answer the actual question of interest, which is what cognition really is. The idea of the homunculus reminds me of the “organism” in Woodworth’s Stimulus-Organism-Response (S-O-R) formula of behaviour; the “organism” mediates the relationship between the stimulus and response. Woodworth was referring to the “state” of the organism, which could be seen as the brain state of the organism when presented with the stimulus. This behaviourist theory, like the notion of the homunculus, may be too reductive. Although the S-O-R formula better accounts for cognitive mechanisms than Skinnerian behaviourism, it fails to explain what exactly happens in the “organism.” Biological data based on neuronal processes have also been used as a method of explanation, but they are again too reductive and disjointed from cognitive theories. I agree with Harnad that other non-homuncular functions (besides computation) should be pursued, such as dynamic functions that can explain cognition. Instead of relying on evasive homuncular theories, behaviourism, and biological data, it may be more interesting to look into dynamic processes.

  10. “Computation is rule-based symbol manipulation; the symbols are arbitrary in their shape and the manipulation rules are syntactic, being based on the symbol’s shapes, not their meanings. Yet a computation is only useful if it is semantically interpretable.” (Harnad, 2009, 7) In Pylyshyn’s (1984) article, the semantic level is the level of explanation of why certain thoughts/behaviours arise (57). In the creation of a cognitive architecture/computation, I am not quite understanding how, for example, codes of 0’s and 1’s would be able to translate into a whole system of thought processing without the use of a body which receives so much information. For example, if a musical piece (as a symbol) makes someone get up and dance, I would assume (without knowing the real process of this) that the music one feels through all the senses will arouse that person; their memories, tastes, and state of mind will also play a role. I am really confused about how this would be able to occur within a computed robot… It would be possible to make up a string of 0’s and 1’s to make the robot dance, but would that robot be dancing for the same reasons a human would?
    Also, Harnad (2009) explains that symbol grounding is “the ultimate question about the relation between the computational and the dynamical components of cognitive function” (6). Does the question I have asked above (about the human versus robot dancing) relate to the symbol-grounding problem?

  11. Harnad asks, “How can the symbols in a symbol system be connected to the things in the world that they are ever so systematically interpretable as being about: connected directly and autonomously, without begging the question by having the connection mediated by the very human mind whose capacities and functioning we are trying to explain?”

    This pinpoints a problem with the computational theory of cognition. Throughout life we are constantly learning; our vocabularies are expanding and we are having new experiences. These are things that we think about. If the mechanism that generates thought does so by computing, then the new things/goals/ideas in our lives need to be promptly connected with symbols that refer to them so that they can be manipulated by our brain-computer to form thoughts. The way we gain new symbol-meaning associations and the way existing symbol-meaning associations are maintained are unclear to me. Once we’ve done all the symbol manipulation necessary to generate a response to a question like “Who was your 3rd grade teacher?”, can we know what the symbol we’ve generated as the answer actually means if we don’t have some kind of symbol-meaning dictionary in our brains?

  12. "The root of the problem is the symbol-grounding problem: How can the symbols in a symbol system be connected to the things in the world that they are ever-so-systematically interpretable as being about: connected directly and autonomously, without begging the question by having the connection mediated by that very human mind whose capacities and functioning we are trying to explain! For ungrounded symbol systems are just as open to homuncularity, infinite regress and question-begging as subjective mental imagery is!

    The only way to do this, in my view, is if cognitive science hunkers down and sets its mind and methods on scaling up to the Turing Test, for all of our behavioral capacities. Not just the email version of the TT, based on computation alone, which has been shown to be insufficient by Searle, but the full robotic version of the TT, in which the symbolic capacities are grounded in sensorimotor capacities and the robot itself (Pylyshyn 1987) can mediate the connection, directly and autonomously, between its internal symbols and the external things its symbols are interpretable as being about, without the need for mediation by the minds of external interpreters."

    If I understand this part correctly, Harnad is arguing that a fully behavioural robot with perceptual and motor capabilities would overcome the symbol-grounding problem. Instead of the referential loop from symbol to meta-symbol that takes place in an ungrounded system, this new fully functional robot would have symbols grounded in its experience of the sensorimotor world. However, I hesitate to call the robot's sensorimotor capabilities 'experience', and I think this solution skirts the true nature of the symbol grounding problem.

    When the robot stands before a red wall, wouldn't it have to have a human-built "wavelength-color" module that says "Yes, this one is red based on the wavelength", and then isn't that more symbol-referencing to outside the robot, where the only ground is human experience? It seems to me the only way to build a robot that can act behaviourally is to translate our experience into symbols that we then hand over to the robot as part of its architecture. Perhaps a robot could be made with processes that take very few instructions and very few symbols from its human architects, but nevertheless, everything within the robot's intelligence would be an extrapolation of those symbols.
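    The worry can be made concrete with a toy version of such a module (the wavelength cut-offs below are rough textbook values, and the point is precisely that a human chose them): the robot's "red" is just a band a designer wrote down, i.e., more symbols grounded in our experience rather than in the robot's.

```python
def color_name(wavelength_nm):
    """A human-built 'wavelength-to-color' module: the cut-offs encode the
    designers' colour experience, not anything the robot has grounded itself."""
    if 620 <= wavelength_nm <= 750:
        return "red"
    if 495 <= wavelength_nm < 570:
        return "green"
    return "some other color"

print(color_name(650))  # "red" -- but only because we said so
```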

    The only way to finally ground the robot's symbols would be to build a consciousness that does its own feeling and moving. So until we solve the hard problem of consciousness, our robot's symbols will remain ungrounded, and no amount of imitation of human behaviour or capacity will change that.

  13. It seems Pylyshyn’s determination that, “… what was to count as cognitive was what could be modified by what we knew explicitly; what could not be modified in that way was ‘subcognitive,’ and the domain of another discipline,” goes too far in dismissing implicit processes as being separate from cognition. As Harnad writes, this “cognitive impenetrability criterion” puts too much stock in the ability of computation to (seemingly) account for many behaviors and processes. Whereas Pylyshyn pushes for separate entities, like a hardware/software distinction, Harnad’s argument that there is not a divide but different (i.e. dynamic) forces at play lends itself to explaining why the Turing Test failed Searle’s Chinese Room test. Cognition cannot be solely computational, but since cognitive science has not yet discovered the right combination of other processes at work, the field has not yet fully moved on from basing cognitive distinctions on the original Turing Test. While I agree with Harnad that cognitive science should “scale up” the TT for other behavioral capacities, I wonder how the field would actually go about doing so.

  14. “…the full robotic version of the TT, in which the symbolic capacities are grounded in sensorimotor capacities and the robot itself (Pylyshyn 1987) can mediate the connection, directly and autonomously, between its internal symbols and the external things its symbols are interpretable as being about, without the need for mediation by the minds of external interpreters.”
    I agree with the need for a fully robotic Turing Test; I believe it would move the field of cognitive science forward in a very substantial way – even if it doesn’t turn out how we expect. One thing that I remain skeptical about is whether it could ever be possible to create a robot that I (personally, though perhaps others share my view) could equate with a human mind. I am not about to consider the exact mechanisms (technology, engineering, etc.) of building a robot that functions at this capacity; rather, I question building a robot with a hoped-for answer in our minds. It would be programmed in a way that tries to emulate the human mind, and while this is a great starting point, I question whether it is even possible to confirm that the programmed robot is similar to a human mind (due to ethical implications, unless a different type of neuroimaging can tell us exactly how the human mental architecture would be programmed). I understand that scientific inquiry is required to create a test, but I am skeptical for this reason: if it IS possible to create a robot that compares to the human brain, it would be created by humans, and there may be an irremovable bias within the robot. A potential solution to this bias would be to create a robot that could then create another version of itself (a second robotic “mind”) which could then go through the TT.

  15. Harnad (2005) comments that Pylyshyn “went a bit too far” in his attempt to match computation and cognition, labeling everything that is not computational “subcognitive” (5). To expand on this idea: Pylyshyn is limited by the very model he is advocating. Just because “subcognitive” structures are currently outside our understanding does not mean they are not part of cognition. Our understanding of cognition at this time matches (mostly) with computation, but we could be missing part of the picture. In other words, our idea is currently half-baked. Pylyshyn should realize that at one point, the idea of a computer would have been completely inconceivable to people. An accurate understanding of cognition may still be far away from where cognitive scientists are now. In fact, it could be so far away that we don’t even realize that we are missing something. Just as computers were inconceivable at one point in time, the eventual understanding of cognition could be inconceivable at this time. A more advanced understanding of cognition could not only account for “subcognitive” processes within the framework of computation, but, perhaps more interestingly, provide a complete shift in framework that will rewrite how scientists look at cognition.

  16. I agree with Harnad (2005) when he says (of Zenon’s desire to separate the cognitive and the noncognitive) “I think his attempt to formulate an impenetrable boundary between the cognitive and noncognitive…. was not as successful as his rejection of imagery as non-explanatory, his insistence on functional explanation itself, and his promotion of computation’s pride of place in the explanatory armamentarium” (pg. 6). I too see particularly significant merit in Zenon’s rejection of imagery as non-explanatory and his insistence on functional explanation, but like Harnad I disagree that a definitive line can be drawn to separate the cognitive from the noncognitive. While Zenon “relegated everything that was non computational to the “noncognitive”” (Harnad, 2005, pg. 5), I do not believe that cognition and computation are synonymous, but rather that they are separate concepts that often share significant overlap of characteristics. While there are certainly many cognitive functions that could also be described as ‘computational’, this still fails to include the ‘x factors’ of cognition (i.e. the unexplained phenomena that separate human cognitive abilities from computerized abilities), which are not necessarily strictly “computational”, but are certainly “cognitive”.
    In addition, I believe that my unwillingness to accept an impenetrable boundary between the cognitive and noncognitive is mirrored in my unwillingness to accept a dualistic theory of the mind and body, which draws similarly unrealistic divisions. Both ideas seem too strict to be applied to the nature of humans, and I think I may be generally unwilling to accept “impenetrable boundaries” where human thought/cognition is concerned.

  17. (I see that someone else has discussed this quote already, but I am also going to talk about it. And hopefully I will bring something new to the table.)
    "Vocabulary learning … is not mere rote association: things are not stimuli, they are categories. Naming things is naming kinds (such as birds and chairs), not just associating responses to unique, identically recurring individual stimuli ... To learn to name kinds you first need to learn to identify them, to categorize them (Harnad 1996; 2005)." This is absolutely true according to contemporary language acquisition theories. In order for infants to successfully map novel words onto meanings (in other words, to map labels onto real-world referents), they tend to first generalize the stimuli to categories/kinds of things, rather than just specific, individual things. Additionally, word learning is subject to these biases:
    Mutual exclusivity: only one name can be applied to a thing.
    Whole object assumption: a new name refers to the whole thing rather than its part(s).
    Taxonomic constraint: names are used to group together things with the same internal properties, not with what’s attached to them externally.
    I am not a programmer myself, but I’d imagine it shouldn’t be too difficult to design a machine that can execute these three rules (and the rule of generalization, of course), which would be on the semantic/knowledge level of the classical view of computing and cognition. And voila! We have a machine that can learn new words…but not so fast. So far we have discussed names that refer to things with bodies, shapes, textures. Things that can be seen and/or touched. Things with physical referents. What about abstract things? Feelings, concepts, ideas. If machines cannot feel, or have a mental state, then how do they name abstract things? Sure they can define “sad” as “feeling sorrow”, “sorrow” as “deep distress”, and on and on it goes, until we run head-on into the symbol grounding problem. I agree with Harnad's point that "We cannot prejudge what proportion of the TT-passing robot’s internal structures and processes will be computational and what proportion dynamic. We can just be sure that they cannot all be computational, all the way down."
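    As a thought experiment in code, here is a minimal sketch of just the mutual-exclusivity bias (the lexicon and the objects are invented, and the other two biases are only gestured at in comments). It also shows exactly where such a machine stalls: nothing here touches words without pointable referents, like "sorrow".

```python
# Toy word learner applying the mutual-exclusivity bias.
# The known lexicon and the scene objects are invented for illustration.
lexicon = {"ball": "round_toy", "cup": "drinking_vessel"}

def learn_novel_word(word, objects_in_scene):
    """Mutual exclusivity: map a new word onto the object that lacks a name.
    (The whole-object and taxonomic biases would further constrain the mapping:
    take the whole unnamed object, and extend the word to things of its kind.)"""
    if word in lexicon:
        return lexicon[word]
    unnamed = [obj for obj in objects_in_scene if obj not in lexicon.values()]
    if len(unnamed) == 1:
        lexicon[word] = unnamed[0]
        return unnamed[0]
    return None  # ambiguous scene: this bias alone cannot decide

print(learn_novel_word("dax", ["round_toy", "whisk"]))  # -> "whisk"
# But what scene could we ever show it for "sorrow"? The symbol grounding problem again.
```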

  18. My first instinct was to address the information presented in the final section "Newton Still Available," but lots of other students have done that, so I'll mention something else, which may be a bit more tangential to the rest of the paper but is nevertheless important and worth addressing.

    Turing believed that if we could build a machine that could pass the Turing Test, we would have built a system that not only cognizes but that would help us understand how cognition works. And Searle presents an example of what such a machine would look like if all its components were embodied within his own person. What are the implications of such a machine for language, and what could we prove about language if we could build it? I'm thinking specifically of Chomsky's poverty-of-the-stimulus argument: young children aren't exposed to nearly as much language as we would expect them to need in order to learn as much language as they end up learning, and therefore there must be a Universal Grammar that they are born with. How would this UG come about in a Turing machine? Would it be something that was pre-programmed? But then, wouldn't the machine still need to be exposed to some natural human language, the way newborns are, in order to develop its own full-fledged linguistic ability?

    If we expect that building a Turing machine will reflect all the cognitive processes that humans embody, and can shed light on how cognition works, then, given that language is part of cognition, wouldn't we expect this human-like Turing machine to 'learn' language the same way humans learn language? And yet in all the examples I can think of, be it Searle-as-a-Turing-machine or the ever-entertaining Siri, I don't see how the machine has been through the same process that humans go through when acquiring language. On the other hand, you could argue that when the programmers build the machine, they fix glitches, and that in a sense is how the machine gets feedback to the point where it has a nearly perfect, marketable language ability; but I'm not sure to what extent that is analogous to the truly organic acquisition of knowledge that humans have. So I guess what I'm ultimately wondering is whether a perfect Turing machine, based perhaps on the scaled-up Turing Test that Harnad suggests, could shed as much light on language acquisition as it is hoped to shed on cognition in general, and whether it could settle once and for all the poverty-of-the-stimulus question.

  19. “The only way to do this, in my view, is if cognitive science hunkers down and sets its mind and methods on scaling up to the Turing Test, for all of our behavioral capacities.” (8)
    I agree with Searle that the email version of the Turing Test is insufficient, as it is only evaluated on its performance. As Chomsky noted, competence underlies performance with regard to language. It seems to me that in order to evaluate cognition and the human capacities a Turing Test is able to achieve, we cannot only look at performance, because the competence of the agent is just as important. This is where I fully agree with Harnad about developing a “full robotic version of the Turing Test, in which the symbolic capacities are grounded in sensorimotor capacities” (8). Although competence cannot ever be fully tested and compared to that of human agents, this would allow a better evaluation of competence, as one could observe the algorithms and symbol processing in order to determine the capacity the machine is able to achieve.
    The second issue I have with the Turing Test is the issue of consciousness, as this is such a central element of cognition. Until there is a way of testing consciousness experimentally within a machine, I do not see how a Turing Test can be an accurate test of cognition. I understand that there is no physical manifestation of consciousness in humans, but there are still sensory traces that show we are able to perceive and feel. It could be a huge advance in robotics if we are able to show the same types of traces in machines, because I think we would be one step closer to determining exactly what consciousness is and how to implement it in a machine. By this I mean not just the physical traces of the commands we give it and how it implements them, but that the machine is actually able to evaluate and feel from the environment – giving it a form of consciousness.

  20. “The only way to [ground symbols], in my view, is if cognitive science hunkers down and sets its mind and methods on scaling up to the Turing Test, for all of our behavioral capacities. Not just the email version of the TT, based on computation alone, which has been shown to be insufficient by Searle, but the full robotic version of the TT, in which the symbolic capacities are grounded in sensorimotor capacities and the robot itself can mediate the connection, directly and autonomously, between its internal symbols and the external things its symbols are interpretable as being about, without the need for mediation by the minds of external interpreters.”

    As shown by Searle’s Chinese Room Argument, the email version of the Turing Test is insufficient. Dr. Harnad, from your commentary, are you saying that the level of the Turing Test that would be sufficient for capturing cognition would be a robot that could act independently and live, essentially, as a human? (Except, of course, that the robot may not look like a human.) If so, I would have to agree that this is necessary, but is it sufficient? What about consciousness/the ability to feel? How can we actually test whether the robot can feel? We know what it is like for ourselves to feel, but is it even possible to test whether the robot knows what it is like to feel? (In fact, we don’t really even know what it is like for others to feel.)

    The idea of the “ideal” level of the Turing Test reminded me of the movie “Her”, where the male lead actually falls in love with an intelligent operating system, and many others in that society also befriend or have relationships with these operating systems. We have the impression that the OS in “Her” behaves in an incredibly human way, especially because she can change and, presumably, feel. However, we always wonder whether she can truly feel, or whether she is merely programmed to think that she knows what it is like to feel. If you think that you know what it is like to feel, is that sufficient, and is that the same as feeling?

  21. This comment has been removed by the author.

  22. Trying to reconcile the two following quotes from Cohabitation: Computation at 70, Cognition at 20 by Stevan Harnad:

    “Pylyshyn and Skinner were right in insisting that the details of the physical (hardware) implementation of a function were independent of the functional level of explanation itself.” (p. 5)

    “There are other candidate autonomous, non-homuncular, functions in addition to computation, namely, dynamical functions such as internal analogs of spatial or other sensorimotor dynamics: not propositions describing them nor computation simulating them … perhaps also real parallel distributed neural nets rather than just symbolic simulations of them” (p.8)

    It is trying to reconcile these two points that leads me to a certain degree of confusion. On the one hand, I understand that we cannot grasp cognition by introspecting, because introspection only describes how we do the things we already know how to do, and does not provide an explicit, testable measure for validating how we produce behavior. Furthermore, I see the behaviorist flaw in not explaining the mechanistic “how” of cognition. Yes, Skinner, in placing the load on physiologists, and Pylyshyn, who explains that the semantics and symbols used in software are not limited to specific hardware, both eliminate the physical domain from their respective theories. But how is it that the physical vessel of a cognitive being remains independent of the functional steps of cognition? The language of our brain requires electrical currents that flow through biological structures, in the same way that computers require electrical engineering in order to process binary. More so, these physical implementations (hardware systems) directly interact with the rest of the physical world and govern the way we perceive it. How, then, can dynamic sensorimotor processing or internal spatial analogs occur without a dependence on the hardware that is accumulating the sensory (both temporal and spatial) data?

    To my current knowledge, the extent to which one can perceive extra-personal space is limited by the ability of our nervous system to process incoming sensory information. In this way, perception is in some ways an incomplete depiction of reality, but a representation nonetheless, determined by all the information that our sensory organs manage to accumulate. To my still developing/limited mind (I approach this argument with some degree of humility), it seems intuitive that any semantic representation, including the dynamic ways in which we cognitively manipulate it, depends on our hardware’s perception of the world. In this way, I cannot see how non-computational methods of cognition are independent of their physical implementations. Although cognitive science tries to explain the functional steps necessary to explain how we cognize, it still seems like hardware must be a primary step in this chain in order to explain how further cognitive frameworks (including semantics and symbols) are derived.

  23. (part 1 out of 2)
    In "Cohabitation: Computation at 70, Cognition at 20", Steven Harnard explains several dominant views regarding to cognition: behaviorism, introspection and computation. It is in general a very clear and easy to understand article, in which Harnard explains and compares the differences among different views. In this article, I agree with Harnard most on that mind and body are separate like software and hardware of a computer, and I have doubts regarding to what Harnard says about the process of learning a language.

    On page 6, in the section "Computation and Consciousness", Harnad says, "But first, let us quickly get rid of another false start: Many, including Zenon, thought that the hardware/software distinction spelled hope not only for explaining cognition but for solving the mind/body problem: If the mind turns out to be computational, then not only do we explain how the minds works (once we figure out what computations it is doing and how) but we also explain that persistent problem we have always had (for which Descartes is not to blame) with understanding how the mental states can be physical states: It turns out they are not physical states! They are computational states." I also agree with the rest of the paragraph, but it would be too long if I quoted it all. Basically, Harnad says that mind and body are different, and I could not agree more. I also believe that Harnad makes a very good analogy between the concepts of software and hardware and the concepts of mind and body. Inspired by the distinction between mind and body, I noticed another distinction, between what we believe we think and what we actually think, or, to put it more clearly, between what we consciously believe and what we unconsciously believe; I think both are parts of cognition, but it is hard to distinguish between the two processes. For example, although most people show no racial or sexist biases in their self-reports or in easily observable behaviors in the lab, they might show increased activity in parts of the brain associated with feelings of threat or strong emotion when they see pictures of, or think about, people from a particular racial group or gender (Mitchell et al., 2009), which means that although they don't seem racist, they actually are, and I believe this is a very crucial aspect to look at.

  24. (part 2 out of 2)
    On page 3, Harnad discusses the learning processes for vocabulary, categories, and syntax. I recalled an example from my linguistics syntax class (sorry, I am unable to find the exact example that my professor gave us) in which a preschool child talking to his father made a grammar mistake; his father tried to correct the mistake and the son said he understood, but a moment later he made the same mistake again without noticing it. My linguistics professor was trying to make the point that grammar is not easy to learn and that children possess their own distinctive understanding of language before they really learn grammar at school. I recalled this example when reading page 3 of Harnad's article. He says, "The answer in the case of syntax had been that we don’t really “learn” it at all; we are born with the rules of Universal Grammar already in our heads." I believe what he says is correct, but meanwhile I wonder what, then, changes the way that children think about language; what makes them realize that how they speak is wrong? I really hope that I can get an answer to this question.

    Furthermore, I also want to comment on a sentence on page 7, where Harnad talks about symbols and their referents and says, "That’s fine for logic, mathematics and computer science, which merely use symbol systems. But it won’t do for cognitive science, which must also explain what is going on in the head of the user." I already said a similar thing in my first skywriting commentary, but I want to make my point again because this sentence supports my idea in some way. It suggests that logic, mathematics, and computer science are more fundamental and less complicated than human brains. Although human brains, or more specifically neural networks, are similar to a real computer network or a web in the way that neurons are connected with each other and send information back and forth, I believe human brains are much more complicated than computer networks or webs. Human brains are more likely to produce random or unpredictable results (or, I would say, the results that human brains produce are not always the same). Human brains have the ability to adjust their function and make changes, so that people's thoughts are continually improved and updated rather than remaining stable all the time.

    I apologize for being somewhat disorganized in my commentary, but the information in each article needs a lot of processing, and every time I wrote, new ideas kept popping up. I guess I will try to do better in the next skywriting.

    ReplyDelete
  25. "Can cognition be just computation?"

    I agree that cognition is more than just computation and that sensorimotor dynamics play a key role in shaping cognition. There is definitely an intuitive sense that behavior stems from an interplay between perceiving the outside world through our senses and internally processing information and experiences. While there is good functional purpose in studying how cognition and sensorimotor dynamics affect human behavior, developing some type of android that could pass the Turing Test still seems far from reach, given the complexity of human biology and behavior.

    ReplyDelete
    Replies
    1. I agree with your comment as well. Take consciousness, an important part of cognition: can consciousness really be computed? Moreover, how we analyze every situation comes down to the sum of our internal and external experiences. The complexity of the human mind seems too great for a TT to capture it.
      Moreover, the TT seems to completely bypass the central feature of consciousness, namely feeling. A perfect Turing machine could compute the right outputs, given the same inputs as a human, without feeling anything at all. It would therefore be cognizing (computing) without being conscious. So how are we to simulate the cognitive processes of a human brain without simulating one of the main aspects that influences all of those processes? I think consciousness plays a key role in cognition, just as our sensorimotor dynamics do.

      Delete
    2. So if T2 does not get at consciousness (feeling), does T3? Or T4? (That's why it's called the "hard problem." :)

      Delete
  26. "As we have seen, there are other candidate autonomous, non-homuncular functions in addition to computation,
    namely, dynamical functions such as internal analogs of spatial or other sensorimotor
    dynamics: not propositions describing them nor computations simulating them, but
    the dynamic processes themselves, as in internal analog rotation; perhaps also real
    parallel distributed neural nets rather than just symbolic simulations of them. "

    While I share many of the same doubts as you do about computationalism, I am having a hard time wrapping my head around the idea of the analog. The definition itself is simple, referring to a system modulated by continuous rather than discrete quantities. The problem for me is how an analog system can actually be implemented. I may just be misunderstanding the concepts, but in my attempt to build everything up from simple to complex I get hung up on the transition from digital to analog scales, and I am wondering (1) how to bridge the two and (2) what level of complexity is necessary for analog function.

    Setting aside the question of whether matter itself is discrete or continuous (or both), let us start with the brain. The smallest unit of change I can think of is the membrane voltage mediated by ion channels. Ion channels pass ions in discrete quantities (you can't pass half a sodium ion), which incrementally change the membrane voltage until an action potential occurs. The action potential itself is a discrete unit due to its all-or-none nature, and as such is encoded temporally. Action potentials release neurotransmitter in discrete vesicle packets, which affect ion channels, and then the cycle repeats. The neuron itself, of course, is a discrete and repeated unit as well.

    How can this system, made up entirely of discrete elements, perform analog operations?
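    One conventional (though partial) answer to this question is rate coding: a continuous quantity can be approximated to any desired precision by counting discrete, all-or-none events over a time window, so discreteness at the micro level need not rule out effectively analog variables at a coarser level. A toy sketch in Python (my own illustration; the stimulus values and bin counts are arbitrary assumptions, not a model of real neurons):

```python
import random

# Toy rate-coding sketch: a continuous stimulus intensity (0..1) is encoded
# entirely in discrete, all-or-none "spikes", yet the decoded spike rate
# approximates the continuous value.

def encode_as_spikes(intensity: float, n_bins: int = 10000) -> list:
    """Each time bin independently emits a spike (1) with probability = intensity."""
    return [1 if random.random() < intensity else 0 for _ in range(n_bins)]

def decode_rate(spike_train: list) -> float:
    """Decode: the fraction of bins containing a spike."""
    return sum(spike_train) / len(spike_train)

if __name__ == "__main__":
    for stimulus in (0.13, 0.5, 0.82):
        spikes = encode_as_spikes(stimulus)
        print(f"stimulus={stimulus:.2f}  decoded rate={decode_rate(spikes):.3f}")
```

    Whether such an approximation counts as genuinely analog, or merely simulates an analog quantity digitally, is of course part of the very question being asked.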

    ReplyDelete
  27. On the computational theory of mind, it seemed that if we kept digging (researching), we would eventually reach the core and finally understand the mechanism of cognition (for instance, in a robot that can pass the Turing Test). However, the eventual need to explain how a symbol becomes interpretable to us cannot be met by a purely computational model, since such a model cannot tell us how we interpret, or how we could reproduce, the links made between symbols and their meanings.

    Now this is where the new part came in for me: what else on earth could be doing it?

    “As we have seen, there are other candidate autonomous, non-homuncular functions in addition to computation, namely, dynamical functions such as internal analogs of spatial or other sensorimotor dynamics” (Harnad 2009, p. 8).

    I am interested in exploring the idea that dynamical analog processes happen in the brain, which serve to bridge the gap between symbols and their meanings. However, it is important to be conservative about how much information can be conveyed in a model. Just because we can construct analogical models does not mean that we have a complete understanding of what is being modeled.

    This idea of dynamical analog processes vaguely reminds me of mirror neurons, in the sense that a part of you (the cognizer) interprets so much information from the symbols it is processing that the event is understood as vividly as if it were one of your own experiences, because it is the dynamic process itself. In any case, there are still a lot of questions to ask and to answer.
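    To make the "gap between symbols and their meanings" concrete, here is a toy sketch in Python (my own illustration, not from the paper; the dictionary entries are invented): in a purely symbolic system, every symbol is "defined" only by further symbols, so chasing definitions never bottoms out in anything non-symbolic. Whatever grounds the symbols has to come from outside the symbol system, which is where the proposed sensorimotor dynamics would come in.

```python
# Toy illustration of ungrounded symbols: every "definition" is just more symbols.
# Chasing definitions only ever yields further symbols, never anything the system
# itself can connect to the world.

DICTIONARY = {
    "zebra": ["horse", "stripes"],
    "horse": ["animal", "mane"],
    "stripes": ["pattern", "lines"],
    "animal": ["organism"],
    "mane": ["hair"],
    "pattern": ["lines"],
    "lines": ["pattern"],      # circular: symbols defined by symbols
    "organism": ["animal"],    # circular again
    "hair": ["strands"],
    "strands": ["hair"],
}

def chase(symbol: str, steps: int = 8) -> list:
    """Follow definitions for a few steps; we never leave the symbol system."""
    trail = [symbol]
    for _ in range(steps):
        symbol = DICTIONARY[symbol][0]   # pick the first defining symbol
        trail.append(symbol)
    return trail

if __name__ == "__main__":
    print(" -> ".join(chase("zebra")))   # zebra -> horse -> animal -> organism -> animal -> ...
```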

    ReplyDelete
  28. "So what is still missing, then, if computation alone can always be shown to be noncognitive and hence insufficient, by arguments analagous to Searle's? Searle thought the culprit was not only the insufficiency of computation, but the insufficiency of the Turing test itself [...] There is still scope for a full functional explanation of cognition, just not a purely computational one." Searle circumvents the symbol-grounding problem (in only one certain system) instead of overcoming it; I would agree that following Searle's mandate of examining dynamics of the brain is not the best solution, nor even a viable one if cognitive science is to succeed in its full inquiry as to the comprehensive nature and extent of cognition. The proposed extension of the Turing test to a robot with sensorimotor capabilities satisfies the relationship of physical to semantic levels in a way that some iteration of our contemporary computers as intelligent machine cannot: Pylyshyn's "special kind of systematic way" of semantic transformation is now grounded in symbols that correspond to cognates of physical experience. This is a concise illustration of solving the hard problem in cognitive science—how to bridge the gap between computation and the full experience of consciousness to which we are subject; the scope of cognition, as it were.

    ReplyDelete
  29. "Introspection can only reveal how I do things when I know, explicitly, how I do them, as in mental long-division. But can introspection tell me how I recognize a bird or a chair as a bird or a chair?"

    If we accept Pylyshyn's three levels of computation and the mind (the “classic view”: semantic, symbolic, and physical levels), the problem with introspection becomes a lack of phenomenological access to the symbolic level. On the symbolic level are the symbolic expressions and the rules for manipulating them. In the case of doing long-division, we consciously apply the rule set on the semantic level. In cases where we are not consciously aware of the process of performing a computation, it seems to follow from Pylyshyn’s account that we are performing a computation according to rules stored at the symbolic level, to which we do not have phenomenological access.
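    Mental long-division is a useful contrast case precisely because its rule set is explicitly available to introspection and so can be written straight down as an algorithm, whereas no one can write down the steps they follow when recognizing a chair. A minimal sketch in Python (my own illustration, assuming ordinary schoolbook base-10 long division):

```python
def long_division(dividend: int, divisor: int) -> tuple:
    """Digit-by-digit long division, following the explicit schoolbook rules:
    bring down the next digit, see how many times the divisor fits,
    write that digit in the quotient, and carry the remainder forward."""
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)   # bring down the next digit
        quotient_digits.append(str(remainder // divisor))
        remainder = remainder % divisor           # carry the remainder forward
    quotient = int("".join(quotient_digits))
    return quotient, remainder

if __name__ == "__main__":
    print(long_division(7321, 6))   # (1220, 1), since 6 * 1220 + 1 == 7321
```

    Nothing comparably explicit can be written down, from introspection alone, for recognizing a bird or a chair; that is the asymmetry the quoted passage points to.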

    ReplyDelete
  30. I suspect I am confused by the definition of dynamics, but I take it to mean a system that varies according to its input in a changing way, rather than a fixed functional way, such that it is not well defined or understood. It seems to me that dynamics runs into some of the same issues as computation. Why would a function that I don't understand account for my behaviour any more than a combination of predictable symbols does? With regard to Searle: if I create a robot to handle all the dynamics and feed it the inputs from the Turing Test, then take the data it brings me, perform computation on it, and send it back, I pass the test but still haven't understood anything.

    ReplyDelete
  31. I’m having some trouble grasping the quick dismissal of computationalism as a support for mind-body duality. I know this is mainly because the subject was brushed off in light of the greater picture at hand. Regardless, it seems to me that the insignificance of the cognition-duality argument comes down to an imposed hardware-software or cognitive-vs-subcognitive barrier. But that barrier seemed perfectly natural – as natural as it is to understand a PC on both a dynamic and a computational level – before the introduction of the symbol grounding problem. I don’t quite follow why, specifically, the semantic interpretation of cognitive symbols makes this mind-body barrier any less natural. If we can accept, as Zenon did, that neural networks can be dynamic while still giving rise to computational cognitive processes, and if that, for the sake of argument, is enough to count as a description of true Cartesian duality, then why is it problematic to accept that the dynamic system designated by the symbol-grounding problem is equally conducive to dualism? Wouldn’t you just be redefining the barrier of “where the mind ends and the body begins”? Of course this becomes extremely circular, but it still gives one the sense that any computational account of cognition, even a partially computational one, really does comfortably support mind-body dualism.

    ReplyDelete
  32. "The only way to do this, in my view, is if cognitive science hunkers down and sets its mind and methods on scaling up to the Turing Test, for all of our behavioral capacities." (p. 8)

    This approach leads me to a number of questions. To what degree must one scale up the Turing Test such that all of our behavioural capacities are accounted for in a robot? Once we do that, how do we ensure that all the behavioural capacities are a good representation of human behaviour?

    It sounds as though we would have to build a robot that could blend into human society seamlessly. Suppose we were to create a robot capable of all our behavioural capacities but that looks like a robot. Even this perfectly-cognizing robot would fail the Turing Test almost immediately.

    Passing the Turing Test involves an element of deception quite separate from the element of cognition. In the email test, this is not a large issue since a computer sits between the robot and the judge to protect the identity of the robot. In a scaled up test, however, we must also confirm that the robot can do many other tasks (like walking, yodelling, and crying) such that it is impossible for a personal computer to form a barrier between the robot and the judge. Can the Turing Test be scaled up while still protecting the identity of the robot?

    One may argue that giving the robot an exterior human appearance might protect its identity. But how will it pass the Turing Test if it scrapes its knee? What if it gets stabbed? Must we add internal organs to the robot? At what point does this robot become human?

    We must determine whether the Turing Test scales well before assuming it will work for all of our human behaviours. As it stands, we still can't reliably fool large numbers of humans for extended periods with the email test, so it is hard to determine how scalable the Turing Test is.

    ReplyDelete
  33. I find it hard to grasp an understanding of cognition and of our ability to categorize and act on stimuli without first focusing on the physiology behind it all. Although research into the neural processes behind such complex mental tasks is intensely complicated, and perhaps nearly impossible at this moment in time, I cannot help but wonder, all the while reading these logical explanations (although logic itself, a human construction, is arguably subjective, biased, and limited), whether any of it can be accepted with confidence if it does not have neuroscience at its core. For example, consider this statement from the discussion of Chomsky's lesson regarding vocabulary learning:

    Even “individuals” are not “stimuli,” but likewise kinds, detected through their sensorimotor invariants; there are sensorimotor “constancies” to be detected even for a sphere, which almost never casts the identical shadow onto our sensory surfaces twice.

    Is there evidence that the detection of stimuli is significantly governed by their sensorimotor invariants? Is there a clear neural pathway showing that all sensorimotor experience of chairs leads to one specific area of the brain (involved in memory) devoted to the common properties of what a chair is? Even if this has been shown to be true, I can't help but ask myself the same question about each theory.
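    As one way of picturing what a detectable sensorimotor invariant could even be (this is only a toy illustration with invented numbers, not evidence about any neural pathway): the same sphere projects silhouettes of different apparent sizes on every viewing, yet a shape property, every silhouette point lying at the same distance from the centre, holds across all of those projections, and a detector tuned to that property categorizes correctly despite the variation. A sketch in Python:

```python
import math
import random

# Toy sketch: a sphere of fixed radius R, viewed from random distances, projects
# silhouettes of different apparent sizes. The size varies on every exposure,
# but a shape invariant (all silhouette points equidistant from the centroid)
# holds every time -- a property a categorizer could latch onto.

R = 1.0  # true sphere radius (arbitrary units)

def project_silhouette(distance: float, n_points: int = 100) -> list:
    """Apparent silhouette: a circle whose radius shrinks with viewing distance."""
    apparent_r = R / distance
    return [(apparent_r * math.cos(t), apparent_r * math.sin(t))
            for t in (2 * math.pi * k / n_points for k in range(n_points))]

def is_circular(points: list, tol: float = 1e-6) -> bool:
    """Invariant detector: every boundary point lies at the same distance from the centroid."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    dists = [math.hypot(x - cx, y - cy) for x, y in points]
    return max(dists) - min(dists) < tol

if __name__ == "__main__":
    for _ in range(3):
        d = random.uniform(2.0, 10.0)   # a different viewing distance each time
        silhouette = project_silhouette(d)
        print(f"distance={d:.2f}  size varies, circular invariant holds: {is_circular(silhouette)}")
```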

    ReplyDelete
  34. As I progressed through Harnad’s Cohabitation: Computation at 70, Cognition at 20, I was struck by the response to Searle’s Chinese room argument. I completely agree with Searle when he claims that the Turing Test (over email) does not explain cognition as it only considers the performance of the system. However, I question the following statement, “Searle’s very simple point is that he could do this all without understanding a single word of Chinese. And since Searle himself is the entire computational system, there is no place else the understanding could be. So it’s not there.”

    Is Searle the entire computational system? Is his competence the only one being considered? Perhaps Searle is simply one component of the entire computational system, and therefore his competence alone does not explain cognition. One must consider the entire system in order to evaluate its competence (i.e., the instructional manual leading to the memorization of the Chinese symbols, the email outputs…). This idea may seem abstract, but my point is that the system as a whole could have reached a level of competence that goes beyond Searle himself. Although he (being one component of the entire system) does not understand Chinese, maybe the model as a whole does.

    ReplyDelete
  35. I strongly agree with Harnad's (2009) comment: “One was the implication that words and propositions were somehow more explanatory and free of homuncularity than images. But of course one could ask the same question about the origin and understanding of words in the head as of the origin and understanding of pictures in the head.” Although Pylyshyn’s critique of visual imagery has been praised in the past, Harnad raises a good point. What makes words or propositions so different from images? Moreover, when thinking of words or numbers, an image of each object does occur in your head. When adding numbers mentally, each number is imagined in order to complete the addition. The same goes for symbols: visual imagery is involved in the processing of symbols. The little homunculus is therefore still in charge here. Yet, as stated, the homunculus does not explain the mechanisms behind this imagery, which is the key point we are after. For the homunculus to disappear, we would have to find a way to turn such visual imagery into a computational explanation. Would that be possible?

    ReplyDelete
    Replies
    1. For the homunculus to disappear, we have to reverse-engineer a mechanism that can do what we can do. It can do internal rotation, if need be, but whether (and why, and how) it feels like something to rotate internal images -- or, for that matter, to view (as opposed to just detect, process and respond to) external images -- is the "hard problem" (of consciousness).

      Delete
  36. How can we account for differences in the ways individuals cognize? Perhaps some people visualize addition or multiplication in their heads, or imagine time passing linearly, or the seasons arranged in a circle. How can reverse-engineering account for differences in the ways people think, observe, imagine, and feel? This is not the other-minds problem of what others are feeling and whether they are feeling the same thing; I am assuming only that there are at least minor differences in how we represent things in our heads, in how we think and observe. Would a reverse-engineered mechanism be general enough to account for these differences? And if so, would the reverse-engineering still count as successful? Or are these differences in how people cognize located not at the computational level but at the dynamic level (output)?

    ReplyDelete
  37. As someone who considered themselves a behaviorist, I found that reading this article poked holes in many of my beliefs and showed me that the questions Skinner tried to answer only opened the door to more questions. Expressing progress (learning, for example) in behaviorist terms is not enough. Saying that we became capable of doing something because we did it once and were rewarded for it may account for future success and skill acquisition, but it raises further questions: why did we do the original good thing in the first place? How are we able to interpret a response as a reward? Nothing is really gained by taking a behaviorist stance.
    At the very least, it was progress in that behaviorism was based on some kind of observable action rather than introspection. The question of how we can do what we do won't be answered by an academic sitting in an armchair. Behaviorism tried to take action against that, but unfortunately it wasn't the right kind of action.

    ReplyDelete