Saturday 11 January 2014

3b. Harnad, S. (2001) What's Wrong and Right About Searle's Chinese Room Argument?

Harnad, S. (2001) What's Wrong and Right About Searle's Chinese Room Argument? In: M. Bishop & J. Preston (eds.) Essays on Searle's Chinese Room Argument. Oxford University Press.



Searle's Chinese Room Argument showed a fatal flaw in computationalism (the idea that mental states are just computational states) and helped usher in the era of situated robotics and symbol grounding (although Searle himself thought neuroscience was the only correct way to understand the mind).

54 comments:

  1. I'll only dwell on a pretty small issue here, partly because it’s hard to find big issues when doing a short critique of a short critique, and partly because I mostly agree with what you’re saying.

    “For although we can never become any other physical entity than ourselves, if there are indeed mental states that occur purely in virtue of being in the right computational state, then if we can get into the same computational state as the entity in question, we can check whether or not it's got the mental states imputed to it. This is Searle's Periscope, and a system can only protect itself from it by either not purporting to be in a mental state purely in virtue of being in a computational state -- or by giving up on the mental nature of the computational state, conceding that it is just another unconscious (or rather NONconscious) state -- nothing to do with the mind.”

    Supposing that “Searle’s Periscope” is conceivable, which it is for computationalists, is it really all that bad? You presuppose that it would be a terrible consequence for the credibility of computationalism if it managed to brush aside the problem of other minds, but there’s nothing inherently inconsistent about a theory of cognition doing so. Indeed, the enormous difficulty of getting a pair of cognizers into identical computational processes is worthy of a result of this magnitude. If “A” were emulating “B’s” mind by matching computation, the hardest piece of the bullet to bite would be how A would perceive itself to actually be “B” while their computations matched. However, loss of ego has been documented countless times, so the perception of being something else really isn't as far-fetched as it sounds.

    Replies
    1. I have no problem with Searle's Periscope. He's right! If someone claims that cognition is just computation and that a T2-passer can think, Searle can use the crucial properties of computation -- that it is just symbol-manipulation, and that the programme has to be physically implemented but the details of the physical implementation do not matter (if the right programme is executing, it will think) -- to prove that's false: he can memorize and execute the programme himself, thereby becoming "the system" -- yet not understanding Chinese.

      So neither does that computer over there, also passing T2 by executing the same programme.

      That's the power of Searle's Periscope on the other-minds problem.

      But it only works for T2, not T3 (or T4).

      And only for pure computation, not a dynamic or a hybrid computational/dynamic system.

    2. "He can memrize and execute the programme, thereby becoming "the system" -- yet not understanding Chinese."

      I cannot help but think that the bigger issue is that he would not have the feeling of understanding, regardless of whether or not there is such a thing as understanding. One could argue that feelings are not always correct: just because we feel like we understand English does not mean we have any more understanding of the language than a computer (it does not even mean that understanding is a real thing). Likewise, Searle seems to believe that, as a human, he has some divine entitlement to understanding things (i.e., knowing more about the meaning of symbols than their shape, relationship to other symbols, and rules for manipulating them) instead of just feeling like he understands things.

      For all we know, we might just build semantic matrices in our minds the same way a computer would build semantic matrices in order to know how to label an image with a descriptive sentence or translate articles. These could be entirely implementation-independent, but then the real issue still remains: where does the feeling of understanding come from? Do computers feel like they understand too?
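      To make the "semantic matrices" idea concrete, here is a minimal sketch (an editorial illustration, not from the comment or the paper, and only one possible reading of the phrase): a word-by-word co-occurrence matrix with cosine similarity between its rows. Everything in it is bookkeeping over symbol shapes and frequencies, which is exactly why it leaves the question of where the feeling of understanding comes from untouched.

```python
# Minimal sketch: a "semantic matrix" as a word co-occurrence table.
# Pure symbol bookkeeping -- no grounding, no feeling of understanding.
from collections import defaultdict
from math import sqrt

corpus = ["the cat sat on the mat", "the dog sat on the rug"]  # toy corpus

# For each word, count which other words occur in the same sentence.
cooc = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j, c in enumerate(words):
            if i != j:
                cooc[w][c] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# "cat" and "dog" come out similar purely because their symbols co-occur
# with the same other symbols; nothing here knows what a cat or a dog is.
print(round(cosine(cooc["cat"], cooc["dog"]), 2))
```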

  2. Let me clarify that I’m not positing that computationalism will lead us to London, simply that Searle’s CRA is wrong in what it is attempting to prove. I consider computation and its hardware/software distinction to be a rather important source of inspiration for thinking about cognition, so, for now, I only have faith in Weak AI. On to the disagreements...

    “Now just as it is no refutation (but rather an affirmation) of the CRA to deny that T2 is a strong enough test, or to deny that a computer could ever pass it, it is merely special pleading to try to save computationalism by stipulating ad hoc (in the face of the CRA) that implementational details do matter after all, and that the computer's is the "right" kind of implementation, whereas Searle's is the "wrong" kind. This just amounts to conceding that tenet (2) is false after all.” (Harnad)

    I’m not sure if you’re referring to the Systems Reply when you speak of “special pleading to try to save computationalism by stipulating ad hoc (in the face of the CRA) that implementational details do matter after all, and that the computer's is the "right" kind of implementation, whereas Searle's is the "wrong" kind”. If so, this seems to completely miss the point of the Systems Reply, which isn't about how the program is implemented, but rather what physically encompasses the implementation itself. If this isn't referring to the Systems Reply, well then, oops… maybe someone can clarify.

    ‘By the same token, it is no use trying to save computationalism by holding that Searle would be too slow or inept to implement the T2-passing program. That's not a problem in principle, so it's not an escape-clause for computationalism. Some have made a cult of speed and timing, holding that, when accelerated to the right speed, the computational may make a phase transition into the mental (Churchland 1990). It should be clear that this is not a counterargument but merely an ad hoc speculation (as is the view that it is all just a matter of ratcheting up to the right degree of "complexity").’ (Harnad)

    It might not be a problem in principle when it comes to considering that computational states are mental states. But in practice, those computational states wouldn't seem like mental states, and isn't that really how we recognize mental states? If I choose to read the reply charitably, the issue of speed and complexity does play an essential role in the CRA. Searle’s reply to the Systems Reply relies on our relatability to the Searle who wouldn't be understanding Chinese while executing the seemingly arbitrary instructions. The problem here is the hypothetical Searle being unrelatable, since he’s super-human in that he memorized an enormously large number of rules which he then applies fast enough to keep up with the Chinese interrogator who is communicating at a real-time human pace (since we assume the Turing Test is being passed in the CRA). We categorize things in our world based on resemblances (invariant features) and so, just as an extremely slow-moving group of water molecules (such as an iceberg moving very slowly across land) might not be considered a river, a slow implementation of a program wouldn't be human thinking from our perspective as fast-river-like-thinkers.

    (Continued below)

    Replies
    1. (Continued)

      “This decisive variant did not stop some Systematists from resorting to the even more ad hoc counterargument that even inside Searle there would be a system, consisting of a different configuration of parts of Searle, and that that system would indeed be understanding. This was tantamount to conjecturing that, as a result of memorizing and manipulating very many meaningless symbols, Chinese-understanding would be induced either consciously in Searle, or, multiple-personality-style, in another, conscious Chinese-understanding entity inside his head of which Searle was unaware.” (Harnad)

      This argument really makes clear where I disagree with Professor Harnad. Unlike Professor Harnad, my incredulity makes me question the Intuition Pump itself, not just the Systems Reply. In fact, the Systems Reply generously allows for the incredible assumptions Searle makes in order for his CRA to get off the ground and then rebuts it on Searle’s own fantastical terms. Using the seemingly unrealistic implications of the Systems Reply (“Chinese-understanding would be induced either consciously in Searle, or, multiple-personality-style, in another, conscious Chinese-understanding entity inside his head of which Searle was unaware.”) to deny the Systems Reply seems to miss the point. The source of the insanity is actually Searle positing a Super-Searle to begin with who could memorize all those rules. If we allow for that, the Systems Reply is on point. I’d rather avoid making inferences based on ridiculous axioms, so let’s look back at what is being taken for granted, namely Super-Searle and our relatability to his lack of understanding Chinese, and question that.

    2. 1.
      No, when I say it is "special pleading" I mean it is arbitrary and ad hoc to say the computer implementation of the same T2-passing code would be the "right" implementation whereas Searle's implementation would be the "wrong" implementation.

      That's certainly not the same thing as the System Reply, which says that Searle is the right implementation, but that Searle himself, and his report that he does not understand Chinese, is only part of the "System" that would really be understanding the Chinese: That was hardly possible when Searle was supposed to be reading the programme off the wall, with the "room" and everything in it, including Searle, being the System doing the understanding.

      But once Searle memorizes the programme, so everything is inside him, then Searle is the whole system -- and if he's not understanding Chinese, there's no one else in there understanding it either.

      (And remember that Searle's Argument (and Periscope) only works because it feels like something to understand Chinese. And Searle would know that he does not feel that feeling of understanding those squiggles and squoggles. And there's no one else home!)

      So that's when (some) computationalists tried the come-back that, well in that case, the Searle implementation is somehow the "wrong" implementation....

      No, I don't think either timing or complexity is relevant, and especially not the time it would take Searle to memorize the programme (how long should it take? And what difference does it make to the Argument? The TT-passing code is imaginary; why can't learning it be imaginary too?).

      The time it takes to execute the memorized computations would have a better chance at plausibility and relevance, if it weren't that email is not a real-time conversation but an offline one. (Real-time audio conversation is getting closer to T3.) Timing certainly matters to T3, but T3 is immune to Searle! And it can't be just computation.

      And "complexity" is about as vague as one can get! How complex does a computation have to be to become felt? Is it a sudden phase transition?

    3. 2.
      It is true that the Chinese Room Argument consists of conjectures and counter-conjectures. The computationalists conjecture (1) that T2 could be passed by computation alone and (2) that the system would understand. That's the opening premise, and that's what's on trial in the trail of inferences and counter-conjectures that follow: Searle infers that if (1) and (2) were true, then he could execute the computations and not understand. Computationalists counter-conjecture that then the "System" consisting of the programme on the wall would be understanding, even though Searle was not. Searle counter-conjectures that even if he memorized the programme he would still not be understanding. Computationalists counter-conjecture that in that case there would be another mind inside Searle, understanding...

      Now weigh the conjectures and counter-conjectures and assess which steps are plausible, given the first premise (computationalism can understand via T2 alone), and which ones are just increasingly ad hoc special pleading in order to save the original computationalist premise, come what may...

      If you are not convinced, forget about T2 and Chinese and imagine someone who has memorized the rules for encrypted tic-tac-toe: He just learns the squiggle-squoggle recipe, not that he is playing tic-tac-toe. He has time to memorize and execute the computations, but he does not understand what he is doing (see the sketch at the end of this comment).

      But it's just more of the same for T2 Chinese (except that the computationalist premise that T2 could be passed computationally at all, unlike the premise that someone could learn and play encrypted tic-tac-toe, is probably wrong; hence its conclusion that the System would understand is probably just sci-fi -- so Searle will never need to be fast enough to learn all those extra squiggles and squoggles...)
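      To make the encrypted tic-tac-toe case concrete, here is a minimal sketch (an editorial illustration; the tokens and the tiny rule table are invented, and a real table would have to cover every legal sequence): the rule-follower consults a lookup table over opaque tokens, and nothing it does requires knowing which game those tokens encode.

```python
# Minimal sketch: "encrypted" tic-tac-toe as pure symbol manipulation.
# (Hypothetical encoding: each token secretly names a move, but nothing
# below depends on that.)
RULE_BOOK = {
    ("#a1",): "#q3",               # if this squiggle arrives, emit that squoggle
    ("#a1", "#q3", "#b7"): "#c2",  # longer squiggle histories map to replies too
    # ... a full table would enumerate every legal sequence of tokens
}

def rule_follower(history):
    """Return the reply the table dictates for this exact history of tokens."""
    return RULE_BOOK.get(tuple(history), "#z9")  # "#z9": arbitrary default token

# The follower just matches shapes and emits shapes; that it is thereby
# "playing tic-tac-toe" is invisible from inside the rule-following.
print(rule_follower(["#a1"]))                # -> #q3
print(rule_follower(["#a1", "#q3", "#b7"]))  # -> #c2
```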

  3. What’s right: Harnad pinpoints the assumption of computationalism which Searle’s thought experiment reveals as misled: the independence of software and hardware. Indeed, by giving Searle all the symbols and manipulation rules of Chinese, we are effectively giving him the software of a “speak-Chinese” program, and yet we would not consider him as understanding Chinese (e.g. he wouldn’t laugh at Chinese jokes, nor be able to go and order a salad on his own, etc.). The point is that when we use words to refer to things in the world, we are implicitly talking about the relationship between us and that thing, a relationship that was established through lived experience. This is true of tangible things, the understanding of which should be accounted for before we move on to more abstract concepts. And so an arbitrary symbol cannot stand for something in the world on its own: there needs to be an account of the relation between the subject and the thing (or between the symbol and the thing), a causal connection of some sort that will involve the hardware. And since the hardware is a dynamical machine, then cognition cannot be all computational (i.e. a discrete machine).

    What’s wrong: Searle was wrong to assume that his thought experiment precluded any part of cognition to be computational. If in addition to learning all the Chinese symbols and rules for their manipulation he also learned by heart an English-Chinese dictionary, then he could use Chinese as spontaneously and world-referentially as he does English (with some culture-related caveats). Then we could say that his competence and performance (read cognition) of Chinese is subserved mostly by symbol-manipulation.

    While I mostly agree with the criticism that computationalism does not account for, or undermines the relevance of, the relationship between symbols and their referents in the world, I think we are going too far in assuming that a sensorimotor relationship fully accounts for what we mean when we say “feeling”. [Am I right in saying that Pr. Harnad considers the presence of this sensorimotor relationship between world and person/T3-robot as sufficient for assuming that “feeling” is happening (although complete certainty is ruled out by the other-minds problem)?]

    If so, then what is missing is an account of affect: the sense that some things are good and others bad. This sense is arguably much more important than being able to account for the ability to categorize things as to what they are: first worry about whether it’s going to kill you, then worry about whether it’s red or blue. If an interest in self-preservation is the prerequisite for action, and eventually intelligent action, then we will stop looking for feeling and consciousness in robots.

    Replies
    1. It's simpler than that. Searle was right that cognition cannot be all computation. He was wrong that cognition could not be computation at all.

      But grounding symbols in sensorimotor (robotic) capacity -- i.e., grounding T2 in T3 -- is not necessarily enough to ensure that the T3-passer really understands. It feels like something to understand, and because of the other-minds problem, it's impossible to know for sure that anyone or anything else understands (or feels anything at all). (Affect is just another one of those feels-like-somethings that you can never know for sure about.)

      So Turing's message is this: Turing-Testing is all we have -- and all we ever had -- for either our everyday mind-reading or cogsci's reverse-engineering.

      So if you can't tell someone or something apart from someone with a mind, don't deny of one what you affirm of the other. The TT is the best you can do.

      Yet you can do better than T2: T3, or even T4.

      But computationalism can't.

  4. “Searle was also over-reaching in concluding that the CRA redirects our line of inquiry from computation to brain function (2001)”.

    I agree that Searle was perhaps going too far when he argued that some particular biological structure was necessary for intentionality (or consciousness). This seems to be in total opposition with the second tenet of the “strong AI” believers; that a computational state is implementation-independent. In kid-sib terms, this means that there must be some physical form in order to create human-like cognition, but the exact details are irrelevant.

    To me, these two arguments appear to sit on extreme opposites, with one claiming that the physical details of the implementation are not important, and the other one claiming that such details are crucial (and that they also need to respect a certain biological structure).

    It seems that Harnad clings to the weak equivalence of cognition, in the sense that there might be other ways of “reverse-engineering cognition without constraining us to reverse-engineering the brain” (2001). There may be different algorithms for the same input and output (in other words, ways to have understanding and consciousness exactly like humans do, without resorting to the brain). However, I do believe that reverse-engineering the brain might be a crucial step in order to reverse-engineer cognition in other ways.

    Replies
    1. It's not clear that all features of brain function are relevant to cognition. And it's certainly not clear that the brain is just executing algorithms.

  5. I guess I unwittingly summarized most of this article in my skywriting for the original Searle paper; basically saying how the CRA supports the ‘Granny’ belief that “we are not (just) computers.” This is something I readily believe; it’s why we need both neuroscience and philosophy, computer science and psychology, in order to try to understand cognition. One claim made by Harnad that I did not understand was that “Searle’s … wrong that an executing program cannot be PART of an understanding system.” Based on my reading of Searle’s paper, I would not have guessed he would have made this claim - anyone can see that humans are capable of making computations, in simple mathematical calculations if nothing else. Furthermore, Searle’s Periscope alone proves that Searle must believe computation is a part of human cognition - it shows that a human CAN do computation, in the form of manipulating Chinese symbols, and that this is indeed an aspect, at least, of cognition - so I’m surprised that Harnad would interpret Searle’s position to mean that executing a program like this can’t be part of an understanding system.

    I wanted to think aloud a bit more on Searle’s Periscope - if I understand correctly, the whimsically accurate title describes the way in which Searle has bypassed the other minds problem (as well as Descartes’ cogito argument?) and still concluded that computation is not cognition. But what else can we learn from the periscope? Does it show that the hard problem is impossible to solve?

    Replies
    1. Of course Searle would not and could not deny that people compute. But I doubt that Searle thinks that even our brain's ability to compute (manipulate symbols) is generated by computation. Maybe he is wrong about that. A computer's ability to compute is certainly generated by computation. But being able to compute is just one part of our cognitive capacity. And we know that the way we do maths is not just computational (symbol manipulation); language even less so.

      Searle's Periscope can only penetrate the other-minds barrier for the special case where we suppose that a mind is just a computer program and that the same mental state will happen in every implementation of that program. Searle shows that for the mental state of understanding Chinese, it would not.

      That only works for computation, because only computation is implementation independent in that way. It doesn't work for T3, because dynamics are not implementation-independent, the way software is independent of hardware. So Searle can't become "the system" and report there's no understanding going on there.


  6. When Harnad restated the three propositions from Searle’s Chinese Room Argument, questions about language acquisition surfaced. It would be interesting to see if learning software can be implemented so that a machine can universally learn any language based on the input. Linguists have various theories about how language is acquired by all children, yet I have rarely discussed or read any papers in my classes about reverse-engineering the acquisition process itself.

    “Whatever cognition actually turns out to be -- whether just computation, or something more, or something else -- cognitive science can only ever be a form of "reverse engineering" (Harnad 1994a) and reverse-engineering has only two kinds of empirical data to go by: structure and function (the latter including all performance capacities).” – Harnad

    Language is tied in with our cognition and it is debated whether language has an influence on our understanding and perceptions of the world. Creating software that can replicate findings in studies would give us more concrete ideas about language acquisition itself. It seems to me that there are several ideas and theories about how language acquisition occurs, but nothing is decided. However, there would be limits, as “the CRA shows that cognition cannot be ALL just computational, it certainly does not show that it cannot be computational AT ALL.” Using these computational models would not completely capture the ideas, but it would provide a way to study language acquisition in a more concrete manner and allow us to learn from it (I guess this would be ‘weak AI’?).

    Replies
    1. Computer programs so far are not too good at language-learning. (Of course they're light-years from passing T2 too.)

  7. "
    The synonymy of the "conscious" and the "mental" is at the heart of the CRA (even if Searle is not yet fully conscious of it -- and even if he obscured it by persistently using the weasel-word "intentional" in its place!): Normally, if someone claims that an entity -- any entity -- is in a mental state (has a mind), there is no way I can confirm or disconfirm it. This is the "other minds" problem. We "solve" it with one another and with animal species that are sufficiently like us through what has come to be called "mind-reading" (Heyes 1998) in the literature since it was first introduced in BBS two years before Searle's article (Premack & Woodruff 1978). But of course mind-reading is not really telepathy at all, but Turing-Testing -- biologically prepared inferences and empathy based on similarities to our own appearance, performance, and experiences. But the TT is of course no guarantee; it does not yield anything like the Cartesian certainty we have about our own mental states."

    Regarding the other minds problem in general, and how we solve it: the behavior of other humans and animals can best be made sense of if we allow/assume similar mental experiences for them. In these cases we know their behaviors are emergent from their biology and chemistry, which we also know is not the case with computers. The sources of "cognition" are inherently different.
    I suppose what I'm getting at is that we know a calculator does not feel, and not simply because its behaviors are unlike those of a human. A dog's behaviors are different from a human's, and we still know/assume it feels. But at some point with computers/machines, when their behaviors are sufficiently human-like (such as when they pass a version of the Turing Test), we allow for the possibility of their feelings. But if we know the TT machines' behaviors are based on computations, can we not exclude them from the other minds problem, assuming a certainty of understanding of the programming behind their behaviors?

    Replies
    1. Have a few months' conversation with a pen-pal you never see, and see if you can probe his feelings. Unless he's severely autistic, you will probably succeed. To pass T2, a computer would have to be able to do that too.

      Dogs don't talk, so with them there's only the equivalent of T3. And, yes, they, and just about all mammals (and also birds) are sufficiently like us that we can use T3 to infer that they feel.

  8. " There was even a time when computationalists thought that the hardware/software distinction cast some light on (if it did not outright solve) the mind/body problem" (Harnad)

    What is striking to me is that whom we are willing to "allow/assume similar mental experiences" (Andras) has varied over time. We now find it outrageous, but other species, and even members of our own species with different skin tones, genders, or ancestors, have at times been assumed NOT to have similar mental experiences. This brings into question the very nature of our assessments of what consciousness, mindfulness and thinking are.

    How can we be sure that we, here and now, "know" what "understanding" is, what "consciousness" is and especially, how can we be so sure that we know what constitute "cognition"?

    Replies
    1. I agree with your comment absolutely, ab! I was wondering the same thing, in particular with our last class’s conversation, when we spoke about how simulated reality will never be able to have the exact same characteristics as a dynamic reality -- in particular, that simulated humans will not have the dynamic property of feelings. At the level of T4 I don’t see how this would be so: if one can feel both endogenous and exogenous properties, how could it be that it would not be able to show feelings if it in fact does feel (even if it’s feeling at the level of a computation)? On another level, would what is felt actually be the same feelings a human feels within the experience of an everyday lived life? (For example, if a T4 human were made to acknowledge hunger from their endogenous percept of their stomach churning, would this in fact make them go get food, or would they just stay there and stay hungry, or, better yet, just not understand that feeling?)
      Mental experiences are different amongst different eras, people, and even within the same person at different stages within their life. And exactly as ab states, these different layers bring up the question of how we assess the nature of consciousness, mindfulness, and thinking.
      In Searle’s video on consciousness, he brings up multiple errors that are made in the current practices of studying consciousness. One that he mentions is that a clear distinction is missing between the epistemic/ontological and subjective/objective factors. However, I feel that Searle makes it seem a lot simpler than I could ever possibly imagine it to be within the everyday doings (including things such as skin tone, gender, and family history, which have played roles in developing one’s consciousness).

      One last note, thinking on that first quote that was given by ab: computationalists thought that they solved the mind-body problem by making a hardware/software distinction. To them the mind is not necessarily related to the matter; the mind occurs at a level of computations which is known as the software. Therefore, the mind ONLY occurs on the level of a specific software. The software, however, can work through any kind of hardware (brain or computer). Maybe I am getting this confused, but the way I see it is that they are somewhat still dualists. Their belief is therefore that the mind occurs through computations which occur through some physical property which can formulate computations. Is this not dualism? It is still some sort of epiphenomenalism because it is computation that produces the mind. It is now some sort of algorithm (which is not physical but almost idealistic) which produces a certain state of mind (which is also not physical and more so idealistic)! Maybe there is something I am not quite catching about how computationalists thought this would help solve the mind-body problem…

    2. ab:

      The question is not whether others have similar mental experiences but whether they have mental experiences at all.

      We can't explain the mechanism of understanding: we're waiting for cogsci to find it and tell us. But we certainly know what it feels like to understand.

    3. Demi:

      Feel endogenous properties?

      Feeling at the level of computation?

      No two people feel exactly the same, so why should a T3, or T4? They just have to feel (something) plus be able to do what any of us can do.

      Computationalists are not dualists. They believe mental states are just computational states. And that the same mental state can be generated in many different hardwares, as long as they are running the same software.

    4. endogenous properties: as in perceptions/feelings that come from inside one's body
      feeling at the level of computation: perceiving/feeling things and then reconfiguring this through cognition

      I was talking about this on a level of a T4 human where one would be able to perceive the environment as well as their bodily states (not on the level of T3 which only perceives the environment from sensory-motor modalities). What I am finding quite difficult to understand is if a T4 human is able to perceive and therefore feel things that are in the environment (exogenous properties) and their interior states as well (endogenous properties), how is it possible to conclude that a T4 human would not be able to show the dynamic property of feeling to others if it is in fact feeling things within itself through itself or the environment?

      Also, how is it that they are not dualists? They believe that many different hardwares can run a specific software which allows for mental states. This means that they solve the mind-body problem by stating that the body (and brain) is not important. However, they are creating a new kind of dualism in which software = mind. How is that solving the problem?

  9. Harnad states: “Searle was also over-reaching in concluding that the CRA redirects our line of inquiry from computation to brain function.” Not only do I find that Searle was overreaching, but I also disagree with Searle’s description of neurons and synapses in the brain simulator. “It takes Chinese as input, it simulates the formal structure of the synapses of the Chinese brain, and it gives Chinese as output. But the man certainly doesn’t understand Chinese, and neither do the water pipes,…” Here, he is using water pipes, which only transport water, as a metaphor for neuronal firing, thereby implying that neuronal firing is only a method of transportation. The main flaw here is that neuronal firing is not only a method of transportation but can actively change itself, as we can see in long-term potentiation and long-term depression. More importantly, in these cases neuronal firing has been linked to our ability to learn and memorize. Therefore neuronal firing is part of our ability to understand, and not just a transporting water pipe. If we were to take “Chinese as input, it simulates the formal structure of the synapses of the Chinese brain, and it gives Chinese as output”, then because we are using synapses, I would think that, contrary to what Searle states, this system would be able to understand.

    “The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states.” For this statement, how do we know that the neuronal structures are directly linked to the internal states? It seems to me that by stimulating the neuronal structures we might just be able to stimulate the internal states, like stimulating the neurons carrying serotonin would indeed provide us with a feeling of happiness, therefore affecting our internal states.

    Replies
    1. Software, too, can learn and change (and memorize).

      What Searle means is that if cognition is something like water, then a simulated brain would no more think than a simulated waterfall would be wet.

  10. “For although we can never become any other physical entity than ourselves, if there are indeed mental states that occur purely in virtue of being in the right computational state, then if we can get into the same computational state as the entity in question, we can check whether or not it's got the mental states imputed to it” (Harnad, 2001).
    Harnad is describing how we can only experience another entity’s mental states directly if we have a way of actually becoming that other entity, which is only possible if one adheres to the central tenets of computationalism. Accordingly, if we can get into the same computational state as the entity in question, then we can check whether or not it has the mental states that have been attributed to it. Harnad explains that if one follows this notion, then a system either has to give up the claim that it is in a mental state purely in virtue of being in a computational state (converting computationalism to implementationalism to save the mental) or give up on the mental nature of the computational state altogether. Also, Harnad notes that computationalists would not opt for converting computationalism to implementationalism, as this would stray from the central tenets of computationalism by rejoining the material world of dynamical forces. Thus, it seems that the only option would be to give up the mental nature of the computational state altogether, which I find particularly problematic. I do not think it is possible to ignore mental states when discussing cognition. Moreover, I think implementationalism is a better model of cognition than solely computationalism, as it preserves the mental component.

    Replies
    1. Implementation-dependent computation would not be just computation any more.

  11. Harnad argues that what Searle really proves with his Chinese Room argument is that cognition is not all computation, and that the Systems Reply is indeed right that what we may do to “understand” perhaps involves an “executing program” of the sort computationalism posits, but of course it is not ONLY that. Harnad re-words and clarifies how Searle’s argument should have been put: (a) “mental states are just computational states,” (b) “computational states are independent of physical implementation,” and (c) “we cannot do any better than the Turing Test to test for the presence of mental states,” which means that only functional equivalence can be tested (by “reverse engineering”), and at the T2 level a computer would pass the TT and therefore, on these tenets, understand. Here is where I get a bit confused: Harnad is trying to say that refuting that a T2 computer could pass the TT would be playing into Searle’s exaggerated conclusion, so if we take it that T2 is passed, then every single implementation of that “T2-passing program must have mental states” (mental states that understand); but this is not the case for Chinese, i.e. not understanding Chinese means there is no conscious mental state. (Harnad talks about Searle needing conscious mental states for his argument, but then again I am forced to think that being unconscious, just like not understanding, is a type/kind of mental state; is this what he is trying to explain when he talks about Searle’s Periscope and the “system” not rendering to either the mental or the computational state?)

    Replies
    1. Searle refutes computationalism by accepting all of its premises and showing that they would not lead to understanding.

      If we deny that a computer could pass the TT, we are rejecting computationalism out of hand. But then we have no argument that computationalism is wrong (like the Chinese Room Argument); we simply reject one of its premises, with no argument.

      I don't think you've understood Searle's Periscope yet. Please read the other comments and replies and ask again if it still isn't clear.

  12. “Is the Turing Test just the human equivalent of D3? Actually, the "pen-pal" version of the TT as Turing (1950) originally formulated it, was even more macrofunctional than that -- it was the equivalent of D2, requiring the duck only to quack. But in the human case, "quacking" is a rather more powerful and general performance capacity, and some consider its full expressive power to be equivalent to, or at least to draw upon, our full cognitive capacity (Fodor 1975; Harnad 1996a).”

    This idea that human language is equivalent to, or at least draws upon, our full cognitive capacity seems to go unquestioned in discussions of artificial intelligence and of the Turing test specifically. However, I still don’t understand how this became so universally accepted. When was it decided that the production of language is the only function that defines cognition, or the only one measurable? I’m not sure what other system I would propose to use to measure cognition, but I still struggle to accept that language production is the only thing of relevance or value here. In the case of the duck we talk about making an artificial duck do all the things characteristic of a real duck (even if we don’t make it look like a duck), but with humans we seem to skip straight to the quacking. Why is this? And what about people excluded by this, such as those with impairments in language production, comprehension, or acquisition? Do we not consider them to have cognition? They wouldn't pass the Turing test; does this mean they are less human?

    Replies
    1. It's not that language is all of cognition. There are many things in T3 that are not linguistic, and animals can do them too.

      It's just that for humans, language alone might -- just might -- be enough to test whether you have modelled all of cognition, because language draws on so much more of cognition. Just grasping the meaning of words, and being able to describe just about anything in words (and computations -- and the weak and strong Church-Turing Thesis) shows you the pervasive power and reach of language.

      But even if T2 is a good enough test, it doesn't mean that you don't need to have T3 power to pass T2 (i.e., be at least a robot, even though the test does not test your robotic abilities directly).

      And if T2 could be passed by computation alone, then it still would not generate understanding (Searle).

      So that probably just means that T2 could not be passed by computation alone.


  13. "Consider reverse-engineering a duck: A reverse-engineered duck would have to be indistinguishable from a real duck both structurally and functionally: It would not only have to walk, swim and quack (etc.) exactly like a duck, but it would also have to look exactly like a duck, both externally and internally. No one could quarrel with a successfully reverse-engineered candidate like that; no one could deny that a complete understanding of how that candidate works would also amount to a complete understanding of how a real duck works. Indeed, no one could ask for more."

    The question I want to raise here is: what if someone asked for more? I am struggling to quiet the problem-maker inside of me with this; say I refuse to agree that reverse-engineering is the way to totally understand how something works. I think this might just be a Granny argument but I can’t get past it. Just because one reverse-engineered something doesn’t necessarily mean that one “UNDERSTANDS” how a duck works. Is there any plausible way to ask for more as a way to understand how a duck works, without raising a Granny argument? I think this kind of figuring out needs a bit of discussion.

    Replies
    1. Try starting with a toaster: Would you say reverse-engineering a toaster, and building one on that basis, would explain how a toaster works? If not, why not (and what would?)?

  14. “For although we can never become any other physical entity than ourselves, if there are indeed mental states that occur purely in virtue of being in the right computational state, then if we can get into the same computational state as the entity in question, we can check whether or not it’s got the mental states imputed to it. This is Searle’s Periscope…”

    The power of Searle’s argument in refuting the claim that cognition is computation is captured in what Harnad calls “Searle’s Periscope.” Here is how it works. Computationalists (people who believe that cognition is just computation) argue that “[m]ental states are just implementation-independent implementations of computer programs.” To unpack: mental states are the same as computational states -- this means that when we think, we are manipulating squiggles and squaggles (meaningless symbols) according to rules that have nothing to do with their meaning. These states are “implementation-independent” because they can be implemented on any sort of hardware (my brain, this sort of computer, that sort of computer) BUT they must be implemented somewhere. In Searle’s thought experiment, the man in the room accurately answers Chinese questions by manipulating symbols according to formal rules. And Searle shows that this man doesn’t understand Chinese at all! Therefore, he concludes that a computer in the same “mental” state (i.e. one producing accurate answers to Chinese questions by manipulating symbols according to formal rules) does not understand Chinese either! Searle’s Periscope allows him to overcome what Descartes called the other-minds problem -- i.e., the fact that I can never have definitive proof that someone other than myself is or is not thinking or understanding. Searle’s thought experiment allows him to demonstrate that a computer merely manipulating symbols according to formal rules to produce the answers to Chinese questions cannot possibly understand Chinese! This is kind of magical, because normally we cannot say anything helpful about what another machine can or cannot understand!
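    To make "implementation-independent" concrete, here is a minimal sketch (an editorial illustration, not Harnad's or Searle's): the same program, written once as a transition table over arbitrary tokens, is run on two differently built "hardwares" (two interpreters). The sequence of computational states is identical in both, which is all that the second tenet of computationalism claims.

```python
# Minimal sketch: one program (a transition table over meaningless tokens),
# two different "hardwares" (interpreters). Same computational states result.
PROGRAM = {("s0", "x"): "s1", ("s1", "x"): "s2", ("s2", "x"): "s0"}

def hardware_a(program, state, inputs):
    """Interpreter 1: iterative table lookup."""
    trace = [state]
    for symbol in inputs:
        state = program[(state, symbol)]
        trace.append(state)
    return trace

def hardware_b(program, state, inputs):
    """Interpreter 2: recursive lookup -- different 'physics', same states."""
    if not inputs:
        return [state]
    return [state] + hardware_b(program, program[(state, inputs[0])], inputs[1:])

# Both implementations pass through exactly the same sequence of states.
assert hardware_a(PROGRAM, "s0", "xxx") == hardware_b(PROGRAM, "s0", "xxx")
print(hardware_a(PROGRAM, "s0", "xxx"))  # ['s0', 's1', 's2', 's0']
```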

  15. In our first class, we were told that (Stevan says) "consciousness is feeling"; at the time I wasn't quite sure where the argument was going, but I believe the argument is starting to take shape; namely, via Searle's Periscope, which this paper defends:

    "If there are indeed mental states that occur purely in virtue of being in the right computational state, then if we can get into the same computational state as the entity in question, we can check whether or not it's got the mental states imputed to it."

    So, what is missing in Searle's room is "intentionality" according to Searle and "feeling" according to Harnad; because we take on the mental state of a computer via any permutation of the Chinese room and because, no matter how slick the set-up (memorized, internalized, what-have-you), you will never "feel" like you know Chinese. This, I have no disagreement with.

    What I do disagree with, as I brought up in the first class and as I mentioned in my previous comment, is the satisfactoriness of this argument: quoting myself, "the way he has rigged these definitions, only a human brain (or a mechanical brain that perfectly replicates a human brain) are able to have these abstract properties." When we ask "can a computer think?" we are allowing for some new way of thinking, separate from what a human can do. When we ask "can a computer feel?" what we really mean is "can a computer feel - in a manner that you or I would recognize as feeling, were we in that same computational state?" This question is not without value - in fact, it can be a very compelling definition of consciousness. However, whether one agrees to accept it as THE definition of consciousness seems to be more a question of the importance one places on the presence of this feeling and less on whether the argument is "right" (what is “right” in this case, anyways?) I myself would want a definition of consciousness that is independent of human judgement, but maybe "consciousness" is so inherently human that a human judgement is the only definition a human would find convincing.

  16. In his article, Harnad (2001) states that Searle’s first and second tenets should be combined into: “mental states are just implementation-independent implementations of computer programs”.

    To me, this reaffirms my point from my comment on Searle’s reading (3a). Based on what we’ve discussed over the past few weeks, it seems that, in explaining the easy problem of cognition (how we do what we do), hardware and software can be separated and distinguished from each other. For example, while our human brains and the computer’s hardware are physically different, the programs running on each can be the same. If this is the case, the programs on the computer are the result of reverse engineering cognition and can help explain how we do what we do. The hardware that implements the program is irrelevant.

    Harnad (2001) also stated that “Searle says computationalism is false as Searle does not understand Chinese, but can manipulate the symbols”.

    Again, this reaffirms my analysis of Searle’s readings. It seems to me that Searle is assuming that understanding the symbols and manipulating the symbols are part of the exact same system. Because of this, Searle is rejecting the notion that computationalism explains cognition. However, isn’t it possible that understanding and manipulating are two different things? Obviously, symbol manipulation can be done without understanding what the symbols actually mean (for example, if “squiggle” appears, then do “squaggle”). Therefore, is it correct to reject computationalism because understanding and manipulating do not work in tandem? I agree with Harnad and I believe that computationalism can explain parts of cognition (for example the easy problems), but not completely. Like Harnad said, “We are not (just) computers”.

  17. "There was even a time when computationalists thought that the hardware/software distinction cast some light on (if it did not outright solve) the mind/body problem: The reason we have that long-standing problem in understanding how on earth mental states could be just physical states is that they are NOT! Mental states are just computational states, and computational states are implementation-independent. They have to be physically implemented, to be sure, but don't look for the mentality in the matter (the hardware): it's the software (the computer program) that matters."

    I feel like I always come back to this problem: how exactly can the mental state (the software) be "physically implemented" if we're saying mental states are not physical states? If mental states exist outside of physics, then how are we to provide a causal mechanism to explain them? Perhaps the software/hardware distinction is not the best analogy, since software is physical in that it is electronically stored data? It could be that I'm misunderstanding what you mean by saying that mental states are not physical states..

    "Whatever cognition actually turns out to be -- whether just computation, or something more, or something else -- cognitive science can only ever be a form of "reverse engineering" (Harnad 1994a) and reverse-engineering has only two kinds of empirical data to go by: structure and function (the latter including all performance capacities). Because of tenet (2), computationalism has eschewed structure; that leaves only function. And the TT simply calls for functional equivalence (indeed, total functional indistinguishability) between the reverse-engineered candidate and the real thing."

    If we're attempting to reverse-engineer the brain in order to learn about human behaviours/conditions, and we're agreeing that we can only ever hope to achieve structural and functional equivalence but that we can never know for sure that something we've created has mentality, then what is all the fuss about? Why is there so much effort on finding the most plausible theory on whether machines can think or have mentality? When and if we get there, we will just see if it feels like the robot does or doesn't, and that's it, right?


  18. "The reason we have that longstanding problem in understanding how on earth mental states could be just physical states is that they are NOT!"

    Do computationalists think that there is no such thing as the feeling that we think? If so, then it seems that the divergence of opinions between Searle and the computationalists relies on the different definitions of understanding/thinking each side has. Feeling, as triggered by chemical reactions, would require having a brain. On the other hand, the absence of feeling would make machines more plausible thinkers.

  19. "... although the CRA shows that cognition cannot be all just computational, it certainly does not show that it cannot be computational at all."

    I completely agree with this statement. Searle's argument has shown that computation is not sufficient for intentionality or consciousness; however, in the greater scheme of things, it does seem necessary. There has to be some sort of computation that goes on in neural activity; that makes sense with the behavioral studies -- a certain firing pattern can cause a specific behavior. However, now we are left with the problem of finding what the other part is. What other processes must be added to make computation sufficient for intentionality? Searle seemed to think the biological and chemical structure of the brain contributes to give it its causal powers. If those are the requirements though, we will never truly understand cognition. If the biological and chemical structures must be identical, then all we have done is just make another copy of ourselves -- this hardly tells us more.

  20. “Unconscious states in nonconscious entities (like toasters) are no kind of mental state at all. And even in conscious entities, unconscious mental states had better be brief!”


    Here, Harnad addresses the Systematist’s “revised notion of understanding” as unconscious. Harnad explains that unconscious states have a role in the mind, but cannot be the sole or primary state of mind -- they must exist in conscious beings. This is in response to the idea that the understanding in the Chinese Room could simply be unconscious understanding. How can we clearly distinguish conscious states from unconscious states in other entities? This is the other-minds problem, and we address it, as Harnad describes, through “biologically prepared inferences and empathy based on similarities to our own appearance, performance, and experiences”. This solution is essentially a Turing-Test stated in biological terms. Yes, Searle’s Periscope provides insight into what it is like to be another entity engaged in a specific “computational state”, but how can it tell whether there is consciousness? Does consciousness consist of feeling, understanding, intentionality, free will, thinking, or all of these things? Can you unconsciously understand a language? Harnad says that language understanding must be conscious, but it doesn’t always feel conscious, so what are the criteria?

  21. “If there are indeed mental states that occur purely in virtue of being in the right computational state, then if we can get into the same computational state as the entity in question, we can check whether or not it’s got the mental states imputed to it. This is Searle’s Periscope, and a system can only protect itself from it by either not purporting to be in a mental state purely in virtue of being in a computational state -- or by giving up on the mental nature of the computational state, conceding that it is just another unconscious (or rather NONconscious) state -- nothing to do with the mind.”

    I am not completely sure I understood what Harnad meant by Searle’s periscope. I would like to know if I am interpreting it correctly. Let’s say that the restatement of the first criterion of strong AI is true: all mental states are computational states. While we cannot know whether or not another system understands/feels/thinks (the Other Minds Problem), if we are in the same mental (and therefore computational) state as the computer, then we can know whether or not the machine has mental/conscious states. Harnad is saying that the Other Minds Problem is no longer a problem because of Searle’s CRA. By being the system, Searle can operate the program (since according to (2) Computational states are implementation-independent). Because of this, he knows that the system is not understanding any Chinese; that is, the T2 machine does not have conscious states.

    If I am interpreting this correctly, I don’t quite understand why this would not be able to work in a non-computational T2-passing system. Harnad states that “Searle could not be the entire system.” But what if the symbols are grounded in meaning (and therefore not only computational)? If Searle memorizes what all the symbols mean, wouldn’t that constitute understanding, and wouldn’t he still be the entire system?

  22. “They should just assume that he had memorized all the symbols on the walls; then Searle himself would be all there was to the system.”

    This is a very good response to the Systems Reply and it was highlighted in the Searle paper. This statement encompasses everything I find wrong with the Systems Reply. Frankly, I do not believe the system (as a whole) would be able to understand if the software (i.e. the man in the room) cannot understand Chinese. There is still an underlying mechanism of comprehension missing in this example, the very thing that humans are able to do. Searle memorizing the symbols and completing the procedure by himself shows that you do not need an entire system in order to generate understanding. He is the whole system but still cannot say what the words mean and cannot understand Chinese at all. Further in the paper, Harnad makes reference to unconscious mental states. Even if Searle in the room was unconsciously aware of Chinese from having all the rules and symbols memorized, I do not find this proof of understanding. Unconsciously knowing something is not understanding; understanding is a mental state. As Harnad says, “Even in conscious entities unconscious mental states had better be brief”. The whole point of understanding something is that you can explain it to yourself or others (nothing brief about that!) and this is not possible if you are unconscious of your understanding.

  23. I largely agree with Stevan Harnad’s argument against Searle (though Harnad largely agrees with Searle on several points). But I’m having trouble connecting the dots between a couple of main points.

    “The synonymy of the "conscious" and the "mental" is at the heart of the CRA (even if Searle is not yet fully conscious of it -- and even if he obscured it by persistently using the weasel-word "intentional" in its place).” It makes sense to me that Searle, acting as a computer, can only ever describe his lack of a conscious mental state (i.e. understanding) but can do nothing to disprove unconscious computational states.

    “…for although the CRA shows that cognition cannot be ALL just computational, it certainly does not show that it cannot be computational AT ALL”. This makes perfect sense to me. Computationalism relies on implementation-independence. As soon as we bring in even partial implementation-dependence, Searle’s CRA (as well as all of computationalism) loses its force, because we now require that a specific dynamic component be included in any story of mental states.

    But assuming that the mind is indeed partially computational, does it follow that only the unconscious or partly conscious states are computational, while the rest of conscious thought is explained by some other dynamic component? Where is that line drawn? If we’re resorting to dynamic components of consciousness anyway, is there any reason to maintain any notion of a software explanation of the mind? Would solving the symbol-grounding problem allow for an entirely computational consciousness? Maybe the first point becomes somewhat irrelevant once the second is introduced.

    ReplyDelete
  24. “Searle was also over-reaching in concluding that the CRA redirects our line of inquiry from computation to brain function: There are still plenty of degrees of freedom in both hybrid and noncomputational approaches to reverse-engineering cognition without constraining us to reverse-engineering the brain (T4). So cognitive neuroscience cannot take heart from the CRA either. It is only one very narrow approach that has been discredited: pure computationalism.”

    Although Harnad allows that Searle’s discrediting of computationalism opened the doors to areas like “embodied cognition,” “situated robotics,” and neural nets, it seems that the field of cognitive science has not moved particularly closer to understanding thinking (cognition) itself. What I mean is that multiplying highly specific subfields does not guarantee that any new conclusions or points of agreement are reached. A growing field may be closer to determining to what degree the human mind functions computationally rather than as a purely dynamic system, but a more nuanced view can only go so far. I agree with Granny that we are not just computers, but, using the road-to-London metaphor, I still find myself wondering whether cognitive science has gotten so caught up in the computational vs. non-computational debate that it has become something of a detour away from the destination.

    ReplyDelete
  25. Harnad (2001) describes Searle’s Periscope as computationalism’s fatal weakness. Searle’s Periscope refers to the idea that “if there are indeed mental states that occur purely in virtue of being in the right computational state, then if we can get into the same computational state as the entity in question, we can check whether or not it's got the mental states imputed to it” (p. 7). Presumably, this idea is dubbed a “periscope” because it provides some new insight into the “other-minds” problem. However, I disagree that Searle has actually shed light on the other-minds problem. This idea merely takes a sideways approach to arrive at the same wall. The idea of dictating one’s own mental state is not new, and our language uses phrases such as putting yourself in another person’s shoes or getting into a competitive mental headspace. This type of language suggests that we accept the possibility of manipulating our own mental states to dictate what state we would like to be in. In that case, all I would need to do to verify whether someone or something has a mental state would be to put myself in their mental state. Of course, things are not this simple, as there is no way of knowing whether I am exactly in their mental state other than by observing them. Once again we return to the original issue: how do I know that another person has a mental state?

    ReplyDelete
    Replies
    1. I agree with you that it is not a novel idea to put oneself in another's shoes, or to dictate our mental states. But the issue presented by Searle's Periscope is different. Just because I can imagine myself in your situation, or tell myself to "be happy and smile", does not necessarily put me in the same computational state as you or as 'happiness'. Rather, my computational state would be the state of "imagining myself as you or in your situation" or of "imagining myself happy or imagining what happiness feels like". Although the difference seems small, I would argue that picturing a mental state does not necessarily make the underlying computational states identical.
      As a result, I still do not necessarily see the "fatal weakness" of Searle's Periscope. To me, it makes sense. If we were to figure out how to reproduce an entity's computational state (and it is not by simply imagining ourselves in its position), then we would be reproducing everything, and could see whether or not that includes the mental states we are aiming for.

      Delete
    2. Because computation is implementation-independent, every hardware that implements a given computation is in the same computational state. According to computationalism, cognition is just a computational state. And we know, of course, that cognition is also a mental state. So when Searle puts himself in the same computational state as the computer, he should also be in the same mental state (if the computer is in a mental state). In particular, if the computer was in the mental state of understanding Chinese because it was implementing the T2-passing computations, Searle too should have been in that mental state. He wasn't. So neither was the computer, or anything else, just in virtue of implementing the T2-passing computations.

      Delete
  26. Harnad’s article helped clear up several issues I had with Searle’s paper. However, I’m confused by the idea of “understanding as a feeling”. I can grasp the concept that understanding is not just computation because it feels like something to understand. But rather than saying cognition is a combination of computation and understanding, can’t we say that cognition is a combination of computation and feeling? That is, something could still be said to cognize if it can compute and feel any sort of feeling. Furthermore, the article mentions that “ratcheting up to the right degree of complexity” does not help explain cognition, as it is an ad hoc speculation. But given that it is speculation, could we extend this idea to mean that cognition is many different types of feelings in addition to computation?

    ReplyDelete
  27. “Moreover, it is only T2 (not T3 or T4) that is vulnerable to the CRA, and even that only for the special case of an implementation-independent, purely computational candidate.”
    I agree that it’s important to stress that the Chinese Room Argument’s implication that cognition can’t be purely computation doesn’t mean we have to jump immediately to the conclusion that cognition must be purely biological. There are a lot of in-between possibilities that Searle hasn’t discredited. However, I’m a little unclear on why it is only T2 that is vulnerable to the CRA. Does sensorimotor input somehow give a T3 system the ability to understand? How do we know the meanings of symbols are any more relevant to a T3 system than to a T2 system? Wouldn’t you be just as likely to assume a T2-passing system had mental states as a T3-passing system, until you knew its input/output conversions were the product of a computer program? Why wouldn’t knowing how to account for a T3-passing system’s behaviour with a computer program cause you to stop attributing feeling/intentionality to it?

    ReplyDelete
  28. I'm afraid I'm having some trouble understanding the term 'implementation-independent' when describing computational states. Harnad states that there has to be some kind of physical implementation. Is it the details of the physical structure that are argued to be irrelevant? As for Searle's Periscope, I'm not sure I understand what it means for a system to protect itself from it. If a system is in a mental state ('purely in virtue of being in the right computational state'), does it follow that we are necessarily able to get into the same mental/computational state? What does this mean in literal, physical, real-world terms?

    ReplyDelete
    Replies
    1. Yes, it is the physical aspect that is at issue. Searle means (as clarified by Harnad) that the same computational states can arise in systems regardless of their physical form. A real-life example: I can do arithmetic and a calculator can do arithmetic. Having very different physical mechanisms does not affect our responses, which should come out to the same answer.
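      To make the example concrete, here is a toy sketch of my own (in Python; nothing in it comes from the readings): the same computation, addition, carried out once by the machine's built-in arithmetic and once by explicit rule-governed manipulation of digit symbols. The outputs are identical even though the two realizations are physically and procedurally very different.

      # Hypothetical illustration of implementation-independence:
      # one computation (addition), two very different realizations.

      def add_hardware(a: int, b: int) -> int:
          # Delegates to whatever arithmetic circuitry the machine provides.
          return a + b

      def add_symbolic(a: str, b: str) -> str:
          # Grade-school column addition over digit characters, right to left,
          # manipulating symbols by rule the way the man in the room does.
          width = max(len(a), len(b))
          a, b = a.zfill(width), b.zfill(width)
          carry, out = 0, []
          for da, db in zip(reversed(a), reversed(b)):
              carry, digit = divmod(int(da) + int(db) + carry, 10)
              out.append(str(digit))
          if carry:
              out.append(str(carry))
          return "".join(reversed(out))

      # Different "hardware", same computation, same answer:
      assert add_hardware(123, 456) == int(add_symbolic("123", "456"))

      On the computationalist reading, what makes these the "same" computation is the input-output mapping and the symbol-manipulation rules, not the stuff that carries them out.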

      Delete
  29. Harnad notes, “the CRA would not work against a non-computational T2-passing system, nor would it work against a hybrid, computational/noncomputational one (REFS), for the simple reason that in neither case could Searle BE the entire system.” (Harnad, p.7)

    Searle can only BE the entire system when the system is divorced from any particular implementation (the dynamical component necessary for grounding meaning) and therefore consists only of syntactic symbol manipulation. If the system is only symbol manipulation, and we can perform that manipulation ourselves, then we must be able to get into its computational, and hence its purported mental, state. And if we can do this without understanding anything that the computer is doing, then the computer must not understand either! The CRA rightfully rejects T2-level Turing Testing, and implies that pure computation alone cannot be cognition (although it might contribute in part). However, Searle’s CRA fails for implementation-dependent systems, because Searle has no way of implementing the same hardware that the system might use for grounding symbols, and thus cannot become the system.

    Now that a T3 (or higher) level system is still up for grabs, I can’t help but ask how cognition depends on implementation. Furthermore, how might levels, or states of conscious versus unconscious experience, depending on hardware, help define what it means to be a cognitive being? In this sense, the Turing Test (for T3 and higher) seems like an impenetrable criterion for reverse-engineering the capacities of human cognition, because humans can actively decide whether the system’s email output can or cannot be distinguished from a human being’s. But if it were possible to run a Turing Test on each ancestral lineage of the human species, at what point in evolution does the hardware distinguish a cognitive being from one that is not? Where is the dividing line between consciousness and unconsciousness?
    When Harnad states that he is “certainly impelled toward the hybrid road of grounding symbol systems in the sensorimotor (T3) world with neural nets”, he must imply that certain mechanisms that exist in non-human species (neural networks) play a role in cognition as well. Within Harnad’s claim, however, two criteria emerge for cognition: the idea that symbols DO arise, and that they are grounded via neural networks. This might explain why one is hesitant to suppose an Aplysia to be a cognitive being, while being even less willing to admit a flower to be one. But the Aplysia does have a very simple network of neurons governing its behavior. If we wanted to formalize its neural network in the form of symbols, we might do something along the lines of the following (of course this is merely a hypothetical example, outlined by someone with minimal experience in computing):

    If X (the stimulus) increases by 1, then Y (the sensory output) increases by 1; if Y increases by 1, then Z (the motor output) increases by 1; if Z increases by 1, then A (the muscle contraction) decreases by 1.

    Of course the precise symbolic representation presumably depends on the properties of the action potentials and the frequency at which they fire, but this network seems to be formalizable symbolically nonetheless. And if the logic in my example is flawed, at what point does a neural net become complex enough to properly translate sensorimotor information into symbols for consciousness and cognition to emerge? It seems that the criteria for addressing problems of cognition arise at levels much lower than humans, incapable of being assessed in a Turing Test. This raises further questions about the discreteness of conscious and unconscious phenomena, and whether one unified variable, manifested as a reverse-engineered system, lies on a continuum that measures how cognitive an implementation-dependent system might actually be.
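    Purely to make my rule chain above concrete, here is a toy sketch of it in code (Python; the variables X, Y, Z, A are the hypothetical labels from my example, not real Aplysia physiology):

    # Hypothetical reflex chain, treated purely as rule-governed symbol manipulation:
    # a change in the stimulus X propagates to Y (sensory), Z (motor), and A (muscle).

    def propagate(state: dict, dX: int) -> dict:
        new = dict(state)
        new["X"] = state["X"] + dX
        dY = dX                        # if X increases by 1, Y increases by 1
        new["Y"] = state["Y"] + dY
        dZ = dY                        # if Y increases by 1, Z increases by 1
        new["Z"] = state["Z"] + dZ
        new["A"] = state["A"] - dZ     # if Z increases by 1, A decreases by 1
        return new

    s0 = {"X": 0, "Y": 0, "Z": 0, "A": 10}
    s1 = propagate(s0, dX=1)           # {"X": 1, "Y": 1, "Z": 1, "A": 9}

    Whether such a formalization captures anything about cognition, rather than just about behavior, is of course exactly the question I am raising.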

    ReplyDelete
    Replies
    1. It seems, however, that the hard problem of cognition always rains on one's parade, preventing one from completely grasping how conscious and unconscious phenomena are distinguishable from one another. Cognition, as the processing of knowledge (how we might do what we do), seems to be a question that can be asked at all levels of life. Reverse-engineering the human as compared to a primate, however, still tells us nothing about whether the primate is conscious, and raises the question of whether the primate is cognitive to a lesser degree. Feeling can't be measured, however, and it trumps us every time.

      Delete
  30. I agree with Prof. Harnad's reinterpretation and logical follow-through of Searle's Chinese Room Argument. I think he clearly restates the tenets on which computationalism stands and, starting from the assumption of computationalism, interprets Searle's argument in a more logically sound and sturdy way than the argument was initially presented. I especially agree with the notion that the "system reply", while not explicitly refutable, does not seem to be a reasonable explanation of what is occurring in the Chinese Room. Further, I agree that the interesting question lies in asking about the status of understanding as a conscious state.

    I feel that I have something to add here, though. Prof. Harnad discusses the possibility that understanding is an unconscious state and later strongly refutes this. What he does not consider is the possibility that the computation which gives rise to understanding is itself an unconscious state. It seems conceivable to me that understanding, as a conscious state, can only exist as the result of some unconscious state, and therefore, when Searle brings the symbol manipulation into his conscious state, no understanding can follow from it (without interpreting him as part of some system, or some part of him as some part of some system). The symbol manipulator cannot be conscious of the computation they are performing while computing: by definition this violates the nature of computation, which requires that the symbols not be manipulated with regard to their meaning. In this way, Searle, by bringing the computation into his conscious state, defeats himself, as the computation must not be observable to the computer as a whole in order for understanding to arise from it.

    ReplyDelete
  31. In this article, Harnad argues that although Searle has shown that the person inside the Chinese Room does not understand Chinese even though the output suggests he does, the argument is not enough to conclude that computation plays no part at all in our understanding. So even though we reject computationalism (according to which cognition is computation and mental states are just computational states), we may still accept that there is some computation upon which our cognition relies; what Searle addressed as "strong AI" really is computationalism. In his article, Searle gives a weak definition of understanding, which makes it possible to argue that understanding can be unconscious (but this only makes sense in an otherwise conscious entity). Harnad explains that Searle showed that a purely computational T2-passer is not thinking, but that he did not invalidate the Turing Test, since we can apply it at other levels (indistinguishability in function and structure).

    ReplyDelete