Saturday 11 January 2014

3a. Searle, John R. (1980) Minds, brains, and programs

Searle, John R. (1980) Minds, brains, and programs. Behavioral and Brain Sciences 3 (3): 417-457.

This article can be viewed as an attempt to explore the consequences of two propositions. (1) Intentionality in human beings (and animals) is a product of causal features of the brain. I assume this is an empirical fact about the actual causal relations between mental processes and brains. It says simply that certain brain processes are sufficient for intentionality. (2) Instantiating a computer program is never by itself a sufficient condition of intentionality. The main argument of this paper is directed at establishing this claim. The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality. These two propositions have the following consequences: (3) The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program. This is a strict logical consequence of 1 and 2. (4) Any mechanism capable of producing intentionality must have causal powers equal to those of the brain. This is meant to be a trivial consequence of 1. (5) Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain. This follows from 2 and 4. 








62 comments:

  1. After reading Searle’s paper, I couldn't help but generally disagree. There’s a lot to be said, but I think the crux of the problem with Searle’s argument is contained within the Systems Reply. Having said that, here are my replies to Searle’s replies to the Systems Reply.


    “My response to the systems theory is quite simple: let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him.” (Searle, p.5)

    “Actually I feel somewhat embarrassed to give even this answer to the systems theory because the theory seems to me so implausible to start with.” (Searle, p.5)

    Ironically, I think Searle’s thought experiment is wrongheaded because it is in fact implausible itself. I think the Systems Reply elucidates the problem with Searle’s initial format in which the computing system isn’t only Searle, but rather Searle + the instructions. Searle replies to the Systems Reply by positing that Searle himself memorizes the rules/instructions, so now the whole system is in fact within Searle himself. This does do away with the issue put forward by the System Reply, but unfortunately, I believe it only creates a new monster.

    Searle’s reply honestly seems like an attempt to pull a fast one on the reader. The problem, as I see it, is that Searle is making use of an unwarranted comparison between the English scenario and the Chinese scenario to expose, through an appeal to introspection, how Chinese isn't understood by the hypothetical Searle who has memorized the rules and applied them. In the English scenario, we know through introspection, that understanding occurs. In the Chinese scenario, Searle wants us to imagine how we wouldn't be understanding Chinese when simply mindlessly applying memorized rules. The problem here is that the hypothetical Searle seems to defy human cognitive abilities, and so how can we relate our introspective analysis in the plausible English scenario to the introspection of this hypothetical Super-Searle who is apparently able to memorize a ridiculously enormous number of rules?

    (Continued in next comment)

    ReplyDelete
    Replies
    1. (Continued here)

      “But if we accept the systems reply, then it is hard to see how we avoid saying that stomach, heart, liver, and so on, are all understanding subsystems, since there is no principled way to distinguish the motivation for saying the Chinese subsystem understands from saying that the stomach understands.” (Searle, p. 6)

      This seems like an absurd analogy considering stomachs, hearts and livers cannot communicate in any language. While we might not be certain that other people or machines or computers have real understanding due to The Problem Of Other Minds, we have absolutely no reason to even begin positing the possibility that stomachs, hearts and livers understand.


      “And the mental-nonmental distinction cannot be just in the eye of the beholder but it must be intrinsic to the systems; otherwise it would be up to any beholder to treat people as nonmental and, for example, hurricanes as mental if he likes.” (Searle, p.7)

      Again, I don’t understand the relevance of bringing up systems like hurricanes which give us no good reason to believe they have a mental component (that they could understand or have beliefs). Why couldn't a special type of computational system produce understanding (or beliefs), while others would not?

      To piggyback on his strange hurricane example… Would we deny that a hurricane has the emergent property of being spiral-shaped simply because none of its parts (wind currents, air molecules, water molecules, etc… whatever the hurricane is made of) are spiral-shaped themselves? This seems dubious at best.

      “Think hard for one minute about what would be necessary to establish that that hunk of metal on the wall over there had real beliefs…” (Searle, p.7)

      I don’t see what’s so implausible to Searle about a system that understands being made up of parts that do not by themselves understand, with the instructions and the mindless implementation being the non-understanding parts. I think Searle is basing his claim on one’s failure to imagine HOW something occurs, when what he’s claiming is that something COULDN'T occur. I don’t know HOW mindless molecules give rise to beliefs, experiences, mental states, etc… but I do know that mindless molecules CAN give rise to these because I know that we humans are examples of just that. Of course, I admit I’m taking naturalistic assumptions for granted in order to make this claim.

      Delete
    2. 1. Nothing wrong with Searle memorizing the algorithm the computer uses to pass T2.

      2. Nothing wrong with his saying, then, the system is just me (unless you think memorizing an algorithm can cause multiple personality), and I don't understand Chinese.

      3. The hurricane example is like my waterfall and plane: just to show you that a computational simulation of a dynamical system necessarily omits some of its properties (movement, wetness); so Searle suggests the same is true with understanding Chinese, except the only one who can see that it is missing is the system itself, and that's Searle! (No one else in his head, both before and after he memorizes the T2 algorithm.)

      Of course this is all a thought experiment based on a premise: That computation alone could pass T2. I think that premise is wrong, because of the symbol grounding problem: Only a T3 robot could pass T2, and a T3 robot is necessarily not just computational.

      Delete
    3. 1 and 2

      It does seem quite wrong to me. I understand (or at least I think I do) that he's proposing a thought experiment and so unrealistic scenarios are fair game if they're there for the purpose of pumping an intuition. However, the intuition he's pumping, that there would be no understanding, is dependent on our ability to relate to the Searle implementing the program. I can relate to Searle in the instructions-on-paper scenario, because Searle is doing seemingly possible things (irrespective of time constraints), but the whole System isn't just Searle in that scenario. When Searle attempts to put away the Systems Reply by putting the whole system within himself, he is positing a scenario that directly undercuts our ability to relate to Searle's non-understanding. The relatability here is key; Searle himself even throws in "quite obvious" when making the claim of understanding or lack thereof. It really isn't obvious that there is no understanding, because we know of brains that understand, yet don't have understanding component parts.

      I feel like I might be belaboring the point, but I really see this as the crux of the issue. There's everything wrong with him saying "the system is just me and I don't understand Chinese", because the system, in this case, isn't anything like a normal Searle (or any human for that matter) and so when he proclaims he wouldn't understand Chinese, he's not talking about the same Searle.

      It seems to me like Searle is using our intuition against us to close our minds off to explanatory possibilities. I don't think computationalism is IT, but nonetheless, I really do think Searle's CRA is just wrongheaded. By his logic, wouldn't the non-understanding molecules, cells, etc. that make us up discount the possibility that WE understand? Shouldn't we question any Chinese speaker's understanding since we know they're made up of non-understanding things?

      Couldn't the same argument he makes against computationalism be made against human brains? Imagine Searle is now inside a human brain equipped with his own TMS device activating the same neurons in the same sequence that would be activated when someone engages in conversation and presumably understands the conversation... Searle clearly doesn't understand anything (he's just following the appropriate instructions) and, just like the squiggles and squoggles, the neurons don't understand anything either. So, now, does the whole system, specifically the person whose brain is undergoing this activity, not understand?

      Delete
    4. 3. Of course, computational simulation omits wetness and movement only for things like ourselves who aren't part of the system. If we posit a perfect simulation of the universe, it's not us (as outsiders) who can attribute wetness or movement or anything else really... it's the entities being simulated inside the universe who are the judges of what is wet and what moves in their world. I can perceive a rainbow, but try opening up my skull and you'll be hard-pressed to find any rainbow. In practice, it might seem unlikely that a bunch of metal could simulate wetness, but in principle, isn't it a little presumptuous to deny universality to a universal machine?

      And I argue, just as simulated people are the arbiters of wetness and movement in their simulated universe, it is the whole system itself which is the final arbiter of understanding in the CRA. If you'd like, put the system within Searle... but then I'll ask in Chinese whether he/she (the system) understands anything. Since the system passed the Turing Test in his thought experiment, it'll surely tell me it understands perfectly well.

      "Of course this is all a thought experiment based on a premise: That computation alone could pass T2. I think that premise is wrong, because of the symbol grounding problem: Only a T3 robot could pass T2, and a T3 robot is necessarily not just computational."

      This seems on point! PHEW! If something were to pass T2, it would need to be at least T3, since sensorimotor capacities seem so basic to all of human cognition, and as you mention, the symbols (words and their respective meanings) need to be grounded. In practice, it seems to demand more than simple computation.

      Delete
    5. 1-2. Memorizing and Executing Squiggle-Squoggle Recipes

      Try the intuition out by imagining that you teach someone who does not know algebra, only arithmetic, the symbol-manipulating recipe for computing the roots of a quadratic equation ax**2 + bx + c = 0: x = (-b +/- SQRT(b**2 - 4ac)) / (2a) (just like kid-sib before he learned algebra).

      Anyone can memorize that recipe. Apply that recipe to any quadratic equation as input and it will always give the roots (x) as output.

      Now if you understand algebra, you understand what that means. If you don't, then you've just followed a squiggle-squoggle recipe.

      That's what Searle is doing in the Chinese Room. The fact that the Chinese TT recipe is much longer than the quadratic root recipe means absolutely nothing (unless you imagine -- completely arbitrarily -- that some kind of magic kicks in if you can memorize a long enough string of squiggles and squoggles).
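
      If it helps, here is that recipe written out as a short program (a toy Python sketch; the function name is just made up). Every step can be executed, and the right roots come out, without the executor knowing what a root, or algebra, is:

      import math

      def root_recipe(a, b, c):
          # Follow the memorized steps blindly: square b, subtract 4ac,
          # take the square root, combine with -b, divide by 2a.
          d = math.sqrt(b**2 - 4*a*c)   # assumes real roots, to keep the sketch simple
          return (-b + d) / (2*a), (-b - d) / (2*a)

      print(root_recipe(1, -5, 6))      # e.g. x**2 - 5x + 6 = 0  ->  (3.0, 2.0)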

      (And no, Searle's argument about symbol-manipulation recipes would not imply that molecules or organs could not understand: Molecules and organs are dynamic systems, and Searle's Periscope only works on computations -- squiggles and squoggles that anyone can do, whether human or computer -- not on dynamic systems.)

      3. Matrix II: Simulations are Just Squiggles and Squoggles

      What do you mean "wetness to things like ourselves who are outside the system"? (I only know one kind of wetness in the one kind of universe. The kind that dissolves salt -- the usual kind, in the usual kind of universe...) The rest is sci-fi fantasy.

      Until further notice, simulations are symbol-manipulations (squiggles and squoggles) that are executed by real computers that are located in our (one) real universe. You're just getting lost in Matrix sci-fi if you imagine the Universe itself may be a simulation. If it were, there'd be just squiggles and squoggles: and running on what computer? In what Universe? And simulated for the senses of what (dynamic) T3 cognizers?

      I think the premises of computationalism are weird enough without going into full-fledged (and, in the end, incoherent) fantasyland. And unconstrained fantasy just generates more fantasy; it doesn't help settle real reverse-engineering questions, such as whether computation alone can generate cognition.

      Delete
    6. 1 and 2

      "That's what Searle is doing in the Chinese Room. The fact that the Chinese TT recipe is much longer than the quadratic root recipe means absolutely nothing (unless you imagine -- completely arbitrarily -- that some kind of magic kicks in if you can memorize a long enough string of squiggles and squoggles)."

      I don't imagine that "magic kicks in" at some point based on how much can be memorized. I think "magic kicks in" gradually as the right type of design is implemented. Though I'd rather say features/properties of the system come to exist rather than "magic kicks in" and these features/properties are only post hoc categorizations made by humans with language (or other equivalent or superior systems). I'm with you that molecules, cells, etc are dynamical systems, and hence why they're the right venue, but the CRA just doesn't seem right to me as an argument.

      You mentioned in my comment to your paper that complexity has nothing to do with it, and I believe you summon that argument again here by attempting to put the thought experiment in simpler terms (with the quadratic function being memorized rather than rules for communicating in Chinese). But I really do think complexity is a central issue.

      Searle imagines a normal conscious person who can memorize billions of squiggles and squoggles. That's where the thought experiment collapses for me. So, instead, let's replace it with a simple quadratic function. Ok... yes, that could be memorized and he could possibly have no idea what "algebra" even refers to. But then he wouldn't be passing any Turing Test for understanding algebra. The interrogator would figure out his lack of understanding pretty quickly by asking him about algebra. Replying with "error" might be a pretty big clue!

      The CRA seems like a trick. Searle puts something impossibly huge and complex where it doesn't fit and then asks us about it, using our intuition about what we know for certain (that normal humans think) versus what we don’t (that mindless computations can produce the phenomenon of thinking).

      Not only is complexity important, but I see the temporal variable as essential as well. If we actually were to seriously imagine what Searle is positing, we’d either have to imagine Searle somehow ruffling through billions of coded instructions in order to get one email sent or Searle memorizing beforehand billions of coded instructions (if we think memorizing more than 7 digits is hard… imagine billions of meaningless symbols). For TT to be passed, Searle would have to be super-human in terms of speed and memorizing ability. I’d have an easier time relating with the hypothetical whole system that seems to communicate and possibly understands Chinese than the hypothetical Super Searle who moves at light speed and can explicitly remember billions of arbitrary symbol configurations without understanding any.

      The problem I'm having is with Searle's CRA, not your conclusion that computationalism can't be the whole game (unless your reasoning behind it relies on the CRA being valid).

      And what about my mini-Searle in a brain example? Would not the same argument apply if Searle were the one activating the right neural activity rather than executing the right rules?

      Delete
    7. 3. Matrixy stuff

      I agree with your take that computationalism is sci-fi, but not because of Searle's argument, but rather, because, to us, it could only be just a simulation. I simply think we're attempting to pump a faulty intuition when distinguishing our wetness from a simulation's wetness by allowing only our real wetness to dissolve salt (the usual kind). If the simulation has high-enough fidelity to the real thing, there will be simulated dissolving salt. In practice, our present simulations might not be so refined and I’d argue, could never actually perfectly simulate reality; they’ll always be only approximations of the real thing (correct me if I’m wrong, but wouldn't that be a fact by the sheer logic of whatever simulation we do have actually being part of the universe it’s attempting to simulate?). The distinction between the CRA and your example of wetness is that, by passing the TT, the System would do the equivalent of dissolving salt (understanding). If I remember correctly, you even argue that only a T3 could pass a T2 (and I’d agree in practice). So, isn’t the hypothetical system Searle is positing that passes T2 much more complex and full-fledged than Searle seems to give it credit for?

      PS: Not trying to be a contrarian!!! I promise! The intent of this thought experiment simply isn't catching for me.

      Delete
    8. You're still putting a huge load on a very vague notion: "complexity." Read up a bit about what complexity really means (complexity theory) and then let me know whether you think adding more of that stuff will somehow phase-shift to mental hyperspace... and don't forget to explain how and why (because kid-sib is not impressed by hand-waving!).

      If Searle simulated neural actions computationally it would run up against the same obstacle all simulation runs up against. No one claimed neural activity was implementation-independent.

      There is a way Searle's argument can be used against neural nets, however, as long as you stick to T2. And Searle gets it wrong in his "Chinese Gym Argument."

      Seems to me you're still mixing up computational simulation with VR to human senses.

      Delete

  2. Searle argues that the man in the Chinese room does not understand the Chinese he is processing, and states: “I produce the answers by manipulating uninterpreted formal symbols. As far as the Chinese is concerned, I simply behave like a computer; I perform computational operations on formally specified elements. For the purposes of the Chinese, I am simply an instantiation of the computer program.”

    I would agree with the systems reply, which suggests that the understanding is from the whole system and is not just dependent upon the man in the room. Searle suggests internalization of the whole room – yet the man still would not understand the Chinese inputs and outputs being processed. However, I believe that it would still be the whole system that constitutes the ‘understanding’ portion. Although it is all internalized, the algorithms and patterns that the man executes are still part of the system. The only thing that differs is the method in which these algorithms are conveyed – they are no longer in some books or sheets of paper in a locked room, but are instead in his head. Thus, the internalized information space within his head would become the new ‘locked room’, and we once again return to the argument that the whole of the room is part of the understanding, and it is not a necessity for the man executing the actions to understand the Chinese words for there to be understanding. Therefore, such computation could be cognition (although cognition is not solely computation).

    ReplyDelete
    Replies
    1. It feels like something to understand Chinese (or English). When Searle has memorized the programme, who is the "system" that is doing that understanding in Searle? Another mind! Just because he memorized and executed an algorithm?

      Delete
    2. I have changed my mind about Searle’s argument since this reading (and a few weeks later into the course). We discussed how it feels like something to understand Chinese, so in this instance, Searle would still not have the feeling of understanding, making the systems reply faulty. It also brings about another important point by Professor Harnad, in which it importantly demonstrates that there is some computation occurring while thinking, but it is not solely computation that occurs. We cannot compute this feeling of understanding – which is what separates us from machines nowadays (as far as we know and can assume, without giving the other minds problem too much weight).

      Delete
  3. “Suppose that instead of the computer inside the robot, you put me inside the room and, as in the original Chinese case, you give me more Chinese symbols with more instructions in English for matching Chinese symbols to Chinese symbols and feeding back Chinese symbols to the outside. Suppose, unknown to me, some of the Chinese symbols that come to me come from a television camera attached to the robot and other symbols that I am giving out serve to make the motors inside the robot move the robot's legs or arms. It is important to emphasize that all I am doing is manipulating formal symbols: I know none of these other facts. I am receiving "information" from the robot's "perceptual" apparatus, and I am giving out "instructions" to its motor apparatus without knowing either of these facts. I am the robot's homunculus, but unlike the traditional homunculus, I don't know what's going on.”

    Searle brushes off the Robot Reply by claiming it can be interpreted as a CRA instantiated as a homunculus inside the robot’s head. I believe, however, that he is underestimating the leap between a computer and a robot in two crucial ways. First, the perceptual mechanisms of a robot are more than just A/D transducers; they provide a real-world referent for the symbols that will be used downstream in the computation. Secondly (and this is caused by his earlier rejection of the Systems Reply), Searle is assuming that his job of symbol manipulation represents the sum of internal states in the robot.

    Perhaps the least thought-about aspect of this thought experiment is the source of the instruction manuals handed over to Searle. It is assumed that an external source (i.e. a programmer) wrote them independently of Searle and then handed them to him to implement. This seems a natural enough intuition when the Chinese room is a computer and Searle is a component of it, but when the computer itself is an integrated component of a system that is actively interacting with the outside world, this assumption requires further examination. Let us suppose that the robot here is able to see a tree with its camera-eyes. The eyes transcribe the analog tree into a digital tree picture and store it in the robot's memory register (we’ll just call it memory). This memory stores more than the picture, recognizing enough features* of the tree that it begins to form the category “tree” in its memory. When the robot first reads the Chinese symbol for tree, presumably next to a picture of a tree, it then embeds the symbol into its category of tree. The robot proceeds to repeat this process until it has mapped all the symbols** contained in Searle’s instruction manual onto their referents. These mapped-out symbols are fed into an “instruction table creation module”, which then subsequently feeds the instruction tables to Searle.

    Searle is just a syntactic module; of course he doesn’t understand what is going on in the whole system! When someone is beginning to learn a new language and has to consciously translate each new word to her mother tongue, do we assume the neurons responsible for the translation know what either of the words means? Searle is syntactically connecting two sets of symbols that are semantically integrated in other parts of the robot’s hardware, so it is indeed the system, not Searle that is doing the understanding.

    * This feature recognition would probably have to be pre-built into the robot, kind of like the theory that human recognition is based on “Geons” (http://en.wikipedia.org/wiki/Geon_(psychology)#Experimental_tests_of_the_viewpoint_invariance_of_geons)
    ** To avoid a sort of behaviorist conception of language acquisition, the robot would have to have some sort of grammar-detecting module pre-built.
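
    To make the pipeline concrete, here is a toy sketch (Python; every name in it is made up, and it is only an illustration of the idea, not a claim about how a real robot would do it): camera output is reduced to features, features to a category, and the Chinese symbol is attached to that category, so the symbol that eventually reaches the syntactic module is already grounded elsewhere in the system.

    def extract_features(image):
        # stand-in for the pre-built feature detectors (footnote * above)
        return {"trunk", "branches", "leaves"}

    KNOWN_CATEGORIES = {"tree": {"trunk", "branches", "leaves"},
                        "rock": {"grey", "round", "hard"}}
    SYMBOL_FOR = {"tree": "樹", "rock": "石"}   # learned when a symbol co-occurs with its referent

    def ground(image):
        features = extract_features(image)
        # pick the stored category that best overlaps the current input
        category = max(KNOWN_CATEGORIES, key=lambda c: len(features & KNOWN_CATEGORIES[c]))
        return SYMBOL_FOR[category]            # a grounded symbol, ready for the instruction tables

    print(ground("camera frame"))              # -> "樹"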

    ReplyDelete
    Replies
    1. just wanted to clarify a sentence kid-sib would hate.

      By " CRA instantiated as a homunculus inside the robot's head" I mean to say that Searle believes the robot to just be the original chinese room getting fed sensory information, and that all the real causal work in producing a conversation is done inside this room

      Delete
    2. Nick, you are basically right about Searle's inadequate handling of the robot reply (T3) -- but it has to be added that the ones making the robot reply did not put it very convincingly either.

      And yes, computation probably just does the syntactic work it's so well suited for, within a larger hybrid system.

      Geons will come again, in the week devoted to category learning.

      Thanks for the kid-siblingification!

      Delete
  4. It has already been argued above, and in my previous Skywritings, that Searle’s reply to the Systems Reply to the Chinese Room Argument represents a fundamental piece of what is wrong with Searle’s logic, so I won’t belabour the point here. Suffice it to say that his reply to the Systems Reply does not actually refute it; the rules may be memorized, and the physical ledger gotten rid of, but the fact remains that the “narrating human” or “consciousness” (i.e. Searle himself) is but part of the system, which comprises the rules as well as his executional power. I think that Nick’s way of putting it – “Searle is just a syntactic module; of course he doesn’t understand what is going on in the whole system!” – was particularly apt.
    Instead, I will try to explain my dissatisfaction with Searle’s definition of “intentionality” as being the crux of human understanding.
    To begin, I do not know exactly what Searle means by “intentionality” or by “understanding”, but it seems clear that the way he has rigged these definitions, only a human brain (or a mechanical brain that perfectly replicates a human brain) is able to have these abstract properties.
    In fact, if we accept the Systems argument, we can argue that a machine can understand something given the correct rules. If we want to distinguish adding machines from humans, we can add the fact that in order to understand the way a human understands, it also needs to be able to learn.
    Take Siri, for example. Siri can arguably “understand” certain simple commands and causality (e.g. “I need to be at a meeting at 3 a.m.”; Siri will check your calendar and let you know whether you have conflicting events, and offer to create the meeting) – but we all know well that all of this is the clever trick of some programmer, who told Siri what to understand (analogous to the rule-writer of the Chinese Room). Anything outside this realm is incomprehensible to Siri. So, I would argue that Siri does understand some things – but in the way that Searle in the room understands Chinese, not in the way that humans do, and not in the way that the combined efforts of the programmer and the rule-executing entity do. What would make Siri’s understanding more “human-like” would be a capacity to learn; to assimilate the information given into ways unexpected to her programmers that would allow a small set of starting rules to burgeon into a fully self-propelled understanding-entity. This property, and not some abstract “intentionality”, is what is missing from Searle’s Chinese Room, and the restaurant-story-answering-machine we are told of. Searle’s room (though not necessarily Searle himself) understands Chinese; and the machine understands what a restaurant is; but neither is a sufficient proxy for human understanding, because human understanding entails the ability to use this information to make subsequent judgements and to expand one’s own repertoire of understanding.

    ReplyDelete
    Replies
    1. But Dia, Nick was agreeing with Searle on the System Reply: Computation is not enough. It's just part of an understanding system. And Searle really is not understanding Chinese! Hence neither is the TT-passing computer.

      Yes, "intentionality" is a weasel word. Just use feeling. It feels like something to understand. Without no feeling, there's no understanding, just symbol manipulation. And what Searle means is that whatever it feels like to understand, he's not feeling it when he's doing the squiggling and squoggling that passes T2.

      Both T2 and T3 include the capacity to learn, of course. T2 is not Siri, answering your questions. It takes no time to figure out Siri is a trick. For T2 it should be impossible, because T2 is not a trick: it is really full human verbal communication capacity (and even more, if, as I think, it requires full T3 robotic capacity too, to pass T2).

      Delete
  5. “Whatever else intentionality is, it is a biological phenomenon,...”
    I had originally been really puzzled by Searle’s insistence on biochemical stuff; it sounds as though there should be something magical about proteins and amino acids which would allow the emergence of this “intentionality”, impossible in another medium, say of silicon and copper. Searle does not support this insistence very well and I am not sure whether he would be satisfied with a simple dynamical coupling between internal state and a state of affairs in the world (which, if I understand, is the kind of thing a successful T3 robot would do). This is apparent in the Robot Reply: “But the answer to the robot reply is that the addition of such “perceptual” and “motor” capacities adds nothing by way of understanding, in particular, or intentionality,...”

    While I would agree that sensorimotor capacities are insufficient for intentionality/consciousness/feeling, I do not think the insistence on the proteins and amino acids is necessary. I think Searle is essentially right in saying that intentionality (or consciousness) is a biological phenomenon insofar as there is good reason to believe it is a phenomenon of living things, but we should not jump the gun and assume that it must necessarily be carbon-based and impossible to design.

    ReplyDelete
    Replies
    1. When Searle insists on biology, he should just be insisting on dynamics.

      And the Chinese Room Argument does not show that cognition is not computation at all -- just that it's not all computation.

      And "intentionality" is just one of many, many weasel words for feeling.

      Delete
  6. My main problem with this article is that Searle failed to define, in my opinion, crucial concepts. Consequently, I was sometimes left with more questions than answers.

    First of all, while Searle made the case for what is not “understanding”, he fails to explicitly define what understanding entails. Indeed, he teases his reader by claiming that there are “different kinds and levels of understanding” (1980); yet, it is still vague as to what Searle meant by “kinds” and “levels”. What level of understanding is sufficient for a robot or for a human? Are there different kinds of understanding that are unique to computers, unique to humans, or common to both?

    By arguing that “understanding is not a simple two-place predicate” (1980), Searle created the possibility of having fuzzy boundaries. I do agree that understanding cannot be a mere all-or-none phenomenon. However, Searle should have spent more time elucidating this gradient of understanding.

    Secondly, Searle discussed the idea of “machines”, without providing the reader with an explicit definition. He even claimed that “in an important sense, our bodies with our brains are precisely such machines” (1980). Is it a specific function within our brain that makes us machines? Even more confusing: Searle argues that there is something about the “biological structure” (1980) that enables humans to perceive, understand, learn, etc. Thus, is this biological structure part of the machine that is the brain? Or is the brain part-machine and part-biological? And if we were to take Searle at his own word: does that mean that there are different “kinds” and “levels” of machine?

    ReplyDelete
    Replies
    1. Never mind "levels of understanding."

      Understanding (like all other cognitive states) has (at least) two components:

      (1) The easy part: being able to do what an understander can do, with words (T2) and things (T3).

      (2) The hard part: it also feels like something to understand.

      If there's nothing it feels like to be able to do what an understander can do (T2 or T3), then there is no understanding.

      Yes, there are degrees of understanding something in particular (even in T2), for example, maths, Spanish, or PSYC 538. But before you can say that someone (or something) half-understands X, you have to show (or know) that they understand at all. Being able to pass T2 (or T3) would be the test of that. (And because of the hard problem and the other-minds problem, it's probably the best we can do.)

      A machine is any causal system. Not just man-made ones like clocks and cars, nor natural ones like atoms and solar systems, nor biological ones, like cells, organs and organisms: any causal system.

      So of course the brain is a machine. But which kind of machine? It's the kind that can pass T3 and T4 and feels.

      Now we have to reverse-engineer it.

      But it's not clear that all the T4 details are relevant. The crucial level ("Stevan says") is T3.

      (By the way, T3 is immune to Searle's Chinese Room Argument: Can anyone explain (to kid-sib) how and why?)

      Delete
    2. I thought about your question for a while. I am not 100% sure, but I will give it a try:

      The capacities required for T3 are far more complex than those needed for T2. T3 must be able to perform at a sensorimotor level that is indistinguishable from that of a human. That is, T3 must move exactly like a human. However, movements are very complex, as they require constant feedback (whether visual or otherwise sensory) for constant adjustment. That is how, as humans, we can move smoothly; it is a constant interaction with our environment.

      For that to happen, then, T3 would need to have "understanding". I think that T3 would not be able to have sensorimotor capacities without an understanding of the meaning of the "squiggles" that are processed. For instance, if I were to walk outside and see a dangling icicle, then I would "understand" that it would be dangerous to walk underneath, if the icicle were to fall. Thus, I would move away. Such action would not be performed without a clear understanding of the current situations, the possible risks, and the solution.

      Thus, if T3 were to possess an understanding of the "squiggles" that it processes, then it is immune to Searle's Chinese Room Argument, because the latter is entirely based on the principle that responding "squoggle" to "squiggle" is a simple computational function, devoid of any meaning of what a "squiggle" actually is. However, for T3 to have sensorimotor capacities indistinguishable from a human's, it would need, in my opinion, an understanding of the "squoggles" in order to interact with the world.

      Delete
    3. For some reason, however, I feel like Searle switches between the multiple levels of the Turing Test. Although it is obvious that he is mostly arguing against T2 (the most basic level, computation alone), he does take up T3 within the “robot reply”. Searle mentions that even if there were a robot with sensory and motor functions, this would not make a difference to Searle’s understanding of Chinese. He states that these would still be just symbol manipulations, and that he would not actually be understanding any of these symbols.
      Harnad in his youtube lecture gives a rebuttal to this theory by stating that at a T3 level one is able to learn for example through someone pointing at something; therefore there are constant sets of associations which occur between words and objects which will eventually allow for learning/understanding/knowing (ex. learning of Chinese words). [Similar to what Florence has stated above]
      Furthermore (and perhaps I am wrong about this because I am mixing it up with our last class lecture), Searle makes the argument that simulated pulling of a fire alarm does not and cannot lead to the dynamic property of the neighbourhood catching on fire; in such a way he makes the argument that only biological phenomena allow for dynamic properties to occur. If he was talking on the level of T2 I do not understand how he went so far to conceptualise such a thought experiment. [The first blog with Marc somewhat touches this as well] One would never expect or even imagine for a T2 type of architecture to be able to simulate such things because it is missing properties humans have. Only at the level of T4 (when sensation occurs exogenously (sensations originating outside the body) and endogenously (sensations originating within the body)) would it be a fair game to consider this distinction.
      My questions in all this then are:
      Does Searle actually bounce around between levels of the Turing Test?
      Is Searle indirectly implying that there are strong AI believers at the level of T2 ???

      Delete
    4. As Florence pointed out, I also felt that many terms are brought in as evidence without being defined. The concept of 'understanding', for instance, is totally opaque from the reading; while there are many references to it, none really convinces me that understanding could not be some sort of extensive symbol manipulation.

      "I have inputs and outputs that are indistinguishable from those of the nativeChinese speaker, and I can have any formal program you like, but I still understand nothing. For the same reasons, Schank's computer understands nothing of any stories. whether in Chinese. "

      "Notice that the force of the argument is not simply that different machines can have the same input and output while operating on different formal principles -- that is not the point at all. Rather,whatever purely formal principles you put into the computer, they will not be sufficient for understanding, since a human will be able to follow the formal principles without understanding anything. "

      "many people in cognitive science believe that the human brain, with its mind, does something called -information processing," and analogously the computer with its program does information processing; but fires and rainstorms, on the other hand,don't do information processing at all. "

      Particularly, when Searle argues that "The computer, to repeat, has a syntax but no semantics", how is it that semantics isn't just another layer of information associated with a meaningless squiggle?

      Delete
    5. (By the way, T3 is immune to Searle's Chinese Room Argument: Can anyone explain (to kid-sib) how and why?)

      In Harnad's (2001) paper it is written that his periscope would fail for any system past T2 because Searle would not be able to be the entire system. Would this mean that...
      At the level of just computing (T2), the Chinese Room Argument (CRA) works because it rests only on the level of computing (which is what Searle is doing in the room). Whereas for anything above T2, Searle's argument does not take into consideration the other input that is coming in from sensory modalities, or motor behaviours. Therefore this is no longer at the level of simply computing a one-way discussion, but is now at the level of computing sensory and motor functions, which he did not take into account in his CRA.
      ?

      Delete
    6. Florence, T3 only guarantees symbol grounding, not felt understanding. They're not the same thing (though grounding is probably necessary for understanding). But "complexity" (a vague notion) doesn't help, or explain anything.

      What is immune to Searle's argument is anything non-computational, i.e. not implementation-independent.

      Delete
    7. Demi, Yes, Searle slithers a bit between T2 and T3, but that's mostly because the robot reply is not really a T3 objection; it is made by computationalists who still think all the work is done by computation, and the sensorimotor systems are just peripheral modules providing digital I/O.

      What is immune to Searle's argument is anything non-computational, i.e. not implementation-independent.

      Delete
    8. ab, Searle's argument works because it feels like something to understand (Chinese), and he does not have that feeling, no matter how interpretable his squiggling is to someone who understands Chinese.

      No definition of understanding is needed. We all know what it feels like to understand English and not understand Chinese. That's all Searle means by "understand" (even though he does not say it explicitly -- and he should have).

      Delete
  7. Searle’s argument claims that intentionality can only be produced by brain processes, because they have the requisite causal powers. He continues to say that a computer program, by itself, is not a sufficient condition to produce intentionality. He directs his argument at strong AI because it says that an “appropriately programmed computer really is a mind” and that it is able to have cognitive states just like those produced by the mind. The aim of his paper is to discredit Schank’s program, which attempts to simulate understanding within a programmed computer. To do this he provides a thought-experiment where an individual is placed in an isolated room and given inputs in Chinese and a mechanism for manipulating those inputs in order to generate outputs in Chinese as well. He claims that even if the person is successful in doing so, they still lack the fundamental ability of understanding because they do not understand the symbols in Chinese. Essentially the person is only using syntactic rules without any application of semantics. Equating this to what a computer does shows that the computer program does not have the sufficient condition of understanding even if it is still able to function and produce the outcomes necessary.
    Searle furthers his argument by highlighting the responses made that claim otherwise. The one I would like to focus on is the systems reply. This reply is that even if the individual person does not show understanding, the whole system that provides him with the inputs in Chinese, the mechanism to manipulate the inputs, and the rules on how to generate an output in Chinese does understand. Essentially the argument is that the system as a whole shows understanding even if the individual elements of the system do not. Searle replies by saying that if one is to incorporate the entire system into the individual, this person will still understand nothing. I am in agreement with Searle and do not believe this argument against his claim provides any justification as to how a computer program will generate understanding. I believe the systems reply to be superficial as it is only a reply to his thought-experiment and not to the main question of “Could something think, understand, and so on solely in virtue of being a computer with the right sort of program?” The systems reply does not seem to be applicable to computer programs because they are never able to be the whole system. A computer program will always need someone to program it and it will always require input (i.e. the Chinese questions to respond to) in order for it to be a whole system. Therefore, it will never generate understanding like that of humans because it cannot generate the algorithms necessary to show understanding. It always must rely on something telling it what to do and something that tells it how to simulate understanding. It is very possible that it will start to deduce patterns in the information so that it seems to be showing understanding, but this is not the same. Human understanding is more than deducing patterns in information; it requires causal powers that only “a certain sort of organism with a certain biological structure” possesses.

    ReplyDelete
    Replies
    1. By causal power Searle means (1) the power to pass TT and (2) the power to generate understanding (feeling). (We normally can only test (1) and not (2) because of the other-minds problem. But with T2, Searle's "periscope" (see 3b) makes it possible to test both.)

      Searle shows that computation is not enough to generate understanding even if it can pass T2 -- but definitely not because it required a programmer to write the TT-passing program (any more than a robot would not understand because someone reverse-engineered, designed and built it.)

      Read 3b!

      Delete
    2. Despite your definition of causal power in your reply and Searle’s definition of causal power as being “perception, action, understanding, learning, and other intentional phenomena”, it is not clear to me what he is actually referring to. If causal powers are the power of understanding, isn’t it redundant to say that the brain understands because it has causal powers?

      Delete
    3. The brain generates both (1) my TT capacity to do with incoming and outgoing words what any understander of English can do and (2) the capacity to feel what it feels like to understand English. The doing capacity to do what an understander can do is not understanding if it is not also felt. Normally we can't know whether or not it's felt. But with Searle's periscope, he can. And it isn't. So it isn't understanding.

      Delete
  8. I’ve been thinking a lot about how Searle’s Chinese Room argument essentially debunks the idea that cognition is computation, since it takes more than just symbol manipulation to have intentionality. We have many times defined what it means to compute - to manipulate symbols regardless of the meanings attached to them - but what is it that we are doing when we are manipulating symbols while keeping in mind the very meanings attached to them? I know that an investigation into this question won’t answer many of the problems of cognitive science, mainly because it wouldn’t explain HOW we attach meanings to symbols, but it would definitely force us to think about cognition and how it relates to computation and other possible explanations for thinking. Computation can explain a lot of what our brains do - for example, it certainly explains how we can add 593 + 1045 without necessarily conceiving of such large numbers, because as long as we follow the rules for addition, we will arrive at the proper conclusion. But there are other cases when there is minimal meaningless symbol manipulation, for example when we are described as thinking with our ‘hearts’ over our brains; making illogical and irrational decisions in the name of love or while under the influence of drugs. Often we make decisions that a computer never would, because we feel strongly about it, and I believe this is something a computer could never be programmed to do. The fact that we can manipulate symbols exactly because we understand the meaning attached to them could be a hallmark of what makes us thinking and feeling beings; while our ability to manipulate symbols without regard for their meaning - our ability to compute - could just be an added feature that we are lucky enough to have. All of this is to say that we are capable of using Chinese in the way Searle claims we can, if put into a room and given all the programs and instructions, because we are capable of computing - but we are also capable of much more than that, precisely because we often manipulate symbols in a way that is diametrically opposed to computationalism.
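
    To illustrate the addition point, here is a toy sketch (Python, purely illustrative): the carry rules for 593 + 1045 can be followed digit by digit, purely as symbol manipulation, without ever conceiving of the numbers as wholes.

    def add_by_rule(x, y):
        # work right to left over the digit strings, applying the carry rule blindly
        dx, dy = x[::-1], y[::-1]
        carry, out = 0, []
        for i in range(max(len(dx), len(dy))):
            a = int(dx[i]) if i < len(dx) else 0
            b = int(dy[i]) if i < len(dy) else 0
            carry, digit = divmod(a + b + carry, 10)
            out.append(str(digit))
        if carry:
            out.append(str(carry))
        return "".join(reversed(out))

    print(add_by_rule("593", "1045"))   # "1638"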

    ReplyDelete
  9. Searle (1980) argues “for unless you accept some form of dualism, the strong AI project hasn’t got a chance”. He considers dualism as “what is specifically mental about the mind has no intrinsic connection with the actual properties of the brain”.

    Searle is arguing against a program being able to understand that which it has been made to code. I agree with Searle that proponents of strong AI (even if AI engineers may be against the notion of dualism) are in his sense “dualistic” in their construction of cognition. It was explained by Searle that AI workers who believe in strong AI (whereby a program is capable of understanding, being conscious, and therefore having a “real-like” mind) consider the program as independent from its hardware. He mentions that strong AI workers believe that mental operations only occur at the level of computational programs running on hardware (and that the brain is just such a substance); therefore computation causes the mind (one which understands and is conscious). Searle argues against this by explaining that only the physical, chemical, biological parts of a human brain are what allow for human mental phenomena including intentionality. Whereas strong AI workers believe the mind arises through series of computations, Searle believes the mind arises through biological means. I find this a bit challenging in the sense that: what if computation does play some kind of a role in consciousness (for example the theory of mind project)? AND what if biological means doesn’t necessarily entail consciousness (for example people within certain vegetative states)?


    Also, in the video, Searle mentions that one of the puzzles of consciousness is that it has intentionality. Multiple sections in his paper also bring up this argument, however I question: Would it be possible to create a given algorithm which gives a computer some kind of intention (for example: a “want”/ the goal to co-operate with others) ??

    ReplyDelete
  10. I found Searle's arguments overall to be convincing for cognition being more than simply computation. As discussed in class, there are necessarily dynamic processes involved in cognition.

    However, I did not find his debunking of the systems argument to be effective - Searle seemed to deal with an essentially limited version of the systems argument, like a straw man he set up to take down. (I mean essentially limited in that he could not have debunked a more comprehensive version on the same grounds.)

    Searle states: "let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass." (p. 5)

    So by the entire system, Searle only refers to the individual in the room, the databanks and the mechanism for making the computations (pen/paper). I, too, would argue that a system of this sort does not entail understanding.

    However I'll argue that this is not the entire system. My understanding of the system is more encompassing. The databanks themselves did not magically appear. At some point upstream, an individual developed them, and presumably actually understood Chinese. If you incorporate this more expansive view of the system, then the system does understand. If the man in the room internalizes all this, then simply put: he understands.

    Potentially I've misunderstood the systems argument, but at no point did Searle explain why the system he was looking at should be limited in the way that it was. Why would the consideration of a more comprehensive system be unreasonable or illogical?

    In the responses to one of the other comments, you (Istvan) stated that a machine that passes the T3 is impermeable to Searle's Chinese Room Argument. I'm sure there's a good reason, I simply haven't figured it out. Why is it necessarily the case that a machine with sensory-motor capacities indistinguishable from a human entails that the machine possesses the capacity to understand? Why wouldn't complex programming be sufficient for these actions? I understand the application of the other minds problem here, that we would not know for certain if the robot does not understand (or feels and loves for that matter), but I don't think that this necessarily implies that it does.

    ReplyDelete
  11. As I was reading the Searle article, I was thinking the same thing as Andras: Searle’s argument is indeed convincing that cognition is more than computation, like Harnad’s in-class explanation that a simulated waterfall has no water, that a computer cannot feel, and, as Searle says: “No one supposes …that a computer simulation of a rainstorm will leave us all drenched.” All these show that there is more to cognition than just computation.

    Yet I do not agree with Searle’s reply that the system is the individual in the room. “If he doesn't understand, then there is no way the system could understand because the system is just a part of him.” If the individual is the entire system, and if the goal is to pass the T2 pen-pal TT that will last an entire lifetime, then it seems impossible to me that the individual does not, at one point over a lifetime, categorize and incorporate the data and start to somewhat learn the language. He would therefore understand Chinese.
    But on an even bigger scale, if the individual himself did not understand, then Andras is right, there is much more to the system that just the individual. With the databanks, the Chinese scripts, etc, the system as a whole understands. Furthermore, each subsystem (such as the individual himself) doesn’t understand, but the global system together does. This is why I disagree with his quote: “But if we accept the systems reply, then it is hard to see how we avoid saying that stomach, heart, liver, and so on, are all understanding subsystems, since there is no principled way to distinguish the motivation for saying the Chinese subsystem understands from saying that the stomach understands.” Again, the information and understanding is in the system as a whole not the subparts.

    ReplyDelete
    Replies
    1. One answer to the System Reply is to memorize the T2-passing computer programme. Then there's nothing else to the system than Searle. Whether Searle would eventually be able to learn Chinese that way is irrelevant. The T2-passing computer is not supposed to be learning Chinese, but understanding it. Searle's brain certainly understands (English, but not Chinese), but the question remains: how. And the answer is: not just by computation.

      Delete
  12. "As long as the program is defined in terms of computational operations on purely formal defined elements, what the example suggests is that these by themselves have no interesting connection with understanding. They are certainly not sufficient conditions, and not the slightest reason has been given to suppose that they are necessary conditions or even that they contribute to understanding."

    What troubled me here was the end of this quote: Searle says that computation is not at all necessary for understanding, nor even that it contributes to it. I agree that it is not sufficient; however, I don't think that he is right to say that it doesn't even contribute.

    From a neuroscience point of view, it is clear that the brain does some kind of computation, at least for decoding inputs and sending outputs. What happens in between isn't very clear, and from my point of view I don't think it is computation alone; this can be seen across several sensory modalities.
    Take the visual and auditory systems, for example: each starts by building a very simple, minimal model of the outside world and then becomes more complex through a series of computations from one group of neurons to the next, each performing a different and more complex computation on the information. There is still no explanation, though, of how the results of this series of computations are combined to form our awareness of the outside world.

    So in some way computation is necessary for us to get an idea of what is happening in the outside world and respond to it, since it is through computation that information is being processed.
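
    Here is a toy sketch in Python of the kind of cascade I mean (my own illustration, not anything from Searle's paper; the "stages" and weights are invented): each stage just takes weighted sums of the previous stage's output, so "more complex" features are built out of simpler ones by purely formal operations.

        def weighted_sum(inputs, weights):
            # One unit: combine its inputs by a weighted sum (a purely formal operation).
            return sum(i * w for i, w in zip(inputs, weights))

        retina = [0.0, 0.2, 0.9, 1.0]                     # toy input intensities
        stage1 = [weighted_sum(retina[i:i + 2], [-1, 1])  # stage 1: local contrast
                  for i in range(len(retina) - 1)]
        stage2 = weighted_sum(stage1, [1, 1, 1])          # stage 2: pooled "edge" signal
        print(stage1, stage2)

    Nothing in such a cascade, however long it gets, says anything about how its results come together as awareness, which is exactly the gap I was pointing to.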

    ReplyDelete
  13. “But the trouble with this argument is that it rests on an ambiguity in the notion of ‘information.’ In the sense in which people ‘process information’ when they reflect, say, on problems in arithmetic or when they read and answer questions about stories, the programmed computer does not do ‘information processing.’ Rather, what it does is manipulate formal symbols” (Searle, 1980).
    Searle is emphasizing the ambiguity in using the ideas of “information” and “information processing” when discussing the possible structure of a programmed computer. Moreover, Searle proposes that in actuality computers do not do “information processing,” but rather manipulate formal symbols, a process which cannot be equated with human cognition. This comes after Searle explains the confusion that exists in cognitive science when researchers suggest that the human brain does something called “information processing”; cognitive scientists may postulate the creation of programmed computers based on analogous models of human cognition using this elusive idea. I think Searle does a good job of arguing against applying the idea of “information processing” to programmed computers. It is also important to note that we cannot even begin to create such an analogous model when we have not really defined “information processing” or what it entails. I find that this idea resonates closely with the notion of the homunculus and thus does not really explain much of anything. What exactly is going on during “information processing,” and is it just another homuncular argument? Because “information processing” is not a well-defined construct, it is best to avoid using it to explain cognition.

    ReplyDelete
  14. I will try to answer, to the best of my abilities, Dr. Harnad’s earlier enquiry:

    “(By the way, T3 is immune to Searle's Chinese Room Argument: Can anyone explain (to kid-sib) how and why?)”

    It seems that Searle admits this very point in his response to ‘The Robot Reply’, in which he claims that “the Robot Reply tacitly concedes that cognition is not solely a matter of formal symbol manipulation, since this reply adds a set of causal relation with the outside world” (Searle, p. 7). Without a blink, however, Searle proceeds to argue that the robot cannot be proven to have an intentional state, because if Searle himself were “receiving ‘information’ from the robot’s ‘perceptual’ apparatus, and [he is] giving out ‘instructions’ to its motor apparatus” without ever understanding the Chinese symbols he uses to manipulate the ‘information’ and convey the ‘instructions’, then the robot must just be “moving about as a result of its electrical wiring and its program” (Searle, p. 8).

    First, Searle wishes to describe the intentional state of the robot, but as we know, he cannot experience the robot’s mental state directly without being the robot, and thus the hard problem of feeling what the robot feels should be taken off the table.

    With regards to how T3 is immune to Searle’s Chinese Room Argument: Searle argues that formal symbol manipulation cannot explain intentionality precisely because a man can manipulate symbols and produce the correct outputs from the designated inputs without ever understanding their semantic contents. This is his fundamental reason why cognition cannot be computation (whatsoever!). But because of the Systems Reply that Searle tried to invalidate, in which understanding is a property of the whole system as opposed to its parts, we can argue that although computation might not be cognition entirely, it cannot be completely disregarded. Searle’s inadequacy in refuting the Systems Reply resides in our ability to wonder whether perhaps Searle, as an understanding being, is equally an assembly of various symbol-processing components. And so computation, although not cognition in its entirety (a more reasonable conclusion from the CRA), may still act as a possible contributor, and thus, at this point, the TT at the T2 level and higher still provides the best tool for reverse-engineering cognition.

    Importantly, I must still distinguish why T3’s immunity exists where T2’s doesn’t. T2, the epitomized symbol-processing device (a Turing Machine), whose output occurs in email form, does not ground the symbols in any distinct semantic meaning. T3, however, irrespective of whether its structure consists of the chemical components we attribute to a brain, manages to extract meaning from dynamic states, and may eventually ground meaningful dynamics in symbolic form. How the symbols are grounded in the mind is not something Searle has proposed, but his CRA fails to explain why Searle must have a biological brain in order to understand, while a camera (for eyes), a cochlear implant (for ears), etc. will not suffice (they may not be perfect transducers, but these examples are purely for argument’s sake). It seems to me that many kinds of hardware might properly reverse-engineer cognition, as long as the hardware provides dynamical information which can, independent of form, convey meaningful semantics. Searle’s argument relies on the fact that an understanding being must be a conscious being because, to him, the hard problem of feeling is always present. This raises the question of whether the semantic meaning we associate with all aspects of our lives is an extension of consciousness rather than an extension of the dynamical world in which we exist. T3, however, will not answer whether a system is conscious and is thus immune to the CRA. It will, however, assess the mechanism for how meaning is extracted and used to act the way cognitive beings do.

    ReplyDelete
    Replies
    1. Adam (me) says, "Searle’s inadequacy in refuting the Systems Reply, resides in our ability to wonder whether perhaps Searle, as an understanding being, is equally an assembly of various symbol processing components."

      My earlier argument was completely AD HOC, and I realize this now upon revision. I think a more suitable reason why the Systems Reply is not entirely wrong is that Searle never formalizes “understanding” in a way that could depend on either conscious or unconscious phenomena; he in fact relies on the idea that it is purely conscious. It is the concept of Searle’s periscope (discussed further in reading 3b) which highlights that you can only check what another mind understands if mental states are just manifestations of computer programs, so that the same mental state occurs upon any implementation of that program. In this way Searle’s CRA shows, only for implementation-independent systems, that someone can mimic the computations of a machine without grasping its mental state. Putting yourself in the position of doing everything the program does highlights that the program cannot understand the way we can. This is where Searle’s argument can justify the inadequacy of reverse-engineering a T2 machine, but it fails to disprove T3’s immunity to his argument.

      Delete
  15. “…I am receiving "information" from the robot's "perceptual" apparatus, and I am giving out "instructions" to its motor apparatus without knowing either of these facts. I am the robot's homunculus, but unlike the traditional homunculus, I don't know what's going on. I don't understand anything except the rules for symbol manipulation. Now in this case I want to say that the robot has no intentional states at all; it is simply moving about as a result of its electrical wiring and its program. And furthermore, by instantiating the program I have no intentional states of the relevant type. All I do is follow formal instructions about manipulating formal symbols…” (p 8)

    I can see at least one objection to Searle’s reply to the robot version of the Chinese symbol-manipulation thought experiment: even if I (as the robot’s homunculus) only “follow” these formal instructions, it seems unfair to deem the following of instructions to be without intentionality. Playing the role of someone in favour of the robot version, I would argue that the robot following the programmed instructions could have some intentionality due to the highly mechanical processes required for the robot to complete the experiment successfully. Indeed, computationalism is similar to behaviourism in this way of thinking: observable things such as advanced symbol manipulation are treated as proof of understanding in the underlying, unobservable mind. In the case of behaviourism, scholars have come to realize that you cannot extrapolate from simple things like a rat pressing a bar for food many times to more complex behaviours like verbalization or truly “understanding,” as Searle suggests. In order for me to take computationalism seriously, I would need at least a sounder thought experiment that tested a more intrinsic property of the mind. Mind you, this would also require defining an intrinsic property of the mind… For example, it would help if it were possible to get the same results from a robot in certain psychological experiments (especially those that require two or more people) when the robot had not been programmed to be good at those experiments (i.e., given formal instructions on how to complete them). This brings up another question I have about computational ideas of the mind: of course a human would be programming the machine to pass for strong AI, but I don’t know whether it is possible to program a computer/machine/robot with the baseline of the mind (i.e., a baby’s brain, or potentially an animal’s mind, though this might be even more difficult than a baby’s) that would grow and learn the same way a human does. Even in modern computing, computers can learn, but still there is no “baby” or baseline mind.

    ReplyDelete
  16. Searle’s paper adds to a notion I had after reading Turing last week, namely that it is the untraceable differences between human individuals that make human cognition different from machine computation.
    If you put a series of monolingual English-speaking humans in a series of rooms and gave them the Chinese symbol system described by Searle in his CRA, each human would provide exactly the same output as the next, regardless of how complex or subjective the questions, because they are all following the same code system (and not understanding). On the other hand, if you asked them English questions and sought responses, their responses would certainly differ, especially as the questions got more subjective (because they are understanding). This, then, highlights what I understand as the difference between computation and cognition.

    Based on this, the Chinese Room Argument suggested to me an amendment to the Turing test: that multiple computers with the same hardware and software be communicated with simultaneously in the Turing test, rather than just one computer. If the computers provide the same output when asked the same question, they are not displaying cognition. If they deliver different, meaningful responses (too complex to simply be the result of some “randomization” program) to the most subjective questions, however, then perhaps they are displaying cognition. This would mean that each computer would have to have some uniqueness or independent character that differs from the character of the computer next to it, as humans do, and that uniqueness could not be traced back to the hardware or the software.
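
    Here is a toy sketch of the point (my own illustration, with a made-up rule table): three copies of the same purely rule-following program, asked the same question, cannot help but give the same answer, which is exactly the sameness my amended test would detect.

        RULES = {                                        # hypothetical shared rulebook
            "How are you?": "Fine, thanks.",
            "Write me a poem.": "Roses are red...",
        }

        def program(question):
            # Purely formal lookup: same input shape in, same output shape out.
            return RULES.get(question, "I do not know.")

        copies = [program, program, program]             # identical hardware and software
        answers = [copy("Write me a poem.") for copy in copies]
        print(answers)                                   # three identical answers
        print(len(set(answers)))                         # 1: no individual "character"

    Any variation between the copies would have to come from somewhere other than the shared hardware and software, which is what I mean by untraceable uniqueness.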

    ReplyDelete
    Replies
    1. I find this interesting to think about, because it leads me to further understand what it would take for a robot to pass the Turing test. I think your objection aligns with some of the counterarguments from last week’s readings. Namely, the idea that human responses would be varied and novel goes along with Lady Lovelace’s objection, which asks whether a machine will ever be able to “originate anything”. If you ask two humans the same question, or ask them to write a poem, you will get two different responses. What do we attribute this to? Intuitively it seems like they need to first understand the question and then integrate feeling and free will to generate a response. These are all three aspects of the human mind that Turing suggests and Searle argues cannot be captured by a machine. Then again, Turing seems to say that feeling and free will are irrelevant to his test of machine thinking. I agree that the “uniqueness could not be traced back to the hardware or the software”, but I think human variation is possibly one of those things that Turing doesn’t even try to account for in his test. Like you, he does say that randomization would not produce believable free will.

      Delete
  17. Searle’s Chinese Room Argument challenges the claim that cognition is computation, i.e., that the stuff that goes on in our minds which allows us to do everything that we do is just the manipulation of symbols (meaningless squiggles and squaggles) according to formal rules.

    Searle’s thought experiment goes roughly like this: a man is in a room. He speaks perfect English, but no Chinese. He is given a Chinese text and an English rulebook. He does not know that the text is in Chinese (it could be Japanese or Korean for all he knows!). The rules tell the man how to “give back certain Chinese symbols with certain sorts of shapes in response to the sorts of shapes given” in the Chinese text. Unbeknownst to the man in the room, the Chinese text is a set of questions and what he gives back is a set of accurate answers to those questions. To someone outside the room, it might seem like the man in the room understands Chinese because his answers are accurate. But he evidently does not!
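
    (As a toy sketch of what such a rulebook amounts to, my own illustration rather than anything in Searle's text: a table that maps the shapes of incoming strings onto the shapes of outgoing strings, with nothing anywhere in it about what any symbol means.)

        RULEBOOK = {                                     # invented shape-to-shape rules
            "squiggle-squiggle": "squoggle-squoggle",
            "squoggle-squoggle": "squiggle-squiggle",
        }

        def answer(symbol_string):
            # The man in the room: match the input's shape, return the shape the rule names.
            return RULEBOOK[symbol_string]

        print(answer("squiggle-squiggle"))               # looks apt from outside the room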

    The man in Searle’s experiment is analogous to a computer: “As far as the Chinese is concerned, [the man] simply behave[s] like a computer, [he] performs computational operations on formally specified elements.” Searle’s argument challenges the claim that cognition is computation because it demonstrates that the mere manipulation of symbols according to formal rules cannot produce understanding. And there is no cognition without understanding!

    One objection raised to Searle’s argument, called the Systems Reply, is that while the man in the room does not understand Chinese, the “system” he is part of (him + the rulebook + the paper he does calculations on + the Chinese symbols) does understand. Searle responds to this argument by claiming that even if the man internalized the system, i.e., memorized all the rules and did the calculations in his head, he still wouldn’t understand Chinese!

    Here is the question I was left with at the end of Searle’s piece: sure, the man in his Chinese room doesn’t understand Chinese, but he wouldn’t be able to produce the accurate outputs if he didn’t understand English. So there is understanding in that room! Just no understanding of Chinese. Does this mean that the manipulation of symbols according to formal rules is not possible without understanding something (though not necessarily anything about what those symbols mean)? My claim seems dangerous, for it might leave me arguing that my calculator understands…

    ReplyDelete
    Replies
    1. I had the same thought after reading the text - the man understands at a certain level. However, I think the problem demands that there not only be understanding in the room, but that the man understand the story that is input and use understanding to output responses. Searle is comparing the authentic understanding of English interactions to the mere symbol manipulation of the Chinese response. Searle is specifically looking at whether symbol manipulation is equivalent to understanding Chinese. The way I interpret this, the man’s understanding of English in the room is no different from a computer program directing his motions. Java can tell a computer what to do, but the computer does not understand its sequence of activity. The instructions in both cases, English and Java, are somewhat irrelevant to the real issue of whether the human/computer is understanding its storytelling task.

      Delete
    2. Glad someone else had this thought :) I think your response is right!

      Delete
  18. “Such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output”

    Searle states, “formal symbol manipulations by themselves don’t have any intentionality”. He claims that it is the mind programming and interpreting that attributes intentionality to the computer’s meaningless actions. The mind has something extra that allows it to act upon hardware and process symbolic output into meaning. Can this division be extended to mind and brain, where the mind is using the brain (feeding input and interpreting output)? Searle talks about dualism in the sense that “what is specifically mental about the mind has no intrinsic connection with the actual properties of the brain”. He declares that this view is a faulty prerequisite to strong AI. But how does this differ from what he is claiming in the above quote? He declares that there is something essentially different that allows the mind of the human to have intentionality and that dooms the computer to never achieving understanding.

    He then goes on to specify that, unlike strong AI proponents, he believes that mental phenomena arise from the physical and chemical makeup of the brain. He sees intentionality as a biological phenomenon which must rest in its biological roots. The mental cannot be separated from the brain and therefore Searle does not accept dualism. So, in the above quote, Searle is merely reinforcing his opposition to dualism. The human that is programming the computer and interpreting the output has intentionality that is tied to his human biological makeup. This is not a phenomenon that can arise from anywhere else because it is not independent of the brain.

    Because non-computationalists claim that the mental cannot arise from a computer implementation, I often thought that they were resting on a form of dualism. Non-computationalists seemed to be saying that no physical implementation can give rise to the mental, therefore the mental is something different that cannot be captured by computation. In Searle’s case it seems that he is not a dualist because he believes that the mental cannot be separated from its physical implementation in the brain. Rather, the biological brain is necessary for intentionality, and no non-brain can have that.

    ReplyDelete
  19. In his paper, Searle states, “As long as the program is defined in terms of computational operations on purely formally defined elements, what the example suggests is that these by themselves have no interesting connection with understanding.”

    If I understand correctly, I believe Searle is trying to argue (through his Chinese Room Argument) that cognition does not equal computation, since his outputs are correct (based on the inputs) and yet he cannot understand Chinese. It seems to me that Searle is attempting to tackle the “hard” problem in cognition. While I agree that understanding something may not necessarily be part of cognition and (based on the CRA) I believe that computation does not completely explain cognition, I find it hard to wrap my head around the notion that cognition cannot be explained by computation at all. If computation can reverse-engineer the easy problem of cognition (how we do what we do), then isn’t it possible that understanding could be part of another module within cognition? Understanding seems to go beyond symbol manipulation and, to me, understanding can be thought of as what it feels like to understand.

    Searle also states that “the programmed computer understands what the car and the adding machine understand, namely, exactly nothing”. To reiterate my point, I completely agree with Searle on this. The computer is computing and does not need to understand anything. Its processes of symbol manipulation are enough to explain how we do what we do. Again, understanding (or what it feels like to understand) goes beyond the capacities of the computer. I think that Searle is trying to say that understanding MUST be part of computation/cognition. I believe that understanding does not have to be part of computation/cognition.
    On another note, Searle also states that “as long as the program is defined in terms of computational operations on purely formally defined elements, what the example suggests is that these by themselves have no interesting connection with understanding”, but later on he counters the Brain Simulator Reply to his CRA and says that recreating a brain would still not allow the system to understand Chinese.

    To me, it seems that Searle is trying to combine two separate things into one: the software (or mental states) and the hardware (the brain). Based on the first readings we did for the seminar, it seems to me that the hardware and the program running on it are two very separate things. Please correct me if I am wrong, but it seems that Searle is arguing that a recreation of the hardware would automatically recreate the software as well. There seems to be no distinction between the software and the hardware. I don’t believe that recreating the exact hardware of the brain is absolutely necessary for understanding cognition. Again, please correct me if I am wrong, but from my understanding, computation/cognition is not complete without the software that tells the brain which program to run and what to do.

    ReplyDelete
  20. "No one would suppose that we could produce milk and sugar by running a computer simulation of the formal sequences in lactation and photosynthesis, but where the mind is concerned many people are willing to believe in such a miracle because of a deep and abiding dualism: the mind they suppose is a matter of formal processes and is independent of quite specific material causes in the way that milk and sugar are not." p.14

    I agree that simulation is not reality and that a program that simply simulates human behavior is not capable of thinking. The development of his argument highlights the discussion we had in class about the distinction between simulation and the dynamics of the real world. What would Searle's response be to a robot that had sensorimotor dynamics and learned from its environment? Simulation may not imply semantic understanding, but can it not play a role in semantic understanding if combined with dynamic processes? The argument also calls into question whether simulation (i.e., the computational aspects of cognition) has no role at all in intentionality or whether it can be partially responsible.

    "The point is that the brain's causal capacity to produce intentionality cannot consist in its instantiating a computer program, since for any program you like it is possible for something to instantiate that program and still not have any mental states. Whatever it is that the brain does to produce intentionality, it cannot consist in instantiating a program since no program, by itself, is sufficient for intentionality."

    Searle's argument with regards to intentionality seems a bit circular. He keeps referring back to the idea that human/animal brains have a special "causal capacity to produce intentionality" but fails to explain where this causal capacity might originate from. Is it an emergent property of brain processes? Does he attribute it to something extra-physical?

    ReplyDelete

  21. “So there are really two subsystems in the man; one understands English, the other Chinese.”

    In this paragraph, Searle seems to base understanding on the fact that one can tell what a word refers to. He affirms that an English native speaker learning Chinese understands English because he knows, for example, what “hamburgers” refers to, but does not understand Chinese because he has no intuition about that reference.
    But couldn’t we then say that the computer understands the language it uses to manipulate the other language? The manipulator language’s symbols do refer and have content (they are not purely formal), while the manipulated language’s symbols don’t. So maybe the machine does not understand the manipulated language, but it does understand the language of instruction.
    Then why couldn’t a machine do more than computation and manipulate forms that have content?

    ReplyDelete
  22. "...because I am a certain sort of organism with a certain biological structure, and this structure, under certain conditions, is causally capable of producing perception, action, understanding, learning, and other intentional phenomena. And part of the point of the present argument is that only something that had those causal powers could have intentionality."

    I completely agree with Searle's argument, but for a completely different reason. I believe that the formal powers of computation alone cannot achieve intentionality. But I also think that a process like understanding is an epiphenomenon that arises from the brain activity when the formal symbols are read. Something like this can never be captured by a machine; it is something particular to organisms alone. Maybe this all just goes back to the mind-body problem?

    Something that bothers me about this paper was the fact that even with the lack of understanding, the Turing test can be passed. Because the behavior of a robot is perfectly passable as human, we would automatically ascribe intentionality to it. Except, as Searle said, when we have good reason not to, such as if we knew "the behavior was the result of a formal program". This makes me feel as if the Turing test is not powerful enough to distinguish what is and is not cognizing. What can we turn to then?

    ReplyDelete
    Replies
    1. It seems that unless we solve the mind-body problem, we will never be able to have a cognizing computer.

      Delete
  23. Much of what Searle says aligns with and reminds me of David Chalmers' extended mind thesis (https://www.youtube.com/watch?v=ksasPjrYFTg)
    First, Searle says that "in artifacts we extend our own intentionality; our tools are extensions of our purposes" (page 5).
    Later, he says that formal properties are not by themselves constitutive of intentionality, and therefore that to duplicate the effects of mental processes, we must exactly duplicate their causes.

    Let me backtrack for a second. The extended mind thesis attempts to answer the question "where is the mind?" by derailing the assumption that the skin and skull mark the division between your brain (which contains your mind) and the world. My understanding of Chalmers' thesis is that insofar as our cognitive processes rely on aspects of the environment (a calculator, a book), then our minds are made up precisely of those artifacts on which we rely. Our minds are not limited to our brains, then.

    Could this perhaps contribute to the systems argument?

    ReplyDelete
  24. Searle, in his CRA, has irrefutably devised a computational system that works without a single hint of understanding. No part of his system has any hope of understanding phenomena outside its own formal structure, even though, as an entity, it is seamlessly interacting with the outside world. Any attempt to attribute understanding to Searle’s proposed system quickly gets caught up in Searle’s definition of understanding and loses its grounding in wishy-washy semantics. No matter how you put it, Searle does not understand Chinese, and Searle is the only body in the computational system capable of understanding. For all intents and purposes this should be accepted as fact.

    To overcome these replies one has to take a different approach. This approach must question the system’s ability to wholly simulate cognition (in the computationalist sense of cognition). Of course this is difficult, because the system is a streamlined analogy. But I do believe that the analogy is incomplete when it comes to certain key components of cognition, namely memory and learning. These are background processes as far as human interaction is concerned, but they are still key concepts in the study of cognition. Take the T3-style robot described in Searle’s second (Robot) reply. Say the robot is hoping to go on its second date. This robot will behave peculiarly unless it has memorized information from its first date, as well as information about dating norms, assimilated this information into some sort of dynamic data storage bank, and categorized it in terms of semantic relevance. It can then use this to alter its behavior for the upcoming date. Searle’s system leaves out any notion of memory storage and categorization. The system is also lacking another component that might be used to alter the “English instruction manual” in accordance with these memory schemas.
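
    Here is a toy sketch of the sort of missing component I mean (entirely my own invention, not part of Searle's setup): an episodic memory that stores facts from the first date, tags them by category, and is consulted to adjust behaviour for the second date. Whether any of this amounts to understanding is a separate question; the point is only that the Chinese Room, as described, has no such dynamic store.

        memory = []                                      # episodic store: (fact, category tag)
        memory.append(("likes jazz", "music"))           # hypothetical facts from date one
        memory.append(("allergic to peanuts", "food"))

        def plan_second_date(default_plan):
            # Alter the default behaviour using whatever relevant memories exist.
            plan = dict(default_plan)
            for fact, tag in memory:
                if tag == "food" and "peanut" in fact:
                    plan["restaurant"] = "peanut-free bistro"
                if tag == "music" and "jazz" in fact:
                    plan["evening"] = "jazz bar"
            return plan

        print(plan_second_date({"restaurant": "anywhere", "evening": "unplanned"}))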

    I’m not suggesting that this somehow grants the man in the room any additional understanding, but it presents a hole in Searle’s system requiring a description of additional dynamic elements. Depending on how the elements are defined, it opens up the potential for something akin to understanding across the whole system.

    ReplyDelete
  25. “The whole point of the original [Chinese Room Argument] example was to argue that such symbol manipulation by itself couldn’t be sufficient for understanding Chinese in any literal sense because the man could write ‘squoggle squoggle’ after ‘squiggle squiggle’ without understanding anything in Chinese.”

    I actually agree with a large part of what Searle is saying in his article (at least I think I do, if I understood correctly). From my interpretation of it, Searle does not say that cognition cannot be computation at all; rather, that it cannot be all computation. In his example, what the computer program lacks is “intentionality” (or, as Dr. Harnad has pointed out, feeling, since intentionality is just a “weasel word” for feeling). As with the Other-Minds Problem, we cannot really know if a machine feels, or thinks, or understands. Searle asserts over and over again that solely manipulating meaningless symbols to get certain outputs is insufficient for “intentionality”/understanding. While he never says this explicitly, Searle seems to imply that the main reason the person in the room is able to understand English is that the symbols are not meaningless. They do mean something. So if the Chinese symbols became associated with the objects/ideas they refer to, and the man in the room memorized the entire system including the meanings of the symbols, then he would be able to understand. This association of symbols with meaning would mean that it was no longer just computation, but Searle seems to suggest that if meaning were attributed to these symbols, then a computer may indeed be able to understand.

    ReplyDelete
  26. When Searle argues against Schank's claims regarding his program (1. 'that the machine can literally be said to understand the story and provide the answers to questions' and 2. 'that what the machine and its program do explains the human ability to understand the story and answer questions about it'), Searle quickly and effectively reminds his readers of what we instinctively 'know' about the reality of our consciousness, and its inherent distinguishability from programs, which merely simulate our behaviour.

    In rebuttal of Schank's first claim, and using his Chinese writing-room example, Searle explains that although the input and output of information by a non-Chinese speaker can be indistinguishable from a native speaker's, this does not change the fact that the person does not understand any of the information being transmitted. Although the brain is processing the presented information, retrieving from memory what the appropriate response is, and then finally executing this response, I believe Searle is arguing that the phenomenon of consciousness is more than just formal symbol manipulation. It requires an actual reason to be responding with the appropriate output.

    ReplyDelete
  27. Searle's argument against Schank's second claim is, from what I understand, that whatever appropriate input and output behaviour can be made to exist identically within a computer and a human does not create understanding, and where there is understanding, there is no proof that the input and output of information present in the simulation is what is present in the human.

    This argument made me think that perhaps our ability to think and feel is incomparably unique, and that perhaps it is unproductive and ungrateful of us to minimize its complexity in order to match simple, lifeless material. Searle's argument against Schank's second claim makes me wonder: will simulating inputs and outputs of an incomprehensibly complex living brain get us any closer to understanding the nature of consciousness? Searle would argue that consciousness should only be understood as a complex neurobiological phenomenon, and I absolutely agree.

    ReplyDelete
  28. ' (…) Only a machine could think, and only very special
    kinds of machines, namely brains and machines with internal causal powers equivalent to those of brains.'

    'Any mechanism capable of producing intentionality must have causal powers equal to those of the brain.'

    Perhaps this need for intentionality within consciousness reflects all aspects of the living organism, which is at all times affected by the environment and always in need of the appropriate response in order to survive. However, regarding a 'machine with internal causal powers equivalent to those of brains', could such a thing exist? What could be equivalent in internal causal power to adaptive survival instincts?

    ReplyDelete
  29. I generally agree with the arguments made in Searle’s paper, but I found one point to be rather unsettling. In particular, I’m not convinced by Searle’s response to the Robot Reply. The Robot Reply poses the scenario of putting a computer inside a robot equipped with visual and motor apparatuses. This robot would then be capable of perceiving and manipulating the environment; therefore, it can be said that the robot has “genuine understanding”. Searle then responds with the following: “Suppose, unknown to me, some of the Chinese symbols that come to me come from a television camera attached to the robot and other Chinese symbols that I am giving out serve to make the motors inside the robot move the robot's legs or arms. It is important to emphasize that all I am doing is manipulating formal symbols: I know none of these other facts. I am receiving "information" from the robot's "perceptual" apparatus, and I am giving out "instructions" to its motor apparatus without knowing either of these facts. I am the robot's homunculus, but unlike the traditional homunculus, I don't know what's going on. I don't understand anything except the rules for symbol manipulation.” This response makes sense, and I would agree with it if the robot were simply following instructions and computing symbols in the environment. However, given these perceptual and motor capabilities, wouldn’t the robot eventually be able to attribute meaning to these symbols based on the feedback from its interaction with the environment? I would suppose that these sensorimotor interactions go beyond simple computation, as they incorporate dynamics as well. In sum, I see this robot as going beyond input and output of formal symbols.
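
    To make what I mean by "attribute meaning based on feedback" a bit more concrete, here is a heavily simplified toy sketch of my own (the symbols, the sensed categories, and the counting rule are all invented for illustration; whether such an association would count as understanding is of course exactly what is in dispute):

        from collections import defaultdict, Counter

        associations = defaultdict(Counter)              # symbol -> counts of co-occurring sensed categories

        episodes = [("symbol-A", "red_object"), ("symbol-A", "red_object"),
                    ("symbol-B", "loud_sound"), ("symbol-A", "red_object")]

        for symbol, sensed in episodes:                  # sensorimotor feedback, not a pre-written rule
            associations[symbol][sensed] += 1

        def grounded_guess(symbol):
            # The robot's best guess at what a symbol points to in its own experience.
            return associations[symbol].most_common(1)[0][0]

        print(grounded_guess("symbol-A"))                # red_object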

    ReplyDelete
  30. John Searle proves that computationalism fails to provide a causal mechanism for our cognition. A causal explanation of human thought and behavior without mention of thought's content is no explanation at all. If, when asked to explain how one recalls the name of a third-grade teacher, all we can say is "it's part of the program to give such a response," we have explained nothing. The Church-Turing Thesis gives us the crazy idea that if we built a simulation of whatever it is that a human does, we would have built a human, with a brain, capable of human thought.
    "For simulation, all you need is right input and right output and a program in the middle that transforms the former into the latter. That is all the computer has for anything it does. To confuse simulation with duplication is the same mistake, wheter it is pain, love, cognition, fires, or rainstorms." (12)
    Neuroscience today is a sort of computationalist's phrenology. Certain parts of the brain are able to compute certain things. Back in the occipital lobe, all we might have are neurons that can compute the orientation of lines; more rostrally, we've got some neurons that compute complex shapes, like circles; and eventually you've got neurons that compute Jennifer Aniston's face. Maybe that's hyperbolic, but that's the idea: progressively more complex programs that rely on the computations of simpler programs, or measurements, spread out across the brain.
    But the process by which neurons make their computations is no less syntactic than the one made by computers. Computers use zeroes and ones to determine weights of inputs. Neurons use a handful of chemicals to determine weights of inputs. I really don't see the difference between our current conception of how neural networks and computers work. It's all just electrical engineering. Both the T3 robot and the human are making measurements. Finding out where computations occur doesn't tell us anything more than some other number of algorithms connected in a different way. Neither one tells us what causes thought.
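    Here is a toy sketch of the parallel I am drawing (my own illustration, with invented numbers): both a "neuron" and a logic gate can be written as the same kind of operation, a weighted sum passed through a threshold, and describing either one this way says nothing about where meaning comes from.

        def unit(inputs, weights, threshold):
            # Weigh the inputs (excitatory positive, inhibitory negative) and fire past threshold.
            total = sum(i * w for i, w in zip(inputs, weights))
            return 1 if total >= threshold else 0

        print(unit([1, 1], [1, 1], threshold=2))             # behaves like an AND gate
        print(unit([0.8, 0.3], [2.0, -1.5], threshold=1.0))  # a toy neuron integrating EPSP- and IPSP-like inputs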
    The Systems Reply suggests that one part of the brain makes purely syntactic computations and another system performs the integrative, meaningful parts of thought. But all neurons can do is perform calculations, weigh IPSPs and EPSPs, and integrate disparate inputs, so where does the meaning come from?
    I think Searle is forced to use vague terms like "Brain" because it's unclear how we actually work. The best we can do is admit that we do do neural work that provides us with intentionality, meaning, feeling, content, representations, all those words Harnad, S. won't let us use. At this point though, our neuroscientists might as well be electrical engineers.

    ReplyDelete
  31. The reading by Searle is very successful at hitting our intuition, and in this way it’s easy to feel convinced by his argument, at least in my case. Searle argues that cognition cannot be computation at all, on the basis that computation, or formal symbol manipulation, can be performed without proper understanding, and the cognition of the type we do requires understanding/intentionality/feeling. I was particularly persuaded by the distinction between simulation and duplication: computationalism is about simulating how we do what we do, our performance capacity, and the catch is whether these types of models establish a causal mechanism, or better said, dynamics. Here I am a strong partisan of what Searle remarks: that “actual human mental phenomena might be dependent on actual physical/chemical properties of actual human brains” and that “whatever else intentionality is, it is a biological phenomenon, and it is as likely to be as causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomena.” By saying this, though, what Searle really shows is that cognition is not all computation. As has already been amply clarified, by intentionality Searle really means feeling, and I don’t know if there exists a particular “right” way of approaching the questions we are putting forward, but I would set out in search of the “chemical algorithm” of feeling.

    ReplyDelete