Saturday 11 January 2014

(3a. Comment Overflow) (50+)

21 comments:

  1. As Harnad and others have brought up, a major flaw in Searle’s reasoning is his use of the word “intentionality”. While “feeling” may be a more comprehensive word for it, I think what Searle is getting at is better expressed in his discussion of “understanding,” when he says: “My critics may point out that there are many different degrees of understanding; that ‘understanding’ is not a simple two-place predicate; that there are even different kinds and levels of understanding….But they have nothing to do with the points at issue. There are clear cases in which ‘understanding’ literally applies and clear cases in which it does not apply.”
    It seems that Searle takes the concept of understanding for granted and holds it as a given in human cognition. With this assumption he presumes intentionality. If one delves deeper, though, can we really assume that humans understand anything at all beyond what our brains have been “programmed” to know? Isn’t our understanding predicated upon layers and layers of script, much like the instructions from the Chinese Room argument?
    Take, for example, the restaurant story given by Searle. He asserts that because we can infer whether or not the patron ate his burger, we understand at least more deeply than the computer. But isn’t the only reason we can appreciate all the intricacies of the story a testament to the countless similar scripts and rules modeled for us throughout our lifetimes? Is our so-called understanding not just extremely sophisticated programming?
    This brings my thoughts back to the many mansions reply, which in my opinion is dismissed by Searle all too quickly. He contends that this reply is faulty since it undermines the computationalists’ assumption that computational states are implementation-independent. He asserts that “it is not because I am the instantiation of a computer program that I am able to understand English and have other forms of intentionality (I am, I suppose, the instantiation of any number of computer programs), but as far as we know it is because I am a certain sort of organism with a certain biological (i.e. chemical and physical) structure, and this structure, under certain conditions, is causally capable of producing perception, action, understanding, learning, and other intentional phenomena.”
    He understands the many mansions reply as just a question of advancement in hardware beyond what has been imagined up until now. But what about advances in software? If a machine could be programmed with the theoretically unbounded set of rules the human brain is programmed with throughout experience, would that computer not also be thinking?

  2. This comment has been removed by the author.

  3. Searle (1980) attempts to attack the claims of strong AI, according to which programs serve as explanations of the mind and both are independent of their "realizations" (hardware and brain). This view arises from what he identifies as three sources of confusion: the ambiguity of the terms "information" and "information processing"; behaviorist residues that, as with the Turing Test, lead us to believe that similar behavior entails similar processes for generating that behavior; and, finally, a dualistic view that considers the brain and the mind as independent.

    In order to attack this view he constructs a thought experiment (the CRA) to show that it is possible to have computation without understanding, that understanding implies more than computation and, in fact, that we have no reason to believe that any computation is part of understanding. This claim clearly needs argumentation, and some of the weaknesses of the article reside in the fact that Searle repeatedly allows this type of unsubstantiated dismissal.

    He bases his claim on the fact that understanding rests on intentionality, that is to say, consciousness directed at something, and therefore entails an interaction with the world. Computing cannot have intentionality, and thus cannot concern real things, because what it does is pure manipulation of formal objects that have no meaning, that are not symbols for anything real. Rather, the apparent understanding computers seem to demonstrate when they adequately perform the operations they are ordered to is none other than the understanding of the person who conceives the program or who interprets its results.

    I think the most interesting thing that Searle does in this article is to denounce any attempt to explain the mind (if I understand correctly, this is equivalent to saying "consciousness" or "intentionality") that ignores its biological nature. It seems clear to me that the biological component is essential, even though there may be more to it, other necessary explanations from other levels of organization. Because still, on the road from electrical impulses to consciousness, the gap seems enormous to me.

  4. 3a. “If strong AI is to be a branch of psychology, then it must be able to distinguish those systems that are genuinely mental from those that are not. It must be able to distinguish the principles on which the mind works from those on which nonmental systems work; otherwise it will offer us no explanations of what is specifically mental about the mental.”

    This is, in my opinion, one of Searle’s most important points regarding how and where strong AI fits into the study of the human mind. It seems so obvious: mental systems and AI systems are different (if they were the same, we could use one term for both, but instead one has to be “artificial”) and so constitute separate categories. Perhaps the proponents of computationalism have been trying so hard to convince us that AI is the same as a human mind that it has been a while since we stopped to think about how the systems are separate by definition; in this sense, Searle’s argument is still very relevant.

    A separate point about the Chinese Room: it got me thinking about different kinds of learning. There is human learning, obviously, by some mechanism, and there is “machine learning” according to some, and I wonder how many “types” of learning there are. The reason the Chinese Room made me think of this is the way Searle described an increasing ability to manipulate symbols. It reminded me of the “variational learning” model of language acquisition (Yang), which could easily be implemented as a statistical model in a computer. In this sense, acquiring the symbol manipulation would be very much like acquiring a grammar with syntax but no semantics, as Searle pointed out. There has to be some unique learning mechanism in the human mind beyond a statistical model that separates a human system from an AI system, right?
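    (A rough and purely hypothetical sketch of what such a statistical learner might look like, assuming a Yang-style linear reward-penalty update over a set of candidate grammars; the grammar and sentence objects are placeholder stand-ins, and nothing in the update ever touches meaning, only whether a string happens to parse.)

    ```python
    import random

    def variational_learner(grammars, sentences, gamma=0.05, steps=20000):
        """grammars: list of functions sentence -> bool (True if that grammar
        can parse the sentence); both arguments are hypothetical placeholders.
        Assumes at least two candidate grammars."""
        n = len(grammars)
        p = [1.0 / n] * n                                  # start with uniform weights
        for _ in range(steps):
            s = random.choice(sentences)                   # hear a sentence
            i = random.choices(range(n), weights=p)[0]     # gamble on one grammar
            if grammars[i](s):                             # reward the chosen grammar
                p = [pj + gamma * (1 - pj) if j == i else (1 - gamma) * pj
                     for j, pj in enumerate(p)]
            else:                                          # penalize it
                p = [(1 - gamma) * pj if j == i
                     else gamma / (n - 1) + (1 - gamma) * pj
                     for j, pj in enumerate(p)]
        return p                                           # learned weights over grammars
    ```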

  5. This comment has been removed by the author.

    Searle uses the example of a person who does not understand Chinese using English instructions and sets of Chinese symbols to generate answers to questions about a story in Chinese, to show that putting formal principles into a computer isn’t sufficient for understanding, because a human would be able to follow the same principles without understanding what was happening. I’m generally on board with Searle’s argument that cognition is more than just computation, but I think there’s one particular part that undercuts what is so appealing (for me at least) about the idea of computation as an explanation for cognition.
    “To confuse simulation with duplication is the same mistake, whether it is in pain, love, cognition, fires or rainstorms.”
    (This is also good because it questions the inner granny inspired conviction that pain and love would somehow be more difficult to replicate than anything else.)
    The intuition that computers are like brains because they both ‘process information’ in ways that fires and rainstorms do not meshes well with computationalism. BUT, as Searle points out, computers and people do not process information in the same way. When a computer processes information it is manipulating formal symbols and what the symbols stand for is irrelevant. When a person is processing information the meanings of the symbols are completely intertwined in the act. So human information processing differs from computer processing because it involves intentionality/feeling/semantics. Finding a program that will generate the right output in response to an input is not the same as duplicating what goes on in the mind when it is processing that same input.

  7. "No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched. Why on earth would anyone suppose that a computer simulation of understanding actually understood anything?"

    It seems to me that the obvious answer to this question may be that many people doubt that understanding is physical in the same way rainstorms and fire are physical. We do not detect thoughts with our eyes, thoughts do not wet our skin, and as of yet thoughts have not burned down any neighborhoods that I know of. Understanding is not readily apparent as a physical event, or as an event with some physical component.

    It should be noted that Searle's CRA has not proven the existence of some physical component of cognition; it has simply refuted the belief that cognition is just computation. It seems to me that there are at least two obvious arguments for why cognition is physical in some way. For the purposes of this post I will label these as (i) "because brains" and (ii) "because feeling".

    i) "Because brains"
    "Because brains" is what I would consider the dominant way of thinking about the unexplained aspects of cognition in North American (and certainly biomedical) cultures. Unfortunately it seems as though the field of cognitive neuroscience has come up short in their search for causal mechanisms to explain cognition. As Prof. Harnad sometimes says (if I may paraphrase) "neuroscience has not told us anything about cognition that we did not already know". For the most part, cognitive neuroscience stays afloat by uncovering new (and often useless) correlations between cognition and biology. I am still waiting for the day when a neuroscientist can prove to me that a biological processes causes some previously unexplained aspect of cognition.

    In defence of cognitive neuroscience, one may say that the unexplained aspects of cognition will be explained by biology as soon as we scale up our technology to match the complexity of human cognition. This strikes me as eerily similar to the view that we will explain cognition with computation once we scale up our technology. When it comes to this line of thinking, I adopt a stance similar to Harnad's comment from our readings this week, where he wrote, "this [view] is not a counterargument but merely an ad hoc speculation" (note that this quote was written about a different line of reasoning than the one I am describing here, although the two are similar).

    ii) "Because feeling"
    "Because feeling" is the viewpoint that humans have something called understanding simply because it feels like they do. Certainly we can be sure that feelings are real (i.e., not simulated), but this does not imply that they are a priori correct. Consider, for example, an individual with obsessive compulsions to wash their hands even after they know their hands are clean. It is reasonable to believe that this person has these compulsions because the feeling of uncleanliness is separate from the reality of having clean hands (Wegner, 2004). Similarly, it does not seem reasonable to assert that humans can "understand" Chinese simply because they feel like understanding is a real thing. That said, it is still reasonable to say that humans feel something (even if it's wrong), which is why I am quite convinced by Searle's CRA that computation can not be all there is to cognition. Feelings are as real (i.e., not simulated) as water or fire and I do consider them to be part of cognition. There must be either i) something physical or ii) something supernatural in cognition to make the feeling real (in the same way there must be water molecules in water for it to be wet).

    References:
    Wegner, D. M. (2004). Précis of The illusion of conscious will. Behavioral and Brain Sciences, 27(05), 649–659.

  8. “...but as far as we know it is because I am a certain sort of organism with a certain biological (i.e. chemical and physical) structure, and this structure, under certain conditions, is causally capable of producing perception, action, understanding, learning, and other intentional phenomena” (p. 10, Searle, 1980)

    It seems that Searle is confusing his software with his hardware. Understanding is something non-tangible, in that we cannot concretely see or touch the phenomenon of understanding something. It may arise from, or lead to, mechanical changes within our brain, but it is firmly part of the mind in the mind-body binary. Two people can theoretically be in the same mental state although they have different brains, with different levels of neurotransmitters running through them at any given time. Similarly, a computational state is fundamentally derived from the program state. The same program, run on two different pieces of hardware that work completely differently, can arrive at the same state. If we accept that two people can arrive at the same mental state through different hardware, and two computers can arrive at the same computational state through different hardware, then why couldn't a computer and a person arrive at the same mental/computational state (since they are the same, according to computationalists) through different hardware? Searle does not explain why biological systems are the only possible systems for achieving mental states of understanding, other than to say that it must be so because we are built using a biological system. For a moment, he even seems to be using a Grandmother Objection.

  9. “But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states.”

    In “Minds, Brains, and Programs”, Searle attempts to disprove this statement. He gives the example of a “Chinese Room” in which an English-speaking man is given English instructions on how to reply to Chinese symbols with certain other Chinese symbols. This man would pass a sort of Chinese Turing Test, in that he would be able to trick people reading his replies into believing he understood Chinese. Searle points out that this man does not in fact understand a word of Chinese, so he is able to produce computations without intentionality (feeling). One objection to this thought experiment is the systems reply:

    “While it is true that the individual person who is locked in the room does not understand the story, the fact is that he is merely part of a whole system, and the system does understand the story.”

    In a way the systems reply has a good point, in that the man actually does understand English, the language in which the instructions (the “program”) are given. So if you look at the system as a whole, there is a level of understanding. We can say he understands English because he knows what it feels like to understand English. But if we apply the systems reply to an actual computer, it falls short. Even if a computer is capable of taking Chinese characters as input and returning prescribed Chinese characters as output according to a program, we don’t know that the computer actually knows what it feels like to read the program. We cannot say that it understands.

  10. For the most part I agree with Searle, though I do see how this is not really a refutation of the Turing Test. But that is irrelevant here.
    For me to understand the Chinese Room argument, I decided to think about it in a different way. Imagine you are given a set of rules (in a language you understand) that tells you to return a square when you see three circles and a triangle. This is basic pattern matching. Let’s posit that this language of squares, circles, and triangles makes sense to some cognitive being. For me, these symbols have no meaning (semantics), but they apparently have a well-formed syntax. From this, I cannot learn the language; I have no means and no access to translational tools. Now, this is the way I imagine a computer works. Computers work in binary, basically a set of on and off switches designated by 0s and 1s. Any program created for the computer must be translated by a compiler (just another program that generates machine code from the code written by the programmer) into machine code or bytecode, which is then run; an output is produced and promptly translated, through more pattern matching, into a response in the programming language. (A sketch of this kind of blind pattern matching follows below.)
    The implication of this, if we say that the computer’s “understanding” of its bytecode (which is really just pattern matching and not understanding at all) is how cognition works in the human brain, is: are we then saying that we don’t actually understand the language we use on a daily basis? That the compiler in our brain reduces the input to a “bytecode,” if you will?
    Honestly, I am concerned by one major thing: if in English we can construct an infinite number of sentences, if we can create new words (like “Google”), if we can cause language to evolve (ever try reading ye olde English?), how can a computer do this? Not to mention the glaring issues with memory, since I have yet to see a way for a computer to generate original ideas and flesh them out. Furthermore, the rules of syntax for all human languages are immensely complex, with exceptions all over the place. And consider morphology: there are rules in my native Russian that I do not remember ever learning explicitly and that have no concrete formulation, like how we know which gender to assign to an object, and which form to use, even when the object is essentially novel.
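    (A minimal sketch of the blind pattern matching described above; the rule table is invented for illustration, and nothing in it requires knowing what any shape means.)

    ```python
    # Purely syntactic rules: shapes in, a shape out.
    RULES = {
        ("circle", "circle", "circle", "triangle"): "square",
        ("triangle", "square"): "circle",
    }

    def respond(symbols):
        """Return the output shape dictated by the rule table, if any."""
        return RULES.get(tuple(symbols))

    print(respond(["circle", "circle", "circle", "triangle"]))  # -> square
    ```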

    Lastly, I agree with Searle on a major point: the “idea that actual human mental phenomena might be dependent on actual physical/chemical properties of actual human brains.” I truly believe that the brain cannot be ignored, and I am certainly not a dualist, nor anywhere close. That is a bit of a strong position to take, but I cannot fathom that cognitive capacities do not require the actual brain. From this stance, though, I find it difficult to believe even Searle’s claim that if “you can exactly duplicate the causes, you could duplicate the effects” so as to create an artifact that can think. This still seems ludicrous to me.

  11. Searle, in his argument against the effectiveness of the T2 Turing Test, also manages to provide an intuitive argument against computationalism, though I believe it to be insufficient to put computationalism to bed. Any argument against Searle seems to come from the assumptions of computationalism (namely, the tempting "systems reply", which Searle effectively refutes), though computationalism itself always came more from those assumptions than from empirical evidence. I will make that assumption explicit in mine. Coming from the perspective of computationalism, it is certain that the conscious being is not aware of the symbol manipulation that is occurring to give rise to its cognition. In his thought experiment, Searle expects to both perform the symbol manipulation himself and experience the understanding associated with what he is doing. While this effectively debunks the T2 TT, it does not refute computationalism, which makes no promise that the "level" on which understanding occurs is the same as the one on which symbol manipulation occurs.

    To help define what I mean by "level", consider this: no part of a system which is blind to the rest of the system can be aware of what the entire system is doing. It is absurd to think that a motor knows that it drives the car, yet the car is driving because of the motor. No part of the computer on which you are reading this, including the screen, is aware of the fact that it is rendering meaningful symbols on a screen, yet it is indisputable that the computer as a whole is doing just that. In the biological case, even without the assumption of computationalism, this property holds true. The labeled-line neurons that receive input from sets of photoreceptors and return output corresponding to the orientation of lines on the screen are not aware that you as a whole are reading, yet both of these states exist, and if those neurons were to quit, no doubt the reading, which they were not aware of to begin with, would no longer be possible. All of this is to say that systems all have at least two levels: one that is performing some action, and the underlying mechanics that drive that action.

    Searle attempts to circumvent this by replacing the entire system with himself, but even this is ineffective, as the system does not have to be aware of its mechanism. You are aware that you are reading, but at the same time you do not feel like you are interpreting the dots on the page as a set of lines, though that is certainly occurring.

    Searle's failure here serves to help define a fundamental question of cognition, which both computational and dynamical models of the mind must explain: how is it that understanding can arise from a system whose parts cannot understand?

  12. Searle states that intentionality cannot be programmed into a machine the way it is present in a human being. Furthermore, understanding happens in very different ways in the two. According to Searle, robots can process meaningless inputs and, through programming, give an appropriate output. This does not necessarily entail comprehension, just that the machine was able to spit out the right symbols. There is no emotion attached, nor comprehension as it happens with us. While emotion may be a less important aspect of the argument, how we respond to a stimulus is important. When we come in contact with something novel, we process it by connecting it to what we have in memory, classify it, and then respond accordingly. A machine cannot do this, since it simply reacts mechanically and uses stores of facts in order to respond. If the internal representations of the same stimulus are different in a person versus a computer, then how can they yield the same result of understanding?

  13. "I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing."

    Searle's paper places a lot of importance on the idea that two systems can be functionally equivalent, yet one might not exhibit intentionality (and thus lack understanding) while the other does. Searle specifically acknowledges the assumption that it is possible to formally define a system that acts exactly as a human mind acts, i.e., one that has the same input-output relations as a human mind, but in which the relations between input and output are computed purely formally. I question this assumption. The types of input-output relations that are mediated by an understanding of the meanings of words or intentional objects are incomprehensibly complex. It does not seem implausible to me that, short of hardcoding responses to every possible question about a symbol (a maximally inefficient algorithm), it would be impossible to generate verbal outputs and behavior identical to those of an understanding system.

    In order to determine whether or not something has 'understanding', it seems to me that we need a definition of what understanding is. Searle's approach is to assume that understanding is something possessed by humans (due to the "causal powers of the brain") and not by other things. It seems that if we start this way, we come dangerously close to begging the question when we deny any type of system other than the brain the possibility of understanding. I was further unable to get a clear sense of how intentionality and meaning contribute to understanding, or how these properties come to be in systems capable of understanding. It seems that the best we can say to define meaning is that it does not simply consist of the structural correspondence of formal rules to the behavior of things in the world. If it is indeed the "mark of the mental", what the heck is intentionality, and how does it work?

  14. In Searle's "Minds, Brains, and Programs", he uses the example of a Chinese room and argues that computer programs are not sufficient for intentionality. I pretty much agree with his argument, though there is something left out of his explanation.

    Searle argues that people are just machines, including our organs like stomachs and lungs. However, people/brains have the ability to understand, while stomachs/lungs do not. Similarly, computer programs can give us the same outputs that human beings give, but they do not have the ability to understand. Near the end of the article, he says, "'But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding?' This I think is the right question to ask, though it is usually confused with one or more of the earlier questions, and the answer to it is no. 'Why not?' Because the formal symbol manipulations by themselves don't have any intentionality; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics." I think these sentences pretty much summarize what Searle wants to say.

    I agree with Searle's conclusion that cognition is not only computation, or that computation cannot capture all there is to cognition. His argument reminded me of the discussion in class last week about the strong Church-Turing thesis, which says that computer programs can simulate/model any system as closely to that system as you wish. However, the main problem is that simulations still lack something. If you simulate a plane, the computer program is still a computer program; it cannot carry people and fly through the sky to another place. If you simulate a waterfall, the computer program does not have water that flows. As for cognition, the computer program cannot simulate feelings, because feelings are invisible to anyone except the introspector himself or herself. Searle's argument lacks something of the same kind. Yes, computer programs are very powerful and can give pretty much the same outputs as human beings do, but what Searle is talking about, if we use the hierarchy of Turing Tests that Harnad introduces in his paper The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence, is only T2: "we can talk about anything and everything". T2 does not include star-gazing or food-foraging, as Harnad points out. Searle's argument leaves out whether computer programs can tell us what they feel, or, for example, how they would go about star-gazing or food-foraging. For the Chinese room argument, I would say he leaves out what the person inside the room feels about knowing or not knowing Chinese (expressed in Chinese). I suspect it would be impossible for a person who does not understand Chinese to express his feelings in Chinese, because there is no way he can do so by manipulating Chinese symbols based on their shapes and a set of rules.

  15. It is hard for somebody like me, who grew up with a computer in the house from day one, to imagine a life without them. Since the modern computer is known to lose its efficiency as it ages, I find myself anthropomorphizing these devices that spend more time with me than I’d like to admit.

    Searle’s paper was written in 1980, which is before my time, but nonetheless I can’t help but think that at that point the modern laptop or phone would have seemed unbelievably futuristic. He argues that “the computer understanding is not just (like my understanding of German) partial or incomplete; it is zero.” What would it take for a computer to seemingly understand a language introduced after its initial programming? The way an iPhone is able to predict what I intended to write when I input some jumble of letters, and the way it quickly accepts a new word into its dictionary after I use the word only once or twice, makes it seem like it is ‘learning’ without my explicit direction to do so.
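    (A toy sketch, certainly not Apple's actual algorithm, of how "learning" a new word could be nothing more than frequency counting, with no understanding anywhere in the loop; the seed vocabulary is invented.)

    ```python
    from collections import Counter

    class ToyPredictor:
        def __init__(self, seed_vocab):
            self.counts = Counter(seed_vocab)   # built-in dictionary words

        def observe(self, word):
            self.counts[word] += 1              # the user typed this word

        def predict(self, prefix):
            candidates = [w for w in self.counts if w.startswith(prefix)]
            return max(candidates, key=self.counts.get, default=prefix)

    p = ToyPredictor({"hello": 5, "help": 3})
    p.observe("heuristics")
    print(p.predict("he"))   # still "hello" until "heuristics" is typed more often
    ```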

    Perhaps my phone does not understand what I am saying, but it modifies itself to fit my habits (even giving me an ETA for where I would usually be at that time on that day of the week, which is freaky), and this learning function makes it seem so human (and understanding) that I wonder if understanding will ever be attributed to technology. The whole system understands (or, to be more precise, adapts to) my habits with eerie accuracy, so even though a theoretical little man manipulating the inputs and outputs of this machine might not understand the inputs, it is hard to remove all attributions of character or learning.

  16. Searle states that "The first thing to notice about the robot reply is that it tacitly concedes that cognition is not solely a matter of formal symbol manipulation, since this reply adds a set of causal relation with the outside world [cf. Fodor: "Methodological Solipsism" BBS 3(1) 1980]. But the answer to the robot reply is that the addition of such "perceptual" and "motor" capacities adds nothing by way of understanding, in particular, or intentionality, in general, to Schank's original program."

    Why would it be that perceptual and motor capacities do not contribute to understanding? I would think that having the symbols be grounded in a manner external to the computer would provide meaning to the "squiggles" and "squaggles" that the computer manipulates. I would also think that the issue of intentionality is at least somewhat addressed, since the manipulation of symbols now has consequences for the robot (and the computer). Granted, this does smack of Searle's homunculus example under the robot reply; I also agree that granting a computer external capacities is not sufficient for determining the mechanism it uses to understand. However, I don't see how, beyond granting real-world responses and consequences to the symbols a computer manipulates, it could come to ascribe meaning to them rather than merely treating them as input to translate according to its rules (my natural response is to propose some sort of programming of its external capacities, as in, for example, pain responses, but even then I can see that this does not solve the fundamental quandary of how the "homunculus" would derive meaning, and consequently intentionality, from it).

  17. "As long as the program is defined in terms of computational operations on purely formally defined elements, what the example suggests is that these by themselves have no interesting connection with understanding."

    Here I wonder if we can imagine a machine that does not rely on purely formal elements (i.e., squiggles and squaggles). The answer, as Searle confirms, is yes, and that machine is a human being. But nowhere in Searle's paper is there an identification of what this essential, non-formal element that humans have is, or might be, that gives them cognition, understanding, and feeling. Here I can't help but think of God, Spirit, or any number of the formless, non-physical categories that we rely on so often to explain why form is filled with content (meaning) in the world. On the other hand, we can imagine that this unknown element that makes humans intentional is a biological fluke, just a fluke of certain chemicals being around each other at a certain time, creating the feeling of feeling. That doesn't help us understand what intentionality is; rather, it gets rid of it, as we become entirely physical and determined.

  18. "While it is true that the individual person who is locked in the room does not
    understand the story, the fact is that he is merely part of a whole system, and the system does understand the
    story."

    The notorious systems reply. I believe Searle accurately defends against the systems reply by arguing that all the rules could be memorized and the rule-follower still wouldn't understand Chinese. Also, if the systems reply were valid, it would lead to a panpsychic view where any combination of things could be a feeling mind if it were acting appropriately. And what does it mean to act appropriately? If a person and a set of rules together can understand Chinese, maybe a rock understands math but never has the chance to show it.

    However, I believe the systems reply comes from a speculative answer to the hard problem. We know that we feel, but we don't understand how, and we can't localize it biologically, so some argue that feeling is just something that emerges when you have a brain-like thing operating. The ambiguity of how we feel leads people erroneously to attribute the same emergent characteristic to other systems, but not every system is a brain. Cognitive science will have to find out what the difference between a brain and a "Chinese-rule system" is, but until then, I think it's safe to assume that they are not the same.

  19. "But according to strong AI, the computer is not merely a tool in the study of mind; rather, the appropriate programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states."

    Programs = mind? The important thing to note is, first of all, that strong AI claims that programs, not robots, are equivalent to minds. Programs by themselves are implementation-independent: a program can run on anything that has the capacity for computation, i.e., anything able to do symbol manipulation. It has no dependencies on the underlying implementation. That is the difference between T2 and T3: a T2 program is implementation-independent, and it only takes in and produces verbal statements, whereas a T3 robot has sensorimotor devices and can learn from (and ground symbols in) the environment. Secondly, strong AI claims programs can have cognitive states. Cognitive states are felt states: you feel something when you cognize. That is more than symbol manipulation.

    Searle's famous Chinese Room Argument is his refutation of strong AI: there are no cognitive states when a computer is doing computation to mimic human cognition. Imagine Searle in a closed room; all he has around him are instruction papers that give the rules for responding to every question written in Chinese. Searle does not understand Chinese, nor do any of those rules make any sense to him, but the one thing he can do is recognize symbols: he can tell one Chinese character from another. Questions are written on a sheet of paper and given to Searle; his job is to find the corresponding response and write the answer back. The Chinese speakers outside the room are happy with the answers they get, and they conclude that whoever is sitting inside the room must be a Chinese speaker. How surprised they would be to see Searle, who cannot speak Chinese at all, walk out!

    Searle claims that what a T2 program does is exactly what he does: following the instructions to deliver the response a human would anticipate, while having absolutely no knowledge of what it is doing. In other words, the program never gets to understand the meaning of the words. This conflicts with human cognition: we not only produce the right response, we also understand it.

    Human cognition works like this: first, we understand the meaning of the input (transforming the symbols into whatever semantic representation lives somewhere in our brain); second, we form the idea that we want to convey as the answer; and finally, we transform that idea into a verbal answer (human language symbols). T2 programs lack the transformation from symbols to ideas and from ideas to symbols.

    The most interesting objection to the CRA is probably the systems reply. The systems reply argues that Searle left an important part out of the picture: the instruction book. Searle doesn't understand Chinese, but the aggregate system of Searle plus the instruction book does. And a T2 program is such a system.

    Just like strong AI, the systems reply makes the same mistake: the program never gains any deeper understanding (semantics) of the language beyond what the Chinese symbols look like. The instruction book doesn't teach you the semantics; you never get to transform symbols into ideas. Searle's refutation is that he could simply memorize every instruction in the book; although that sounds like a horrible task for a human, he would then have internalized the instruction book and become the "bigger system". But would he have any understanding of Chinese? No, because he never gets to bind syntax with semantics to produce meaning for the Chinese characters.
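    (A minimal sketch of the instruction book as a pure lookup table over character shapes; the entries are invented, and the point is only that matching and copying strings never requires knowing what any of them mean.)

    ```python
    RULEBOOK = {
        "你好吗？": "我很好，谢谢。",            # shapes in, shapes out
        "今天天气怎么样？": "今天天气很好。",
    }

    def chinese_room(question):
        """Return the scripted answer for a recognized string of symbols."""
        return RULEBOOK.get(question, "对不起，我不明白。")  # default squiggles

    print(chinese_room("你好吗？"))   # looks fluent from outside the room
    ```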

  20. Searle's thesis in this paper is that the only machines that can think are brains and other machines with "internal causal powers equivalent to those of brains," and that strong artificial intelligence, which is entirely program-based (due to implementation-independence), is therefore not useful for cognitive science.
    Searle shows that passing the Turing Test isn't sufficient to prove understanding (i.e., meaning) with his Chinese Room Argument (CRA). The CRA is a thought experiment in which Searle sits inside a room with a rulebook for manipulating Chinese symbols, performing the manipulations correctly so that the results have meaning to the Chinese speakers outside the room. Searle, however, not being a Chinese speaker, doesn't have a clue what the characters mean (i.e., they are ungrounded for him). Therefore computers don't understand either (by the same implementation-independence argument). "Whatever purely formal principles you put into the computer, they will not be sufficient for understanding, since a human will be able to follow the formal principles without understanding anything."

    Searle goes on to counter-argue against objections that others have made to his CRA. Searle counters the systems reply, which says that the system understands Chinese even though the person inside the system doesn't, by asking us to imagine the person internalizing all components of the system. The person still doesn't understand Chinese, and therefore the reply fails.

    Searle goes on to say that "the only motivation for saying there must be a subsystem in me that understands Chinese is that I have a program and I can pass the Turing test; I can fool native Chinese speakers. But precisely one of the points at issue is the adequacy of the Turing test." He further makes the point that by the systems reply's logic all kinds of noncognitive systems are going to turn out to be cognitive, like stomachs (i.e., they are doing information processing at some level, but we can't infer that they also have understanding). Searle also describes how AI doesn't sufficiently distinguish mental states from non-mental states, which is a severe problem for the validity of the discipline.

    The robot reply adds some sensorimotor capacities: visual sensing ("eyes" in the form of cameras) and limbs, which "adds a set of causal relation with the outside world." But Searle points out that this is insufficient, since the robot still doesn't have any intentionality (or, in other words, consciousness through which to attach meaning to its senses and processes).

    The brain simulator reply proposes simulating the neuronal firings of an actual Chinese speaker (instead of simulating the information we have about the world). Searle argues that "the problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states." I am not fully satisfied with this counter-argument: this has never been done, so we don't really know what would happen if this type of brain activity could be perfectly simulated!

  21. (Part 2:)
    The combination reply takes the systems, robot, and brain simulator replies and says that together they are sufficient to answer the CRA. Searle brings up the problem of intentionality again to counter the argument.

    The other minds reply says that we can't know that other people have understanding, and so if we are going to attribute understanding to people, we should also attribute understanding to machines. Searle counters this by saying, "In 'cognitive sciences' one presupposes the reality and knowability of the mental in the same way that in physical sciences one has to presuppose the reality and knowability of physical objects."

    The many mansions argument is that we just haven't found out yet how to write a program advanced enough to have intentionality. Searle argues that there is something else, something that cannot be captured in a formal program, inherent in our biological structure, that has causal powers and gives us intentionality.

    In sum, Searle is saying that "no purely formal model will ever be sufficient by itself for intentionality because the formal properties are not by themselves constitutive of intentionality, and they have by themselves no causal powers except the power, when instantiated, to produce the next stage of the formalism when the machine is running." Strong AI (and computationalists) are wrong, as no computer program can sufficiently account for intentionality. Something more, which Searle thinks is biological, is needed for intentionality.
