Saturday 11 January 2014

(3b. Comment Overflow) (50+)

11 comments:

  1. "Moreover, it is only T2 (not T3 or T4 REFS) that is vulnerable to the Chinese Room Annalogy..."
    I'm sure that I'm falling into the subterfuge of Harnad S., but it looked like Searle beat away the T3 test with his rebuttle to the "Robot reply" and the T4 with his rebuttle "Brain simulator reply."
    "the addition of such 'perceptual' and 'motor' capacities add nothing by way of understanding."
    If we think that being able to recognize patterns and store data gathered from the world constitutes understanding, we have missed the boat. Put yourself in the robot, says Searle, and realize that you as a person cannot make any more sense of the data coming through the robot's sensory system than you could in the Chinese Room.
    But suppose you put a person in a neuron (and I realize I'm arguing both sides here, but humor me for a second). You hopped into a one-man submersible, had it shrunk by a ray gun (I'm pretty sure the Defense Department actually has one of these), and, like an episode of The Magic School Bus, made it into a neuron. There's sodium and potassium all around you. All of a sudden: ACTION POTENTIAL. You don't know what happened. Being in the machine, or even being part of the process, will not give you the understanding that the whole system has.
    But the problem remains the same, and it's a massive problem for all of neuroscience, I think. How do we go from information processing, from churning around zeroes and ones or sodium and potassium, to understanding? You're kind of forced into the issue of agency, and maybe one has to admit that it's all epiphenomenal. But that's a boring dead end, and we know from experiments done at the Douglas and at McGill that we can concoct genuine experiences with chemicals.
    Bottom line: we have to figure out how to circumscribe the hard problem of consciousness, and ground symbols, without losing those symbols to zeroes and ones, or sodium and potassium.

  2. I understand the necessity for computational states to be implementation-independent. This is in fact the beauty of programming: the software is hardware-independent (a toy sketch of that independence follows at the end of this comment). Yet of course there must be some form of physical implementation. What I'm not understanding is this: when Harnad states that "Mental states are just computational states and computational states are implementation-independent," I'm confused as to what physical states are. Are physical states lacking in a cognitive component? Am I not aware of my body?
    Harnad goes on to discuss unconscious states and the nonsensical nature of unconscious mental states. The quote "finding oneself able to exchange inscrutable letters for a lifetime with a pen-pal in this way would be rather more like sleep-walking, or speaking in tongues.... It’s definitely not what we mean by ‘understanding a language,’ which surely means conscious understanding" leaves me agreeing further with Searle's point. If something like this were truly a possibility in humans, maybe the argument wouldn't work, but this is not how humans function (and thank goodness; I do enjoy being aware of what I understand).
    Now, unfortunately, I am brought to Harnad's soft spot for the Systems Reply. I concede that the CRA only shows that computationalism is wrong because cognition cannot be all computation (since computation does not create understanding), but I am still hard pressed to see the possibility of a hybrid of computation and brain function.
    “There are still plenty of degrees of freedom in both hybrid and non computational approaches to reverse-engineering cognition without constraining us to reverse-engineering the brain (T4).”
    Perhaps I would be more willing to agree if there were a clear idea of how in the world one might begin doing this without actually looking for mind-brain correlates, i.e., how the brain and mind are interwoven and how brain regions relate to functions. But I still have not seen this attempted practically, and since we do live in an age of immense technological advancement, I'm really hoping one day I will see more than theories. For now, I guess I am a Granny.
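    The sketch mentioned above: a minimal, purely illustrative example (my own toy code, not anything from Harnad's paper) of what "implementation-independent" means. The same transition table is executed by two different "machines," and both pass through exactly the same computational states, so the states are fixed by the table, not by the hardware (or the executor) that runs it.

    # Toy illustration (hypothetical example): a parity-checking automaton.
    # The "program" is just a transition table; two different executors
    # ("different hardware") run it and end in the same computational state.

    TABLE = {("even", "1"): "odd", ("even", "0"): "even",
             ("odd", "1"): "even", ("odd", "0"): "odd"}

    def run_on_lookup_machine(bits):
        # Executor 1: a dictionary lookup at every step.
        state = "even"
        for b in bits:
            state = TABLE[(state, b)]
        return state

    def run_on_branch_machine(bits):
        # Executor 2: the same table unrolled into if/else branches.
        state = "even"
        for b in bits:
            if state == "even":
                state = "odd" if b == "1" else "even"
            else:
                state = "even" if b == "1" else "odd"
        return state

    bits = "1101001"
    assert run_on_lookup_machine(bits) == run_on_branch_machine(bits)
    print(run_on_lookup_machine(bits))  # -> even (the string has four 1s)

    Whether the table is looked up in a dictionary, unrolled into branches, or wired in silicon, the sequence of states is identical; that is all "hardware independence" amounts to, and it is exactly the property Searle exploits by executing the rules himself.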

  3. "computational states are implementation-indepedent"

    I agree that this is the soft underbelly of computationalism. Indeed, it is the software that matters, but in the case of living beings the interaction between software and hardware is dynamic. It's a two-way interaction, whereas in a computer the hardware isn't affected by the software.
    For example, as we learn, the brain goes through a process of plasticity in which certain connections are destroyed and others are reinforced. On the other hand, these computational states (mental states), which are not physical, cause physical alterations in the brain, like a change in the level of certain neurotransmitters and in the way neurons function. This interaction might be explained by quantum mechanics and the potential of our consciousness to change certain aspects of our reality; of course the interaction also works the other way, from the brain to those mental states; it's a circle. The way certain groups of neurons function also affects those mental states; therefore, there is no possible way that those computational states are implementation-independent.
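    A hedged toy sketch of the contrast being drawn here (my own analogy, not a model from the readings): in an ordinary computer, running the program leaves the "connections" untouched, so every run is identical; in a "plastic" system, running the program changes the very connections the next run will use.

    # Toy contrast (hypothetical analogy, not a brain model): a fixed program
    # versus a "plastic" one whose connection strengths change as it runs.

    fixed_weights = {"A": 1.0, "B": 1.0}

    def run_fixed(inputs):
        # Ordinary computation: using the weights never alters them.
        return sum(fixed_weights[x] for x in inputs)

    plastic_weights = {"A": 1.0, "B": 1.0}

    def run_plastic(inputs, rate=0.5):
        # Crude plasticity analogy: every use of a connection strengthens it,
        # so the "hardware" seen by the next run is no longer the same.
        total = 0.0
        for x in inputs:
            total += plastic_weights[x]
            plastic_weights[x] += rate
        return total

    print(run_fixed("AAB"), run_fixed("AAB"))      # 3.0 3.0 (identical every run)
    print(run_plastic("AAB"), run_plastic("AAB"))  # 3.5 6.0 (history-dependent)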

  4. "It is only T2 (not T3 or T4 REFS) that is vulnerable to the CRA, and even that fonly for the special case of an implementation-independent, purely computational candidate."

    How does building a symbol system that moves within and perceives the real world (i.e. building a T3 system) ground it? Does this grounding guarantee understanding? In his paper, Searle states that if we add sensors, we are not necessarily building an understanding system; the system must convert the sensor data to digital (symbolic) form, and then it will manipulate the sense data as it would any other symbolic data. Does building a representation from real-world sense/action-result data and correlating the representation with a word or concept create understanding?
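    A hedged toy sketch of Searle's point as I read it (my own illustrative code, not from either paper): sensor readings get digitized into symbols and then handled by the same kind of rule table a T2 program would use, so nothing inside the rules refers to what the symbols are about.

    # Toy illustration (hypothetical): a "robot" whose camera readings are
    # digitized into symbols and then manipulated by a rule table, exactly
    # as any other symbols would be. The rules never mention the world.

    def digitize(pixel_brightness):
        # Sensor transduction: an analog reading becomes just another symbol.
        return "DARK" if pixel_brightness < 128 else "LIGHT"

    RULES = {
        ("DARK", "DARK"): "STOP",
        ("DARK", "LIGHT"): "TURN_LEFT",
        ("LIGHT", "DARK"): "TURN_RIGHT",
        ("LIGHT", "LIGHT"): "GO",
    }

    def act(left_pixel, right_pixel):
        # Symbol manipulation: look up a rule, emit another symbol.
        return RULES[(digitize(left_pixel), digitize(right_pixel))]

    print(act(40, 220))  # -> TURN_LEFT

    The open question raised here is whether wiring such lookups to real sensing and acting (grounding) is what turns this into understanding, or whether, as Searle claims, it remains symbol shuffling either way.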

  5. In What's Right and Wrong about Searle's Chinese Room Argument?, Harnad explains the concepts that Searle uses and comments on Searle's argument that cognition is not just computation. Harnad seems to agree with Searle's final conclusion, but thinks Searle's explanation needs more illustration.

    One thing Harnad mentions is brain/mind dualism. Searle thinks that when we talk about computation, we do not need to know how the brain works; more specifically, if we knew how the brain works, we would not need to worry about whether cognition is computation, because the answer would just be there. I believe this is very much true. Although the brain supports cognition, meaning that without a brain cognition does not exist, cognition is largely independent of brain function. Harnad uses the example of a computer's hardware and software, and we discussed this in class as well. The hardware supports the software so it can run properly, but what kind of hardware the computer has is irrelevant. It is possible to run the same software and get the same function on two different pieces of hardware. Mind and brain have a similar relationship.

    Another point, and I believe it is among the most important, is that Harnad explains the differences between the T2 and T3 Turing Tests and their importance. As I mentioned in my previous skywriting, I think Searle lacks something that is excluded from T2 but included in T3 (the difference between T2 and T3), and I referred to it as feelings. I think Harnad has a similar argument, and he presents it in a more detailed and formal way. He uses the notion of "dynamics" and suggests that a T3 robot is a computational/dynamic hybrid, which means computation is part of a T3 robot but not all of it; a T3 robot needs dynamics to do the things that are excluded from T2 but included in T3. If I understand this article right, Harnad uses the Systems Reply mentioned in Searle's Minds, Brains, and Programs, which suggests that the person inside the Chinese room is part of the system. Harnad seems to say that the person inside the Chinese room is now the computational part of a T3 robot, and the person is truly part of the system; however, the person is not the entire system, and he will not be able to "internalize all of these elements of the system" as Searle suggests, because that requires dynamics and the person does not have a dynamic part. Harnad seems to accept the Systems Reply and reject Searle's response to it.

  6. As a student of neuroscience and psychology, I struggle with the notion of not looking “for the mentality in the matter (the hardware): it’s the software (the computer program) that matters”. I have learned to see the mind as an emergent property of the physical brain, so it is hard to accept that “the physical details of the implementation are irrelevant to the computational state that they implement”. In the case of humans, no two software systems are identical, as they can be in computers. How can we make the same argument for human cognition as for computers when the end products are so vastly different?

    The state of an individual human’s cognition is, as I see it, a direct consequence of the physical reality of his or her brain. The brain influences what the mind is capable of doing (for example, if I target your Broca’s area then I will most likely damage your language abilities), and the mind is capable of influencing the physical reality of the brain (maps of motor control in the regions anterior to the central sulcus are adjusted based on how much we use certain motor skills compared to others). How could “the physical implementations of one and the same computational system [be] indeed equivalent” in the context of humans? The brain and the mind are continuously changing each other, so who says we have the same computational systems?

  7. “To walk like a duck, something roughly like to waddly appendages are needed, and to swim like one, they’d better be something like webbed ones too. But even with these structure/function coupling constraints, aiming for functional equivalence alone still leaves a lot of structural degrees of freedom”

    After reading this article I was left with the question of where exactly we draw the line with these structural degrees of freedom. How do we decide which implementation details are significant and which are not? The question relies on a clear definition of what cognition is, which in our class is “everything we do,” making it difficult to demarcate the line at which implementation details enter T4 territory. How do we cleanly separate the things we do that are and are not cognition? Is our heart beating part of cognition, and if not, what about feeling your heart race? As Harnad states in the article, “So there is really a micro functional continuum between D3 and D4.” The line between T3 and T4 does not really exist, and maintaining that it does still entertains the computationalist view that human cognition is something that can be understood in the abstract. While a T3 robot wouldn't be implementation-independent, it would be implementation-nonspecific (it can be made of anything so long as it performs the function). But I would argue that while a T3 robot would get most of it right, building one would require choosing what is essentially human and what is incidentally human. Over time, I think the accepted and adequate T3 robot would begin looking a lot like a T4.

  8. This comment has been removed by the author.

  9. "Computational states are implementation-independent"
    I find this concept really interesting and a little troubling. I understand Searle did not mean that the brain is irrelevant in the sense that we do not need it. Obviously we need it; it is its construction that doesn't matter. So far so good. I find it hard to believe that we can construct an artificial system that can execute the same kind of mental programs we can. For basic functions like arithmetic, I do believe it is possible, as we have already done it, but to build a robot that passes T3, that is not nearly enough. We barely understand how and why the brain does what it does. Without understanding how the brain can do what it does, how can we build a system that generates the same kind of outputs? This is where reverse engineering comes in as a possible solution, but this would mean the physical structures are important after all. In general I agree with Searle that he and the room together do not understand Chinese, but in order to have something that does, I think a better understanding of the brain (the hard problem) may lead to better solutions in the future.

  10. "It is only T2 (not T3 or T4 REFS) that is vulnerable to the CRA, and even that only for the special case of an implementation-independent, purely computational candidate."

    What differentiates a T2 program from a T3 robot is whether it depends on implementation. Pure computation is implementation-independent. It is a universal truth, just like gravity: it applies everywhere in the universe, not just on Earth. To be implementation-dependent, however, is to be more than pure computation. The CRA fails to refute a system that is not only computation and is, in turn, implementation-dependent. It can no longer be Searle sitting in a closed room. It must be a robot (a computational system) that has arms and legs and everything a human has (implementation-dependent). To what extent the robot must be similar to a human being is hard to know. But what is certain is that human cognition requires dynamics (being able to see, touch, smell, etc.), and a T3 robot has dynamics.

  11. In this paper Harnad addresses Searle's Chinese Room Argument. What Searle calls "Strong AI" is boiled down to three beliefs: that the mind is a computer program, that the brain is irrelevant and that the Turing Test is decisive. Harnad reformulates this into "recognizable tenets of computationalism". The first is that mental states are computational states. The second is that "computational states are implementation-independent." And finally, the Turing-Indistinguishability test is the strongest empirical test for the presence of mental states and is therefore decisive for the computationalist theory.

    An important point in this paper is that only a T2 candidate in an implementation-independent, purely computational form is vulnerable to the Chinese Room Argument, since T3 or higher would have sensorimotor grounding. Harnad also mentions that there are many ways T3 could be engineered through approaches that are not purely computational (without reverse-engineering the brain).
