Saturday 11 January 2014

(5. Comment Overflow) (50+)

13 comments:

  1. "Systematic correspondence between scratches on paper + quantities in the universe is a remarkable and extremely powerful property. But it is not the same as meaning, which is a property of certain things going on in our heads."

    In "The Symbol Grounding Problem", Harnad specifies as a necessary condition of meaning the processing of the relationship between a symbol and its referent by a cognitive process. This stands in stark contrast to a traditional (early Wittgensteinian?) formulation of meaning consisting solely of the correspondance between rules for manipulating symbols and rules that dictate objects' behavior in the real world on the other (a formal correspondance). A necessary condition for a symbol to be meaningful is to have consciousness somehow cognitively associating it with its referent/rule for picking out the referent.

    This symbol grounding problem arises when considering Searle's Chinese Room. For there to be meaning within Searle's mind, it must be conscious (conscious in the sense of "within one's awareness"): if he is unaware of understanding the meaning of the symbols, then they are ungrounded. The unanswered question is: what is the cognitive machinery that generates this meaning within the mind?

    In the "Computation." paragraph, I was curious whether implementation-independence was brought up in order to probe our intuitions about whether the mind can actually be implementation-independent. My own intuition is that cognitive processes are implementation dependent, perhaps because of how modern neuroscience is predicated on the brain and the mind being identical, or at least the causal privileging of the brain over the mind. Evolutionary theory suggests that the brain evolved the way it did so that it provides very specific cognitive processes, further fueling the intuition that in the case of the brain/cognition, the hardware and software layers are inextricably linked.

    One statement that I was confused by was under the "Consciousness" header: "So the meaning of a word in a page is 'ungrounded', whereas the meaning of a word in a head is 'grounded', and thereby mediates between the word on the page and its referent." It seems that if someone is looking at a word on a page and understands its meaning, then the word itself is "grounded", because there is some sort of causal connection between the word on the page and the agent's (mentally) picking out what the word corresponds to in the world.

  2. Symbol grounding is the process of connecting symbols with meanings/ideas. For us, the language we use daily, e.g. English, is a symbol system, and the ideas we express through English are the meanings grounded in those symbols.

    The symbol grounding problem asks how symbols get their meanings, and what those meanings are. The argument that a word's meaning is composed of features cannot stand, because for non-decomposable words like proper names, no features contribute to the meaning at all. We cannot guess who a person is if we don't know her/him; but once we do, we are able to associate this otherwise meaningless name with that person.

    "A word refers to (its referent) is not the same as its meaning". I would use the famous quote from René Magritte, "This is not a pipe" as an example. This word written on the paper is not a pipe. It is just squiggle and squaggle. It is used as a symbol, a pointer to the concept of a pipe in our head. So when you think of the word as just the symbol, it is not a pipe, but if you think of the actual concept of the pipe when you see the word, you would be left wonder what he meant.

    Searle's Argument brings the symbol grounding problem to the consciousness level.

    Searle's Chinese Room argument negates the view that computation (the manipulation of symbols) alone could ground meanings in symbols. Thus a Turing-Test-passing (T2) machine could not explain our ability to reason, because it could be just like Searle, manipulating meaningless symbols without knowing the meaning of any of them. So it is reasonable to argue that there is something else going on in our brain that grounds those symbols and enables us to give them meanings. This 'something' is our dynamic capacity: we not only reason, but also see, hear, touch, smell, etc. For a machine to be able to ground meanings as we do, it should at least be able to "experience" the world as we do. By "experience" I mean that it needs some sensory devices attached to its processor. For example, the machine could use a camera to see an apple as we do, and it would also need to associate this image of the apple with the symbol "apple". The image, in this case, is part of the concept/meaning of the apple itself.
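
    To make that last step concrete, here is a minimal Python sketch of the idea (my own toy illustration, not anything from the paper: the feature values and the classify rule are invented): a symbol token only counts as "grounded" here when it is tied to a perceptual categorizer that can pick out its referent from (pretend) camera input.

```python
def classify(features):
    """Toy perceptual categorizer: maps 'sensory' features to a category label."""
    redness, roundness = features
    if redness > 0.5 and roundness > 0.5:
        return "apple"
    return "unknown"

class GroundedSymbol:
    def __init__(self, name, categorizer):
        self.name = name                # the arbitrary symbol shape, e.g. "apple"
        self.categorizer = categorizer  # the sensorimotor link to the referent

    def refers_to(self, sensory_input):
        """The symbol picks out its referent only via its categorizer."""
        return self.categorizer(sensory_input) == self.name

apple = GroundedSymbol("apple", classify)
camera_frame = (0.9, 0.8)             # pretend feature vector from a camera image
print(apple.refers_to(camera_frame))  # True: the symbol is connected to its referent
```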

  3. Harnad's paper addresses the issue that undermines the computational theory of mind. Symbol manipulation is simply the input of meaningless characters which are transformed by a set of rules based on shape, rather than semantics. This seems unreasonable because we tend to believe that the things around us have meaning and that is how we understand how to interact with our environment. A symbolic level is not enough to capture mental phenomena because of this whole “Chinese Dictionary-Go-Around”: “Suppose you had to learn Chinese as a first language and the only source of information you had was a Chinese/Chinese dictionary.” This is a simplification of Searle’s original Chinese Room Argument in which the Chinese/Chinese dictionary acts as the set of rules the person in the room uses to answer the Chinese input to the correct Chinese output. If you accept Searle’s argument, as I do, then the person in the room never actually learns Chinese; the person does not understand Chinese because the symbols have no meaning - they are not “grounded”. This idea of grounding the symbol system in the real world is crucial if we want to “get off the symbol/symbol merry-go-round.”
    The proposal of a hybrid model to handle the symbol grounding problem gives me some pause. To me connectionism seems logical, while symbolism lacks connection to reality. According to this paper, the biggest problem with connectionism is its failure to model “our behavioral capacities [that] appear to be symbolic”, which apparently leads to the hypothesis that “the underlying cognitive processes that generate them… must be symbolic.” I am not entirely convinced that connectionism lacks the ability to model our linguistic, reasoning, and chess-playing skills. Yet I can see the merit in the proposed hybrid model. By taking the symbolic system and connecting it to the non-symbolic representations, we connect our perceptions with their meanings, thanks to the neural networks that pick out the relevant details of what we observe.
    This seems reasonable, but I do wonder whether the symbol system described in this paper could just be the storage of the connection weights and patterns of the neural networks. I guess the larger question then is: how do we categorize what we observe? Do we store one symbol/representation per name? Or perhaps that’s not at all how we deal with categorization.
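
    Here is a toy sketch of the hybrid picture as I understand it (entirely my own illustration with made-up "retinal" features, not the paper's model): a connectionist part (a single perceptron) learns to pick out a category from sensory-like features, and its output label is then available as a grounded token for a symbolic layer to compose (e.g. "zebra = horse & striped").

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (feature_vector, label) pairs with label in {0, 1}."""
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Invented features: (elongation, has_mane); label 1 = horse, 0 = not a horse.
data = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.2, 0.1), 0), ((0.1, 0.3), 0)]
w, b = train_perceptron(data)

def categorize(x):
    """The connectionist part supplies the grounded label for the symbolic layer."""
    return "horse" if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else "not-horse"

print(categorize((0.85, 0.9)))  # "horse"
```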

  4. I think it's important to note, with regard to the symbol grounding problem, that there is not simply a matching program going on in the mind, correlating symbol with referent. Rather, our symbols are usually defined by their referents; that is to say, symbols cannot exist without referents in the first place. Of course, there are meaningless symbols, but those are grounded by the fact that they lack referents: the procedure for finding the referent is "don't even try". This isn't to say that symbols on paper are grounded from the perspective of the paper, but from the perspective of the cognizing observer, if there are symbols, they are grounded. This leads to an interesting question: how is it that symbols spontaneously arise in dynamic systems?

  5. As I read through Harnad’s “The Symbol Grounding Problem” I got confused by the idea of iconic representations.
    Consider the following passage, “No mechanism has been suggested to explain how the all-important categorical representations could be formed.”

    But have we not also failed to explain how the iconic representations could be formed?

    You continue by saying, “In the case of horses (and vision), they would be analogs of the many shapes that horses cast on our retinas.”

    Are there innate fundamental icons of basic inputs (i.e. shapes) that then build the horse icon? Or should we see the icons as formed directly from the sensory input? That is, are they formed in response to the similarities that exist between certain inputs and then matched to create the icons? Essentially my question is: how do inputs become a discrete set of icons?
    This stems from another question. Would I be right in assuming that these iconic representations are stored? If so, is a discrete set of iconic representations then imposed on a non-discrete set of inputs?
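
    One way to picture the last question is a toy clustering sketch in Python (the numbers and the mechanism are my own invention for illustration, not anything Harnad proposes): continuous "sensory projections" are grouped by similarity, and each group's mean plays the role of a stored iconic prototype onto which new, non-discrete inputs are then mapped.

```python
def kmeans_1d(points, k=2, iters=10):
    """Naive 1-D k-means: partition continuous inputs into k discrete prototypes."""
    centers = points[:k]                       # naive initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: abs(p - centers[j]))
            clusters[i].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

projections = [0.9, 1.1, 1.0, 4.8, 5.2, 5.0]   # continuous "sensory" inputs
icons = kmeans_1d(projections)                 # discrete stored prototypes
new_input = 1.05
nearest = min(icons, key=lambda c: abs(c - new_input))
print(icons, nearest)                          # a new input is matched to its prototype
```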

  6. I don't think it's correct to say that the meaning of a word is the way to pick out its referent. I think the meaning of a word is a relation, namely the relation between an object (in the world) and the knowledge (in the head) we have about it, acquired through experience, which is kind of intuitive. The relation between a word and its object (or more correctly, an object and its word) is a relation of reference, which, as cited, is not the same as meaning. I therefore understand the reference relation as following the meaning relation, since reference is an element of language. In this regard, I think groundedness (the connection between the object and its mental representation or symbol) is the base on which we build meaning. First, our experience leaves a mental trace in our brain, which, through accumulation and categorization, becomes grounded symbols, for which we then create (or learn) words. Meaning arises from the accumulation and categorization of experience about certain objects, groups of objects and their relations. I think this is more or less what is mentioned about recent research in robotics (Steels 2008) on robots that develop communication systems.

  7. (Meaning, semantics, relevance? There is relevance in the syntax, obviously, but it is more difficult to ground a system. Language as a system; linguistics as a piece of cognitive science.) In symbol manipulation, symbols are really just a binary between 0 and 1. There is a didactic nature to our (human) ways of classification and of defining language. This raises a more important alarm when it comes to animals, especially those known to have communication techniques that are not language. If we are trying to create a conscious machine, then surely we will also want to accomplish the task of creating animal consciousness. We’ve all seen a movie with a talking animal, and there are people like myself who feel the same empathy towards animals as they do towards humans. There is a connection there that should be labeled "consciousness of other cognitive organisms".

    Many linguists would argue that language is far from arbitrary. From what I remember of a neurolinguistics course, the way we create the sounds that make up our words is a huge deal and definitely plays a role in language. In an evolutionary sense, communication (think of Latin or another "dead" language) is something that both separates us from and connects us to animals. Another version of the Turing test that I would love to implement: could a (quite advanced) robot communicate in sign language? That is, if I gave it just two arms and "hands", would it be able to sign back? There would be nothing other than visual input, i.e. the computer would be deaf.

    I do not follow functionalism, nor do I subscribe to computationalism, and I know there are species (I'm thinking of rhesus monkeys and certain types of dolphins) that communicate with "words" that are not grounded. I would need proof that we could recreate their consciousness and ground their system of communication in the same way we are meant to ground the "meaning" of words in our heads. That kind of connection might be something to convince me, but even then, I can see that I would have trouble accepting category-based theories.

  8. This comment has been removed by the author.

  9. Steels (2008) suggests that the Symbol Grounding Problem (how words get meanings related to the real world), which is most fundamental to cognitive science, is resolved by identifying ‘the right mechanisms and interaction patterns so that the agents autonomously generate meaning, autonomously ground meaning in the world through a sensorimotor embodiment and perceptually grounded categorization methods, and autonomously introduce and negotiate symbols for invoking these meanings’. Searle believes that understanding is based on intentionality, meaning that consciousness is directed at something, is about something, and that this will never be the case for artificial intelligence since the phenomenon arises from neurobiological processes, which it lacks. Steels argues that this issue is now resolved because they have created robots that, based on interaction with each other and with the world, have invented and used grounded symbols; that is to say, they have reverse-engineered symbol grounding.
    I find the results interesting and puzzling, but it doesn’t seem to me that these robots ‘understand’, even leaving the question of feeling aside. First, the task they have to execute in the language game is limited to identifying a color, creating a symbol (a word), and communicating it, and, for the interlocutor, perceiving the symbol, testing it against the colors it perceives, and storing the information when there is positive feedback. But there is more to meaning/understanding than this execution, which I feel is accounted for by the programming of these robots. Also, it seems the only way for these robots to extend these capacities to other spheres, and to create new categories by fusing pre-existing ones, would be for the experimenters to program it.
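
    For readers who haven't seen the paper, here is a stripped-down sketch of the kind of color "naming game" being described (the details are my own simplification for illustration, not Steels' actual implementation): two agents repeatedly name a perceived color, and the hearer adopts the speaker's word when communication fails, so a shared lexicon eventually emerges.

```python
import random

COLORS = ["red", "green", "blue"]

class Agent:
    def __init__(self):
        self.lexicon = {}                      # color -> invented word

    def name(self, color):
        if color not in self.lexicon:          # invent a word if none is known
            self.lexicon[color] = "".join(random.choices("aeioubdgkl", k=4))
        return self.lexicon[color]

    def guess(self, word):
        for color, w in self.lexicon.items():
            if w == word:
                return color
        return None

def play_round(speaker, hearer):
    topic = random.choice(COLORS)              # both agents perceive the same scene
    word = speaker.name(topic)
    success = hearer.guess(word) == topic
    if not success:                            # on failure, hearer adopts speaker's word
        hearer.lexicon[topic] = word
    return success

a, b = Agent(), Agent()
results = [play_round(*random.sample([a, b], 2)) for _ in range(200)]
print(sum(results[-50:]), "successes in the last 50 rounds")  # converges to a shared lexicon
```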

  10. For a change of pace, I thought I might share an example of symbol grounding that came up in my reading for another class. As an English Lit minor, I'm always bringing my Cognitive Science knowledge into the novels that I read, and for my 20th Century Lit class we read Edith Wharton's The Custom of the Country. As with many novels from that time, it highlights the tension between two worlds: Old Money, the class of aristocracy and inheritance, and the rising New Money class. A recurring theme throughout the novel is that a significant difference between the two classes is that New Money members are unable to attach meaning, value, or significance to anything, and thus they end up in a cycle of endless spending and buying, not unlike being endlessly stuck on the merry-go-round when one attempts to use a Chinese/Chinese dictionary without any prior knowledge of Chinese. Old Money, on the other hand, attaches an enormous amount of meaning to tradition, family heirlooms, and aristocratic values passed down through generations, while New Money by definition lacks these traditions. Thus, when they try to emulate Old Money, they look the same externally, but since they have not attached meaning to their actions or objects, their world is empty and their value system unfounded.

    Obviously this is not quite the same as attaching meaning to individual words/symbols to ground them, but similar enough to be relevant, I think. Searle may look externally like he knows Chinese, but ultimately his grasp of the language is empty, because he can’t do any symbol-grounding.

  11. “nor can categorical representations yet be interpreted as meaning anything”

    In order to categorize with language, we had to ground the symbols of a particular language in their respective objects, or through other symbols which are themselves grounded. When we ground a certain symbol, I think we can assume that we know what that symbol means, since we have had a "first-hand experience" with the object that this symbol represents. From grounding these symbols we achieve categorical representations; it is through knowing what each symbol represents that categories are made, through some sort of filtering, which is directly dependent on knowing what the symbol stands for. If the basis for constructing a categorical representation is dependent on meaning, how can the categorical representation not have any meaning?

  12. "So the meaning of a word in a page is "ungrounded," whereas the meaning of a word in a head is "grounded" (by the means that cognitive neuroscience will eventually reveal to us), and thereby mediates between the word on the page and its referent."

    The paper begins by setting up the question at stake. Searle showed with his Chinese Room argument that the meaning of words exists within the brain of the cognizer and not within an implementation-independent computational system. The question now is: how does the brain generate meaning? From here the "Symbol Grounding Problem" is formulated. We know that symbol systems can be manipulated based on the shape of the symbols and the rules/algorithms that go along with them; this process is implementation-independent. The symbols themselves are manipulated based on shape and not meaning; however, symbol systems may be meaningfully interpreted. From here it is evident that the meaningful interpretation occurs within the brain of the interpreter, yet the question remains of how it is that our brains can interpret meaning within symbol systems. The paper points to two features that appear necessary for this: symbol grounding and consciousness. Grounding a word means that the word must be connected to its referent, which requires sensorimotor apparatus in order to interact with the referents of words. Consciousness comes into play in order to ensure that the meaningful interpretation is actually being meant and felt by the interpreter.
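
    A tiny illustration of "manipulation by shape, not meaning" (my own toy example, not from the paper): a rewrite rule that operates purely on the form of strings of tally marks, yet is systematically interpretable by us as addition.

```python
def rewrite(expr):
    """Purely syntactic rule: replace  x '+' y  with the concatenation  xy."""
    left, right = expr.split("+")
    return left + right

print(rewrite("|||+||"))  # "|||||": to the system it is just squiggles,
                          # but we can interpret it as 3 + 2 = 5
```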

  13. In “The Symbol Grounding Problem” Harnad looks at how meaning and words are linked. Some people reckon meaning is based in the features or rules used to figure out what thing a word is referring to. This paper asks, how does that happen?
    First, though, by this definition meaning also includes the processes used to pick out the referent. This works well when applied to people, who have consciousness (feeling and intention). But Harnad asks, what if words are on a piece of paper? Do they have meaning then? Harnad argues no: without minds to mediate connections between words and the things they refer to (their referents), there is no meaning at all. Therefore consciousness is what grounds a word in meaning. The connections of ungrounded words to their referents (i.e. their meanings) aren't mediated by a conscious mind, while grounded ones are.
    Then Harnad asks whether what is happening inside a computer is grounded or not. Computationalists argue that the brain is like a computer and that it is possible to find a program identical to the one being run inside the brain. Therefore, since our brains are computers and conscious meaning is happening in our heads, meaning happens in computers too. The right program, one identical to our brain's "program" (which is implementation-independent and can run on any hardware), will pass the Turing Test. Passing the Turing Test means that a computer's communications are indistinguishable from a person's. Searle, however, showed with his Chinese Room Argument (CRA) that this isn't sufficient to prove understanding (i.e. meaning). The CRA can be quickly described as a thought experiment in which Searle is inside a room with a set of symbol-manipulation rules (like a Chinese/Chinese dictionary), correctly performing manipulations that have meaning to Chinese speakers outside the room. Searle, however, not being a Chinese speaker, doesn't have a clue what the characters mean (i.e. for him they are ungrounded). Therefore computers don't know either (by the same implementation-independence argument). So Harnad comes back to the question: how does the brain ground words in meaning?

    Harnad outlines the symbol grounding problem. Symbols are objects that form part of symbol systems; a symbol system is the set of symbols plus rules for their manipulation based on their shape. Meaning happens inside the brain, not inside the system. The brain's capacity to pick out referents is due to sensorimotor capacities that directly connect symbols to their referents. Harnad then adds that while grounding is necessary, it is not sufficient: our consciousness is also necessary for symbols to really have meaning.
