Saturday 11 January 2014

5. Harnad, S. (2003) The Symbol Grounding Problem

Harnad, S. (2003) The Symbol Grounding Problem. Encyclopedia of Cognitive Science. Nature Publishing Group / Macmillan.

or: Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1), 335-346.

or: https://en.wikipedia.org/wiki/Symbol_grounding

The Symbol Grounding Problem is related to the problem of how words get their meanings, and of what meanings are. The problem of meaning is in turn related to the problem of consciousness, or how it is that mental states are meaningful.


If you can't think of anything to skywrite, this might give you some ideas:
Taddeo, M., & Floridi, L. (2005). Solving the symbol grounding problem: a critical review of fifteen years of research. Journal of Experimental & Theoretical Artificial Intelligence, 17(4), 419-445.
Steels, L. (2008) The Symbol Grounding Problem Has Been Solved. So What's Next?
In M. de Vega (Ed.), Symbols and Embodiment: Debates on Meaning and Cognition. Oxford University Press.
Barsalou, L. W. (2010). Grounded cognition: past, present, and future. Topics in Cognitive Science, 2(4), 716-724.
Bringsjord, S. (2014) The Symbol Grounding Problem... Remains Unsolved. Journal of Experimental & Theoretical Artificial Intelligence (in press)

67 comments:

  1. [Thinking to myself that I could see where this was going after hitting the section of the paper entitled “Searle’s Chinese Room Argument”, I wrote my thoughts out without finishing the paper, then read the latter half of the paper. What I wrote seems to be more or less in accordance with the paper without being purely a summary, so I’ll assume that my comment has some relevance and post it.]

    The symbol grounding problem – or, rather, the explanation of what it means to express an ungrounded symbol (i.e. process a symbol according to computational rules alone) versus a grounded symbol (i.e. computation PLUS intangibly “feeling” the meaning) – has something to do with the order in which concepts (referents) and the symbols referring to them originate. A symbol can be literally anything – a rock can stand for a cat, if I so choose it in my symbol system. However, whether the symbol has a deeply felt meaning or is just meaningless “squiggles and squoggles” (like Chinese characters to Searle; or, arguably, like words to a computer program) is a function of one’s own experiences. Unlike a computer (as instantiated in this day and age), a human can go out into the world and conceptualize referents such as water, danger, food, pain, etc. without having to give recourse to symbols at all. I guarantee you that prelinguistic humans did not mistake water for sand or pleasure for pain, any more than non-linguistic animals do. We don’t need symbols in order to do that. It is only when language evolved that we found it convenient (and very beneficial) to affix symbols to these ready-made notions of ours. If the mirror neuron paper’s theory holds water, maybe this started out by grunting and imitation of ingestive sounds. But whatever the case, the fact remains that first came the feeling, then came the symbol. Contrast this to computers (or Searle’s Chinese room) – we input whatever we want into a computer, but until we have a computer that can learn independently (“go out into the world”, so to speak), we will not have a computer that understands what it is talking about – that has grounded its symbols in experience.*

    * (This is, incidentally, where my earlier disagreement comes into play – I think it sufficient to put a computer in a simulated world, whereas Stevan Says it needs to be the real physical world. But that debate-ship has sailed.)

    Replies
    1. Grounding vs. Meaning; Doing vs Feeling

      Actually, the symbol grounding problem is not the problem of feeling (the "hard problem"): it's just part of the "easy" problem of doing.

      What it is that has the symbol grounding problem is computation (symbol-manipulation), and it only has it when it is set the task of generating (i.e. being) cognition by passing T2 (verbal -- i.e. symbolic -- capacity only) via symbol manipulation alone.

      In that case the symbols are ungrounded because they are not connected to the things they are (interpretable as being) about.

      Only symbols-only T2 has the symbol grounding problem; T3 robots are grounded. They not only have our verbal capacities (indistinguishable from any of us) but they also have our sensorimotor capacity to interact with the things (and the people) in the world that our symbols are about, indistinguishably from any of us (i.e., Ethan!).

      T3 would solve the easy problem (and therefore also the symbol grounding problem), but not the hard problem of whether the T3 robot (Ethan) feels. T3 is immune to Searle's argument, because it is not just symbols. So Searle cannot "become" the system, as he did with the T2-passing computer, and bear witness to us that he does not understand Chinese. To pass T3 in Chinese you have to not only be able to manipulate Chinese symbols according to rules based on their arbitrary shapes, you would have to be able to connect them to the (non-arbitrary) shapes in the world that the symbols (are interpretable as) denoting. To do that, Searle would have to understand Chinese.

      But the fact that Searle could not demonstrate that T3 does not understand does not demonstrate that T3 understands! To explain how and why T3 understands would require solving the hard problem, because understanding has three essential components: (1) the capacity to manipulate words (T2); (2) the capacity to connect words with the world they are about (T3); and (3) the capacity to feel what it feels like to understand what words mean. (1) and (2) are doing; (3) is not doing but feeling.

      But all you need is (1) and (2) for grounding.

      (A computer interacting with the real world cannot pass T3. A computer interacting with a simulated world is also not passing T3. A computer that is part of a T3 robot that is interacting with the real world is just a part of the grounded system (so here the "System Reply" is correct: the robot may understand but the computer does not). Neither is a computer running a simulation of a robot in a world. All of those are still just ungrounded squiggles and squoggles, exactly as in Searle's Chinese room.)

  2. If "meaning is grounded in the robotic capacity to detect, identify, and act upon the things that words and sentences refer to", then how do we ground symbols that mean 'conviction' or 'joy'? How does one learn such abstract concepts that our sensorimotor capacity does not seem to be able to ground? The referent of these words have feedback in order to support the learning of it.

    It seems to me that symbols can also be grounded based on other symbols that are grounded by experience. I don't need to experience or see a black hole to know what the meaning of a black hole is. The idea of a black hole came from mathematical equations (another symbol system) and concepts of physics. But no one has really seen a 'black hole'; we just have hypotheses about what it could be, based on a different symbol system than the one our natural language exists in. Seems to me there's nothing in my sensorimotor capacity that can augment this symbol. Is this symbol even grounded then? In fact, I'm not even sure I know what a 'black hole' is, but I know that it REFERS to something in outer space.

    Replies
    1. Hm. I don't think a grounded symbol necessarily has to follow from direct experience. In your example of the black hole, maybe the black hole is grounded within the other symbols that we have (that are more concrete) - so I would say "black hole" is grounded, at least in the human mind.

      But then there is this quote: "The symbols, in other words, need to be connected directly to (i.e., grounded in) their referents; the connection must not be dependent only on the connections made by the brains of external interpreters like us."

      In the youtube video above (for this section), Professor Harnad mentions an example of “zebra” as a combination of “horse” and “stripes” – around 10:00min. He mentions that categorizing words in this manner allows us to learn in one trial (versus the trial and error example given earlier in the video with tactile discrimination). This is mentioned in the context of categorical learning, I believe, but since so much of our knowledge is not the result of direct interaction between the thing and the symbol, I think “black hole” in this case would still be an example of symbol grounding (since it can be grounded in other symbols and it is referring to something, it is not just some shape that is meaning-independent).

    2. Direct and Indirect Grounding

      You are both right: In later weeks we'll see that not all words need to be grounded through direct experience. In fact most words are not grounded directly. You can learn what they mean indirectly from verbal definitions, descriptions, and explanations -- as long as the words in the definitions, etc. are grounded (either directly, or indirectly via definitions, etc.). The point of the symbol grounding problem is that it cannot be indirect grounding all the way down.

      The interesting question then becomes: "How many words have to be grounded directly? And which ones?"
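      A toy way to picture why it cannot be indirect grounding all the way down (my own sketch in Python, with an invented mini-lexicon, not anything from the paper): treat the vocabulary as a dictionary in which each word is defined out of other words, seed it with a few directly grounded words, and let grounding propagate through definitions. Remove the seed set and nothing ever becomes grounded.

        # Hypothetical mini-lexicon: each word is defined in terms of other words.
        definitions = {
            "zebra":   ["horse", "stripes"],
            "stripes": ["lines", "pattern"],
            "pattern": ["lines"],
        }
        directly_grounded = {"horse", "lines"}   # grounded by sensorimotor experience

        def grounded_words(definitions, directly_grounded):
            grounded = set(directly_grounded)
            changed = True
            while changed:                        # propagate grounding through definitions
                changed = False
                for word, defn in definitions.items():
                    if word not in grounded and all(w in grounded for w in defn):
                        grounded.add(word)
                        changed = True
            return grounded

        print(grounded_words(definitions, directly_grounded))
        # zebra, stripes and pattern all end up (indirectly) grounded;
        # with directly_grounded = set(), nothing ever does.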

    3. The relationship between meaning and experience confuses me. Does such a relationship (direct or indirect) entail that grounding requires memory?
      Could a robot that remembers experiences ground symbols in the way that humans do?
      Or is the question more specific: how does experience lead to grounding? What does memory really entail? How is human memory different from, say, a computer's?

    4. My understanding is that the link between meaning and experience is made by categorization. Symbols do not refer to things on a one-to-one basis. The word ‘apple’ refers to an abstract object you are designating as an apple. By experience, you have learned that apples are small, round, red or green objects. If I use the word apple in a sentence, you are going to be able to picture an apple in your head without needing me to show you an actual apple. This learning by experience is made through categorization. Very schematically, after having seen many different types of apples you have abstracted their common features and know what, in general, apples look like. In the end, experience has taught you that there is a category of objects sharing common features that are designated by the symbol ‘apple’. The meaning of ‘apple’ is the rule that tells you to pick its referent in the category you have built.
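      To make the "abstracting common features" idea concrete, here is a minimal sketch (my own toy example, with invented feature numbers, not anything from the reading): the category is represented by a prototype averaged over experienced exemplars, and the "rule" for the word is to pick whatever is closest to the prototype it names.

        # Invented exemplars: each thing is a (roundness, smallness, redness) triple.
        exemplars = {
            "apple":  [(0.9, 0.8, 0.7), (0.8, 0.9, 0.6), (0.85, 0.8, 0.9)],
            "banana": [(0.1, 0.7, 0.2), (0.2, 0.8, 0.1)],
        }

        def prototype(points):
            # abstract the common features by averaging over experienced exemplars
            return tuple(sum(p[i] for p in points) / len(points) for i in range(3))

        prototypes = {name: prototype(pts) for name, pts in exemplars.items()}

        def name_of(thing):
            # the "rule": apply the name whose prototype the new thing is nearest to
            dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
            return min(prototypes, key=lambda name: dist(thing, prototypes[name]))

        print(name_of((0.8, 0.85, 0.75)))   # a never-before-seen exemplar -> "apple"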

    5. On memory, I would say that it is required for grounding if learning requires memory. However, when you want to know what the word ‘apple’ means, you don’t have to recall every single apple you have seen (at least not consciously). So, I think grounding is just building a rule to reach the referent but does not mean you have to remember all the referents you have put in one category.
      But I agree that you probably need memory in order not to forget the rule…

    6. I think memory is necessary but not sufficient for grounding. A strategy for learning the symbols could just be to simply memorize words based on other words. So when asked: what is an apple? You could say: The red, round object that tastes sweet. But this can be done without actually KNOWING what an apple is nor what it refers to. So how do we know if a machine has symbols grounded?

    7. "how does experience lead to grounding?... How is human memory different from, say, a computer's?" (from Stephanie, a couple comments up)

      "...the meaning of a word in a head is "grounded" (by the means that cognitive neuroscience will eventually reveal to us)" (Harnad, 2003).

      We experience phenomena, objects etc. with our senses. Our nervous system is what is ultimately experiencing an object/phenomenon, and our memory is a result of our nervous system's direct contact with the world. So a robot could only ground symbols the way humans do if it can interact with the referents of symbols in the same way humans do. This requires the machine to have sensorimotor capacities like humans, which would also mean a system that remembers in a way that is similar to humans. A defining feature of the human nervous system is the role of neurotransmitters, chemicals which are essential to how we experience things. It is easy to understand how chemicals play a role in our memory (i.e. alcohol and other drugs that strongly influence the functioning of our memory). So if a machine were to ever have the capacity to ground symbols like humans do, I would expect there to be a chemical system in place.

    8. Indrek, I would have to disagree with your suggestion that memory requires a chemical system. If we are talking about that, we are now talking on the level of T4.

      Right now, I'll take a break from symbol grounding to say that objects that do not have neurochemical systems also have "memory" in a sort of way. Computers, for example, are able to recall functions and rules. As long as there is some sort of storage capacity and the appropriate rules for recalling it, then there can be memory. A T3, or even T2 Turing machine could have the capacity to do this. This leads to what Jocelyn mentioned later on about memory being necessary, but insufficient for grounding.

      I would have to say that a machine that could spit out that an apple is "a red, round object that tastes sweet" could still be differentiated from a machine (or robot) that has its symbols grounded. For example, could it pick out an apple from a slew of objects? For that to happen, the machine would have to know what it meant to be "red" and to be "round" as well. These words, in turn, would have to be grounded.

    9. I agree with Angela's point, but can the symbol 'apple' really be considered grounded if 'red' + 'round' = 'apple' was all programmed into the computer by a person? I think that if one could design a computer that had its own means of sensory perception (although not necessarily using a chemical system), that would be true grounding.

  3. FORMAL SYMBOLS

    “First we have to define "symbol": A symbol is any object that is part of a symbol system. (The notion of symbol in isolation is not a useful one.) A symbol system is a set of symbols and rules for manipulating them on the basis of their shapes (not their meanings). The symbols are systematically interpretable as having meanings, but their shape is arbitrary in relation to their meaning.”

    “It is critical to understand that the symbol-manipulation rules are based on shape rather than meaning (the symbols are treated as primitive and undefined, insofar as the rules are concerned), yet the symbols and their rule-based combinations are all meaningfully interpretable. It should be evident in the case of formal arithmetic, that although the symbols make sense, that sense is in our heads and not in the symbol system. The numerals in a running desk calculator are as meaningless as the numerals on a page of hand-calculations. Only in our minds do they take on meaning (Harnad 1994).”

    Right, but are the “symbols in our heads” not manipulated (not by us, but by the laws of biochemistry dictating neural activation patterns) based on their shape (what neurons happen to be activated within the brain)? Can’t converting seemingly meaningless patterns in our world into meaning simply be a process of relating the seemingly arbitrary sounds (words, which seem arbitrary, but aren't completely, since their shape is contingent on whether other people respond concordantly to them or not) to non-arbitrary patterns of neural activity (non-arbitrary because the pattern of activation is also dependent on the pattern/shape of activation of other words in the network, and is obviously dependent on the recurrent neural activation that occurs in the presence of the word (“blue” tends to be heard when a certain wavelength of light is disproportionately hitting the retina))?

    Essentially, can’t meaning (the connection between the symbol (the word) and its referent (the thing)) be accounted for through dynamics dictated by shape (meaningless neural activity)? The word would be represented by whatever neural networks tend to respond to the sound of the word (ex: “cat”), while the referent would be represented by whatever neural networks tend to respond to the sight, sound, touch, etc. of a cat (ex: four legs, meows, soft, etc.). Isn't that correspondence between auditory stimulus and other stimuli a non-arbitrary (contingent on the linguistic context) pattern of neural activation? Are words (and the context they’re presented in) not priming the invariant features they allude to, and vice versa? Can’t this correspondence be how our brains make meaning of the things in the world? Should we really even posit whether “our minds do take on meaning”? Isn't that putting the cart before the horse?
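    (To make the quoted point about shape-based manipulation concrete, here is a minimal sketch, my own toy example rather than anything from the paper: a few rewrite rules that the "system" applies purely by matching token shapes. We can interpret the result as addition, but the interpretation is entirely ours.)

      # Toy formal symbol system: rules match and replace shapes only.
      rules = {
          ("0", "+", "0"): "0",
          ("0", "+", "1"): "1",
          ("1", "+", "0"): "1",
          ("1", "+", "1"): "10",
      }

      def rewrite(tokens):
          # replace the first three-token window that matches a rule
          for i in range(len(tokens) - 2):
              window = tuple(tokens[i:i + 3])
              if window in rules:
                  return tokens[:i] + [rules[window]] + tokens[i + 3:]
          return tokens

      print(rewrite(["1", "+", "1"]))   # ['10'] -- interpretable as 1+1=2, but only by us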

    (continued in reply)

    Replies
    1. ROBOTICS

      “But if groundedness is a necessary condition for meaning, is it a sufficient one? Not necessarily, for it is possible that even a robot that could pass the Turing Test, "living" amongst the rest of us indistinguishably for a lifetime, would fail to have in its head what Searle has in his: It could be a Zombie, with no one home, feeling feelings, meaning meanings.”

      The zombie hunch seems to be solipsism in disguise. Other people also only pass our own TT, and so if some robot were to pass the TT, it would be just as silly to posit it doesn't have a mind (or feeling feelings, meaning meanings, subjective experience, qualia, etc.). Wouldn't passing the TT essentially be the process of people coming to believe it isn't a zombie? Maybe we’re all mistaken and it is just a zombie, but how is that any different from solipsism?

      “And that's the second property, consciousness, toward which I wish merely to point, rather than to suggest what functional capacities it must correspond to (I have no idea what those might be -- I rather think it is impossible for consciousness to have any independent functional role except on pain of telekinetic dualism).”

      Why can’t we simply have functional capacities, with complex categorization being one, that create the persistent illusion of consciousness? Maybe consciousness, as normally interpreted as something “extra”, is simply wrong. Maybe the illusion is real, and that illusion does have a role to play. Doesn't the fact that we believe we’re conscious (that we have that “extra something special”) impact how we behave? Consciousness (which I interpret as the belief that we’re conscious) might be a simple by-product of categorization and a drive to exchange information through language. But whether or not it has adaptive value, the belief that we are conscious does change our behavior. So, a zombie that believed it was conscious (talked and behaved like it believed it was conscious) would act just like us, and who are we to deny it its apparent subjective perspective?

      For the sake of making sense of what I’m positing, consider an impossible triangle. The illusion only works from one angle (like subjective experience, where only the subject “sees” consciousness). As soon as you escape that particular perspective, the illusion collapses (like solipsism or the zombie hunch, or how many people used to consider all animals mindless automata). From a different angle, the impossible triangle reveals that it’s an illusion, yet, once we’re back at the right angle, the illusion still works. It’s simpler for the brain to throw anything into a respective category, no matter how illogical or counter-intuitive, rather than have no way to make sense of the stimuli (the actual 3D representation of the object that creates the illusion of the impossible triangle is very complex and hard to understand even after being seen). Consciousness really seems to fit the profile of magic. I have a feeling I’m summoning Dennett here (couldn't find the source though), but he says something along the lines of: real consciousness for adults, like real magic for kids, seems to be something people want to believe in. When you show a kid how a magic trick works, he/she still asks about real magic, as if there’s anything beyond trickery. An explained magic trick is just a magic trick, which isn't anything like “real magic”. In the same way, all of us give in to the same train of thought when it comes to consciousness. We can peer behind the curtain, watch the trickery that explains how the brain works, but we still want to believe in “real consciousness” that is completely separate from the trick that creates the illusion of consciousness.

      (I’d be glad to continue our back and forth from the mirror-neuron sky-writing section as you suggested. But where? Here?)

    2. Feelers

      We don't know how the brain manipulates the words (or images) in our heads. Yes they are encoded biochemically or physiologically, but that doesn't mean they are manipulated computationally (and even computational manipulation would be implemented biochemically or physiologically in the brain).

      And to ground words you have to connect them to their referents in the world, not just to biochemistry -- or, rather, the biochemistry must connect them to their referents in the world. Perhaps this is what you mean? That would be fine (T4).

      No, the notion of a Zombie is not solipsism ("maybe I'm the only one that exists and all the rest is just my hallucination"). "Zombie" just refers to the possibility that T3 does not feel, but only acts as if it feels. Not knowing whether or not T3 (or any other organism, including other humans) feels is the "other-minds" problem.

      The "hard problem" is the problem of explaining how and why organisms (or T3s, if they feel) feel. And the hard problem is exactly equivalent to the problem of explaining how and why organisms (or T3s, if they feel) are not Zombies.

      No need for a "Zombie" hunch. Maybe there can't be Zombies. But if so, it is still the hard problem to explain how and why not.

      Consciousness is feeling. A conscious state is a felt state. An unfelt state is an unconscious state. That's all there is to it.

      (So I suggest using "feeling" rather than the many weasel words that just create smoke and confusion: consciousness, awareness, subjectivity, qualia, intentionality, 1st-person states, mental states, representational states etc. etc.. Like the Matrix, these weasel words make it seem like we're making headway when we're just spinning wheels.)

      Now do me a favour, Marc, and run that argument by me again -- the one about "consciousness" being an "illusion." Only this time, try it on feeling:

      "I feel, but I'm not really feeling: it's an illusion. Descartes' Cogito is wrong: there's no feeling going on; it just feels like there's feeling going on..."

      (See how talking about "feeling" instead of one of the weasel words keeps us honest?)

      Be careful not to fall back into a Matrix reply ("someone/something else is having this feeling, and I am just his/her/its illusion...": Until further notice the feeler of the feelings is the feeler of the feelings, not someone else who feels like the feeler of the feelings, but is not...)

      (We'll get to Dan Dennett in a few weeks...)

    3. "And to ground words you have to connect them to their referents in the world, not just to biochemistry -- or, rather, the biochemistry must connect them to their referents in the world. Perhaps this is what you mean? That would be fine (T4)."

      Yup.

      "No, the notion of a Zombie is not solipsism ("maybe I'm the only one that exists and all the rest is just my hallucination"). "Zombie' just refers to the possibility that T3 does not feel, but only acts as if it feels. Not knowing whether or not T3 (or any other organism, including other humans) feels is the "other-minds" problem."

      To be quite honest, these all seem closely related. Solipsism is the craziest position, being skeptical of everything but one's own experience. The Other-Minds Problem is not being skeptical of everything, just of others' personal experiences. It seems like the Other-Minds Problem is sufficient for believing zombies are possible, and necessary for being a solipsist. How can one doubt everything but one's own experience without doubting the experience of others? Or am I mixing things up?

      "No need for a "Zombie" hunch. Maybe there can't be Zombies. But if so, it is still the hard problem to explain how and why not."

      I just don't understand how we can posit that zombies could be having this very same conversation we're having. I just can't wrap my mind around it. I understand the difficulties in explaining how meaningless stuff adds up to meaningful stuff, but the idea that there could be zombies seems unintelligible to me. To me, the evidence for feeling is the belief that we feel (whatever the hell that means), and so I can't wrap my mind around the idea that something can act the exact same way, have the same underlying biochemistry, say it feels, yet we still have reason to question whether it really feels. If we think the zombie is mistaken in what it means by feeling, why couldn't we question the same thing in people?

      It seems to me like explaining how we do what we do (including believing we feel) is enough work in itself. Would explaining why we believe we feel be enough to account for feeling from your perspective? If not, how can we ever explain it since feeling is considered an inherently subjective phenomenon?

      (continued in reply)

      I admit this is all rather convoluted, but it's my attempt to incorporate “feeling” somewhere. If it’s beyond explanation at first glance, which it is, I think you have to start questioning the semantics. Out of curiosity, do you think people mean something different by “sensation” or “experience”?

    4. “Now do me a favour, Marc, and run that argument by me again -- the one about "consciousness" being an "illusion." Only this time, try it on feeling:

      "I feel, but I'm not really feeling: it's an illusion. Descartes' Cogito is wrong: there's no feeling going on; it just feels like there's feeling going on..."

      (See how talking about "feeling" instead of one of the weasel words keeps us honest?)”

      I’m not claiming consciousness doesn't exist. I mean consciousness seems to be something, but that something is an illusion (I guess what in my analogy with magic would be deemed “Real Consciousness”). For example, having free will seems to be part of the illusion of consciousness. We all feel (I’m aware I’m using “feel”) like we have free will, but when we attempt to focus on our thoughts, we seem to lack control over them. So, we feel, because we say we feel, and “feel” is the word we use to categorize the indescribable. It seems like a redundant word to use: I experience feelings, I feel experiences, I experience experiences, I feel feelings, I sense feelings, I feel sensations, I experience sensations, I sense experiencing feelings, etc.... This all seems like it’s saying the same thing, namely “I’m detecting something I have a hard time putting into well-defined categories.” So, I think the question is why “does the magic trick work from that particular perspective”, or “why do we believe we feel?”. If the HP is essentially explaining why the zombie hunch is mistaken (while the zombie hunch involves believing there could be an identical universe where this conversation is being had by “unfeeling” biological robots), then I can’t help assuming this all is due to confused semantics. I don’t know the answer, but I think the difference in our conception of the problem revolves around how we relate “experience/feeling” to “belief”. I think you’d say (sorry if I’m wrong) “we feel believing”, but I’d say “we believe we feel”, and that we’re simply mistaken in our belief that “feeling” is an all or nothing category, as if the distinction really captures anything substantial. Maybe it’s similar to “living” and “non-living”... it’s a useful category, but when explaining the “living”, there’s no sharp divide. It’s a useful ad hoc category used to do the right things with the right stuff, but the distinction clouds our ability to make sense of living things.

    5. “Out of curiosity, do you think people mean something different by “sensation” or “experience”?"

      My immediate response to this would be that “sensation” is physically tangible, whereas experience is a collection of feelings, creating something greater than their sum. To me, experience is one’s (personal and unique) perception of reality.
      But then, if I think about it a bit harder, this just seems to be the question: “what is meaning?”
      The trouble with language is that it is imprecise. It is a translation of meaning into symbols, and thus reductionist. How are we to know if we use the same symbols to denote vastly different things? Maybe people mean different things by “sensation” or “experience”, but maybe you and I also mean different things when I say “experience” and you say “experience”. We can only express things we have words for. Any word you don’t know is something you cannot express. When thought of this way, language is an inadequate tool.

      “I mean consciousness seems to be something, but that something is an illusion”

      This sort of reminds me of David Hume’s idea of the self: “Man has no identical self”. Hume argued that the self was not a single experience, but was instead the sum of many self-perceptions and sensory experiences. If we liken “consciousness” to this idea, then I can see why you might consider consciousness an illusion: it is simply a way of rectifying/grouping many sparse feelings, experiences and perceptions of yourself and your place in the environment.

  4. “Another symbol system is natural language. On paper, or in a computer, it too is just a formal symbol system, manipulable by rules based on the arbitrary shapes of words. In the brain, meaningless strings of squiggles become meaningful thoughts.”

    What about words that are made up? Are these ‘symbols’ grounded? If someone were to say “I florped the cat”, although I don’t know what “florped” is, I would assume that it is some action the subject, ‘I’, is performing towards the object, ‘cat’. So, in a way, I have an idea of what “florped” could be, but I do not know what exactly it is. Where do new, unknown words (or made-up words) fit into this symbol-grounding problem?

    Replies
    1. I don't have any definite answer (luckily I think I might be in good company), but I think you can rationalize how you seem to make sense of what "florped" can mean by noticing how even seemingly well understood grounded words can be understood when put in different contexts. We tend to unconsciously perceive the appropriate meaning of words within any given context. "Play", in its most concrete form, is used in sentences like "I want to play the game", but can be used more abstractly in sentences like "Would you play that song?" or "Let it play itself out". Words tend to bend their meaning to fit the context without us having to do much conscious work, so I don't think "florped" being imbued with some meaning (even though it's quite vague) through its context should be considered an exception to the rule.

      I would assume most words find their meaning through context, initially more dependent on the actual physical context (what stimuli happen to coincide with the hearing of the word) and later more on the actual conceptual/linguistic context (like "florped" inside of "I florped the cat").
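      As a toy illustration of the linguistic-context point (my own sketch, with an invented mini-corpus, not anything from the reading): an unknown token like "florped" can at least be slotted into the right grammatical role by comparing the frame it appears in with the frames of already-known words.

        # Invented mini-corpus of frames the learner has already encountered.
        corpus = [("I", "petted", "the", "cat"),
                  ("I", "fed", "the", "cat"),
                  ("I", "saw", "a", "dog")]
        known_verbs = {"petted", "fed", "saw"}

        def frame(sentence, i):
            # immediate left/right context of the word at position i
            left = sentence[i - 1] if i > 0 else None
            right = sentence[i + 1] if i + 1 < len(sentence) else None
            return (left, right)

        verb_frames = {frame(s, i) for s in corpus
                       for i, w in enumerate(s) if w in known_verbs}

        print(frame(("I", "florped", "the", "cat"), 1) in verb_frames)
        # True: "florped" occupies a verb-like slot, even though its meaning stays vague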

    2. Yes, you can sometimes figure out what a new word means from context, without the need of a formal definition. That's still enough to ground it.

      (Also, it's possible to only understand the meaning of a word partially, or to an approximation: in fact, apart perhaps from formal definitions in mathematics, all word meanings are approximate.)

    3. I have a question in relation to Vivian’s comment. To understand and be able to use a language, we would need to ground verbs, adverbs, and adjectives, as well as in-between words such as “the”, “a”, “and”, etc. I’m not sure I quite grasp symbol grounding, but how are we supposed to ground these? Moreover, as Harnad writes in his section on Natural Language and the Language of Thought (and a few other students have quoted as well): “To be grounded, the symbol system would have to be augmented with nonsymbolic, sensorimotor capacities -- the capacity to interact autonomously with that world of objects, events, properties and states that its symbols are systematically interpretable (by us) as referring to. It would have to be able to pick out the referents of its symbols, and its sensorimotor interactions with the world would have to fit coherently with the symbols' interpretations.” Therefore, to be grounded, we need to be able to interact with the words from a sensorimotor perspective. For words like “apple”, that is understandable, but for a word like “the”, I’m not sure that’s even possible. Do we therefore have to ground all words within a language to understand it, or simply the words that refer to physical things in the world? I understand that context could help with non-words such as “florped”, but to use the context to understand the word “play” in the sentence "Let it play itself out", to use Marc’s example, you need to be able to associate a meaning to “Let”, “it”, “itself” and “out”. Without grounding a meaning to these words, there is no way to understand the word “play” from context.

    4. Catherine, I'd assume objects (nouns) and properties of objects (adjectives) would be grounded first. Once you have grounded "car", for example, maybe "a" or "the" come to be learned from the combined contexts (linguistic and physical). A child might hear "the" in contexts where a specific "car" seems to be the referent, when the parents are referring to their car, the one the child rides in. "A car" would be mentioned when referring to other cars most of the time. This might seem more difficult and abstract for the child's brain to decipher, but remember that "a" and "the" and "it" are words used very often, so there'll be "the car" vs "a car" experiences as well as "the ball" vs "a ball" experiences, and obviously many others.

      I don't know what Stevan would say, but I have a feeling he wouldn't say particular words are the big issue... just the first ones. I tend to assume experience can account for much of the grounding, so take everything I assume with a bucket of salt. I think the Poverty of Stimulus argument tells us grammar rules can't be accounted for this way, so aside from the initial grounding problem, there might be another obstacle to experience accounting for all linguistic knowledge.

    5. There are two kinds of words. Content words (also called "open class" words, like nouns, verbs, adjectives, adverbs) name a category whose members (examples) you can point to as their referents. That's about 99.9% of all words, and always growing. Then there is a small (and fixed) number of function (or "closed class") words like a, the, if, not. Function words are syntactic (formal) and can be learned from context ("an apple" vs "the apple"; "apple" "not apple") or from formal instruction, like maths. Function words do not have referents (meanings), just formal usage rules. So the symbol grounding problem is about the content words, not the function words.

  5. I find the symbol grounding problem slightly circular in its reasoning.
    1) How do we generate meaning or ground symbols? We do it with our head. But how does the brain generate meaning? With sensorimotor capacities, in order to interact with the “world of objects, events, properties and states”. But how do we interact with that world? With some, at least basic, underlying understanding (thus meaning) of what the world is, and of what the objects/events/properties/states that compose said world are.

    Or, I would also see it as:
    2) What do we need to generate meaning or ground symbols? A brain (dynamical system). How does that brain generate meaning? With sensorimotor capacities. How do we generate sensorimotor capacities? With our brain.

    Replies
    1. By the way, I do know that the Prof argues (in his video) that sensorimotor capacities (involving interaction with the world) require learned categorization capacities. However, I don't see how you can categorize "stuff" without some sort of prior understanding of what the "stuff" actually is. In other words, you need meaning to gain more meaning.

    2. Categorizing and Naming

      A formal symbol system is just squiggles and squoggles: no connection between the symbols and the things in the world the symbols might denote.

      To ground its (verbal) symbols, a T3-passing system (whether an organism with a brain or a robot) needs sensorimotor (robotic) capacity. This means the ability to learn to categorize the things in the world.

      To categorize is to do the right thing with the right kind (category) of thing.

      That does not begin with naming, but with much more concrete kinds of doing: eat this kind of thing and not that; avoid this and not that; manipulate this this way and that that way.

      Once concrete sensorimotor doings are grounded in the capacity to do the right thing with the right category of thing (which requires recognizing the category), the doing can be short-circuited to become any arbitrary act (e.g., a gesture or a vocalization). That can then become the name of the category -- but not before the proposition (a subject/predicate statement with a truth-value: TRUE or FALSE) is born.

      We'll talk more about that when we get to the nature and evolution of language.

      (There is nothing circular about any of this.)
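      Here is a minimal sketch of learning to do the right thing with the right kind of thing (my own toy example, with invented features and feedback, not a model from the reading): a learner adjusts itself from the consequences of eating or avoiding things, and only once the doing is reliable would an arbitrary name be attached to the category.

        import random

        def edible(thing):                      # the world's feedback, hidden from the learner
            return thing["round"] and thing["red"]

        weights = {"round": 0.0, "red": 0.0}
        bias = 0.0

        def decide(thing):                      # True = eat, False = avoid
            return bias + sum(weights[f] * thing[f] for f in weights) > 0

        random.seed(1)
        for _ in range(500):                    # trial and error with corrective feedback
            thing = {"round": random.random() < 0.5, "red": random.random() < 0.5}
            error = int(edible(thing)) - int(decide(thing))
            for f in weights:                   # perceptron-style correction
                weights[f] += error * thing[f]
            bias += error

        print(decide({"round": True, "red": True}),    # expected: True (eat)
              decide({"round": True, "red": False}))   # expected: False (avoid)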

    3. I see. Then, categorization does not imply any "meaning" as a requirement. However, I have a hard time grasping how T3 would acquire this categorizing ability.

      If "to categorize is to do the right thing with the right kind (category) of thing", how would T3 know what the "right thing" to do is? How would it recognize that X goes in category Y, especially if T3 has never encountered X before?

      If T3 were to encounter a zebra for the first time, how would it know that zebras and horses are different species (although from the same family) despite their similar appearance?

      Then again, maybe this will become clearer in future lectures.

    4. My guess is that T3 would acquire this ability through learning and interacting with the environment (thus the need for sensorimotor capacity). If we think about the world before the evolution of language, people still needed to communicate with each other. Therefore, I would say that learning is essential to the T3 robot.

      I kind of find it irrelevant that you would know that a zebra and a horse are different species. You ask how a T3 robot would know that zebras and horses are different species... well, I definitely didn't know the first time I saw a picture of a zebra that it was different from a horse! I didn't know until someone (perhaps a friend, teacher, or parent) told me that it was a zebra and not a horse!


    5. "However, I don't see how you can categorize "stuff" without some sort of prior understanding of what the "stuff" actually is. " (Florence above)

      It turns out in fact that people's categories can be shifted around without them actually being aware that they are categorizing at all. For example, if you are exposed to sounds that all start off sounding the same to you (e.g., you hear a bunch of 'pa's), but these sounds actually consist of pa's that have property X and others with property Y (e.g., a build-up of pressure or not before the air is released to make the 'p' sound), you will end up categorizing them into two bunches of different stuff. Similarly, even if you start with categories you do have (e.g., ba's and pa's), if you are presented with a bunch of sounds mid-way between ba and pa you'll actually start treating them as part of the same category. This suggests to me that you CAN in fact categorize (and even re-categorize) without understanding what it is you are categorizing... if this is of interest to you, you could look up the work of Jessica Maye and colleagues (e.g., Maye, 200 and Maye & Gerken, 2001).
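      A toy sketch of that distributional point (my own illustration with invented numbers, not the actual experimental design): the very same continuum of speech-like values gets carved into one or two categories depending only on how the exposures are distributed along it.

        import random

        def learned_categories(exposure):
            lo, hi = min(exposure), max(exposure)
            third = (hi - lo) / 3
            middle = [v for v in exposure if lo + third <= v <= hi - third]
            # a dip in the middle of the continuum -> split it into two categories
            return 2 if len(middle) < len(exposure) / 4 else 1

        random.seed(0)
        bimodal  = ([random.gauss(20, 3) for _ in range(100)] +
                    [random.gauss(60, 3) for _ in range(100)])
        unimodal = [random.gauss(40, 9) for _ in range(200)]

        print("bimodal exposure  ->", learned_categories(bimodal))    # expected: 2
        print("unimodal exposure ->", learned_categories(unimodal))   # expected: 1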

    6. "It turns out in fact that poeple's categories can be shifted around without them actually being aware that they are categorizing at all."

      But we're still differentiating items along a continuum. T3 doesn't work. Florence won.

      "...how you [can you] categorize "stuff" without some sort of prior understanding of what the "stuff" actually is [?]"

      For T3, everything is on a single continuum. Oh yeah, Ethan can see differences between a potato and, say, Joseph Goebbels. But where would Alex Rosenthal fit on the scale of potato to Goebbels? A lot closer to Goebbels. Pay close attention because the example is illustrative.

      A perceiving, measuring creature needs a program to "see" in the same way we do. We construe the world according to categories, not the other way around. We don't see the world for what it is. And this was Nietzsche's whole lament. No two leaves are the same. And yet a tree is full of leaves. A forest is full of leaves. Language forces us to "lie" by boxing in our world.

      In order to do this we need...wait for it... categorization.

      "Even those aspects of the world that are experienced as continuous and holistic are represented with language that is discrete and combinatorial." (Senghas, A. 2004) With T3, I think, we've gotten an experience of a holistic world. But cognition does not yet have symbols; only icons.

      Senghas, A. (2004). Children Creating Core Properties of Language: Evidence from an Emerging Sign Language in Nicaragua. Science, 305(5691), 1779-1782.

    7. Sorry, T3 would work, but are sensory-motor receptors enough for categorization?

  6. "Symbols can play a significant role in the development of an individual (see the example of children’s drawings later in this chapter), but most of the time symbols are part of social interaction – such as in communication through language – and partners get feedback on how their own semiotic networks are similar or divergent from those of others. The semiotic networks that each individual builds up and maintains are therefore coupled to those of others and they get progressively coordinated in a group, based on feedback about their usage. If I ask for the wine and you give me the bottle of vinegar, both of us then learn that sometimes a bottle of vinegar looks like a bottle of wine. So we need to expand our methods for grounding ‘wine’ and ‘vinegar’ by tightening up the methods associated with the concepts that they express."

    In reading this, and upon reflection on the other articles this week, I was questioning why a T2 is incapable of grounding.

    Programs are able to learn, and correct associations, in the way the example above describes (or so I believe). Yes, this doesn't inherently entail the understanding of meaning as well, yet it seems to me a T2 is capable of grounding.

    Suppose we lock an individual in a room at age 20 (sort of like Searle's Chinese Room), and communicate with him only via email thereafter. Using the zebra example, we assume he's never seen one, but is familiar with horses and stripes. We describe this to him, and he is able to make the correct association, and is able to picture it. In this case is he not grounding it, and realizing its meaning, without utilization of his somatosensory system?

    In the same way, an appropriately programmed T2 would be able to make a similar connection of horse + stripes = zebra. As Searle's Chinese Room argument suggests, it wouldn't necessarily understand, but is that necessary for grounding?

    So in summary, I'm uncertain why somatosensation is a prerequisite for grounding. Am I misunderstanding grounding? Would an individual with disabilities, without somatosensation, sight, smell, movement, and speech (say, only capable of hearing) be able to ground symbols? It seems to me the only practical difference between them and a T2 in this question is the inborn categorization capacity that Harnad discussed in the video. Is that a sufficient difference for them to be capable of grounding, whereas a T2 may not be?

    Replies
    1. From my understanding, grounding symbols is done by sensorimotor and learned categorization capacities. It seems to me that T2 is not capable of grounding because it has no way of connecting the arbitrary symbol to something in the real world. This connecting is done through sensorimotor capacities (e.g. sight, hearing, etc.).

      In your example of locking an individual in a room at age 20, I agree with you and I believe that he IS grounding a zebra. I’m assuming that this person has already grounded a horse and stripes before entering the room. But, I don’t think that it is absolutely necessary to ground something by seeing it. Harnad stated in this article that one can “combine and recombine categorical representations rule-fully into propositions that can be semantically interpreted”. Thus, he can ground “zebra” without actually seeing one. Instead, he can ground zebra by being told to combine stripes and horse. However, this individual would have had to have used his senses to ground “stripes” and “horse” in the first place. So, he would be using his somatosensory system, but in more of an indirect way.

      For example, I could describe a unicorn to you. It would be something along the lines of horse + horn = unicorn. You have never seen a unicorn, but you can still ground it by combining what you have already grounded (horse and horn).

      So, I believe that T2 can be taught to recombine horse and stripes to make zebra, but this manipulation will be solely based on the shape of the symbols. T2 is incapable of grounding because it has no way of connecting the word “horse” with the actual creature.

  7. The Symbol-Grounding Problem is an explicit formulation of the intuition pumped out of the Chinese Room. In simple terms, I take it as highlighting the difference between asking somebody “What is a tree?” and telling them “Show me a tree.” The assumption of computationalism was that being able to produce enough true sentences about trees was enough to consider the computational system as understanding trees. The Chinese Room, on the other hand, pointed out that without any relation to the world, these sentences are just floating, and the only meaning they ever receive is “parasitic on the meanings in our heads” (Harnad, 1990). In the particular case of the Chinese Room, it is parasitic on the meanings in the heads of his Chinese interlocutors. Hence, the claim is that the minimum a system must be able to do in order to be said to understand something is to be able to pick it out in the world (although this might not be sufficient).

    “The problem of discovering the causal mechanism for successfully picking out the referent of a category name can in principle be solved by cognitive science.” (Wikipedia)

    Insofar as our bodies are physical things moving according to physical laws, then there is indeed no problem in principle in building a robot which has our categorizing capacities. Is that capacity sufficient to make it T3? In other words, issues of consciousness aside, is categorization the essence of cognition? I think this is debatable. In particular, it presupposes that the organism is always planning and/or knowing what it’s doing. It also presupposes that ambiguity is a hindrance to knowledge or a result of insufficient information, and that it cannot be a feature of the world: it presupposes that there is a single “real” state of affairs. But some phenomena are truly, irresolvably ambiguous and their whole meaning is in that ambiguity (e.g. poetry, metaphors, Necker Cube, etc.)

    But perhaps these are just details. If we do suppose categorization is the essence of cognitive performance, then I do not see any a priori reason why a robot with sensorimotor capacities (grounded) could not pass T3. I do however wonder why that robot would care to do that or anything else for that matter. To assume that a robot with all its capacities would just find itself something to do is glossing over the relevance of our being alive (and more important, mortal) in understanding our behaviour. This is also relevant to the issue of consciousness:

    “Perhaps symbol grounding (i.e., robotic TT capacity) is enough to ensure that conscious meaning is present, but then again, perhaps not. In either case, there is no way we can hope to be any the wiser—and that is Turing's methodological point (Harnad 2001b, 2003, 2006).” (Wikipedia)

    I think that the way the problem is framed here does make the problem intractable: once we agree that we have an other-minds problem, i.e. once we have made feeling a purely private thing, then there is no way out and the mind-body problem will remain. But to make feeling private is something we do when we stop and think about feeling. Thinking about thinking, or feeling about feeling is likely to distort the thinking or the feeling.

    I know I feel, that’s a given. But I also know that you feel. All human beings and all animals feel pain when the integrity of their body is threatened. I don’t just assume they do in order to err on the safe side. The reason why I know that is because they are living things and everything they do is understood and intelligible in the context of their being alive. Yes, a T3 robot could pick out an apple from a bowl of fruit. But why on Earth would it do so? What does it care about apples, trees, etc.? Even with T3 capacity, a robot remains an engineer’s slave.



    Replies
    1. This is not a vitalist point, but an enactivist one. Being alive means wanting to keep that life going. This means every situation the organism finds itself in is one where self-maintenance is a problem it must solve by finding some way or other to interact with its environment. It is through this interaction that meaning is generated. There is no wetness in the world. No redness. No vertical lines. All these things exist only at the interface of a living organism and the environment it is embedded in, and they exist only insofar as they have some relevance for what it is trying to achieve, namely, staying alive. (This is written in dogmatic terms but my point here is to point to enactivism as a novel approach that does not introduce a dualism from the get-go by assuming feeling is necessarily private.)

  8. How do words get their meanings? Harnad doesn’t promise us the full answer in his piece, but he gives us at least part of it. Here is how:

    First, Harnad points out that words on a sheet of paper don’t inherently mean anything. They are just squiggles and squoggles—strings of symbols. It is only when someone who understands the language of those words reads them that they become meaningful. I know that the string of symbols “k-i-t-t-e-n” refers to a small furry animal with two pointy ears.

    Are words inside computers inherently meaningful? Searle’s Chinese Room Argument says no! If Searle (a non-Chinese speaker) were to provide accurate Chinese answers to Chinese questions by simply manipulating Chinese symbols according to rules based purely on the shapes of these symbols, he still wouldn’t understand Chinese! And nor would a computer doing exactly the same thing as he is. So no meaning here.

    If words on sheets of paper or in computers aren’t meaningful, how do words gain meaning in our heads?

    Part of the answer, per Harnad, is that the symbols in our heads are GROUNDED. The word “kitten” is a string of symbols “k-i-t-t-e-n.” BUT because I have sensorimotor capacities (ex: eyes with which to see a kitten, fingers with which to pet it), I can make a connection between that string of symbols and the thing it refers to (a real-life kitten!). And aha! Meaning arises!

    Harnad doesn’t say that every word needs to be grounded (because what sensorimotor capacities would allow me to make a connection between the string of symbols “idea” and the real thing?) but at least some of them do for meaning to arise in a system of symbols.

    Now for the tricky question: if I build a robot that can do everything I can do, that has both computational and sensorimotor capacities, will the words in this robot’s head have meaning?

    Harnad doesn’t answer this question. The symbols in this robot’s head are grounded, which is a necessary condition for meaning to arise. But it may not be enough. The robot may have to be conscious as well …

    Here is the question I was left with: does the symbol-grounding problem challenge the claim that cognition is merely computation (ex: that we can do everything that we can do by manipulating meaningless symbols according to formal rules)? Sure, it helps explain why there is meaning in the words in our heads but none in the words in computers (Searle already demonstrated this, I think, but now Harnad shows us why Searle is right). But do the words in our head need to be meaningful for us to do everything that we do? Can human capacities be generated without understanding?

    Replies
    1. The symbol-grounding problem does seem to challenge computationalism. We see that words are meaningful when grounded because grounded symbols have intrinsic, not merely extrinsic, meaning.

      Harnad describes the Chinese room in his 1990 paper, saying, “the interpretation will not be intrinsic to the symbol system itself: It will be parasitic on the fact that the symbols have meaning for us”. Therefore, we see in the Chinese room that these words have no meaning at all and thus the system does not understand. This is a T2 robot that does not have the power to ground. Given only the ability to compute, a system cannot ground symbols.

      We do know that minds, for whatever reason, are able to do grounding. That means that the mind has something extra, which machines do not. Harnad writes, “whatever it is that the brain is doing to generate meaning, it can’t be just implementation-independent computation”. The fact that there is an extra “something” challenges computationalism.

      I think you need to flip your last two questions around a little. Words are meaningful BECAUSE of everything that we can do (our mind’s abilities). Understanding exists because of something that our mind is capable of beyond computations.

    2. Searle’s CRA had already answered that cognition is not merely computation, or in other words that cognition is not ALL computation. The symbol grounding problem, in my opinion, more than challenging the claim that cognition is merely computation, is putting forward the issue that there is more to cognition than just computation (as we have already established) and (importantly) how to figure out what this “more” is. Part of this “more” is how meaning is attributed to words and what meaning in itself is, which, as Harnad establishes, leads us to the “problem of consciousness.”

      I agree with Lila that it is because of our sensorimotor dynamic capacity that we achieve understanding. As an example, when a person is reading a book in a language that is not their native language (for example: Spanish as a mother tongue and reading in English), the person will probably not have access to the meaning of all the words in a sentence, but the context might allow them to pick out the overall meaning, and reading will not be interrupted by having to refer to a dictionary; in the same way, I believe human capacities can be generated without complete understanding (or we could just say T3?) but SOME understanding will always be required to generate COMPLETE human capacities (T4). Human capacities can easily be generated; the issue is the awareness accompanying those capacities, and whatever happens in our brains to generate that awareness.


      P.S.: link to a movie relevant to this class, coming soon:  https://www.youtube.com/watch?v=l6bmTNadhJE “a machine that can think and feel”

      Delete
  9. I am having trouble with the direction of the symbol-grounding problem.
    By direction, I mean the following:
    The way I conceive of symbols and their meanings runs in one of two directions: either I have something that I want to express and so I use a symbol to express it, or I am presented with a symbol and it signifies something meaningful to me.
    The symbol-grounding problem only seems to deal with the direction in which a symbol is presented and then signifies something (a meaning). Call it the input direction. There is a relationship (direct or indirect) between the symbol and the meaning it takes on inside my brain.
    Yet what about the reverse? What about meaning before symbols? Can a thought be grounded if the thing being thought about does not exist, or more specifically has no symbol to denote it? Where then is the relationship between my thought and its content? Is a thought grounded if there is no symbol to represent it?

    ReplyDelete
    Replies
    1. I think that the process of interpretation is directional. Interpretation involves taking a symbol as input and attaching meaning to it.

      I don’t necessarily see a direction to the symbol-grounding problem, though. Meaning is the referent of a word, but the mind doesn’t have to directionally attribute it to the word; it just exists within the mind.

      As Harnad says in his 2003 paper, systematic interpretation is not equivalent to meaning. He explains “We select and design formal symbol systems (algorithms) precisely because we want to know and use their systematic properties”. This description captures both directions that you mention. People make symbol systems to capture meaning (direction 1), and then, people use symbol systems to communicate meaning (direction 2). There is a difference between this process and “meaning”. Rather, in the paper, Harnad talks of meaning as a property that something can have in our mind.

      Delete
  10. In The Symbol Grounding Problem (2003), Harnad explains briefly what the symbol grounding problem is and why we should care about it. While reading this paper, I came up with two main questions and I hope they are relevant with what we talk about in class.

    The first thing is: what exactly is grounding? According to Harnad, grounding is the capacity or ability to pick out different expressions' referents. A thing can have several ways to express it, and if a system is able to always pick out the referent no matter how the expression changes, it is said to have the ability of grounding. It seems to me that grounding means understanding the real meaning of the expressions, and the ability to always identify the links between expressions and their referents, instead of looking at the shapes of the expressions and the rules of the system.

    But how do we, people, learn to ground? For example, how do I understand the meaning of grounding? I definitely did not just look at the letters of the word grounding. By introspection, it seems to me that by looking at the explanation of the word grounding, I am able to connect that explanation to whatever knowledge I have right now and form a definition or an explanation that I myself can understand. Harnad seems to argue that such a system is not present in computers (if my introspection of this system is correct) and that grounding is necessary if a system wants to pass T3. However, at the same time, I wonder whether grounding and understanding mean the same thing. It seems they are different (otherwise we would just use the term understanding). I feel that grounding is more basic than understanding, and understanding more complicated. Is it possible that you can ground but cannot understand?

    Then the second question came up: how do humans actually understand meanings, and how exactly do we use our language? It seems a question for neuroscientists and cognitive scientists, but the answer to this question would help answer whether it is possible for a robot to have the ability to ground. It seems very hard for me to understand, because we were not born with the ability to speak or understand, and we basically learn how to speak and how to understand what other people say. Thus, is it possible for computers or robots to have the ability to learn like we do when we are babies? In class, we discussed that computers or AI can be very powerful: you are able to teach them, and maybe they can make modifications to what they have performed. I guess the answer is no, because in order to have that ability, computers or AI would have to be able to replicate our brain mechanisms, and they are unable to do so right now.

    I just want to clear my mind about what I have right now. Please tell me if there's anything wrong with my current understanding or direction in this class.

    ReplyDelete
  11. This comment has been removed by the author.

    ReplyDelete
  12. What I grasped from this week’s readings on the symbol grounding problem is that you need a T3 robot to pass a T2 test because a purely computation based T2 robot manipulates symbols according to their shape (which we know is irrelevant to their referent and meaning) and has no way of connecting the arbitrarily shaped symbol to the non-arbitrary thing that it stands for because it can’t perceive or act on things in the outside world. It is the cognitive ability to ground the symbols in our heads that allows us to understand arbitrarily shaped symbols on a page as having meaning and this same ability allows us to interpret the symbol combinations generated by a machine as meaningful. Because of the CRA, we know that a machine can use manipulation rules to generate language/information that is meaningful to us but means nothing to it. A robot couldn't pass the Turing test if it depended on the ability of interpreters to attribute meaning to its outputs. As Harnad writes, “It would have to be able to pick out the referents of its symbols, and its sensorimotor interactions with the world would have to fit coherently with the symbols' interpretations.”

    My question for the week is related to the idea that groundedness is not necessarily a sufficient condition for meaning. Say we have a T3 robot (that interacts with the environment and picks out the referents for its symbols). This ability to match symbols and referents says nothing about our T3 robot’s ability to understand things like sarcasm or inside jokes, right? If I sarcastically tell a T3 robot that nothing makes me happier than hearing my alarm go off at 6am, I’m trying to convey that I’m not a fan of waking up early. However, if the T3 robot just takes my sentence and matches the arbitrary symbols to their referents, won’t it think I legitimately enjoy getting up at 6?

    What if the T3 robot and I had known each other for a while and it had previously seen and heard me do and say more straightforward things indicating that I don’t like waking up early? Would it be able to use this previous information to come to the realization that my sentence about loving 6am alarms couldn’t possibly be true or would my sarcasm just cause it to get confused?

    ReplyDelete
    Replies
    1. “If I sarcastically tell a T3 robot that nothing makes me happier than hearing my alarm go off at 6am, I’m trying to convey that I’m not a fan of waking up early. However, if the T3 robot just takes my sentence and matches the arbitrary symbols to their referents, won’t it think I legitimately enjoy getting up at 6?”

      A T3 robot will be able to do anything that a human can do, verbally and in real life. The sensorimotor capacities of this robot will allow it to ground symbols. I agree with you that “this ability to match symbols and referents says nothing about our T3 robot’s ability to understand things like sarcasm or inside jokes”; this ability and understanding sarcasm are two different things. The ability to ground symbols to their referents is necessary to understand the meaning of that sentence at face value. Other cues are needed to place the sentence in context.

      Pragmatic cues to pick up sarcasm, such as intonation, require sensorimotor capacities (like in T3). If this were a T2 robot, it would be different; sarcasm is extremely difficult to convey through written text, and these pragmatic cues are limited. As a result, the T2 robot may not understand sarcasm. But I believe that in the future a T3 robot will be built that can do what a human can do. As such, this robot will be able to pick up intonation cues and apply them to the context of speech. Thus, this T3 robot would include the ability to understand sarcasm and inside jokes.

      Delete
    2. Just wanted to add that in retrospect I absolutely agree that a Turing Test-passing T3 robot would be able to understand sarcasm. The sarcasm issue would only pose a problem for a T2 robot (because it lacks the sensorimotor ability to pick up the non-verbal cues it would need to understand sarcasm).

      If we are able to create a Turing Test-passing T3 robot, that robot would be able to understand and recreate sarcasm and any of the other complex aspects of our everyday interactions. The level of complexity of a certain cognitive ability just means it may be more difficult to reverse-engineer, but not impossible.

      I think that maybe what I was trying to get at in my original comment was an extension of the other-minds problem and/or a fundamental problem of communication. If we can never know for sure that another being is capable of feeling, can we ever know for sure that we were successful in conveying the feeling we were having when we said something?

      Delete
  13. With regard to consciousness, Stevan says, “There would be no connection at all between scratches on paper and any intended referents if there were no minds mediating those intentions, via their internal means of picking out those referents” (p2)
    This makes sense to me, insofar as a symbol can only really be understood consciously if it becomes grounded in sensorimotor experiences. But the conception of consciousness still rubs me the wrong way. The article “The Symbol Grounding Problem” argues that the Chinese symbols Searle can perfectly manipulate are not grounded and thus not understood. It seems implied here that there is a relationship between sensorimotor experience and consciousness, yet it seems unclear and rather hazy. Consciousness, as the state of awareness of objects externally or within oneself, seems just as difficult to assess in a cognitive system as feeling is.

    Also, it seems that “what we can do” goes beyond that which is conscious. Sensorimotor capacities, in the case of touching a hot stove for example, will trigger behavior, in the form of reflex actions, irrespective of whether anything has been grounded or not. Does this mean that meaning endowed in sensorimotor dynamics, if not grounded in symbols yet still processed by the brain, results in unconscious behavior? It is interesting to note that humans will then eventually ground the sensory component of the burn in addition to performing a reflex, and this presumably helps us avoid the danger in future instances. Simple invertebrate models (the Aplysia) have shown us that they too can avoid future instances of dangerous stimuli, but this occurs purely through plastic changes in the reflex system, and it would be a doozy for me to argue that they are in fact conscious too. Is consciousness then just a by-product of a more effective, stronger neural processing that ultimately tries to serve similar purposes as coded reflex action? It seems to me that consciousness is stronger because it provides a sense of bi-directionality in its ability to then influence further behavior that reflex action simply can’t.

    Lastly, is it important to consider consciousness at all? If we are trying to reverse-engineer, then why not take it out of the equation? It seems like it might be sufficient to say that symbols are grounded in sensorimotor dynamics, that symbols which are not directly grounded must be decomposed into further symbols that eventually can be, and that the symbols can then be manipulated in order to produce output that can be associated with the symbols’ sensorimotor referents. How might consciousness be involved causally, if at all?

    ReplyDelete
  14. Question on the ‘Solving the Symbol Grounding Problem: a Critical Review of Fifteen Years of Research’ paper.

    I found Mayo’s argument quite convincing that one could get the meaning of a word such as ‘victory’ by connecting the different representations he/she has of the concept of ‘victory’, based on the different experiences of ‘victory’ he/she has lived through.

    However, I am not sure what the common feature between the different occurrences of ‘victory’ would be. Probably the feeling of winning. So, is Mayo just showing here that abstract words’ meaning also arises from sensorimotor capacities (in the sense that an individual needs to have felt the feeling of winning to know what ‘victory’ means)?

    ReplyDelete
  15. “To be grounded, the symbol system would have to be augmented with nonsymbolic, sensorimotor capacities -- the capacity to interact autonomously with that world of objects, events, properties and states that its symbols are systematically interpretable (by us) as referring to. It would have to be able to pick out the referents of its symbols, and its sensorimotor interactions with the world would have to fit coherently with the symbols' interpretations” (Harnad, 2003).
    Harnad explains that in order for symbols to be grounded, they would need to be connected with nonsymbolic sensorimotor capacities in such a way that they could be augmented by their connections with them. Further, the sensorimotor interactions with the world would have to fit coherently with the symbols’ interpretations, and the system would need to be able to pick out the referents of its symbols. I found this idea similar to Barsalou’s Perceptual Symbols System, in which the conceptualization of abstract mental representations is directly associated with concrete, perceptual mental representations. Thus, mental associations between abstract and concrete concepts are formed through direct experience with perceptual stimuli and are stored in memory with an associated mental state. Although the Symbol Grounding Problem is using the idea of a symbol (which is manipulated solely on the basis of its shape and not its meaning), both ideas account for the environment, which I think is important. The referents in the Symbol Grounding Problem are like the concrete, perceptual mental representations that are discussed in Barsalou’s theory. I do think that Barsalou’s theory goes a bit too far, however, in postulating that these mental associations between abstract and concrete “things” are stored in memory with an associated mental state, as it seems more speculative than anything. It makes sense that the capacity to interact autonomously with the world would have to be able to pick out the referents of the symbols in order for those symbols to be grounded, but what exactly is the mechanism that is accounting for those connections? Also, if the symbol system would have to be augmented with nonsymbolic, sensorimotor capacities in order for it to be grounded, which sensorimotor capacities are most important for the grounding problem (and which ones are not important at all)?

    ReplyDelete
  16. "To be grounded, the symbol system would have to be augmented with nonsymbolic, sensorimotor capacities."

    If "picking out referents is not just a computational property; it is a dynamical (implementation-dependent) property," then I would like try distilling the essential parts of a grounded symbol-meaning manipulator. First, it needs to be able to manipulate symbols, which we know how to model using computation. That part is not tricky. Second, a symbol needs to have a "dynamical" association made by the grounded robot/person. What does dynamical even mean?

    My research into the definition of dynamical has not been entirely fruitful. I looked to van Gelder's paper "The Dynamical Hypothesis in Cognitive Science" for help, but the definition seems to have changed over time: earlier it was "a system of bodies whose motions are governed by forces". Later it was "mapping on a metric space," and also "[a system] whose state at any instant determines the state a short time into the future without any ambiguity" (Cohen & Stewart, 1994). This information at least goes along with the reading's definition, that dynamical means "implementation-dependent". So how the hell is this force-driven, metric-space mapped, cause-effect system doing what a computer can't do (pick out the referents of symbols)?
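    To make the contrast concrete for myself, here is a minimal toy sketch (entirely my own illustration, with made-up names and numbers, not anything from van Gelder or Harnad) of the difference between a purely symbolic step, which turns shapes into other shapes by a formal rule, and a dynamical step, where the current physical state fixes the state a short time later:

```python
# Toy contrast between a symbolic rule and a dynamical update.
# Purely illustrative; the rule, names, and numbers are invented.

# Symbolic step: shapes in, shapes out, by a formal lookup rule.
REWRITE_RULE = {"squiggle": "squoggle", "squoggle": "squiggle"}

def symbolic_step(symbol: str) -> str:
    """Manipulate a symbol purely on the basis of its (arbitrary) shape."""
    return REWRITE_RULE[symbol]

# Dynamical step: the current physical state determines the next state
# a short time later (here, a damped point mass falling under gravity).
def dynamical_step(position: float, velocity: float, dt: float = 0.01):
    """One Euler step of x'' = -g - b*x': forces, not rewrite rules."""
    g, b = 9.8, 0.5
    acceleration = -g - b * velocity
    return position + velocity * dt, velocity + acceleration * dt

print(symbolic_step("squiggle"))   # -> 'squoggle'
print(dynamical_step(1.0, 0.0))    # -> the next physical state
```

    Of course, the second function is itself just a simulation of a dynamical system, which is exactly why "implementation-dependent" matters: the sketch only shows what kind of thing is being talked about, not how it would pick out referents.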

    What does it take for a robot to have 'someone home' inside? What is this spark that exists in the realm of the dynamical - does this have to do with feeling? I wonder if the human symbol-meaning association mechanism works in concert with feeling. I can already feel the opposition to the example I'm about to use, but hear me out: humans associate things that exist physically (dynamically?), like sound waves or odorants, with feelings that exist abstractly in our minds (love, fear). The meaningless symbols (sound waves) are received by the body, then processed to a point where they can be partnered with a meaningful feeling (how this feeling originally got meaning is not the problem I'm discussing). I wonder if the [neuronal] mechanism underlying this partnering process could inform us about how a symbol meets meaning, but I am open to the possibility that I am way off topic.

    ReplyDelete
  17. I can't help but think that, in humans, symbols refer to feelings, and that feelings are not equal to the sensory inputs we could build on a robot. When I feel a waterfall, receptors in my fingertips buzz, but I also have a conscious experience of wetness.
    In a robot we can build receptors that buzz when they touch a waterfall, but I do not think this is enough to create the conscious experience of wetness, where I believe our symbols are really grounded.

    Some evidence:
    1) In dreams, we see things that do not correspond to our sensory inputs, however we can refer to what we 'saw' the same way we refer to things our eyes pick up when we're awake. In dreams, the experience of having seen something is detached from the sensory input.
    2) We don't always experience ongoing sensory input, for example the sensation of pressure from a chair under us or the feeling of our tongues in our mouths. Our receptors are firing, but we do not feel the sensation of the chair that we did when we first sat down. Again, the experience of feeling is detached from sensory input. (note: it is difficult to refer directly to the feeling we don't feel from a chair after having sat for a while)

    Humans ground symbols in conscious experience, not sensory input, so a robot with sensory input wouldn't be any better at grounding symbols than a piece of software, unless it had a conscious experience of the sensory input, which brings us back to the hard problem.

    I certainly agree with Harnad that T3 capabilities are required to pass T2, due to symbol grounding. However, I believe that T3 robots (and all Turing Test machines) require consciousness to do everything that humans do.

    ReplyDelete
  18. On the necessity of sensorimotor capacities for symbol grounding:

    I accept that sensorimotor capacities are necessary to ground a symbol with its referent, and that this ability to ground is part of what allows a computer/robot to pass T3. However, I’m curious as to which sensorimotor capacities are necessary to experience for symbol grounding. In other words, what are the bare minimum capacities? A blind person can still ground the linguistic symbol “apple” by characterizing it as an object that has a particular taste, shape, texture, sound when you bite into it, etc. A deaf person can also ground the symbol “apple” by adding the colour characteristic and removing the sound characteristic. Someone who cannot feel pain won’t characterize “fire” as something that burns, but can use other characteristics. Clearly not all sensorimotor capacities are required for the experience leading to symbol grounding. So what is the bare minimum? Or are overlapping contributions from various capacities necessary, but without a specific combination?
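    Here is a minimal toy sketch of the "overlapping contributions" idea (my own illustration; the modalities, features, and threshold are invented, not anything from the readings): if each modality contributes some features of "apple", losing one modality can still leave enough features to pick the referent out.

```python
# Toy sketch: grounding "apple" from overlapping sensorimotor features.
# Modalities, feature names, and the threshold are invented for illustration.

APPLE_FEATURES = {
    "vision": {"red_or_green", "round", "fist_sized"},
    "touch":  {"smooth_skin", "firm", "fist_sized"},
    "taste":  {"sweet_tart", "juicy"},
    "sound":  {"crunch_when_bitten"},
}

# A hypothetical minimum number of features needed to pick the referent out.
ENOUGH = 3

def available_features(intact_modalities):
    """Union of the features contributed by whatever modalities remain."""
    feats = set()
    for modality in intact_modalities:
        feats |= APPLE_FEATURES.get(modality, set())
    return feats

for lost in ["vision", "sound"]:
    intact = [m for m in APPLE_FEATURES if m != lost]
    feats = available_features(intact)
    status = "still grounded" if len(feats) >= ENOUGH else "not grounded"
    print(f"{lost} lost -> {len(feats)} features, {status}")
```

    On this (admittedly cartoonish) picture, the "bare minimum" would not be any particular modality but whatever subset of modalities still clears the threshold, which seems to fit the blind-person and deaf-person cases.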

    ReplyDelete
  19. From this reading I gathered that the shapes of formal symbols are only valuable for their systematic interpretability; their shapes are arbitrary and meaningless except for what they represent in our brains. Although I am not quite sure, I believe this is to say that the meaning behind a word lies in the stores of memory and information that are retrieved and reprocessed when we are exposed to a symbol shape. I think that (based on Searle's Chinese Room argument and the idea that meaning requires consciousness) Harnad is nudging the reader to believe that the same is true for words. The shapes and sounds of words are important for their systematic interpretability; however, they mean nothing until they are detected and registered by our brains.

    ReplyDelete
  20. 'The symbols, in other words, need to be connected directly to (i.e., grounded in) their referents; the connection must not be dependent only on the connections made by the brains of external interpreters like us. The symbol system alone, without this capacity for direct grounding, is not a viable candidate for being whatever it is that is really going on in our brains (Cangelosi & Harnad 2001).'

    I understood that for a symbol to be grounded it must possess the capacity to pick out its referents, however I do not understand why the symbol must be connected to its referent directly. Is this to say that the meaning of words within our brain must be consistent with the state of the referent within the outside world (which formed our cognitive meaning of it initially)?

    ReplyDelete
    Replies
    1. I don't think the symbol has to be directly connected to its referent-- a lot of times when we define a word, we use other referents entirely but still manage to convey the same meaning. The symbol-grounding connection seems like it can be as specific or as general as needed, since Harnad writes that, "To be grounded, the symbol system would have to be augmented with nonsymbolic, sensorimotor capacities—the capacity to interact autonomously with that world of objects, events, actions, properties and states that its symbols are systematically interpretable (by us) as referring to." Properties and states are very broad categories to be grounded in, so I think there is a lot of leeway in what constitutes a "direct" referent.

      One answer to your question could be that it's not so much that the symbol has to be directly connected and referenced by its referent, but instead that any symbol in our brains has some tie to the outside world because symbols have the power to "interact autonomously." To me, this makes the most sense using a non-language example: currency symbols. If someone read "this cookie is $1" but did not know that the symbol $ referred to dollars, they could think the 1 referred to any number of other possible characteristics a cookie can possess. Similarly, even though France no longer uses the franc (making francs a symbol inconsistent with the present outside world), "francs" still have a meaning-- they may not be as directly connected to the current monetary system, but it is still a symbol with a functional referent. All this goes to say that the meaning of words in our brains does not have to be consistent with referents-- I think a referent can be flexible and change over time while maintaining its role in grounding any related symbol.

      Delete
  21. "Another symbol system is natural language. On paper, or in a computer, it too is just a formal symbol system, manipulable by rules based on the arbitrary shapes of words. In the brain, meaningless strings of squiggles become meaningful thoughts. "
    I'd like to take this chance to discuss how symbol grounding theory may be applied to explain some findings in the study of bilingualism.
    Words from my native language have a more direct and embodied "meaningfulness" to them. These differ from words I learned later in life (usually in a formal, prescriptive setting) in that when, for example, the word for "apple" is uttered in my native language, Chinese, I immediately picture the red round fruit in my head. Sometimes I feel like I can almost taste it. When the same word ("apple") is uttered in English, however, there isn't that quick association of word vs. image/taste/feeling; rather, I might see the word A-P-P-L-E spelled out in my head, or have the words "manzana" "pomme" "苹果" pop up. You don't just have to take my word for it: there have been studies demonstrating a very similar effect of first vs. second language on this underlying embodied meaning (a term I am using in place of "symbol grounding", for now).

    First a little bit of background information: the Stroop Effect is observed when the name of a color is printed in a color not denoted by the name (e.g., the word "red" printed in blue ink instead of red ink): naming the color of the ink takes longer and is more prone to errors than when the color of the ink matches the name of the color. The significant finding is that in late bilinguals, the interference (i.e. longer reaction time to name the color, and more errors) was smaller in their second language than in their first language, and it was not affected by the proficiency level of their second language. In other words, people with near-native proficiency in their second language performed similarly to people at an intermediate level, and both groups did better using their second language than their first language. This is where the symbol grounding problem comes into play. Is this variance of effects between one's first and second languages on the Stroop Test (which I am interpreting as tapping the dynamic representation of colors) the result of different levels of symbol grounding (explanation to follow), or is it the result of different algorithms the brain uses to ground such symbols?

    (continued in comments)

    ReplyDelete
    Replies
    1. By levels of symbol grounding, I am referring to either direct or indirect grounding. For example, when I first learned the word "red" in Chinese, it was directly grounded by associating this particular word with the color red (or more specifically, with the neurochemical mechanism triggered by, or the feeling of, perceiving the color red). Later, when I learned the word "red" in other languages, I mostly just grouped them under the existing Chinese lexical entry of "red": hence, indirect grounding. Is this the simple explanation for the aforementioned Stroop Test study? Do directly grounded symbols elicit a more rapid and involuntary association with their meanings compared to indirectly grounded symbols?
      Another possible explanation could be that we use different algorithms in acquiring/learning languages at different stages of life. Everyone acquires their mother tongue by initially directly grounding the most common objects/actions/feelings. At this stage, a combination of dynamic (sensory inputs) and formal (pronunciation and spelling of words) processes is used. As we grow up, the majority of new symbols are grounded indirectly, and there is less dynamic involvement. I guess I'm proposing here that late second language learning -- usually occurring in a class setting -- is done mostly through formal processes. Setting aside for a second the fact that machines don't have a grounded "first language" to start with, could it be possible that second language learning is (to a degree) a computational process? Just think of your second language as a (large) set of symbols grounded on the basis of another intrinsic (in the sense that the meanings are embodied) set of symbols. Might this shed light on some of the problems we've encountered in trying to build a T2 machine?
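      Here is a minimal toy sketch of the direct vs. indirect grounding idea as a data structure (entirely my own illustration, not a model from the Stroop literature; the entries and "features" are invented): a first-language entry points straight at sensorimotor features, while a later-learned entry points at the first-language entry and reaches the features only through an extra hop.

```python
# Toy lexicon: direct grounding (L1) vs. indirect grounding (L2).
# All entries and "sensorimotor features" are invented for illustration.

SENSORIMOTOR = {"RED_EXPERIENCE": {"long_wavelength", "fire", "blood"}}

L1_LEXICON = {"红": "RED_EXPERIENCE"}       # directly grounded entry
L2_LEXICON = {"red": "红", "rouge": "红"}    # grounded via the L1 entry

def ground(word):
    """Follow a word down to sensorimotor features, counting the hops."""
    hops = 0
    while word in L2_LEXICON:   # indirect: route through the L1 entry first
        word = L2_LEXICON[word]
        hops += 1
    features = SENSORIMOTOR[L1_LEXICON[word]]
    return features, hops + 1

print(ground("红"))    # one hop: direct
print(ground("red"))   # two hops: indirect, so plausibly slower/weaker
```

      If something like the extra hop is real, it would at least be consistent with the weaker Stroop interference in a late second language, though nothing in this sketch shows that the hop is what the brain actually does.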

      Delete
  22. "We already know what human beings are able to do. They can (1) discriminate, (2) manipulate,[12] (3) identify and (4) describe the objects, events and states of affairs in the world they live in, and they can also (5) "produce descriptions" and (6) "respond to descriptions" of those objects, events and states of affairs."

    It seems to me that (5) is the same as (4) and (6) is the same as (3). What is the difference between "producing" a description vs. describing? Likewise, what is the difference between "responding" to a description and using it to identify an object? In the first paper (2003), Prof. Harnad uses a shortened version of this list, which I think is more fitting.

    ******************************************

    "The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route"

    I disagree with this statement and am rather confused as to why Prof. Harnad has chosen to align with "bottom-up" approaches to modelling cognition. The exact opposite could be said (and it would also be a spurious claim): "if computation is an aspect of cognition (albeit not all of cognition), then there is really only one viable route from symbol manipulation to sense: from the top-down."

    While I agree that the best models of cognition will take into account symbol grounding, I do not see this as evidence that the best way to model cognition is "bottom-up only" rather than a mixture of the two. I also have not seen evidence that rules out "top-down only" and "bottom-up only" approaches to studying cognition (these approaches simply cannot be comprehensive, but this does not necessarily mean they are useless).

    ReplyDelete
    Replies
    1. This comment has been removed by the author.

      Delete
    2. I agree with Ethan here, in the sense that I have become confused as to why symbolic, top-down approaches are dismissed as insignificant.

      The grounding considerations within Harnad’s 2003 paper look at iconic and categorical representations as arising in a bottom-up manner. I agree with this argument, and specifically I recognize the importance of the two representations working independently of each other and on the level of nonsymbolic representations. Iconic representations work on the basis of a sort of pattern recognition within our sensory modalities, allowing a person to identify the sameness or difference of something in the environment. Similarly, categorical representations achieve classification through the sensory modalities detecting the unchanging features of a given something in the environment.
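      To make the categorical-representation part concrete, here is a minimal toy sketch of "classification based on detecting the unchanging features" (my own illustration; the categories and features are invented, not taken from the paper): individual icons vary, but the category is assigned from the features that stay invariant.

```python
# Toy categorical representation: classify by invariant features.
# Categories and features are invented for illustration only.

CATEGORY_INVARIANTS = {
    "predator":  {"forward_facing_eyes", "claws"},
    "offspring": {"small", "familiar_scent"},
}

def categorize(observed_features):
    """Return every category whose invariant features are all present,
    ignoring whatever else varies from one icon (instance) to the next."""
    return [name for name, invariants in CATEGORY_INVARIANTS.items()
            if invariants <= observed_features]

lion = {"forward_facing_eyes", "claws", "mane", "large"}
calf = {"small", "familiar_scent", "grey", "wrinkled"}
print(categorize(lion))   # ['predator']
print(categorize(calf))   # ['offspring']
```

      Nothing in this sketch is symbolic in the sense of named, combinable categories, which is part of why the names still have to come from somewhere else.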
      What has become perplexing to me within this argument is the question: how is it possible that human cognition proves itself to be different from an animal’s cognition (in terms of being able to manipulate the world around them)? It seems to me that animals are just as capable as humans in their cognitive capacities to form iconic and categorical representations (for example, elephants being able to tell apart a lion (as a predator) from their calf (as something to care for); I would assume that these classifications occur along the same continuum). Therefore, if that is the case, what makes humans different? Is it not a symbolic level which is grasping things differently? And if so, would that not be a reason for the top-down symbolic approach to have an equal amount of significance within an explanation of cognition as the bottom-up sensory approach does?


      Furthermore, it is mentioned that nonsymbolic representations are not capable of meaning on their own; rather, there needs to be an implementation of successful combination rules and schemes which allow for semantics: what are known as symbolic representations.
      Again, this throws me off as to why the top-down symbolic approach seems irrelevant.

      An explanation is then given about the connectionist approach and the new hybrid-system view whereby symbols and nonsymbolic representations are connected. Within this theory, people are able to create categories with names because they are able to compare and contrast them with their iconic and categorical sensory representations. I am finding it very difficult to chew through this material.
      Some questions to clear up would be:
      I can see how connectionism is quite different from computational symbolic representation (because this one is occurring only on the level of symbols, rather than symbols and nonsymbols), but how does connectionism in particular play a role within cognition? Linking it back to my first confusion, how is it that this symbol system is implemented solely in humans? Are we now trying to understand this function at the level of the prefrontal cortex, and what room does this leave for computationalism?
      Also, the paper mentioned that according to connectionism “cognition is not symbol manipulation but dynamic patterns of activity in a multilayered network of nodes or units with weighted positive and negative interconnections”. Does this mean that new algorithms for computations can be created? And does this mean that, on the level of T3 robots with sensory modalities, cognition could perhaps actually have a real shot at taking place with these new algorithms?
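      For what the quoted phrase amounts to mechanically, here is a minimal sketch (my own, with arbitrary made-up weights, not a trained or biologically realistic model) of "a multilayered network of nodes or units with weighted positive and negative interconnections": activity flows through weighted sums, with no explicit symbol-manipulation rules anywhere.

```python
import math

# Toy two-layer network: nodes with weighted +/- connections, no symbol rules.
# The weights are arbitrary illustrative numbers, not a trained model.

W_HIDDEN = [[0.8, -0.5, 0.3],    # weights into hidden node 1
            [-0.2, 0.9, -0.7]]   # weights into hidden node 2
W_OUTPUT = [1.2, -1.1]           # weights into the single output node

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs):
    """Propagate an input pattern through the weighted interconnections."""
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
              for row in W_HIDDEN]
    return sigmoid(sum(w * h for w, h in zip(W_OUTPUT, hidden)))

print(forward([1.0, 0.0, 1.0]))  # one "dynamic pattern of activity"
```

      Learning in such a network is just the adjustment of those weights, so "new algorithms" in your sense would amount to new weight patterns shaped by training rather than new hand-written rules.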

      Delete
  23. “Our brains need to have the “know-how” to follow the rule, and actually pick out the intended referent, but they need not know how they do it consciously.”
    I understand we need some type of formula or rule in order to extract meaning from symbols, but there does not seem to be any explanation of how this occurs. It is claimed that the properties necessary for meaning are groundedness and consciousness, but this does not tell us anything about how we actually come to equate two things, i.e., a symbol and its referent. Searle’s Chinese Room Argument disproves computationalism and offers nothing substantial to replace it. I do not believe computationalism to be true, although with regard to the Symbol Grounding Problem it does provide a partial solution. We take an input, register it in our system, and then produce an output that corresponds to it. Yes, that cannot be the whole story, but taken together with consciousness it seems as though it is exactly what is occurring in our minds with language.
    Also, how could we come to know how to do this if we do not require consciousness? There must have been a first step to learning the meaning of a symbol, and this must have required conscious knowledge of the symbol and the referent. In order to have the “know-how”, we must be consciously aware of what we are trying to do i.e. determine the meaning of a symbol. Maybe we don’t know how we do it consciously every time, but at one point in time we must have in order to reproduce it.

    ReplyDelete

  24. "The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up."

    What about things that are innate? Babies have certain instincts such as breastfeeding. Can breastfeeding be a symbol if it's not grounded from the bottom up? Would this not count as cognition? (Because cognition = symbol manipulation)

    What about perceptual processes that are top-down, such as being able to discern the meaning of words with hard-to-read typography (see: http://en.wikipedia.org/wiki/Top-down_and_bottom-up_design#mediaviewer/File:TheCat.png)? Could one perhaps use pre-existing symbol manipulation to execute top-down processes?

    ReplyDelete
  25. I just have a question about the Zero semantic commitment condition "Z" imposed by Taddeo and Floridi.

    When examining Professor Harnad's proposed solution to the SG problem, they say that his hybrid explanation fails the "Z" condition due to the fact that the sub-symbolic neural nets responsible for iconization must be biased towards some features over others in order to be useful, and that such a bias requires semantic commitment:

    "Moreover, unsupervised or self-organizing networks, once they have been trained,
    still need to have their output checked to see whether the obtained structure makes
    any sense with respect to the input data space. This difficult process of validation is
    carried out externally by a supervisor. So in this case too, whatever grounding they
    can provide is still entirely extrinsic. In short, as Christiansen and Chater (1992, p.
    235) correctly remark “[So,] whatever semantic content we might want to ascribe to a
    particular network, it will always be parasitic on our interpretation of that network;
    that is, parasitic on the meanings in the head of the observer”.

    Can evolution act as the supervisor here, or am I looking at a completely wrong time scale? It seems to me that the Z condition is unnecessary if you can account for a system detecting some features more easily than others by their being evolutionarily more useful to detect. I'm sure that if dogs have mental categories of horses, for example, the number one grounding factor would be their smell. Am I missing anything here? I'm always wary of bringing out evolution when it comes to explanations of the mind, but it seems as good a guiding factor for semantic commitment as any.
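    To make the "parasitic on the interpretation of the observer" point concrete, here is a toy sketch (my own, using invented two-dimensional "sensor readings" and plain k-means-style clustering rather than a self-organizing network): the system ends up with clusters, but the clusters have no names until something outside the system, an experimenter or, as you suggest, selection pressure, says what they are clusters of.

```python
import random

# Toy unsupervised "iconization": cluster invented sensor readings.
# The data, the number of clusters, and the whole setup are illustrative.

random.seed(0)
readings = ([(random.gauss(0, 0.3), random.gauss(0, 0.3)) for _ in range(20)] +
            [(random.gauss(3, 0.3), random.gauss(3, 0.3)) for _ in range(20)])

def kmeans(points, k=2, iters=10):
    """Plain k-means: alternately assign points to the nearest centroid
    and recompute each centroid as the mean of its assigned points."""
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda j: (p[0] - centroids[j][0]) ** 2 +
                                        (p[1] - centroids[j][1]) ** 2)
            clusters[nearest].append(p)
        centroids = [(sum(x for x, _ in c) / len(c),
                      sum(y for _, y in c) / len(c)) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

print(kmeans(readings))
# The algorithm ends up with "cluster 0" and "cluster 1"; whether those are
# horses, smells, or noise is decided outside the system.
```

    Whether natural selection counts as a legitimate "supervisor" for Taddeo and Floridi's Z condition, or just as another source of external semantic commitment, seems to be exactly the question you are raising.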

    ReplyDelete
  26. Momentarily disregarding the property of consciousness as a means of discerning meaning: I wonder about the role and degree of grounding that we would deem necessary as adequate input for determining meaning. As stated in the conclusion: "Maybe robotic TT capacity is enough to guarantee it, maybe not. In any case, there is no way we can hope to be any the wiser." Although it has been established—and emphasised, deservedly so—that correspondence is not meaning, what would be the difference between a computer running virtual reality software and a robot with the capacity to pass the Turing Test? The former is provided with a correspondence between a symbol and a referent in its virtual reality, while the latter establishes a similar relationship between a symbol and a real-world referent; the primary difference, from our perspective, would be the realness of the referent—but if the referent is perceived to be "real" by the system, this should make no difference in meaning. Since, in building a robot and designing the parameters by which it may be judged to have adequate symbol-grounding sensorimotor capabilities, we are dictating its subjective "experience" much in the same way that we would, say, construct a virtual reality, I'm not sure precisely what it is between the two that would delineate a solution to the symbol-grounding problem.

    Of course, consciousness cannot be so simply set aside. Returning to a passage earlier in the paper: "But if groundedness is a necessary condition for meaning, is it a sufficient one? Not necessarily, for it is possible that even a robot that could pass the Turing Test, "living" amongst the rest of us indistinguishably for a lifetime, would fail to have in its head what Searle has in his: It could be a Zombie, with no one home, feeling feelings, meaning meanings." I agree that it is not only that to which functional capacities must correspond, but also that it, as belonging to one entity—a robot, for example—may not be confirmed, or felt, by another; we can only project our own experiences, in their individually perceived entireties, upon entities we judge similar enough to receive such empathic treatment. So, as said above, grounding is responsible and sufficient for correspondences, but we would postulate that the semantics of each of those correspondences requires consciousness (to be even considered as such—and herein lies the ineffability of consciousness, etc.).

    ReplyDelete
  27. I have to pull Fodor back into discussion after reading this article. In the case of mirror neurons, I understand the value in studying their processes and their interactions. I can reason how it can be applied to other fields and acknowledge that their study has value. Discovering how meanings are attached to words seems very much like something Fodor would not deem worthy of study because what difference does it make?

    It inspires circular logic and reasoning in that it can only raise more questions than it answers. How do our brains do this? Which mechanisms does it come from? Attaching grounded meanings to meaningless symbols is a process that most acquire without ever understanding how--just like learning/imitation and mirror neurons--but what does understanding this other process grant us? Insight. That's all.

    As with Searle in his Chinese Room Argument, it does not matter whether or not he understands what he is doing as long as his manipulations of symbols could pass the Turing Test. Consciousness is not really required to do so--just the right algorithm and combination of (to him) ungrounded symbols. In the same way, it is of no consequence how we attach meaning to words, because with this information we cannot reverse-engineer the brain any better than we could before, or build a robot to pass the Turing Test forever.


    ReplyDelete
  28. This comment has been removed by the author.

    ReplyDelete