Saturday 11 January 2014

(8b. Comment Overflow) (50+)

7 comments:

  1. In this article, I got caught up thinking about the Minimal Grounding Set and the claim that it is supposedly not unique. It stands to reason that, within a community, a language could be boiled down to fewer than 500 words that could explain everything else. How could each language end up with the same words, though?

    The article states that some languages have a more economical way of explaining things (that is, in fewer words), so doesn't that mean that some of the words in a different language's MGS might be different? I expect the content words would be more or less the same, but with variation in items relating to culture or surroundings. How could they be identical, especially for certain nouns (e.g. foods, animals) that only exist in a certain place?

    While that in particular may be a silly example, if some of those boiled-down content words vary from language to language, how can the MGS be unique when people have different ways of expressing the same thing? That would render the MGS rather useless if it is not universal, because then, in a sense, we are back to kernels and must find a way to further reduce the final lexicon. (A toy sketch of a minimal grounding set, and why it need not be unique, follows this comment.)

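To make the non-uniqueness worry in the comment above concrete, here is a minimal sketch, in Python, of what a "minimal grounding set" amounts to computationally. The seven-word mini-dictionary is invented for illustration and is not the authors' data or method: a grounding set is any set of words from which every other word can be learned through its definition alone, and the brute-force search below reports all of the smallest such sets.

```python
from itertools import combinations

# Toy "dictionary" (invented for illustration): each word is defined
# using only other words in the same dictionary.
definitions = {
    "animal":  {"thing", "alive"},
    "horse":   {"animal", "big"},
    "zebra":   {"horse", "stripes"},
    "stripes": {"thing"},
    "big":     {"thing"},
    "alive":   {"thing"},
    "thing":   {"alive"},   # "thing" and "alive" define each other, so at
                            # least one of them must be grounded directly
}

def grounds_everything(grounded, defs):
    """True if every word can be learned from `grounded` via definitions alone."""
    known = set(grounded)
    changed = True
    while changed:
        changed = False
        for word, needed in defs.items():
            if word not in known and needed <= known:
                known.add(word)
                changed = True
    return known == set(defs)

words = list(definitions)
# Brute force over subset sizes: report ALL smallest grounding sets.
for size in range(len(words) + 1):
    smallest = [set(c) for c in combinations(words, size)
                if grounds_everything(c, definitions)]
    if smallest:
        print(f"minimum grounding sets (size {size}):")
        for s in smallest:
            print("  ", sorted(s))
        break
```

Running it prints two equally small grounding sets, {'alive'} and {'thing'}, because those two words define each other and either one can serve as the directly grounded word. That kind of non-uniqueness is exactly what the comment is pointing at: "the" MGS is really a family of equally small sets rather than a single canonical vocabulary.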
  2. “Because we are more social, more cooperative and collaborative, more kin-dependent – and not necessarily because we were that much smarter – some of us discovered the power of acquiring categories by instruction instead of just induction, first passively, by chance, without the help of any genetic predisposition”
    I find myself agreeing with this idea, especially on the basis that motivation was a driving force in the development of propositions, or the ability to instruct. It is true that we as humans are more social and collaborative, but we still place a big emphasis on our ego and our sense of self, and more importantly on where and how we place ourselves in the community. It is possible that early instructors' motivation was a desire to elevate their status in the tribe, and that they did so through instruction, since it increased the knowledge of the group as a whole.

  3. This article proposes that the origin of language is explained by the evolutionary advantages of instruction-based categorization. Nearly all of our behavioral or cognitive abilities come down to categorization, which is ‘doing the right thing with the right kind of thing’; categories direct our actions and are fundamental for survival. We can learn these categories either from our own experience (trial and error), by induction, or by instruction from a knower. Instruction is more effective and less risky for the learner, as it builds on all previous induction learning rather than having to learn everything from scratch. Our species, being more kin-dependent, had a lot to gain from this kind of teaching and learning, and the results of instruction, which at first was passive or involuntary, quickly motivated individuals to use the technique deliberately, since it conferred considerable survival advantages. Intentional instruction, the authors argue, was initially gestural: gestural signs for categories the learner already possessed could be combined by the teacher to form new, never-experienced categories; this is how language was born. The increasing arbitrariness of the symbols, and the advantages of their being vocal (free limbs, communication over larger distances), contributed to the migration of language from gesture to speech.

    To address the Symbol Grounding Problem, since not all categories can be acquired by instruction, and since instruction can only have an effect if we already possess some grounded symbols, the authors estimate that the smallest set of grounded words (content, function, predication) needed is about 500. (A toy sketch of instruction as the composition of already-grounded categories follows this comment.)

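As a rough illustration of the instruction route summarized above (hypothetical predicates and feature dictionaries, not the paper's model), the sketch below shows how a learner who has already grounded "horse" and "striped" by induction can acquire "zebra" from a single proposition, without ever having sampled zebras by trial and error.

```python
# A rough sketch (hypothetical feature dictionaries, not the paper's model).
# Categories the learner already has, grounded earlier by direct sensorimotor
# induction (trial and error with real horses and striped things):
def is_horse(thing):
    return thing.get("legs") == 4 and thing.get("mane") and thing.get("hooves")

def is_striped(thing):
    return thing.get("pattern") == "stripes"

# Instruction: one proposition ("a zebra is a striped horse") composes the
# grounded categories into a brand-new one, with no new trial and error needed.
def is_zebra(thing):
    return is_horse(thing) and is_striped(thing)

never_encountered = {"legs": 4, "mane": True, "hooves": True, "pattern": "stripes"}
print(is_zebra(never_encountered))   # True: the new category name is grounded
                                     # through the names it is defined from
```

The point is only that the new category name ("zebra") inherits its grounding from the already-grounded names it is composed of, which is the mechanism the comment describes.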
  4. I am mostly interested in the discussion of categorization. “With the exception of a few function words, such as the and if and not, most of the words in a natural language dictionary are content words, and as such they are the names of categories. Categories are kinds: kinds of objects, events, actions, properties, states. To be able to categorize is to be able to do the right thing with the right kind of thing (Harnad 2005).” I had never before heard categorization described in this way. We distinguish the things we see by their shapes and classify them according to what we can do with them. So a chair could be classified as something you sit on; this matters because we cannot just say that a chair is something with four legs under a flat surface with a back. A chair can have three legs, a wheeled base like a desk chair, or even just one ‘leg’. Categories like this must be learned: it would be ridiculous to expect all categories to be inborn, and few are.
    So we know that the symbols we encounter have meanings that help us categorize. The symbol grounding problem asks how we connect these meanings with symbols. A proposed solution is sensorimotor experience: going about life and learning through trial and error. “When we successfully learn a new category by sensorimotor induction, our brains learn to detect the sensorimotor shape of the feature(s) that reliably distinguish the members from the nonmembers.” This seems a very useful and logical explanation. (A toy sketch of this kind of feature detection follows this comment.)
    When we look at this in the context of the origin of language, we see that somehow our species took whatever primitive categories it had and found a way to combine them to create new ones. Sharing information in this way helped our species thrive and most likely fostered the language ability that distinguishes us from many other species.

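The quoted passage about detecting "the feature(s) that reliably distinguish the members from the nonmembers" can be sketched very crudely (invented toy examples, not a model of the brain): given labelled trial-and-error encounters with chairs and non-chairs, the learner keeps whichever feature separates the two groups perfectly.

```python
# Toy induction sketch (invented examples, not a model of the brain): from
# labelled trial-and-error experience, find the feature(s) whose value is
# shared by every member and by no non-member of the category "chair".
samples = [
    ({"legs": 4, "flat_surface": True,  "supports_sitter": True},  True),   # kitchen chair
    ({"legs": 1, "flat_surface": True,  "supports_sitter": True},  True),   # pedestal desk chair
    ({"legs": 3, "flat_surface": True,  "supports_sitter": True},  True),   # three-legged chair
    ({"legs": 4, "flat_surface": True,  "supports_sitter": False}, False),  # table
    ({"legs": 0, "flat_surface": False, "supports_sitter": False}, False),  # lamp
]

def distinguishing_features(samples):
    """Features whose value is constant across members and never shared by non-members."""
    found = []
    for feature in samples[0][0]:
        member_vals    = {x[feature] for x, is_member in samples if is_member}
        nonmember_vals = {x[feature] for x, is_member in samples if not is_member}
        if len(member_vals) == 1 and member_vals.isdisjoint(nonmember_vals):
            found.append(feature)
    return found

print(distinguishing_features(samples))   # ['supports_sitter'] -- not 'legs'
```

On these invented examples the number of legs fails (the chairs disagree) while supports_sitter succeeds, which mirrors the chair example in the comment: what matters is what you can do with the thing, not its exact shape.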
  5. Harnad

    "We conjecture that language began when attempts to communicate through miming became conventionalized into arbitrary sequences of shared, increasingly arbitrary category names that made it possible for members of our species to transmit new categories to one another by defining and describing them via propositions (instruction)." p.1-2

    The miming account provides a plausible explanation for language evolution; however, I'm left wondering how the ball really got rolling on miming. Why has this developed into language only for humans and not other animals? Why are there no cases of quasi-language evolving in primate species existing today, if it seems like such a logical step in optimizing a group's ability to exploit its environment?

    "And when did the evolutionary hard-coding of language into our genes and brains end and the historical soft-coding through learning and experience take over?" p.3

    I think the hard-coding problem is at the crux of the difficulty in understanding language. How might universal grammar have come about? The miming conjecture seems plausible, but there is still a large explanatory gap concerning how miming could have led to adaptations in the brain.

  6. "The symbols and symbol combinations simply have the remarkable property that if the syntactic rules are well chosen they can be systematically interpreted as the true propositions of arithmetic."

    Arithmetic can express true (and false) statements; it can also express ill-formed strings (e.g. "1 *"), but these wouldn't be counted as part of arithmetic, because they are completely arbitrary and are neither true nor false. The ability to learn arithmetic is inborn; sure, we need to learn how to write numbers, but we have the capacity to know what is a well-formed arithmetic statement and what isn't. (A toy well-formedness checker follows this comment.)

    Language, too, is a symbol system (leaving out the semantics): any well-formed statement in natural language has a truth value. And our ability to tell whether a statement is well formed is, just as in arithmetic, inborn. This inborn ability is UG. Interestingly, not all grammar rules are part of UG, only those that govern our thought. But that is not to say that our ability to think is limited by our language; rather, it is our language that is limited by our ability to think (UG).

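To make the well-formedness point concrete, here is a toy checker for a tiny invented arithmetic grammar (a sketch under my own assumptions, not the formal system the article relies on). It decides purely from syntactic shape whether a string is well formed, before any question of truth or falsity arises, so "1 + 2 * 3" passes while "1 *" is rejected.

```python
import re

# Tiny invented grammar:  expr := term (('+'|'-') term)*
#                         term := num  (('*'|'/') num )*
#                         num  := digits | '(' expr ')'
def well_formed(s):
    tokens = re.findall(r"\d+|[()+\-*/]", s)
    pos = 0

    def num():
        nonlocal pos
        if pos < len(tokens) and tokens[pos].isdigit():
            pos += 1
            return True
        if pos < len(tokens) and tokens[pos] == "(":
            pos += 1
            if expr() and pos < len(tokens) and tokens[pos] == ")":
                pos += 1
                return True
        return False

    def term():
        nonlocal pos
        if not num():
            return False
        while pos < len(tokens) and tokens[pos] in "*/":
            pos += 1
            if not num():
                return False
        return True

    def expr():
        nonlocal pos
        if not term():
            return False
        while pos < len(tokens) and tokens[pos] in "+-":
            pos += 1
            if not term():
                return False
        return True

    # Accept only if the whole string parses and nothing was silently dropped.
    return expr() and pos == len(tokens) and "".join(tokens) == s.replace(" ", "")

for s in ["1 + 2 * 3", "(1 + 2) * 3", "1 *", "* 1 2"]:
    print(f"{s!r:15} well-formed: {well_formed(s)}")
```

On the sample strings it accepts "1 + 2 * 3" and "(1 + 2) * 3" and rejects "1 *" and "* 1 2"; whether an accepted string is true or false is a further, separate question, which is the distinction the comment is drawing.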
  7. I am curious about the minimal grounding set (MGS) that the article discusses. The authors use the English dictionary and the current human stock of words to deduce the number of grounded words we have, and the number of grounded words necessary to create the rest of the words that exist in the English language. This practice does not give any absolute minimum (it only gives the minimum within today's language, a snapshot of language at a given moment of its development), and it does not explain why that number is what it is, or the character of that kernel of meaning which allows us to construct worlds of abstraction on top of it. That is: what is it about certain words (which here can be read as experiences, since their meanings are not learned but induced through sensorimotor experience) that gives them the power to build so much on top of them? What are their linguistic functions and allowances?
    In addition, as time goes on, more and more words become grounded as we have more and more experiences, which in turn allows us to extrapolate upward to create more words and abstract meanings, and to discard (or redefine) words which fail as categories in light of those new experiences. I would argue that a static language, one in which the experiences that led to grounded words are no longer occurring (i.e. no new experiences), would not be able to express all things and thus would not be a natural language. I think the constant back-referencing to lived experience is what makes a language dynamic, and what allows it to remain real despite all of its abstracted value. The ground beneath us is always shifting as time passes, and so we are always re-grounding and un-grounding our words. Without this, the abstract tower that is language would inevitably crumble. Induction-based learning (experience) is therefore key to the constant grounding and ungrounding that occurs at every moment.
    At the end of the article, the authors re-center this experiential, sensorimotor-based learning, and I think it deserves more attention. The two types of learning are a trade-off. The more efficient way (learning through instruction/language) is less grounded and thus leaves us with a higher chance of miscategorizing, and hence of meaninglessness. The less efficient way (learning through experience, and thus induction) is slower and riskier, yet it is richer, leaves no room for miscategorization, and is imbued with a world of meaning. Neither should be cast as more or less important for the human mind and our ability to have language. Both are crucial.
