Saturday 11 January 2014

9a. Pinker, S. Language Acquisition

Pinker, S. Language Acquisition. In L. R. Gleitman, M. Liberman, and D. N. Osherson (Eds.),
An Invitation to Cognitive Science, 2nd Ed. Volume 1: Language. Cambridge, MA: MIT Press.

The topic of language acquisition implicates the most profound questions about our understanding of the human mind, and its subject matter, the speech of children, is endlessly fascinating. But the attempt to understand it scientifically is guaranteed to bring on a certain degree of frustration. Languages are complex combinations of elegant principles and historical accidents. We cannot design new ones with independent properties; we are stuck with the confounded ones entrenched in communities. Children, too, were not designed for the benefit of psychologists: their cognitive, social, perceptual, and motor skills are all developing at the same time as their linguistic systems are maturing and their knowledge of a particular language is increasing, and none of their behavior reflects one of these components acting in isolation.
        Given these problems, it may be surprising that we have learned anything about language acquisition at all, but we have. When we have, I believe, it is only because a diverse set of conceptual and methodological tools has been used to trap the elusive answers to our questions: neurobiology, ethology, linguistic theory, naturalistic and experimental child psychology, cognitive psychology, philosophy of induction, theoretical and applied computer science. Language acquisition, then, is one of the best examples of the indispensability of the multidisciplinary approach called cognitive science.

Harnad, S. (2008) Why and How the Problem of the Evolution of Universal Grammar (UG) is Hard. Behavioral and Brain Sciences 31: 524-525

Harnad, S. (2014) Chomsky's Universe. -- L'Univers de Chomsky. À bâbord: Revue sociale et politique 52.

92 comments:

  1. If I understood correctly: Pinker argues for a nativist view of language acquisition and universal grammar, and does so by citing multiple studies in which a child’s environment does not determine their linguistic outcomes (therefore, by default, it must be something innate).
    In his last section, 9.3, parameter-setting and the subset principle are discussed.

    “A striking discovery of modern generative grammar is that natural languages seem to be built on the same basic plan. Many differences among languages represent not separate designs but different settings of a few "parameters" that allow languages to vary, or different choices of rule types from a fairly small inventory of possibilities. The notion of a "parameter" is borrowed from mathematics. For example, all of the equations of the form "y = 3x + b," when graphed, correspond to a family of parallel lines with a slope of 3; the parameter b takes on a different value for each line, and corresponds to how high or low it is on the graph. Similarly, languages may have parameters (see the chapter by Lasnik).”

    In this sense, an example is given contrasting English and Spanish: an English speaker must supply a subject before a tensed predicate (ex: She goes to the store), whereas in Spanish one can utter a tensed predicate without a subject (ex: Goes to the store). This is explained by a “null subject” parameter, whereby the subject can be omitted without ruining the grammaticality of the sentence. The subject would equate to the parameter b: the structure of the sentence still has the same form (the 3x) whether the subject (the b) is included or not, so each version is grammatically correct in its respective language.
    Pinker then raises the important question of “how the child sets parameters”. He explains that parameter settings are ordered, starting from a default case, rather than being acquired through negative evidence (recognizing which of one’s sentences are ungrammatical based on the feedback of others). The default case is chosen through the Subset Principle, which holds that the default is the setting generating the smallest language; for word order, that is a fixed order (rather than a free one). The fixed word order is therefore what the child initially uses to construct a grammatically correct sentence.

    If I understood this correctly, I still don’t quite see how parameter-setting and the subset principle line up with Pinker’s argument. Is Pinker trying to argue that these processes are innate? And if so, is he trying to show that parameter settings and the subset principle operate from within universal grammar, or that they are what gives rise to universal grammar, or that they are rules accounting for the variation across languages, thereby showing that a universal grammar does exist?

    Replies
    1. Learning What Is and Isn't in a Category: Positive and Negative Examples

      The rules of Universal Grammar (UG) are not learned, because they are unlearnable from the data available to the language-learning child ("poverty of the stimulus"). So UG is innate.

      But the "parameters" of UG (e.g., whether a language is a Subject-Verb-Object language like English or and SOV or VSO language) can be and is learned.

      What Pinker means by learning the parameters from an ordering starting with a default case, without negative evidence, is that you can learn whether your language is SOV or SVO: if, for example, the default case is SVO, and you hear positive examples that are not SVO, then you can know yours is SOV without having to make mistakes, produce SVO first, and then get corrected.

      But UG itself cannot be learned that way, from positive examples only, as I tried to show with the simple example of trying to learn the category "Laylek".

      To learn rules or features, you need to sample positive and negative examples (members and non-members of the category) with corrective feedback, so that your brain can figure out what features distinguish them.

      So the lack of negative examples is the poverty of the stimulus: the child neither hears UG errors nor commits UG errors and gets corrected. It only makes errors on conventional grammar (and gets corrected).
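      Here is a toy sketch of the parameter-setting side of this, in Python (the labels and the miniature learner are invented for illustration, and it assumes the child can already tell the word order of an utterance it hears):

        # Parameter-setting from positive evidence only, with an ordered
        # default: start at the default value and move off it only when a
        # heard (positive) example is incompatible with it.

        def set_word_order(heard_orders, default="SVO"):
            """heard_orders: word-order labels of utterances the child hears."""
            setting = default
            for order in heard_orders:
                if order != setting:   # positive evidence against current setting
                    setting = order    # reset the parameter; no correction needed
            return setting

        print(set_word_order(["SVO", "SVO"]))  # -> "SVO" (default confirmed)
        print(set_word_order(["SOV", "SOV"]))  # -> "SOV" (default overridden)

      No analogous trick is available for UG itself: there the rules, and not just the value of a known switch, would have to be induced, and the positive data never mark where the boundary of the category lies.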

  2. On: Language and Thought.
    “Is language simply grafted on top of cognition as a way of sticking communicable labels onto thoughts (Fodor, 1975; Piaget, 1926)? Or does learning a language somehow mean learning to think in that language? A famous hypothesis, outlined by Benjamin Whorf (1956), asserts that the categories and relations that we use to understand the world come from our particular language, so that speakers of different languages conceptualize the world in different ways. Language acquisition, then, would be learning to think, not just learning to talk.
    This is an intriguing hypothesis, but virtually all modern cognitive scientists believe it is false (see Pinker, 1994a). Babies can think before they can talk (Chapter X). Cognitive psychology has shown that people think not just in words but in images (see Chapter X) and abstract logical propositions (see the chapter by Larson). And linguistics has shown that human languages are too ambiguous and schematic to use as a medium of internal computation: when people think about "spring," surely they are not confused as to whether they are thinking about a season or something that goes "boing" -- and if one word can correspond to two thoughts, thoughts can't be words.”

    Reopening the can of worms discussed in 6a and 6b: categorization and categorical perception.
    I stumbled upon a psychology professor by the name of Lera Boroditsky and read up on some of her work. (http://edge.org/conversation/how-does-our-language-shape-the-way-we-think)
    She is a strong advocate of the view that learning a language makes people think differently, according to the given spoken language. The web page covers several of her studies, and I will briefly cover one of them.
    She looked at languages that have grammatical gender (ex: French la clé vs. le pont). She took speakers of two different languages that assign opposite grammatical genders to the same object/noun, German speakers (GS) and Spanish speakers (SS), and asked them to describe the object.
    Object: Key
    GS- [masculine], described it as: “hard”, “heavy”, “jagged”, “metal”, “serrated”, and “useful”
    SS- [feminine], described it as: “golden”, “intricate”, “little”, “lovely”, “shiny”, and “tiny”
    Although speakers of both languages see the exact same key, and would notice the same features if asked, it is interesting that the features they picked out to describe it are very different, in line with their language’s grammatical gender assignment for the noun.

    Replies
    1. Yes, Lera does wonderful work on how language differences influence thought (not to mention that she is heart-stoppingly beautiful!).

      And, yes, her findings, too, are (weak) Whorfian effects.

      But of course they do not touch on the questions we were discussing today, which were about (1) whether or not language can express every possible proposition and (2) whether there are propositions that can be expressed in some languages and not others.

      Weak Whorfian effects are like mild biases that affect whether you are slightly more likely to see or say this rather than that, or to see or say it sooner.

      There's nothing remotely as dramatic as one of the original (alleged) Whorfian Effects, which was that your language (specifically, your vocabulary) causes the color quality differences you see in the rainbow. It doesn't. They're innate. But maybe language can massage them a little.

  3. “For example, all languages in some sense have subjects, but there is a parameter corresponding to whether a language allows the speaker to omit the subject in a tensed sentence with an inflected verb.”

    “The parameter-setting view can help explain the universality and rapidity of the acquisition of language, despite the arcane complexity of what is and is not grammatical (e.g., the ungrammaticality of Who do you think that left?).”

    Pinker argues for innate principles of language and, in particular, the concept of parameter settings. The parameters are set starting from the subset with the more restrictive constraints (e.g., utterances have subjects, unless there is positive evidence for pro-drop, as in Spanish) to accommodate the poverty-of-the-stimulus argument, which states that there is an absence of negative evidence. I agree with Pinker that there must be something innate about the language-learning capacity that allows every child to learn language, given some input. The similar stages of acquisition across different linguistic constructions provide further evidence for an innate system; if there were no innate system and language were based on general intelligence, children would acquire different constructions at different stages in no clear order, and there would be great individual differences.

    Although the paper focuses on first-language acquisition, I wonder what role UG and parameter settings play in late second-language acquisition. Are there any parameters to set, and can they be reset? If there were parameters, would they be preset to the same values one would have at birth, before any language is acquired, or would they be set to those of your first language, since there are instances of transfer from the first language to the second when learning?

    Replies
    1. In a study, Elissa Newport showed that UG principles are still accessible for second-language learning. She tested native Chinese speakers who had learned English as a second language. In English, subjacency prevents speakers from moving elements out of certain phrases. Subjacency is thought to be a UG principle. This constraint does not exist in Chinese. But Newport was able to show that when her participants were asked to rate English sentences that violated the subjacency constraint, they performed worse than native speakers, of course, but above chance: more than 50% of the time, they judged violation sentences as ungrammatical. Newport interpreted this result as evidence that UG principles are still active in second-language learners.

    2. Also, second language learners probably just have to reactivate some of the parameters they had switched off after having been exposed to their native language. As Newport shows, this presupposes that the alternative parameters are still accessible when stimulated. As they have more and more experience in their second language, people become better at ignoring/inhibiting their native language parameters or at switching from the set of parameters of their first language to the set of parameters of their second language.

    3. So UG is available in the L2, according to Newport? To me it seems like the L1 settings get replicated and then reset for the L2. However, if UG were really activated, then wouldn't that mean that all second language learners should be able to acquire the second language proficiently, to native-like competency, since children who can access their UG all seem to be able to learn the language? That is clearly not the case - perhaps it is only partially activated? Or it is activated, but the individual needs to improve at inhibiting the native language, like you said above?

    4. Even if UG is still activated, learning an L2 (after a certain age) is not learning your L1. As you said, the L1 settings/parameters are there, competing with the new parameters you are trying to acquire. You have no previous parameters when you are learning your L1, which makes it easier to acquire (UG does not correspond to parameters but rather to principles that determine all the possible parameters a language can have).

    5. Vivian, the question is how children acquire syntax (Universal Grammar, UG), not "language." Syntax is just a part of language. There is no problem about how the child acquires vocabulary, or pronunciation, or even ordinary ("Penny-Ellis") grammar. It's either via observation and trial-and-error induction (with error-corrections), or via explicit verbal instruction.

      The problem is with how the child acquires UG. Because what the child says and hears during the (short) language-learning period does not provide a basis for learning UG by induction (no violations of UG, no "negative evidence"; so nothing to correct, and no way for any inductive device to discover the rules that distinguish UG-compliant from non-UG-compliant utterances).

      That's the "Poverty of the Stimulus": You can't learn UG from positive evidence only.

      All bets are off for 2nd-language learning (L2). The big hurdle is learning the first language (L1). Our UG-grammaticality judgments for our L1 are flawless; but not for our L2.

  4. I find Universal Grammar unconvincing. There are a few main reasons for this. Firstly, it takes for granted the fact that we cannot learn language from input alone. I do not find this convincing. There is a quote somewhere in criticism of Pinker, which goes something like this:

    “Pinker’s line of reasoning: Children learn language. I do not understand how children learn language. Therefore, language learning is innate.”

    A great deal of Universal Grammar’s thrust depends on the “poverty of stimulus” argument – that the data we receive are too limited for language to be learned from them. But why must it be impossible for children to learn from negative evidence? There is a great deal of work being done on probabilistic learning – patterns that appear often are reinforced, and patterns that do not appear are not. This is why certain phrases just “sound right” to native speakers of a language. And, as Pinker himself mentions, the plasticity of the human brain decreases with age; synapses are trimmed down with disuse. So it doesn’t seem too odd to think that children are able to learn tricky things like language more effectively than adults. There is absolutely no need for explicit negative feedback to supply negative evidence – we don’t need to be told not to crawl to get places; we simply observe that crawling is something rarely done beyond a certain age, and decide to walk instead.
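    To make the probabilistic idea concrete, here is a toy sketch (in Python; the corpus and the all-or-nothing test are invented for illustration, and real proposals are far more sophisticated):

      from collections import Counter

      def train(corpus):
          """Reinforce word-pair (bigram) patterns from positive input only."""
          counts = Counter()
          for sentence in corpus:
              words = sentence.split()
              counts.update(zip(words, words[1:]))
          return counts

      def sounds_right(counts, sentence):
          """A sentence "sounds right" if all its word pairs were reinforced."""
          words = sentence.split()
          return all(counts[pair] > 0 for pair in zip(words, words[1:]))

      counts = train(["the dog chased the cat", "the cat chased the dog"])
      print(sounds_right(counts, "the dog chased the cat"))  # True: reinforced
      print(sounds_right(counts, "cat the chased dog the"))  # False: never heard

    Patterns that never occur simply never get reinforced, which is the sense in which no explicit correction is needed.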

    And as for the argument that we all learn Universal Grammar equally well, I would say that there is a core of abilities that any human must have in order to operate in society. When someone does not have this ability, it’s labeled as being a disorder. So, we have agrammatics, individuals with anumeracy, and autistic individuals – everyone aside from these special populations is able to learn grammar, math, and social cues to a level beyond a certain minimum (and then there are people whose prose is outstanding, whose math is groundbreaking, and whose social sense projects an easy charisma that instantly wins you over). So there is a gradient of sorts even in achieving “Universal Grammar” – but by conveniently categorizing people who fail to reach this level as having a disorder, and by conveniently categorizing the core that is shared by everyone as “Universal” through broader and broader inter-linguistic comparisons, Pinker and Chomsky can hold on to their fallacious logic that “everyone learns it = innate”.

    Replies
    1. "I find Universal Grammar unconvincing."
      Children can learn language. That's what has everyone floored. Any sort of inductive learning that goes on has to be constrained. Those constraints are what Pinker and Chomsky are referring to when they talk about UG.
      I don't fully understand the poverty of the stimulus argument either, but the fact that kids can make the sort of statistical inferences they do implies there are some crazy constraints in play. Word learning is also a crazy problem: human children solve the vanishing-intersection problem. What programs tell them how to do so? See Quine 1962, Meaning and Translation.

    2. "When someone does not have this ability, it’s labeled as being a disorder."
      "Disorder" does an injustice. Abnormal is the proper word. Everything is on a continuum. When you have seven billion people, and the vast majority of them behave similarly, the phenomena needs an explanation. This is just science. Looking up at the stars and finding a pattern. But we're talking about biology here, so you can have some variance. If UG applies to people, don't expect it to show up each and every time. Proving the extreme cases exist doesn't rid you of your responsibility to prove the phenomena.

    3. I agree that neither Pinker nor Chomsky knows everything there is to learn about language acquisition, but I am inclined to believe them anyway despite the gaps. A counterargument was proposed that children pick up sentence structure by subconsciously assigning true or false values to structures they hear and forming algorithms. These algorithms are internalized and allow for future production of grammatically correct sentences. This was refuted because it was considered impossible that a child could have developed so many formulas, especially when they are confronted with varying sentence structures all the time. New utterances happen every day, and even if a sentence structure is unfamiliar, there is still the chance it will be understood. While this mapping may not be the correct answer, I think Universal Grammar is much closer. Also, children often get stuck in a pattern of mistakes even after they have been corrected by parents, as with the example of "broked", but eventually they self-correct, seemingly regardless of how many times they were told they were wrong.

      I wonder if probabilistic learning takes into account that children are often better at acquiring grammar than the parents they learn it from. Children born deaf often have parents who learn sign language to communicate with them. Even though interaction with parents is (usually) the basis from which we develop language, these children learn a grammar that is more correct than what they have been exposed to. It seems that there could hardly be any other possibility than a Universal Grammar or some other innate capability. The poverty-of-the-stimulus argument supports this, as does the fact that what we hear is not enough to explain how quickly and efficiently we develop our linguistic skills.

      This is a fair point, but is this population large enough to show that Universal Grammar cannot be innate? Linguistic deficits often arise from stroke or brain abnormalities (specifically in the areas associated with language), but had these people developed under different circumstances, plenty of them would likely have developed language like the rest of us.

    4. I have to agree with Corinne on this one. I, for one, am a proponent of the poverty of the stimulus argument. You asked “why must it be impossible that children learn from negative evidence?” Well, I guess because children don’t necessarily need negative feedback to learn how to understand (and perhaps produce) language. If negative feedback were necessary to learn the rules of language, then how do children know that certain rarely uttered sentences (for example, *John asked Mary to look at himself) are wrong? It’s pretty unlikely most children utter, or have heard their parents utter, such examples, yet they know that they are wrong. Doesn’t it seem to make sense that children have some sort of Universal Grammar? There is clearly some part of language that is heritable – otherwise, why is language special to humans? Why can’t that heritable thing be Universal Grammar?

      Elaborating on that, the article gives an example from Stromswold (1994) of a boy who could understand complicated sentences perfectly, even though he was unable to talk. If he cannot produce any utterances, then how could he ever receive corrective feedback?

      Also, when you talk about people with varying degrees of prose ability, I don’t think that has anything to do with Universal Grammar. Universal Grammar is about being able to learn the rules of a language; everyone has different capabilities, but we’re not talking about individual differences here. Universal Grammar is about the parts of language that everyone possesses (thus, universal). Even though people have different styles of speaking, they’re still all following the same grammatical rules. Like both Alex and Corinne already mentioned, a lot of language disabilities, like aphasias, occur only after ischemic strokes and damage to the brain – I daresay that this is not part of a ‘normal’ gradient, and that we are not merely conveniently categorizing people who do not fit the norm.

    5. I agree with Dia; I am not entirely convinced by Universal Grammar. As stated in Pinker’s article, “Synapses continue to develop, peaking in number between nine months and two years (depending on the brain region), at which point the child has 50% more synapses than the adult.” Although I know very little about UG and the poverty of the stimulus, it seems to me that plasticity and synapses could play a huge role in learning a language. 50% more synapses than an adult is an enormous amount. So, like Dia mentioned, it doesn’t strike me as odd that kids are able to learn so much more in 4 years than an adult would, and I’m not yet convinced that it is innate. In part 5 of the text, Pinker gives examples of generalizations we might be expected to make and uses these to explain UG. He states: “In each of the examples, a learner who heard the (a) and (b) sentences could quite sensibly extract a general rule that, when applied to the (c) sentence, yield version (d).” and gives the following example:
      (a) Irv drove the car into the garage.
      (b) Irv drove the car.
      (c) Irv put the car into the garage.
      (d) *Irv put the car.
      He then states that “The solution to the problem must be that children's learning mechanisms ultimately don't allow them to make what would otherwise be a tempting generalization” and “It is because of the subtlety of these examples, and the abstractness of the principles of universal grammar that must be posited to explain them, that Chomsky has claimed that the overall structure of language must be innate, based on his paper-and-pencil examination of the facts of language alone”. Yet to me this is not enough to show that the constraint is innate. The uses of the verbs “drove” and “put” are different: “drove” can be used in the context “I drove something”, whereas “put” is always used in the context “I put something on/in something”. Therefore with “put”, you always need to put something somewhere; you can’t just “put something”. Children might be making generalizations such as: the word “put” must always occur in the context “I put something somewhere”, which is why they wouldn’t say “Irv put the car.” In this case it seems to me that it would be a learned generalization and not an innate one.
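      A toy version of the kind of generalization I have in mind (in Python; the frame labels are invented, and this is only a sketch of the conservative strategy, not a real acquisition model):

        from collections import defaultdict

        frames = defaultdict(set)   # for each verb, the argument frames heard

        def hear(verb, frame):      # positive evidence only
            frames[verb].add(frame)

        hear("drove", "NP PP")      # "Irv drove the car into the garage."
        hear("drove", "NP")         # "Irv drove the car."
        hear("put", "NP PP")        # "Irv put the car into the garage."

        def licensed(verb, frame):  # produce only attested verb-frame pairs
            return frame in frames[verb]

        print(licensed("drove", "NP"))  # True
        print(licensed("put", "NP"))    # False: *"Irv put the car." never heard

      (I admit that a learner this conservative could never use a verb in a frame it hasn't already heard, so something more would be needed to explain children's productive use of new verbs.)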

    6. There have been a few people who have cited the incredible brain plasticity of infants, and the high number of synapses (50% more than an adult!), as arguments against the theory of Universal Grammar and innateness. However, I think that the latter does not disprove the former. Both the anatomical landscape and the knowledge of an innate set of grammar rules may be part of the biological make-up of a child which allow him to learn language. There is clearly evidence that learning a language is much easier as a young child, but learning language cannot be fully dependent on a high volume of brain synapses, since adults can still learn second languages even if they are not as skilled at them. Therefore language learning, while facilitated by synapses and brain plasticity, is not dependent on them. On the other hand, we cannot prove that language can be acquired without an innate knowledge of Universal Grammar, because the nature of the hypothesis is such that we would never have a situation in which we could “remove” Universal Grammar as a variable during language acquisition.

      The argument Dia proposes about the power of probabilistic learning is interesting, and more convincing than arguments about the power of brain plasticity (which even Pinker admits gives children an advantage over adults). I do not know very much about probabilistic theory, but my initial hesitation in accepting this proposal as an alternative to UG is the extremely fast rate at which children learn language once acquisition starts. This snowball effect is difficult to explain, since I can’t imagine that once a child starts acquiring language, the amount of input suddenly increases exponentially the way the child’s mastery of language does.

    7. @Alex

      I'm quoting the response to which I am responding:

      "When someone does not have this ability, it’s labeled as being a disorder."
      "Disorder" does an injustice. Abnormal is the proper word. Everything is on a continuum. When you have seven billion people, and the vast majority of them behave similarly, the phenomena needs an explanation. This is just science. Looking up at the stars and finding a pattern. But we're talking about biology here, so you can have some variance. If UG applies to people, don't expect it to show up each and every time. Proving the extreme cases exist doesn't rid you of your responsibility to prove the phenomena.

      I think I know what you mean but I disagree. Theories have to account for everything; you can't just chalk it up to extreme cases. They need to be explained somehow.

      Assume I have a theory that says all dogs are white, and dogs can only be white. Finding a single black dog, even if I have a million white dogs, suggests my theory is in need of revision. Perhaps I could now state that instead of all dogs having a white phenotype, they all have a white genotype, and something occurred to alter the expression in that one dog. Or any other explanation; but either way my theory would need to be modified to account for this finding.

      To clarify, this says nothing against UG being absent in some abnormal people. That could clearly be due to abnormal cognitive development or a host of other factors. It's just that you made a similar comment, on similar grounds, on my post last week about the language that was potentially an exception to UG.

    8. We do not know what it is like to learn language as a child, and never will. I think that Universal Grammar is our reflexive response to what we see as nothing short of a miracle - but appealing to how amazing it is that children learn language so fast and so accurately does not actually provide evidence for UG. UG is a tempting solution to account for the fact that such a thing occurs, but simply saying "children rarely make these mistakes, so it must be constrained" ignores the fact that a child or infant's cognitive capacities are extremely different from ours, and that these capacities are able to learn a great deal of things, from how to perceive blobs of colour as objects, to subtle social cues, to language. The only thing is that we see language-learning as uniquely amazing (probably because it is so obvious and striking), but these other things as perfectly normal. But consider how amazing it is that children learn anything at all - surely we aren't going to argue that every one of these abilities needs to be constrained by a hard-coded Universal Something? UG might exist, but I'm not convinced by the current arguments.

    9. Dia, I think you underestimate the task of learning a set of rules as complex as UG from positive evidence alone. (Think of the "Laylek" example.) Statistics would help for specific utterances, and for simple, obvious, local rules. But UG applies to all possible utterances. And, no, no one has yet found a UG-specific deficit in 1st-language learners (though they've found lots of other deficits).

      Corinne, child/adult differences and L1/L2 differences are beside the point: No one (and no induction mechanism, statistical or otherwise) can learn UG from the positive-only data said and heard by the L1 child. But it certainly is far from obvious that there had to be a Universal Grammar at all, shared by all languages, let alone an unlearnable one, for there to be language!

      Angela, the uniquely human capacity to learn language is certainly innate, but why should there be an innate, unlearnable UG at all, rather than lots of learnable Penny-Ellis grammars? (A child could understand without talking, even for Penny-Ellis grammars, because it can hear others making mistakes and being corrected -- but probably the child would not be perfect in Penny-Ellis grammars if it never spoke, and might think some utterances are correct that aren't. Maybe it requires less grammatical accuracy to understand than to speak.)

      Catherine, yes, "drove" and "put" differ in the way you note, but UG rules are not local to specific verbs; they apply to kinds of verbs, and the kinds are themselves features of UG. (This is why it's always easier to argue for or against UG without knowing UG...) But maybe there is an opening for semantics (the "non-autonomy of syntax") here...

      Jessica, yes, neither synaptic loss nor the superior L1 learning power of children explains how they -- or any inductive mechanism -- can learn the unlearnable based on the impoverished, positive-only data available to them. (Induction already includes every statistical or probabilistic trick in the book.) The problem is not speed but insufficient data.

      Andras, I think disorder is a red herring here. There are plenty of neurological disorders of grammar, but none of them seems to be UG-specific: They affect Penny-Ellis grammar too.

      Dia, this is not about language learning, but about UG learning. Before Chomsky, it was not even known that there was a UG that all languages obey. Then when it was discovered, and the nature of the UG rules began to be worked out, it turned out that they were unlearnable by the child (or by any inductive mechanism). This is not the "miracle" of language learning but the poverty of the stimulus (for UG-learning), and the "difficult" problem that it leads to: If UG is innate, how did it get there?

      Harnad, Stevan (1976) Induction, evolution and accountability. In: Origins and Evolution of Language and Speech (Harnad, Stevan, Steklis, Horst Dieter and Lancaster, Jane B., Eds.), 58-60. Annals of the New York Academy of Sciences 280.

  5. “Without negative evidence (and even in many cases with it), there is no general-purpose, all-powerful learning machine; a machine must in some sense "know" something about the constraints in the domain in which it is learning.”

    I feel as if Pinker purposely avoided reconciling his “negative evidence principle” with Chomsky’s Universal Grammar (which I don’t think he did a very good job of defining and explaining explicitly). Pinker did argue that humans are born with language-acquisition “cognitive devices”, which explains why children can learn to talk rapidly without explicit and constant feedback. However, children do need at least SOME feedback (his “negative evidence” and “positive evidence”) to set some parameters within language (for kid-sib: by parameters, I mean rules that apply to the language that the child is learning; for instance, in Spanish the subject can sometimes be omitted, which is not the case in English).

    This reminds me of the symbol grounding problem: what is the minimum number of words that need to be grounded directly in order to fully understand our environment (e.g., to ground “zebra”, you need to have grounded “horse” and “stripes” beforehand)? Likewise, what is the minimum of negative and positive evidence needed for a child to learn a language accurately? Pinker perhaps avoided this question because he did not know the answer.

    I would argue that the amount of “evidence” received matters to more than just language acquisition. There are studies demonstrating that children differ in the complexity of their language (the diversity of their vocabulary) depending on whether they come from poor or rich settings. And that difference probably comes from the fact that wealthy parents have more time to educate and converse with their children. So is it that all these children can speak English in a grammatical and accurate fashion, but that those from rich settings merely have “additional glitter and frosting” from having a wider vocabulary? Or are the differences more complex than just “knowing more words”?

    Replies
    1. As I understand it, Pinker reconciles the "negative evidence" with Universal Grammar using the subset principle.

      The subset principle says that children assume that, out of the multiple ways grammar could work, the smallest subset of principles that can explain the grammar is the subset to trust. This means you don't need negative evidence, because when a child gets enough positive evidence to form a rule "A", they also assume that anything that is "not-A" is incorrect. Irregular examples like "broke" instead of "breaked" have to be learned individually on top of these rules, but the general set of rules is learned quickly, with only a relatively small number of examples of correct language.
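      A rough sketch of what I mean (in Python; the "languages" here are just tiny sets of word-order labels, purely for illustration):

        def subset_guess(hypotheses, positive_evidence):
            """Pick the smallest hypothesized language consistent with all
            the positive evidence; later examples can still enlarge it."""
            consistent = [h for h in hypotheses if positive_evidence <= h]
            return min(consistent, key=len)

        fixed_order = {"SVO"}                # fixed word order: smaller language
        free_order = {"SVO", "SOV", "VSO"}   # free word order: its superset

        print(subset_guess([fixed_order, free_order], {"SVO"}))         # fixed
        print(subset_guess([fixed_order, free_order], {"SVO", "SOV"}))  # free

      Guessing the superset first would be unrecoverable: no positive example could ever tell the child to shrink back down to the subset.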

      As far as poor/rich children and vocabulary differences go, I think it is just an effect of "knowing more words". Access to natural language means you can express everything that can be expressed, and the vocabulary differences are peripheral words outside the language's kernel that could be defined for any kid, even if they haven't been yet.

    2. Florence, the problem is learning UG, not learning parameters on UG. Everyone agrees that UG parameters are learnable by induction. It's UG that's not learnable. But you're right that Pinker is a bit vague on negative evidence. In general, induction (of anything nontrivial) requires both positive and negative evidence: what's in the category and what's not in it. The rest is just about how much positive/negative evidence is needed for a particular category learning problem. But with UG, there's no negative evidence at all. (Yes, all learning requires an innate learning mechanism, but that's not specific to UG.)

      All normal children, rich and poor, in all languages, who have heard (and spoken) language up to about age four have a full-blown UG (though probably not a full-blown Penny-Ellis grammar, vocabulary, or pronunciation).

      Joseph, I believe the "subset principle" applies to parameter-learning, not UG-learning. (It probably also applies somewhat to Penny-Ellis grammar-learning.)

  6. “We know that even preschool children have an extensive unconscious grasp of grammatical structure, thanks to the experiments discussed in the previous section, but how has the child managed to go from sounds and situations to syntactic structure?”

    In this article, Pinker explains how children manage to acquire a language without formal education. He talks about the input children receive and the potential algorithms that they may use to acquire the rules of a language. Last year, I took a course on bilingualism and learned about the acquisition of two languages simultaneously. What I took away from the course was that in a simultaneously bilingual child (a child who has received consistent and equal input in two different languages from birth), the grammars of the two languages are kept separate. In other words, the grammar of one language will not influence the grammar of the other. I imagine that the same algorithms (discussed by Pinker) used to acquire one language would also be used in simultaneous bilingualism. I know that the grammars are separate, but how are they kept separate? How does the brain prevent one grammar from influencing another in simultaneous bilinguals?

    Note: From what I learned, events such as code-switching seem to be examples of language overlap (especially code-switching within a sentence or clause), but upon closer inspection, one can notice that the grammatical framework of a sentence or clause will follow the rules of just one language: the words of one language may be used within the grammatical framework of the other.

    Replies
    1. “I know that the grammars are separate, but how are they kept separate? How does the brain prevent one grammar from influencing another in simultaneous bilinguals?”

      Simultaneous bilinguals learn different rules for each language they speak (when the different grammars of each language do not overlap) and probably learn to inhibit the inappropriate rules when speaking a particular language based on contextual cues. Some studies using executive control tasks have shown that bilinguals might be better at inhibiting behaviors that would be inappropriate responses to specific inputs. For example, bilinguals often perform better than monolinguals in Stroop tasks.

    2. The thing that I find most interesting about bilingualism is how children automatically know that the sounds they are hearing belong to two separate languages. It seems extremely complex, yet children learn how to do it easily and relatively flawlessly.

      Bringing up bilingualism also made me wonder something else. How would Pinker explain children who learn a first language by age 3 and then, after repeated exposure to a new language at around age 4 (when the child starts going to school), eventually lose the ability to speak their first language?

    3. I would be curious to know whether a child who loses her first language could still recognize grammatically correct sentences in it, even with little to no vocabulary. As an English speaker, I can still identify whether a sentence sounds right, even when it consists of mostly made-up words. If the child still retained the rules of Universal Grammar as shaped by her first language, in addition to a second set for her second language, I would hypothesize that the child should still recognize grammatically correct basic sentences of the first language, even without vocabulary. Since Universal Grammar is innate, I expect its initial parameters would be retained by the child better than vocabulary.

    4. Angela has brought up a good point, one that I've been wondering about (and have experienced myself). I grew up speaking Chinese and learned English when I was 7 or 8. Since then, I have lost most of my ability to formulate sentences in Chinese, but I can still understand it, at least conversational stuff. When I do try to speak, my mom often corrects my grammar and pronunciation.

      I know quite a few other people who have grown up with one language, learned a second one, then lost most of the first. But they, like me, can still understand what is spoken to them, while having difficulty replying.

      What is the distinction between speaking and comprehension? How can one be extinguished but not the other?

    5. One distinction that might explain the dissociation between speaking and comprehension is that when you are trying to speak in a non-native language, you have to actively inhibit your L1 parameters (or the parameters of the language you are most fluent in). That could potentially prevent you from speaking in this language even if you know which syntactic structures you should use to speak it. However, when listening to your non-native language, you do not really have to inhibit any parameters (except if the same structure is used in both languages but means two different things in each language). You just have to remember what the structures of this non-native language mean.

    6. The thing is, most people are able to distinguish between languages not just by noticing that the words are different but, more noticeably I think, by the phonetics of the languages. For example, the letter r is pronounced differently in Arabic, French, and English. Moreover, some languages make use of different alphabets, and with different alphabets come different letters, in some instances letters that don't exist in other languages, which would account for sounds (for lack of a better word) that wouldn't be heard in one language compared to the other.
      As for speaking multiple languages, I think it is the reinforcement of shifting strategies that enables us to do so. What I mean by that is that in order to speak well in different languages we should reinforce (practice) our ability to apply the right parameters with the right vocabulary. I would say the parameters are still there, even though switching between one and the other could cause problems in speaking; and as for vocabulary, the more you are in contact with a language, the more your vocabulary will be refreshed.

    7. Reginald: Pinker is not careful to distinguish what's true of UG learning from what's true of Penny-Ellis Grammar learning. Some P-E Grammar can be learned from unsupervised learning (co-occurrence statistics), some from trial-and-error induction (pos/neg evidence) without instruction ("formal education"). But UG cannot be learned (by the L1 child) at all, because of the poverty of the stimulus (no neg evidence); so it is innate.

      As to keeping multiple languages (and their UG parameter-settings) apart: Why do you think this would be a problem? To use a musical example, the note F# occurs both in the key of A major and in the key of G minor. So when you sing or play or hear an F#, how do you know whether you are in A major or in G minor? The answer is in the literal meaning of the overused word context: it's because of the notes that came before, and come after, which can't be in both A major and G minor. Ditto for the meaning of a polysemous word (i.e., one with many meanings): "bear" can mean the verb for having children or the noun for the brown furry animal. Which one it means is picked up (and kept up) from context. (By the same token, there are some identical spoken words in French and English -- texte/text for example. So when you hear it or say it, how do you keep track of which language it's in? Context. No mystery. Which is not to say that words from another language don't occasionally slip in by mistake...)
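      A trivial sketch of that use of context (in Python; the little word lists are invented):

        FRENCH = {"le", "la", "est", "tres"}
        ENGLISH = {"the", "is", "very", "a"}

        def language_of(utterance):
            """Guess the language of an ambiguous word from its neighbours."""
            words = set(utterance.split())
            return "French" if len(words & FRENCH) > len(words & ENGLISH) else "English"

        print(language_of("le texte est long"))  # French: "texte" resolved by context
        print(language_of("the text is long"))   # English: "text" resolved by context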

      Angela, Jessica, Jocelyn: There are no rules on how long it takes to forget a language!

      In some ways, it takes less to understand a language than to speak it; and comprehension comes earlier than production.

      Marion, Fouad: Yes, phonology is a big cue to which language you are speaking. But I think the speculation about parameter-inhibition is just that, speculation.

  7. "Possessing a language is the quintessentially human trait: all normal humans speak, no nonhuman animal does."

    I am still struggling to understand some basic assumptions which are essential to many of the arguments of the class, especially with regard to language. Just to be sure I understand what is being assumed, here is a thought experiment:

    Suppose you brought a group of signers ('signers') into an environment of people ('non-signers') who have never encountered anything but their own verbal language.

    To make it even more extreme, let's say that the signers are also visually very different, so it cannot be assumed that we feel what they feel: the signers are somewhat humanoid but also really different (green-looking, 4 arms, 1 eye, 8 feet tall), while the non-signers are physically homogeneous and have never seen, or heard of the existence of, beings that are humanoid but not human.

    Is it assumed that the non-signers, by virtue of possessing language and being able to generalize beyond themselves to other non-signer members, would necessarily be able to detect that the signers are also using a language, albeit a totally different one that they cannot understand and possibly cannot even accurately perceive?

    Replies
    1. That’s an interesting question. In my opinion, the ‘non-signers’ (once they have stopped hiding away from the ‘signers’) would be able to identify that the ‘signers’ have a language, as long as the signing language accounts for all of the intricacies of a language or, if you prefer, of communication. I think the same would happen if we were able to see that when dogs bark they are not simply “being dogs” but are actually communicating with each other. One of the things I think you’re missing is that language is extremely powerful: in the way we have developed it, it has given us a communicative advantage that no other species has, and this has translated into the complicated social structures that we have, and into our technology.

    2. I think that what is telling of communication or language is our ability to 'read' an interaction. If I can recognize some form of back and forth that might even involve some kind of understanding, then I may conclude that they are speaking a language. Yet my ability to discern what I just described, although it doesn't depend on my knowledge of their verbal language, depends on my knowledge of their body language - one that I am familiar with decoding. So if, as ab proposes, they were to communicate in a way that was completely foreign to me (such that I could not even detect familiar body language), I would perhaps have no sense that they were communicating at all.

    3. 1. Distinguish language in particular from communication in general.

      2. The TT (whether T2 or T3) is about language, not just communication. (Hypothetical inter-species or interplanetary communication complicates it needlessly, without providing any insight.)

      3. All languages (vocal or gestural) are completely intertranslatable: you can say (and translate) anything, in any of them.

      4. The reason the TT requires human T3 capacity rather than, say, canine or primate T3 capacity is the uniqueness and power of language, and the fact that we are incomparably better at communicating with and mind-reading humans than other species -- alas for the other species, who are unable to ask us, verbally, not to hurt them. Psychopaths would ignore their suffering even if they could speak, but normal, decent people would treat them far better than they do now.

  8. Steven Pinker argues that humans have an innate capacity for language. He argues that since all humans speak a language and no other species does so, it is a skill specific to humans. But where did we evolve this skill, if even our closest relatives, chimpanzees, are nowhere close to producing a language? And how does a young child learn to speak grammatically? There are infinite combinations of words in a language, and yet a three-year-old is capable of producing sentences that make grammatical and semantic sense. Most skills we learn are developed through laborious trial and error. Learning to handwrite involves many repetitive worksheets in order for a child to learn to trace, then copy, and then produce letters on their own. Pinker does not believe that children experience enough trial and error to come up with grammatical sentences as naturally as they do. When learning a language, children are only exposed to positive evidence, that is, examples of grammatically correct sentences. They are not exposed to any examples of ungrammatical sentences. That they could learn a language solely on the basis of these positive examples seems to indicate that there must be an innate language capacity.

    I don’t find this argument entirely convincing. I have trouble with one of the argument’s premises: that children do not encounter enough negative evidence. Children are exposed to negative evidence in the form of their parents’ corrections. And, yes, children are only exposed to positive examples in the form of their parents’ speech. But it seems plausible that if their parents were to provide examples of what not to say, this would lead to children making more mistakes, as they would then have been exposed to those kinds of ungrammatical sentences. I think negative evidence comes from what is not heard. If a child never hears the sentence “A banana ate I,” they take that as negative evidence and learn that it is ungrammatical. They combine that with the positive evidence from hearing “I ate a banana,” and conclude that our grammar demands the order subject-verb-object, not object-verb-subject.
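    To sketch how sheer absence could function as evidence (in Python; the counts and threshold are invented, and this glosses over how expected frequencies would actually be computed):

      def implicitly_ungrammatical(observed, expected, threshold=10):
          """Treat a pattern as ungrammatical only if it never occurs
          despite being expected often, given the frequency of its parts."""
          return observed == 0 and expected >= threshold

      # "I ate a banana" is heard constantly; "A banana ate I" never is,
      # even though all its words are frequent, so its absence is informative:
      print(implicitly_ungrammatical(observed=0, expected=40))  # True
      print(implicitly_ungrammatical(observed=0, expected=2))   # False: too rare to judge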

    I don’t believe there has been enough research to definitively say that the way children acquire knowledge is so inexplicable by other means that it must be an innate capacity. It’s a bit like saying all objects have the innate capacity to fall down before doing enough research to discover the force of gravity.

    Replies
    1. It's not just that children don't hear negative examples (of UG), they don't produce them either, so there's nothing to correct. (They hear and produce and get corrected plenty on Penny-Ellis Grammar.) And UG is a set of (infinitely generative) rules, not specific cases. So not hearing "XYZ" does not help...

  9. The speed and quality with which children acquire language are miraculous. Language is so complicated, they are so good at it so quickly, and they learn it from scratch. They don't have access to the tools (reading or guided instruction) that adults use to learn skills.

    Pinker argues that children have innate rule-building systems that allow them to accomplish this task. They learn quickly to pick out and categorize phrases and the rules that guide phrase structure. Soon after they learn to combine these phrases into infinite permutations.

    I agree with Pinker (and Chomsky) that there must be an innate universal grammar, for all the reasons put forward in the article: universal features, quick learning, understanding of rule-based quirks that statistically would be difficult to absorb, etc. I am especially curious about what kind of substrate accounts for this innate grammar. There must be physical organization in the brain that corresponds to any innate logical structure, but science probably has a long way to go before we know enough about brain function to make any good guesses.

    Replies
    1. Finding some substrate that accounts for the innate grammar would provide some great evidence, but I don't know if that is or will be possible. Running is innate, unless I have misunderstood what 'innate' is - we know how to run, we can do it, without anyone telling us how. We know running occurs but, as far as I know, there's no 'running' trait in our brains that allows us to say "this is what gives us the instinct to run!" Language learning requires some innate principles to get the ball rolling, but it is experience and input that allow it to stay in motion. I do agree that there is some innate aspect to language, but I also think that a lot of things in first language acquisition that we can't account for (e.g., universal stages) are just lumped into this category, as there is yet to be a better explanation.

    2. I'm in agreement that the speed with which children acquire the capacity for language is incredible. I think this goes far in suggesting an innate basis for this skill. (And this is in accordance with some of the facts that are discussed, such as there being an exceptionally high number of synapses in the brain of a child, with synaptic pruning occurring as development progresses, so that we have fewer synapses as we mature.) I think this is very interesting, and it almost seems overlooked by the studies that I've read on it. I would expect facts like this to be more consequential and more observed.

      However, regarding what you state here:

      "I agree with Pinker (and Chomsky) that there must be innate universal grammar for all the reasons put forward in the article: universal features, quick learning, understanding of rule based quirks that statistically would be difficult to absorb etc. I am especially curious about what kind of substrate accounts for this innate grammar. There must be physical organization in the brain that corresponds to any innate logical structure, but science probably has a long way to go before we know enough about brain function to make any good guesses."

      I disagree. I don't think there necessarily has to be a specific physical structure which underlies our capacity for universal grammar. I don't see evidence of this, and I don't think it's sufficiently plausible to speculate. If there were cases of localized brain damage where the patient was seemingly making errors of universal grammar but was still able to converse or read or write to some degree, I would be more inclined to accept this suggestion.

      As it is, I suspect there may be regions that are more specialized in supporting the capacity for universal grammar, the same way certain broad regions support other specific cognitive functions (such as speech, writing, or vision) -- but no modular, UG-specific neural region.


      Finally, I agree that science still has a long way to go, both regarding this and other aspects of brain function.

    3. Vivian, language is not like running: it is the capacity to produce and understand a potentially infinite set of propositions. We learn to run, but the machinery is innate. What is the machinery of language? We learn pronunciation, vocabulary and Penny-Ellis Grammar, but not UG.

      Andras, there are no UG lesions; but if you need to know your UG to produce and recognize UG-compliant utterances, and we can, and UG is unlearnable, then there is no choice: it must be innate, unless UG is an inescapable property of language itself.

  10. In questioning the mechanisms behind language acquisition, trying to understand how it relates to or differs from other cognitive processes is key. Essential to studying the importance of language is looking at how it might be related to thought. Pinker makes the point that a lot can be learned about UG from studying the inputs and outputs of the system. While this is fair enough, I think the really difficult task is trying to understand how UG even came to be formed in the first place: how does it fit into our accepted notions of biological evolution, and what does the evolution of UG say about our species?

    "Hence language acquisition depends on an innate, species-specific module that is distinct from general intelligence."

    While studying inputs and outputs may help us get a better idea of how the innate, species-specific module functions, I ultimately think more effort needs to be placed on the structure within the brain itself to understand its evolutionary significance and how it shapes how we acquire language.

    "Presumably language evolved in the human lineage for two reasons: our ancestors developed technology and knowledge of the local environment in their lifetimes, and were involved in extensive reciprocal cooperation. This allowed them to benefit by sharing hard-won knowledge with their kin and exchanging it with their neighbors (Pinker & Bloom, 1990)."

    I think the emphasis on language acquisition in children is crucial not only because it demonstrates the patterns of language development; analyzing the brain's structural changes may also shed light on the early stages of the evolution of the language "organ" in the brain.

    "Language acquisition is so complex that one needs a precise framework for understanding what it involves -- indeed, what learning in general involves."

    More emphasis needs to be placed on studying language acquisition in children of different languages; most studies involve English-speaking children, which might bias the generalizations made about language acquisition.

    ReplyDelete
    Replies
    1. Pinker writes, “So even if language acquisition, like all cognitive processes, is essentially a “black box”, we know enough about input and output to be able to make precise guesses about its contents”.

      I do wonder how precise Pinker expects to be able to get. Is he referring to the level of neuroscience, or does he just hope to continue with theoretical frameworks for the mechanism underlying language acquisition?

      You emphasize the importance of studying actual brain structures to understand how evolution has shaped our language abilities; however, I struggle to see how studying the neuroscience will help at all.

      If the central debate is over what aspects of language are innate versus learned, then input and output seems sufficient to trace the process of acquisition. We can identify the areas of the brain that are necessary for speech and hearing, however this capability seems separate from the actual question of where language abilities lie. Determining the genetic underpinnings of UG seems to be beside the point. It is enough to know that it is innate, and that much we have learned by studying input-output relationships.

      Delete
    2. What Pinker means is that UG explains what the child needs to know in order to be able to learn (any) language.

      Delete
  11. “For example, children can learn a language without the special indulgent speech from their mothers; they make few errors; and they get no feedback for the errors they do make. And it can't be an across-the-board decline in learning. There is no evidence, for example, that learning words (as opposed to phonology or grammar) declines in adulthood.”

    I think that this is one of the most important points Pinker makes about acquisition - if one is going to go the critical period route (that the ability to acquire language declines after a certain age), then it is important to specify that there are multiple critical periods for different aspects of language. The critical period for phonetics and phonology - the sounds and contrastive sounds of a language - closes early on in comparison to the one for learning words.

    As a linguistics student, I find that it’s really common to hear a sort of mantra that “if it’s not from UG, it’s memorized.” This extends to the lexicon, which explains why Pinker argues that learning words (essentially just memorization) has no critical period in the same way that phonology and syntax do.

    While I personally feel that UG does not have to be the “you’re born with it or you memorize it” mentality often presented to undergraduates in linguistics, I do believe that there is a strong motivation to consider language as a module within the brain and within our cognitive capacity. Language acquisition itself could warrant its own module, separate from the language module, which develops along with the child’s cognitive abilities. I think this explains why kids undergenerate determiners and auxiliaries at first, but then gradually incorporate them into their grammar as they get older:

    2;3: Play checkers. Big drum. I got horn.
    3;1: I like to play with something else. You know how to put it back together. I gon' make it like a rocket to blast off with. You want - to give me some carrots and some beans? Press the button and catch - it, sir. Why you put the pacifier in his mouth?

    How else could such dramatic changes happen in such a short amount of time? A language acquisition module in the brain, coupled with UG in the language module, could be the mechanisms that drive this rapid development.

    ReplyDelete
    Replies
    1. Learning words (categories), whether via sensorimotor induction or verbal instruction, is certainly not just "memorization."

      Delete
  12. Pinker’s support for a biologically hard-wired language-learning algorithm:
    - It makes evolutionary sense
    - You can have complete language ability even with low general intelligence (e.g. Williams Syndrome)
    - There is a strict window for acquiring native fluency in a language
    - Feral children aside, everybody becomes proficient in at least one language
    - From learnability theory: a general learning algorithm would need a lot of negative evidence to figure out any specific grammar, but it does not seem that we get that negative evidence

    “Here is the most basic problem in understanding how children learn a language: The input to language acquisition consists of sounds and situations; the output is a grammar specifying, for that language, the order and arrangement of abstract entities like nouns, verbs, subjects, phrase structures, [...] Somehow the child must discover these entities to learn the language.”

    Here is the most basic problem with this picture: we are assuming that language somehow exists independently of its speakers. So there is a thing that is English, and the child somehow has to learn it. We are not talking about the child's contribution to language, and that makes the origin of language a complete mystery. How did we get language in the first place if the only thing children are doing in 2015 is learning it?

    We know that language is there, that it's only spoken by people, that you cannot find it anywhere there are not at least two people, and that when there are at least two people, language is most certainly there. Therefore, we should not ask how language is learned by an individual, but how it emerges from the collective behaviour of a dyad or a community. In other words, we have to be open to the possibility that UG is not something individuals have, but something that emerges from the non-linear interactions of what individuals can do. For example, when I dance with somebody, the set of possible movements is not merely the sum of our respective repertoires, but a lot more.

    ReplyDelete
    Replies
    1. The issue here is not language but grammar -- and not Penny-Ellis Grammar, which is learned, but UG.

      Delete
  13. “Language acquisition is one of the central topics in cognitive science. Every theory of cognition has tried to explain it; probably no other topic has aroused such controversy. Possessing a language is the quintessentially human trait: all normal humans speak, no nonhuman animal does. Language is the main vehicle by which we know about other people’s thoughts, and the two must be intimately related”

    The opening of this article by Pinker inspires some of the same thoughts as I had after last week’s readings, namely where we draw the line in what is “language” in cognitive science. I appreciate the considerable differences between human language and nonhuman animal communication, but in terms of the issues of cognitive science, are the two really so dissimilar? Nonhuman animals communicate with sounds, and this is the main vehicle by which they communicate and understand others’ thoughts. Therefore, language (or communication) and knowing the thoughts of others are intimately related in both humans and nonhuman animals. Studying language acquisition in humans is considerably more complicated and challenging than in animals, because you can’t perform the same kind of isolation experiments with humans. However, since we can perform such elaborate experiments on animal communication, would it not be valuable to be consulting animal models in understanding the cognitive science of human language? Advances in neuroethology have shown us the exact extent to which certain animals’ communication systems are innate vs learned; could these not be valuable analogies to human language acquisition? I think the line drawn between human “language” and animal communication is much too decisive, and prevents us from consulting potentially valuable models.

    ReplyDelete
    Replies
    1. I really agree with your point and would be interested in learning more about the results that have come from animal models. Animals indisputably communicate in ways that often parallel human communication. Perhaps the issue is that we don't know what it is like to be an animal, so we would never know the relationship between language and thought in animals. In humans, we know that language use reflects our thoughts, and some conjecture that language merely exists as a representation of propositional beliefs (Pinker & Bloom 1990). If we do not understand the minds of animals first-hand, can we draw conclusions that are relevant to humans?

      Delete
    2. Julia, the difference between language and other forms of communication is that in (any) language you can express every possible proposition. (That does not mean that studying animal communication is not relevant to understanding language. -- But it may be less relevant to understanding UG.)

      Delete
    3. While I certainly believe in the potential value of consulting animal models in modelling human language, I very much understand this point about animal communication not being very relevant to understanding universal grammar specifically. Perhaps if we used animal models of communication to better understand human language acquisition in a very general way it could provide a more comprehensive base on which to build our knowledge of more specific aspects of human language.

      Delete
  14. “…the progressively widening bottleneck for early word combinations presumably reflects a general increase in motor planning capacity. Conceptual development (see Chapter X), too, might affect language development: if a child has not yet mastered a difficult semantic distinction, such as the complex temporal relations involved in John will have gone, he or she may be unable to master the syntax of the construction dedicated to expressing it.”

    Language acquisition occurs during the same period as the maturation of many other cognitive capacities. The above quotation from Pinker's paper, supporting a biologically hard-wired language learning system, conveys the idea that progress in the ability to produce and comprehend language depends on and is intertwined with the development of other intellectual skills. This is something that I agree with. Although hydrocephalic children with very low IQs but eloquent language skills, and stroke patients with dramatically reduced language abilities but intact IQs, show that language ability and generalized intelligence are distinct and separate things, I think that acquiring language and acquiring information-encoding capabilities are dependent on one another. Pinker hypothesizes that the overproduction of new synapses in young children may be a requirement for babbling, first words and early grammar, but isn't this overproduction of synapses in anticipation of environmental experience a more general process that characterizes the early development of cognitive abilities and species-typical behaviours? If so, the overproduction of synapses would be just as useful for memory development (which is important in language acquisition because it is what eventually allows children to stop making overgeneralizations like 'breaked') as it is for babbling.

    ReplyDelete
    Replies
    1. Distinguish questions about language in general from questions about UG in particular.

      Delete
    2. Universal grammar explains the "how" part of a huge chunk of our cognitive capacities (namely language). I think what I was trying to get at in my previous commentary is that UG seems to be entwined with other cognitive capacities, in the sense that UG contributes to cognition and some aspects of cognition contribute to UG. UG is the hard-wired device that allows us to learn the grammar of whatever language we're surrounded by as we grow up, but it's not a completely isolated aspect of cognition. Our growth in grammatical ability is synchronized with the development of things like motor planning and conceptual development. In addition, things like the categorization of objects and events are central to generalized cognition as well as to UG.

      Delete
  15. In the paper Language Acquisition, Pinker tries to explain how we acquire language by analyzing children's language learning process. He argues that language is learned through inputs, and he talks about positive and negative evidence, which I find interesting; it reminds me of last week's reading.

    I agree with Pinker that language is learned, because we are not born with the ability to speak, but I also agree with Chomsky that Universal Grammar is innate, which I argued last week as well. I think Pinker left out Universal Grammar in last week's reading, and I have a similar impression here: when Pinker discusses positive and negative inputs, he seems to think that positive and negative evidence are both necessary for kids (section on learnability), but he later mentions research results indicating that negative evidence is not really needed (section Negative Evidence). I personally think that this discrepancy exists because of the differences between normal grammar and Universal Grammar. I believe that in order to fully develop normal grammar, positive and negative evidence are both important. Universal Grammar, however, is totally different from normal grammar (honestly, I do not know exactly what Universal Grammar is, but it seems to be present in every language), and it is unlikely to come from negative evidence, because we do not learn Universal Grammar; we do not learn it by experiencing it with feedback or with someone telling us we are wrong (that is normal grammar, such as tenses and plurals). If Universal Grammar is innate, then what we say is already right in Universal Grammar, and what we hear from other people should already be right in Universal Grammar as well. In this case, does this mean that, because of Universal Grammar, some parts of language acquisition do not require negative evidence? I could not say that we do not need negative evidence at all, because I think that when my mom or my teachers corrected me when I said something wrong, it meant something in my language learning process; but it does seem true, from the research Pinker mentions in his paper, that sometimes we do not need negative evidence. I would attribute this characteristic to Universal Grammar.

    Another thing about Pinker's paper is his argument about parts of speech, and I can mostly agree with him. He thinks that generally we just look at a word or phrase and see which part of speech it is. I think this is a process of categorization: we categorize words and phrases into different categories with distinct functions in sentences. I think this is pretty much right, and so far I cannot think of a better way to explain how we understand words and phrases, especially the recursive phrases in a sentence. Linguists developed X-bar theory, which is basically a way of categorizing, but with more constraints and more rules to follow.

    ReplyDelete
    Replies
    1. We learn ordinary grammar by induction (with + and - evidence and correction) or by instruction. We "learn" UG from + evidence only - no errors, no correction, no instruction. So Chomsky's conclusion is that UG is innate.

      Yes, we need semantics to understand parts of speech. But perhaps syntax (or even UG) is more "semantically penetrable" than just that.

      Delete
  16. "Hence language acquisition depends on an innate, species-specific module that is distinct from general intelligence."

    I think Pinker should clarify what he means by general intelligence. If what he means by it just extends to how fast our brains work and how we associate different inputs coming in from the outside world, I would really agree with what he said. The reason is that, as he has written in his paper, we acquire language by picking out phrases, categorizing some words from them, and, through some algorithm, deducing the parameters pertaining to the language in question. I would argue that all this deduction and categorization is, in some sense, a form of intelligence. As we know, some people have a sort of "talent" for languages and end up learning 10 or more languages in a lifetime, which also made me doubt what he said in the beginning about how we can't learn a language as perfectly as when we are kids. He might be right about the speed of learning, but some people do learn to speak a specific language perfectly, with the right intonation and accent, even after they have grown up.
    What I wanted to say is that, since there seems to be something like "language intelligence", how is the acquisition of language not linked to general intelligence?

    ReplyDelete
    Replies
    1. I think Pinker means that UG (not language) is not learned by induction or instruction (the two vehicles of "general intelligence").

      Delete
  17. “But language acquisition has a unique contribution to make to this issue. As we shall see, it is virtually impossible to show how children could learn a language unless you assume they have a considerable amount of nonlinguistic cognitive machinery in place before they start.”

    But what type of machinery? Does this innate machinery influence how we think, and thus also how we represent our thoughts in language, or just how we learn to represent our thoughts in language? I can't help but assume language isn't just "grafted on top of cognition as a way of sticking communicable labels onto thoughts". It might have been initially, if the evolutionary pressure which selected for language-acquiring abilities was based solely on sharing information between people. It seems quite obvious that, in our present state, our language abilities also help improve thinking. Is deductive thought even possible without language? At the least, language seems to help with memory as a sort of chunking mechanism. When calculating something like 13.5 x 142 without the help of external memory aids, it seems evident that language provides an extra internal memory holder. I'll multiply 13.5 by 100, get 1350 and maintain that while multiplying 13.5 x 42... while still repeating 1350 to myself, I will then multiply 13.5 x 40 and add 2 x 13.5... and so on until I arrive at my answer. It seems this same advantage would apply to chunking people, relations, events, implications, etc.
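    For concreteness (my own completion of the commenter's example, not from the text), the chunked calculation works out as:

    13.5 x 142 = 13.5 x 100 + 13.5 x 40 + 13.5 x 2 = 1350 + 540 + 27 = 1917

    Each intermediate product is held as a verbal chunk while the next one is computed.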

    2.2 Dissociations between Language and General Intelligence

    The double dissociation between language disorders and general intelligence disorders, to me, seems like the most damning evidence in favor of the framework that posits language as a special adaptation that cannot simply be assumed to come into being through greater general intelligence. Maybe Putnam's view that "general intelligence [tries to solve] the problem of how to communicate with other humans over the auditory channel" could be how the Baldwin Effect initially got rolling and led to evolved, specialized, innate language abilities. (Putnam, 1971; Bates, 1989)

    4.1 Learnability Theory

    "Don't giggle me" is used as an example of a linguistic phenomenon that, given the lack of negative feedback, has no good reason to stop.

    Maybe the concept associated with the word "giggle" simply hasn't been refined yet. Maybe it's being over-extended as a verb that can describe both someone doing the action of giggling and the action of inciting giggling (with the "make me" being implicit in the mentalese but lacking from the available language repertoire). Maybe it's not so much negative feedback that's lacking, but rather better alternatives that have yet to receive enough positive feedback. Or maybe the child is confusing "don't tickle me" with the unwanted consequence of "giggling", a sort of slip of the tongue that even adults commonly make. I find myself doing these quite often (ex: you mean to say "edited" and/or "annotated" but you say "editated"). These types of errors should almost be expected in a framework where words and sentences are simply the winners in the present context of activation.

    (CONTINUED IN REPLY)

    ReplyDelete
    Replies
    1. 5 What is Learned

      “For example, in English one can say Darwinisms (derivational -ism closer to the stem than inflectional -s) but not Darwinsism. It is hard to think of a reason how this law would fit in to any universal law of thought or memory…”

      I would suggest the derivational affix is closer to the stem because, in most cases, that ordering follows from a universal law of thought: convey information as efficiently as possible. Most "ism"s are connected to nouns or adjectives. Attaching an "ism" to the plural of a noun would be redundant, while there simply aren't plural adjectives. Normally "ism"s are attached to words to express a system of beliefs or practices, so saying "terrorsism" would simply be interpreted as meaning the same thing as "terrorism" by a person who had previously acquired the words "terror" and "terrors". Even the example of "Darwinsism" makes this point, since it clearly is an exception to the rule! If the brain is good at detecting patterns, I can't help but think ignoring exceptions is an inevitable consequence, since the world isn't nearly as neat as we'd like it to be.
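      A minimal sketch (my own toy, not Pinker's formalism) of the ordering law under discussion: derivational affixes like -ism must attach inside (closer to the stem than) inflectional ones like -s, so "Darwin-ism-s" is well formed but "Darwin-s-ism" is not:

```python
DERIVATIONAL = {"ism"}
INFLECTIONAL = {"s"}

def well_formed(affixes):
    # Once an inflectional affix appears, no derivational affix may follow.
    seen_inflection = False
    for a in affixes:
        if a in INFLECTIONAL:
            seen_inflection = True
        elif a in DERIVATIONAL and seen_inflection:
            return False
    return True

print(well_formed(["ism", "s"]))  # True  ("Darwinisms")
print(well_formed(["s", "ism"]))  # False ("Darwinsism" is ruled out)
```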

      7 What and When Children Learn

      It is claimed that the Minimal Distance Principle isn't applied when children understand "Mary was told by John to leave". This seems perfectly reasonable, but if we assume that, in mentalese, a child really understands this as "John told Mary to leave", would the principle still apply to the deeper structure?

      Maybe this could be revealed through some kind of recognition test which would show a visualization of the scenario alongside false variations. So, you’d have something like a picture/drawing of a boy looking meanly at a girl while pointing away as the correct answer with one variation on the same theme (girl and boy flipped) as the incorrect answer. The question would be, are the reaction times in the correct answers slower for “Mary was told by John to leave” or “John told Mary to leave”? If it’s the former, this would seem to make sense.


      And finally, we know we can think without language, but am I (and others possibly) using mentalese in a greedy way, hiding issues in a posited ~thing~ that we have barely any access to without recourse to the surface structure of living languages? Essentially, is positing rules in mentalese a dirty trick used to escape the burden of falsification?

      Delete
    2. Pinker is not careful enough in sorting out what is true about UG and about learning ordinary (Penny-Ellis) grammar. He is also not careful to distinguish when he is speaking about language in general or about UG in particular.

      Delete
  18. I find it interesting that Pinker brushes over this experiment by saying that speech doesn’t necessarily correlate with the context in which it is said. A more interesting interpretation of this experiment would be that children don’t learn language without some sort of feedback. By this I don’t mean that parents saying “That sentence was ungrammatical, Timmy”, but the much more subtle cues present in every interaction, such as emotional reactions of parents to their children’s verbal behavior, or generally contextual features of speech.

    As we’ve learned in previous weeks, feedback is crucial to cognition, with sensorimotor interaction being the basis for learning about the world. It would not be implausible that the same mechanism responsible for categorization, which requires the abstraction of features from objects, is responsible for learning grammar, which is in essence a set of categories derived from the abstraction of structures in sentences.

    ReplyDelete
    Replies
    1. The quote didn't make it through, but it should have read: "Children do not hear sentences in isolation, but in a context. No child has learned language from the radio; indeed, children rarely if ever learn language from television. Ervin-Tripp (1973) studied hearing children of deaf parents whose only access to English was from radio or television broadcasts."

      Delete
    2. A lot of misunderstanding comes from not distinguishing UG from ordinary grammar, and other aspects of language, such as pronunciation and vocabulary.

      Delete
    3. “It would not be implausible that the same mechanism responsible for categorization [learning based on sensorimotor interactions], which requires the abstraction of features from objects, is responsible for learning grammar, which is in essence a set of categories derived from the abstraction of structures in sentences.”

      Nick, I don't disagree that contextualized, semantically based learning helps systematize language. As Pinker notes, children can "use semantic properties of words and phrases as evidence that they belong to certain syntactic categories", such as noun, verb, auxiliary, noun phrase, verb phrase, and so on. But (to me at least) UG would dictate that language input to a child, regardless of its syntactic form (correct or incorrect), when consolidated with semantic context, would always result in UG-conforming output. If I understand correctly, providing a model child, for the entirety of its development, with only grammatically incorrect phrases such as "the baby seems sleeping" or "John liked Mary's pictures of himself", in certain relatable contexts, would still provide knowledge of the context, but wouldn't result in the child verbalizing incorrect grammar. In this theoretical design (which I admit is highly unethical), fundamentally incorrect syntax would convey meaningful information, yet the child's output would say nothing about the relationship between the non-UG-abiding abstractions and the semantics conveyed in the input. In conceptualizing UG, then, it seems as though context helps establish the malleable pieces within the framework of UG structure, which must be governed by an independent, innate mechanism. Regardless of whether UG results from our genes directly or from the developmental pruning of synapses, in agreement with your prior point, it seems as though communicated sensorimotor units are fundamental for the eventual expression of grammatical language. This is exemplified by the cases in which children who lack language exposure grow up to be mute, yet might still have some innate grammar that cannot be verbalized in speech. Better yet, the children who develop original, UG-compliant languages solely by communicating simple non-lingual words with other children show how unnecessary full syntactic input really is.

      Delete
  19. “In other words, without negative evidence, if a child guesses too large a language, the world can never tell him he’s wrong.”

    Pinker’s paper supports linguistic nativism—he thinks that there are some things that kids know about the structure of language that they didn’t learn from experience, but that they were born knowing.

    The idea of negative evidence is really important to demonstrating Pinker's claim. The idea is this: kids might learn which words and sentence structures are part of English (or whatever language is their mother tongue) by listening to their parents, their family etc. speak. For example, a kid will learn that "car" is a word and that it means a car by watching his mum point at a car and say "car." A kid could learn that "Mom drives the car" is a sentence in English after hearing his father say it. But learning a language isn't a matter of memorizing every single sentence that is sayable in the language, because there are unthinkably many sentences, and every day I can make one up that has never been said before! So kids probably generalize from one sentence to another. After hearing their parents say "Mum drives the car" and "Dad drives the car," they might learn that they can say "Grandma drives the car" or "Grandpa drives the Zamboni." But they might also make generalizations that are wrong! In other words they might "guess too large," and assume that there are sentences in English that are not actually in English (ex: Who did John see Mary and?). So kids would need negative evidence to learn a language; they would need to know which sentences can't be said. The problem is that this evidence seems to be lacking. The empirical studies that Pinker discusses show that parents don't really correct the grammatical errors their kids make while they are learning their first language! "If children don't get, or don't use, negative evidence, they must have some mechanism that either avoids generating too large a language or that can recover from such over-generation." It is because children don't get negative evidence that Pinker argues that the rules of Universal Grammar are innate. There are some things about language that we don't learn from experience.
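    A minimal toy sketch (my own, not Pinker's; the miniature "languages" here are hypothetical finite sets standing in for infinite grammars) of why positive evidence alone can never rule out an overgeneral guess, and how guessing the smallest consistent language avoids the trap:

```python
# Two nested hypotheses: everything in L_small is also in L_big,
# so every sentence the child actually hears fits both.
L_small = {"mom drives the car", "dad drives the car", "grandma drives the car"}
L_big = L_small | {"who did john see mary and"}  # the overgeneral guess

heard = ["mom drives the car", "dad drives the car"]  # positive evidence only

print(all(s in L_small for s in heard))  # True -- consistent
print(all(s in L_big for s in heard))    # True -- also consistent, forever:
# only being told that a sentence is *not* in the language
# (negative evidence) could ever disconfirm L_big.

# The subset-style remedy: among consistent hypotheses, guess the smallest.
def guess(hypotheses, data):
    consistent = [h for h in hypotheses if all(s in h for s in data)]
    return min(consistent, key=len)

print(guess([L_big, L_small], heard) == L_small)  # True
```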

    ReplyDelete
    Replies
    1. That's right. The only trouble is that in Pinker's chapter he does not make a clear enough distinction between Universal Grammar and Penny-Ellis Grammar (where there is both positive and negative evidence). And some of the examples he uses are not UG but P-E...

      Delete
  20. I would like to give a few reasons why I think the claim about UG might not be valid:

    Firstly, there are a lot of other factors that affect babies' learning of language. As mentioned in the article: no interference from another language, a greater tendency to conform, and rapid changes inside the brain. It could just as likely be the aggregation of those factors that creates an illusion of Universal Grammar.

    Secondly, if UG exists, it should always be there. (The reason is simple: UG is just grammar, and we never lose that grammar, otherwise we couldn't talk anymore.) And adults should still be able to take advantage of it just as babies do. But in fact adults learn languages much more slowly than babies, with much less efficiency, and they never reach first-language level. So UG cannot be the explanation that sets babies' acquisition of language apart from adults'. In this case, UG is again beside the point with respect to babies' special ability for language learning.

    Another doubt arises from the fact that there are languages, namely Pirahã, that do not seem to follow universal grammar. Chomsky's reply (via Wikipedia) to this is that recursion is an innate capacity which, in this specific case, is not shown in the language; but it is still there. In other words, UG is not gone; it's just hidden and not expressed in Pirahã. But the question is where the Pirahã people's innate knowledge of UG came from. What I mean is, they must have acquired UG somewhere in the long history of evolution (otherwise they couldn't learn UG-compliant languages). If we assume that this UG came from the extensive usage of a UG-compliant language, then it doesn't make sense that they would "downgrade" their old language to today's Pirahã, which is, in some sense, "inferior" to UG-compliant languages. So does that mean UG is a more general capacity that exists before we start using language? Then the evolutionary view of language acquisition through adaptation must be wrong, and it could be UG that led to our ability to learn language, which seems hard to accept.

    Also, as mentioned in the video, UG is probably incomplete. A set of inborn grammar rules is too arbitrary. Whenever we find a rule that cannot be categorized as "conventional grammar" because of poverty of evidence, we just say that it is universal grammar. But then you can always find yet another rule that has to be universal as well.

    But is UG there at all, or is it just an illusion beneath which lies babies' amazing ability to learn under poverty of evidence? I don't know. But I would look for more studies on babies' ability to learn other things under poverty of evidence.

    ReplyDelete
    Replies

    1. Enting, don't believe everything you hear about what you can or can't say in Pirahã. And although UG includes recursion, UG is not the same thing as recursion. Counting and number names can be added to any language: it's just more vocabulary plus calculation skills. Recursion is probably there in Pirahã; if not, it is easily added. And even without it there's nothing you can't say in Pirahã (or any other language)...

      Delete
    2. AR: If bees were interested in anything besides honey, would they be able to use their communication like language?

      SH: No. Their code (and their capacities) are not like Pirahã (and human language capacities), which allow you to say anything and everything that can be said. (But human language could integrate the waggle-code for location and direction, transformed into words…)

      AR: What makes Fodor's idea about an innate category organ so ridiculous? Why can't we liken the genesis of our ability to categorize to the big bang?

      SH: Our ability to categorize (and learn categories) evolved, like walking and the heart, and has plausible, ordinary evolutionary Just-So stories: no need for a Big-Bang theory.

      Besides, what Fodor said was innate was categories, not our capacity to learn them. If that had been true, it would have meant a poverty-of-the-stimulus argument for category learning. But there is no PoS for category learning. It's perfectly ordinary trial-and-error induction, with plenty of positive and negative evidence.

      Delete
  21. “Before children have learned syntax, they know the meaning of many words, and they might be able to make good guesses as to what their parents are saying based on their knowledge of how the referents of these words typically act (for example, people tend to eat apples, but not vice-versa).”

    This reminds me of our account of categories–doing the right thing with the right kind of thing. Which makes sense, because this example is about learning the name of a category. But this example shows that “doing the right kind of thing” might be critical for transmitting the category to others, not just for demonstrating knowledge of that category. When the knower of a category demonstrates the action that typifies a category, the learner sees a direct example of what that category is, and what they should do to demonstrate knowledge of that category.

    ReplyDelete
    Replies
    1. Showing (miming) is not telling (language). But, yes, knowing the meaning of words can help you understand a (short) sentence even without the help of syntax. But not a long sentence; nor any and every possible sentence (proposition).

      Delete
  22. “Children clearly need some kind of linguistic input to acquire a language. There have been occasional cases in history where abandoned children have somehow survived in forests, such as Victor, the Wild Boy of Aveyron… Occasionally other modern children have grown up wild because depraved parents have raised them silently in dark rooms and attics…The outcome is always the same: the children, when found, are mute. Whatever innate grammatical abilities there are, they are too schematic to generate concrete speech, words, and grammatical constructions on their own.”
    Pinker is discussing how children do need at least some kind of exposure to language in order to produce it; mere exposure, however, is not a sufficient criterion. I think that exposure may be necessary but not sufficient for language acquisition. With the case of the Wild Boy of Aveyron, we cannot assume that he did not have some sort of mental disability present before he was subjected to total isolation in the forest. Also, with feral children in general, it may be that they are unable to learn human language after isolation because they have missed the critical period for language acquisition. That is, however, only if we accept the critical period hypothesis, which posits that there is a time period during which language acquisition is much easier in a linguistically rich environment. This hypothesis has been the subject of a lot of debate, although the case studies of feral children and second-language acquisition suggest that there is an ideal period for language acquisition in young children.
    Most studies on feral children in the 20th century suggest that they were mentally and/or physically disabled at birth, but we cannot infer causality if we have not actually determined the temporal ordering of the isolation and the disabilities. Perhaps our human language acquisition capabilities are largely innate, but we need a lot of linguistic and social input from our environment in order to activate them? That seems to be the most realistic explanation.

    ReplyDelete
  23. Language (not communication) is a human-specific capacity that relies on brain structures we do not share with our ancestors. It develops with the maturation of our brains, with a peak at around 4 to 6 years old, which is why kids learn languages more easily (helped also by the fact that there isn't another language competing), and language abilities do not depend on general intelligence. The aim of the article is to assess the process of language acquisition with the tools of learnability theory (target, environment, strategy, success criterion). Linguistic research and theory have confirmed and explained the grammatical complexification of children's language in their first years and have identified (somewhat) universal patterns of development. It is thought that most of children's learning is due more to positive evidence of the language spoken in their community, paired with its context, and less to negative feedback on errors, baby-language lessons or prosodic elements. The article also defends the Chomskian view that there are some grammatical rules every child is born with (UG) and that their linguistic and visuo-sensory experience will settle into a specific form of language in their mind according to a subset of those rules. For example, children pick out SVO structure through repeated exposure to utterances of this type of sentence (e.g. 'The cat eats the fish', 'The baby is in the bath') in context, where they can clearly identify that a subject is undergoing some process with (or without) an object.

    ReplyDelete
  24. Pinker provides arguments and data to convince the reader that we are born with universal grammar. One way that he describes this phenomenon is "the allowable mental representations and operations that all languages are confined to use." Some of his many arguments include:
    -The predictable rate and pattern at which children develop their language skills
    -The correlation between this rate/pattern and the development of the brain and its information-processing abilities
    -The fact that learning language cannot depend on correlating sound and meaning
    -Universal error patterns
    -A variety of studies showing young children's mastery of complex grammar (Pinker discusses data showing that children construct complex sentences using almost all of what is required in adult grammar by the age of 4, and that they accurately use grammatical rules 90% of the time, no matter what language is being used.)

    The point that Pinker focuses on in this paper is the lack of negative evidence (direct correction by a parent when the child makes a grammatical error) that children are exposed to. This leaves a large hole in learnability theory. This theory argues that the successful learning of language rests on three assumptions: that there is a specific language that the child is aiming to learn, that the child is in an environment where he/she is exposed to linguistic information, and finally, the one that comes under threat, that there is a learning strategy (which could be described as the process followed in order to develop language).

    About the lack of negative evidence, Pinker states:

    "This has several consequences. For one thing, the most general learning algorithm one might conceive of -- one that is capable of hypothesizing any grammar, or any computer program capable of generating a language -- is in trouble. Without negative evidence (and even in many cases with it), there is no general-purpose, all-powerful learning machine; a machine must in some sense "know" something about the constraints in the domain in which it is learning."

    He argues that if children make so few grammatical mistakes, despite the fact that they are not directly being told when they are making one, there must be something that we are born with that guides us in the right direction.

    "Many universal properties of language are not specific to language but are simply reflections of universals of human experience." (...) "The theory of universal grammar is closely tied to the theory of the mental mechanisms children use in acquiring language; their hypotheses about language must be couched in structures sanctioned by UG."

    In these statements, Pinker explains that any "Universal Grammar" that we are born with is directly involved with and can only be understood by the way that we learn language. We must also take into consideration other human capacities, even those that do not seem to have anything to do with language, in order to understand the origin and the function of specific language rules.

    ReplyDelete
  25. I was bothered by the easy dismissal of negative evidence as an input in language acquisition. I think the experimental examples given have some considerable caveats, and it is hard to deduce from them that there is no negative feedback from the parents, or surroundings, in the child's learning. But from what I then understood, the claim is only that negative evidence cannot be the principal mechanism by which language 'comes into order'. The main reason is that although the child receives negative feedback, it is not systematic (delivered every time there is a grammatical error, and only for grammatical errors). Rather, it would be by some 'mental mechanisms', like the Blocking principle, that this ordering would occur.

    ReplyDelete
    Replies
    1. What Pinker means here is that if you know the meaning of Mark, apple and ate, you need neither UG nor Penny-Ellis grammar to figure out that it's Mark ate the apple, not the apple ate Mark or the ate appled Mark, etc.

      But this doesn't work for longer and more complicated strings of words. Try figuring out the meaning of a Latin sentence using only the translations of the individual words, in their original order, for something more complicated than "Marcus pomum comedit" (which, by the way, means exactly the same thing -- apart from emphasis -- whether you say "Marcus pomum comedit," "comedit Marcus pomum" or "Pomum comedit Marcus" because Latin has the pro-drop parameter setting and an inflected-case parameter-setting, so word order is almost completely irrelevant except for style and emphasis...)

      But there's a clue here: The unordered string of (grounded) content words (minus function words and inflections) already conveys some meaning, even without the grammar. The grammar just concerns the part of speech, the inflections and the order. But how can you figure those out in the first place without knowing the meanings of the content words? This is the still unanalyzed "interface" between semantics and syntax. There is none of this in mathematical propositions, because they are just syntactic. The syntax is completely autonomous. Not so for the rest of language.

      That's why "Colorless green ideas sleep furiously" is not meaningless (and "Squiggle-less squaggles squoggled squoogly" is cheating).

      But miming someone doing something with something is not a proposition, any more than a cat being on the mat (or miming a cat being on a mat) - or a picture, or a video - is a proposition.

      And every proposition defines (at least) one category: the subject category is a member (or a subcategory) of the predicate category. The cat is in the category things-on-the-mat; Marcus is in the category things-that-ate-the-apple...

      Delete
  26. It’s interesting to re-read a paper that you discussed in a different capacity for a different cognitive science class…
    I find the idea of parameter setting particularly interesting. I was raised with two languages, and sometimes I find it interesting how my brain tackles the differences - like how Russian allows a lot more flexibility with word order while English has a strict word order. That can get me into trouble, because sometimes I create sentences that don't quite make sense in English.
    This leads me to the Subset Principle, which basically tells us that there is some default configuration for these parameter switches. Pinker suggests that fixed word order is the default - I'm curious to see how people with free-word-order languages manage when learning English, rather than relying solely on evidence from native English speakers learning languages with freer word orders.
    The rest of the chapter before this focused on negative and positive evidence. Negative evidence is basically just error feedback, which we know is not how children learn languages. You can tell a child how to correctly say 'I broke the toy' and even ask him to repeat after you; the child will most likely keep saying 'I breaked the toy' until something happens with time and they stop making the mistake. As for positive evidence, that's just knowing that something is part of the language the child is learning. That gives no information except that the child hears a specific language in his environment and tunes into that one.
    I'm fascinated by the different things mentioned, like how the children of a community that has only individual words but no grammar create complex grammars on their own. This is how creole is derived from pidgin. In sign language, similar things have happened in communities. Then there are cases like "motherese", the oversimplified way mothers babble to their children. This is not necessary, no matter what mothers say. There exist places in the world where people don't believe they should speak directly to their children until the children have something worth saying.
    So, children learn languages in a fascinating variety of ways, but one constant seems to be exposure to at least part of a language or a simple word inventory. How the parameters get set and changed from what are considered default values, I have no clue, but being exposed to a full grammar is apparently not necessary.
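    A minimal sketch (my own toy model, not Pinker's) of the Subset Principle at work on a single word-order parameter: the learner starts at the default ("fixed" order, the smaller language) and moves to "free" order only when forced by positive evidence it cannot otherwise accommodate:

```python
FIXED, FREE = "fixed order", "free order"

def fits_fixed_order(sentence):
    # Toy check: in the "fixed" setting, assume subject-verb-object order,
    # using hypothetical marker words for subject, verb, and object.
    words = sentence.split()
    return words.index("mom") < words.index("drives") < words.index("car")

def set_parameter(input_sentences):
    setting = FIXED  # default: the subset language
    for s in input_sentences:
        if not fits_fixed_order(s):
            setting = FREE  # forced to the superset by positive evidence
    return setting

print(set_parameter(["mom drives car"]))                    # fixed order
print(set_parameter(["mom drives car", "car mom drives"]))  # free order
```

    Note that the move only ever goes from subset to superset; nothing the child hears could force the reverse move, which is the rationale for making the smaller language the default.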

    ReplyDelete
    Replies
    1. What is a mystery is not how UG's parameters are set: that is by ordinary inductive learning. The mystery is how UG gets in there. It's not by induction or instruction, because there is no negative evidence. But if it's inborn, we still don't know how it got there. It's a stretch to tell even an evolutionary Just-So story. (But remember that "I breaked" is not a UG violation; it's a Penny-Ellis grammar violation, for which there is negative evidence, and as such it is learnable by induction or instruction.)

      Delete
  27. I just can't shake the idea that there can be learned categories without the requirement for negative feedback. Often positive assertions about the world imply their own negatives. In the case of numbers this is obvious: given a rule for primes, or even just a set of primes, all non-primes can be deduced without ever testing what is not prime. In my opinion, this applies to the world as well. This is observable in late language acquisition. Most people try beer for the first time after much of their language has been acquired; they are used to the process of identifying novel experiences, and are not in danger of assuming that every other novel experience they have after that point is "beer". Of course this is blurrier earlier in language acquisition, but why should the same principle not apply: in designating X as "X", we create an implicit negative - that X is somehow different from everything else. Even if there exists something similar to beer which is not beer, making my category flawed or imperfect, it does not mean that I do not have a quite rigorously defined category without any negative feedback. Such learned categorization without negative feedback would destabilize the nativist argument for language, though not destroy it entirely: the bootstrapping problem is interesting enough to still warrant some innate capacity to deduce syntactic structure from semantics and context. Whatever separation of nouns from verbs (and the associated NP from VP) is required in order to learn language is plausibly an innate structure that allows this bootstrapping to occur, though it seems much simpler and more fundamental than what UG proposes.
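    A tiny illustration (my own, extending the commenter's primes example) of a category defined purely positively whose negatives fall out for free, with no negative feedback at any point:

```python
def is_prime(n):
    # Positive definition only: n > 1 with no divisor from 2 up to sqrt(n).
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

# The complement ("not prime") is fully determined by the positive rule:
print([n for n in range(2, 20) if is_prime(n)])      # [2, 3, 5, 7, 11, 13, 17, 19]
print([n for n in range(2, 20) if not is_prime(n)])  # [4, 6, 8, 9, 10, 12, 14, 15, 16, 18]
```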

    As a side note: Pinker's notion of a pre-defined ordering of "parameters" seems unusual to me. If the only way for a child to change a parameter setting is to hear a sentence that cannot be expressed given the current setting, then why do languages that use the non-default settings develop at all?

    ReplyDelete
  28. Pinker states: "The second problem is that, without prior constraints on the design of the feature-correlator, there are an astronomical number of possible intercorrelations among linguistic properties for the child to test. To take just two, the child would have to determine whether a sentence containing the word cat in third position must have a plural word at the end, and whether sentences ending in words ending in d are invariably preceded by words referring to plural entities. Most of these correlations never occur in any natural language. It would be mystery, then, why children are built with complex machinery designed to test for them -- though another way of putting it is that it would be a mystery why there are no languages exhibiting certain kinds of correlations given that children are capable of finding them."

    Although I agree with his point about the necessary design constraints on a potential feature-correlator, which discounts its use as the primary means of language acquisition, the result of this mental exercise - the conclusion that, given this mechanism, it would be a mystery why certain correlations never appear in language, given the capability to recognise them - seems rather inconclusive, actually. I fail to see how it follows from this line of reasoning that the processing mechanism could exercise any constraint on what it is supposed to be processing. Put in the language of the paper, Pinker seems to be treating a dearth of positive evidence as some sort of causal support for reducing the available data to a subset - which, as we have seen in the case of language acquisition itself, may be practical but not necessarily true.
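    A back-of-the-envelope sketch (my own numbers, purely hypothetical, not Pinker's) of the combinatorial point: even a modest inventory of observable linguistic properties yields an enormous space of candidate intercorrelations for an unconstrained feature-correlator to test:

```python
from math import comb

n_properties = 1000  # hypothetical count of observable linguistic properties

# Hypotheses correlating one property with another:
print(comb(n_properties, 2))  # 499500

# Allow conjunctions of up to three properties and it already explodes:
print(comb(n_properties, 2) + comb(n_properties, 3))  # 166666500
```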

    I do have more doubts about his assertion that: "This raises the question of how the child sets the parameters. One suggestion is that parameter settings are ordered, with children assuming a particular setting as the default case, moving to other settings as the input evidence forces them to." The examples that follow, about the Subset Principle and word order, are satisfying enough, but I wonder, given this phrase, how one would avoid the suggestion that a particular language - the one whose parameter settings sit closest to the "default" - is easier (and thus possibly superior) to learn.

    ReplyDelete
    Replies
    1. Being able to induce UG from word co-occurrence statistics (on content words, let alone squiggles and squoggles) seems to me as likely as being able to induce word meanings from co-occurrence statistics on squiggles and squoggles - and both are on the order of waiting for chimpanzees to type out Shakespeare. My guess is that all of these problems are NP-complete (unsolvable before the heat death of the universe). Remember that even Chomsky and the teams of linguists sitting around a table took decades to do it, with the aid of their own UG+/UG- grammaticality judgments, "supervised" by their own UG-compliant brains. Imagining that the child's brain can do that from scratch, from word co-occurrence statistics in what it says and hears, by age four...

      Delete
  29. From Chomsky's universe:
    "Chomsky has often described himself as a Cartesian: a proponent of Descartes’ theory of innate ideas. But perhaps he is closer to being a Platonist, in that the innateness and universality of the ideas is not a result of evolutionary selection but of the natural laws of the physical universe and perhaps even the universal laws of formal logic and mathematics."
    I do not know if my understanding of this passage is correct, but I found it resonated with the important question of WHY UG came about in the first place. If it exists, why do we all possess this innate rulebook of language?
    The passage I pasted made me think that it may have to do with a combination of chance and constraints imposed by the natural world. Why did my other organs develop the way they did? Partly because of the function they serve (So evolutionarily, precisely what function does UG serve?) but partly because of chance and the randomness of the physical and biological circumstances that shaped their appearance.
    Although we try to move away from 'just so' stories, I find it almost creationist to place so much faith in the idea that there was a plan for everything or a reason for everything. Yes, I think that, if there is such a thing, UG ought to have evolved to serve a specific function (of which we are still unsure), but the details of the rules or arrangements which UG contains may not bear as much significance as we want them to.

    ReplyDelete
    Replies
    1. UG is a pretty big thing to not have much significance, or adaptive value...

      Delete
  30. From Chomsky's universe:
    The section on "Chomsky the Platonist" in the excerpt from Chomsky's Universe confused me a little, especially the point that UG is linked to the structure of thought. Doesn't this resemble Whorf's linguistic relativity (which, if I understand correctly, holds that one's mother tongue guides and constructs a different perception of the world)?

    Is Chomsky claiming that it is UG that structures our thoughts (or vice versa)? It seemed a bit vague to me to say that UG and thought are "linked" (though it is possible the word "linked" was chosen deliberately, since that may be all we know to date).

    Replies
    1. Universal Grammar: a Whorfian effect?

      Florence, Whorf's hypothesis is that language determines -- or at least influences -- perception as well as thought.

      Obviously it is trivially true that what I am told influences what I think. And perception is something else (as with categorical perception and the rainbow). What is really at stake is my way of thinking. And what Whorf highlighted were the differences in perception and thought that arose from the differences among speakers' languages.

      But even so, what Chomsky says about thought is still too vague to fit into that framework. If I understand him correctly, on his view there is a way of thinking that is uniquely human (or perhaps simply unique), very powerful, and closely tied to language. Chomsky does not maintain that it is the thoughts themselves that are constrained by UG: it is their verbal expression that is constrained by UG.

      That is what Chomsky says, and I admit I have not completely understood what he means. So what follows is only my interpretation:

      If every human language can indeed express every possible proposition, and propositions are subject/predicate assertions that are either true or false, then there may be a similarity, or even an isomorphism, between a mental proposition and a verbal proposition, such that the verbal expression must conform to UG. Whatever is not constrained by UG in its verbal form is unthinkable.

      Don't press me any further on this question, for I know nothing more about it!

      I would just add that this does not strike me as a Whorfian effect: Whorf insisted on differences in perception or thought according to differences between languages. First, UG is universal across all languages. And moreover, according to Chomsky, it is thought that influences language in this case, not the reverse.

      But you are right that Chomsky's idea that there may be ideas or truths that are unthinkable or inconceivable has a bit of the flavor of those proponents of the Whorf hypothesis who held that in certain languages certain thoughts would be unthinkable.

  31. My problem with this article is what it didn't contain - namely, more analysis, or even just a summary, of what we know about the 'language of thought' proposed by Fodor. Pinker briefly mentions in his introduction that "language is the main vehicle by which we know about other people's thoughts, and the two must be intimately related." He makes a powerful claim here - that the two must be connected - without any support. After that there is almost no mention of the mental, which is a shame, because after what we have learned and discussed so far I would think that should form the groundwork of any discussion of language acquisition.

    Pinker cites Fodor's (1975) Language of Thought book, but barely incorporates it into this particular work. I understand that it may not be Pinker's field of expertise, but I find it hard to believe that he would study language acquisition in such depth without considering or addressing the fact that 'thought acquisition' (by which I mean the ability to think, reason, have a train of thought, etc.) must somehow happen in tandem. Or does it happen before? Does the LOT develop at the same time as natural language? Independently? How interconnected are they? These are the kinds of questions that should be answered in order to better address the problem of language acquisition.

    I just found that the data presented in Pinker's article were not getting at the deeper levels of language that go on in the growing child's mind, and I was left wanting more. In class we touched on the idea that meaning and semantics may be driving the syntax of language - these must surely be connected to our LOT, if we have one. Perhaps I am unfairly blaming Pinker for not addressing something I think he should have, but my takeaway is that we won't learn much more from the various linguistic tests and exercises we put children through, and that it is time we devised new ways to test for a potentially deeper level of language in the mind.

  32. "Possessing a language is the quintessentially human trait: all normal humans speak, no nonhuman animal does."

    I am still struggling to understand some of the basic assumptions that are essential to many of the arguments in the class, especially with regard to language. Just to be sure I understand what is being assumed, here is a thought experiment:

    Suppose you brought a group of signers ('signers') into a community of people ('non-signers') who have never encountered anything but their own spoken language.

    To make it even more extreme, let's say that the signers are also visually very different, so it cannot be assumed that we feel what they feel: for example, the signers are somewhat humanoid but also really different (green, four arms, one eye, eight feet tall...), while the non-signers are physically homogeneous and have never seen or heard of the existence of beings that are humanoid but not human.

    Is it assumed that the non-signers, by virtue of possessing language and being able to generalize beyond themselves to other non-signer members, would necessarily be able to detect that the signers are also using language, albeit a totally different one that they cannot understand and possibly cannot even accurately perceive?

  33. “Learnability theory has defined learning as a scenario involving four parts”
    The parts include: a class of languages, an environment, a learning strategy, and a success criterion. I find this theory lacks any relevant information about the main thing it is trying to explain, i.e. "what is language acquisition, in principle?" To understand language acquisition, the main component would be to understand the learning strategy, and this theory gives little information about that. It simply states that the child makes hypotheses about the target language. What should be explained is how children know which hypotheses to make and where these hypotheses come from. Children don't even know what type of algorithm needs to be implemented to make these hypotheses. This is what cognitive science is trying to reverse-engineer, and thus the most important part of language acquisition for cognitive science. Simply saying there is an algorithm does not tell us anything unless we can explain how and why that algorithm is developed (see the sketch below).
    Further in the text Pinker attempts to determine the child's language-learning algorithm and highlights some of the possibilities. It is claimed that "one possibility is that the child sets up a massive correlation matrix, and tallies which words appear in which positions, which words appear next to which other words etc." I fully agree with the problems that the author raises with this proposal. Children would never be able to produce a comprehensive matrix of all the possible correlations. Language is dynamic, and the number of possible sentences in just one language is infinite: even as adults we would still be trying to complete the matrix, and it would forever keep growing as new words enter the target language. The other problem brought up is that the correlations the child would need are not audibly marked in parental speech. This I find less of an issue, as humans are built to detect patterns in speech, but I fully agree that children need to detect these patterns initially, and the input they receive is not enough to explain how they do this.
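    To make explicit where the gap sits, here is a minimal skeleton of the four-part scenario in code (the names and the toy strategy are mine, not the theory's): the class of languages, the environment, and the success criterion can all be written down, but everything this comment asks about is hidden inside the unspecified strategy.

```python
from typing import Callable, Iterable

# Hypothetical skeleton of learnability theory's four parts.
Language = Callable[[str], bool]   # a language as a membership test
Hypothesis = Language

def learn(language_class: list[Hypothesis],
          environment: Iterable[str],  # stream of example sentences
          strategy: Callable[[Hypothesis, str, list[Hypothesis]], Hypothesis],
          ) -> Hypothesis:
    """Run the learner over the environment. The success criterion
    (e.g. Gold's identification in the limit) is judged from outside,
    by whether the returned hypothesis stabilizes on the target."""
    hypothesis = language_class[0]     # some initial guess
    for sentence in environment:
        hypothesis = strategy(hypothesis, sentence, language_class)
    return hypothesis

# A trivially simple strategy: keep the current guess until it fails to
# accept an observed sentence, then jump to any consistent candidate.
# All the explanatory work -- which hypotheses exist, in what order,
# and why -- lives here, and the theory leaves it unspecified.
def next_consistent(h, sentence, language_class):
    return h if h(sentence) else next(g for g in language_class if g(sentence))

# Toy class of languages: "sentences containing the word w".
langs = [lambda s, w=w: w in s.split() for w in ("dog", "cat")]
winner = learn(langs, ["the cat sat", "a cat ran"], next_consistent)
print(winner("a cat here"))  # True: the learner settled on the "cat" language
```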

  34. "Children do not hear sentences in isolation, but in a context. No child has learned language from the radio; indeed, children rarely if ever learn language from television."

    I found this part of the reading particularly interesting, especially in relation to the symbol grounding problem. It seems to me that a child trying to learn a language from the radio would be doing something akin to trying to learn a language from a dictionary, without any grounding. That said, I'm surprised that children can't learn from television, since, in contrast to radio, television includes a number of visual cues that I assumed could be used to ground words in their meanings.

    "In interacting with live human speakers, who tend to talk about the here and now in the presence of children, the child can be more of a mind-reader, guessing what the speaker might have meant (Macnamara, 1972, 1982; Schlesinger, 1971)."

    It wasn't particularly clear to me how exactly the interaction with human speakers aids the process of mind-reading. I would argue that television shows, especially children's television shows, also deal primarily with the here and now (that is, the characters in the television shows refer to things near to them). Beyond this, the characters in children's television shows often speak directly to the camera (as though they are holding a real conversation with the children in their living room). Why is this not a satisfactory way for children to learn languages? What aspects are included in real-world interaction, but not television shows, that allow children to learn language?

    It seems to me that children should be just as capable of mind-reading the characters on screen. Perhaps more so than in real life, since the television characters often overemphasize their behaviors. Similarly, in scenarios where the characters engage with children through the television, many young children seem unaware that they aren't really interacting with a human being.

    Perhaps it has something to do with whether or not children are immersed in a context, rather than whether or not they're interacting with humans as opposed to TV characters. Now I'm curious to know if children can learn language through Skype conversations with other humans in real time.
