Saturday 11 January 2014

4a. Rizzolatti G & Craighero L (2004) The Mirror-Neuron System

Rizzolatti G & Craighero L (2004) The Mirror-Neuron System. Annual Review of Neuroscience 27: 169-92

A category of stimuli of great importance for primates, humans in particular, is that formed by actions done by other individuals. If we want to survive, we must understand the actions of others. Furthermore, without action understanding, social organization is impossible. In the case of humans, there is another faculty that depends on the observation of others’ actions: imitation learning. Unlike most species, we are able to learn by imitation, and this faculty is at the basis of human culture. In this review we present data on a neurophysiological mechanism—the mirror-neuron mechanism—that appears to play a fundamental role in both action understanding and imitation. We describe first the functional properties of mirror neurons in monkeys. We review next the characteristics of the mirror-neuron system in humans. We stress, in particular, those properties specific to the human mirror-neuron system that might explain the human capacity to learn by imitation. We conclude by discussing the relationship between the mirror-neuron system and language. 




60 comments:

  1. Some of the challenge in replying to this paper (especially prior to reading the Fodor paper) lies in figuring out how it fits in with the course material; given that next week's topic is "Why is there controversy over whether neuroscience is relevant to explaining cognition?", I take it that the opinion to be formed around this paper is: do we care about the mirror-neuron system? Why or why not? For that matter, do we care about neuroscience at all in explaining cognition?

    So if what's at stake is the position of neuroscience in the curriculum of the cognitive scientist, what are we to make of the increasingly detailed and obscure information generated by neuroscientists, often at the price of the dignity or life of animals (a price that used to include humans but, since the Nuremberg trials, has largely been limited to compulsory experimentation on non-human animals and far less invasive techniques on humans)?

    I don't know about the relative costs behind neuroscientific experimentation or whether any given experiment is "worth it", but there are always things to take away if our question is "what is cognition?" and, specifically, how humans came to be so good at cognizing. In this paper, the presence of a mirror-neuron system teaches us a great deal about the way we perceive (for example, why communicative gestures are almost instinctual; the basis of empathy; the evolution of complex gestures and language from basic ingestive gestures). What is this understanding good for? The same as any study of human cognition: if our goal is to successfully reverse-engineer cognition, then neuroscience has something very valuable to contribute in showing how sentience came about and how we built up our perception of the world from simple building blocks (for example, that one must open one's mouth wider to ingest larger objects) to complex behaviours such as words and language.

    ReplyDelete
    Replies
    1. Explaining our mirror capacity

      The questions to ask yourself are two:

      (1) Did finding that there were mirror neurons tell us anything that would help us generate (reverse-engineer) the T3 capacities that are correlated with firing by mirror neurons?

      (2) If it had done so, would it have been worth hurting animals to find out? (See: "Doing the Right Thing" Psychology Today, yesterday)

      Delete
  2. This imitation learning is not unique to humans though. There have been many, many studies with birds showing that young birds (e.g., zebra finches) learn courtship songs through imitation of a tutor's song. This is done by storing a copy of the correct song in the young bird's brain for comparison later. There is evidence that there are mirror neurons in zebra finches too, as they fire in synchrony with various aspects of the song. But part of the hypothesis is that imitation learning in humans facilitates the understanding of actions done by others. Do birds understand what they are learning when they imitate a courtship song? Despite that, this similarity could represent an evolutionary convergence of communication learning that is based on memory. Consistent with that, the paper indicated that watching movements can prime those same movements.
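    (A minimal sketch of that stored-template idea, in code rather than neurons: the tutor song is kept as a feature vector, and the juvenile's own output is varied by trial and error, keeping only the variations that reduce the mismatch with the stored copy. The feature representation and the update rule here are my own illustrative assumptions, not anything from the birdsong literature.)

    import random

    tutor_song = [0.8, 0.2, 0.5, 0.9]                  # stored copy ("template") of the tutor's song
    own_song = [random.random() for _ in tutor_song]   # juvenile's initial, babbled rendition

    def mismatch(a, b):
        # total squared difference between two renditions
        return sum((x - y) ** 2 for x, y in zip(a, b))

    for trial in range(1000):
        candidate = [x + random.gauss(0, 0.05) for x in own_song]   # slightly varied rendition
        if mismatch(candidate, tutor_song) < mismatch(own_song, tutor_song):
            own_song = candidate                                    # keep it only if it sounds more like the template

    print(round(mismatch(own_song, tutor_song), 4))    # shrinks toward zero: the song has been "imitated"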

    "... hand gestures and mouth gestures are strictly linked in humans...." I wonder if this could be a cultural/regional effect, at least for the speech part. Speakers of certain languages use their hands more during discourse than others--compare English speakers to Chinese speakers for example. Hand gestures are used a lot more by English speakers in conjunction to speech, maybe this could partially account for the increased activity. I would be interested in seeing a comparative study on that.

    Lastly, I would like to suggest an alternate view on the evolution of language, one that follows Philip Lieberman's studies based on anatomical and brain comparisons between modern humans, Neanderthals, and monkeys. This model says that the anatomical structures which allow for speech evolved at the expense of vegetative functions in our ancestors (having the right vocal tract length and tongue length for producing a variety of vowels). In addition, this anatomical change is accompanied by neural changes that support language (mainly cortical-basal ganglia circuitry). Maybe mirror neurons could support this model by facilitating the neural changes that underlie language. It's hard to know, because we are missing the brains of what comes between us and the monkeys. If only we knew whether earlier humans and Neanderthals also had mirror neurons and to what extent they functioned (presumably both had them).

    ReplyDelete
    Replies
    1. 1. Yes, there are no doubt mirror neurons active during vocal imitation too: Does that explain how to generate any kind of T3 imitation?

      2. There seems to be a natural coupling between vocalizing and gesturing. Cultures may differ in how much they express it (just as they differ in how close people stand to one another). But in later weeks we will consider theories of the origin of language, including why it may have begun with gesture. And in all cultures, the gestural language of the deaf shows the same neurological substrate (aphasia, anomia) as speech.

      3. Yes, the ancient brain relics tell us little about language evolution; how do you think mirror neurons might help?

      Delete
    2. 1. It may explain a part of the neural mechanism which generates verbal imitation. But mirror neurons are only part of a larger system that composes biological imitation. I do not think they are needed to generate T3 imitation.

      3. If mirror neurons are necessary for imitation learning, then they could be an integral part of language evolution. I do recall reading somewhere that Neanderthals had neither language nor imitation learning. They lived alongside our ancestors but did not pick up their use of tools. This suggests to me that the two co-evolved, or that one necessitates the other.

      Delete
  3. I think this paper exemplifies the present dilemma in mapping the brain in present-day neuroscience. There are a lot of untidy, but possibly groundbreaking, correlations being observed and documented, but due to a lack of any real theory or framework, we're left conjecturing on top of conjectures. Not only are our cognitive abilities the product of a complex and messy process of evolution combined with the complex and messy process of child development, but the actual phenomenon we're attempting to explain has two seemingly irreconcilable perspectives, namely the objective functions the brain performs and the subjective experience we undergo while performing those functions (or rather while those functions are performed by the organism). That relation seems to be an integral part of the whole explanation. It would be strange, to say the least, to fully explain one side of the coin while ignoring the other side, if we presume there's only one coin (which I do). In science and understanding, it's natural to reify, to create words for "new" phenomena. But cognitive science is in its infancy, and so it seems like many reified words unfortunately become weasel words. Someone might be warranted in using "representation" at first as a placeholder for a question without an answer, but as work progresses and correlations are uncovered, many tend to forget that we have no idea what we mean when we say "representation".

    This isn't so much a critique as an observation about the difficulty of reverse-engineering an absurdly complex and messy black box like the human brain. At first, we might not have a choice but to use placeholders such as "executive function" or "representation" to get any work done. In the paper in question, some pretty revealing correlations are uncovered between different actions and perceptions and the respective activated neural structures which elicit those actions and perceptions. Once these correlations make themselves apparent, theorizing becomes a seductive enterprise. I think it's the only way to go about it, but we must simply be aware that these conjectures are fragile and should be treated as such, given our lack of holistic understanding of what's really going on.

    (Continued in reply)

    ReplyDelete
    Replies
    1. Here's a conjecture I might take for granted more than most, which seems to coincide with the conjectures being made by the authors. I see the appeal in using brain structures and their functions as a sort of scaffolding for new functions (exaptation). The authors make use of this reasoning when they write: "A fundamental step, however, toward speech acquisition was achieved when individuals, most likely thanks to improved imitation capacities (Donald 1991), became able to generate the sounds originally accompanied by a specific action without doing the action." Some might see this as an unreasonable conjecture, while I see it as the best we can do with the information available (as long as we don't fall into irrational faith). This reminds me of Skinner's view on language, which came to be so disparaged by Chomsky. In a very general sense, I believe Skinner might have been on to something when describing human thinking as privatized verbal behavior. It's clearly not just that (human thinking is more diverse, and given the evidence from other animals we'd be hard pressed to claim that thinking cannot occur without language), so the conjecture was clearly a drastic oversimplification, but I think it's worth considering as long as consideration doesn't turn into unwarranted certainty. I think the trick is to avoid making a word of it. What words reference, or seem to reference, can become REAL without any good reason, simply because any definiens automatically assumes the reality of the respective definiendum (here's a crazy conjecture of mine (and other deflationists'): this error in human thinking might be central to understanding the Hard Problem). But whatever your inclinations, even though brain imaging technologies are important, I think kid-siblingification (as Harnad puts it) is the most powerful and indispensable tool we have at our disposal to make sense of this mess.

      Delete
    2. How do correlations between brain activity or structure and T3 activity help explain the causal mechanism generating the T3 capacity?

      Yes, the capacity to imitate was probably a prerequisite for language (but we didn't need mirror neurons to tell us that, did we? we already knew people could imitate). And what if language began gesturally (i.e., as imitative actions themselves, not vocalizations correlated with gestures)?

      Thinking might be "privatized verbal behaviour" (if that means talking to yourself). But before you can start talking to yourself, you need to start talking. (Skinner never had an account of that.) But once you're talking, why talk to yourself? If you are the source of the questions as well as the answers, why not just act on them, and do what needs doing, rather than playing internal peekaboo with yourself?

      (You'll find that the same kind of circularity and superfluousness arises also with feeling: Why bother feeling, say, inputs, rather than just detecting them and acting on them, doing whatever internal computations or dynamics are needed feelinglessly, just like a computer or an oven? Welcome to "The Hard Problem" -- which Tom Stoppard has just written a play about.)

      https://www.youtube.com/embed/CLzU6nXNGgo

      (Btw, coining new words is not bad, as long as it solves or at least points out problems. What makes "weasel words" bad is that they cover up problems, or purport to have solved them when they haven't. We'll be talking about "lexicalization" later in the course.)

      Delete
    3. "How do correlations between brain activity or structure and T3 activity help explain the causal mechanism generating the T3 capacity?"

      Not being aware of the causal mechanism down to the finest detail doesn't imply that knowledge about correlations between structure and function is irrelevant. Take the phenomenon of flight as an example. We notice bees, birds and bats are able to fly, and yet, without knowing the slightest detail about how lift is produced, we can make a safe assumption that wings have something to do with it. Correlations can be very useful clues.

      "Yes, the capacity to imitate was probably a prerequisite for language (but we didn't need mirror neurons to tell us that, did we? we already knew people could imitate)."

      But mirror neurons clue us in to the fact that our 3rd person perception of a certain behavior and 1st person perception of our own behavior might be inherently linked at the physical level. We knew people could imitate just as we knew birds, bees and bats could fly. Noticing the presence of mirror neurons in imitating animals is somewhat like noticing the presence of wings in flying animals... it's a worthy clue helping us make sense of the function in question.

      "And what if language began gesturally (i.e., as imitative actions themselves, not vocalizations correlated with gestures)?"

      I don't see why those would necessarily be mutually exclusive. Imitative actions could be a form of communication since they weren't arbitrary. I believe they mention this in the paper: pointing at something you want is very close to reaching for something you want and not attaining it. Maybe making a sound simultaneously became more useful still (drawing more attention... not needing to be seen to be "understood") and finally the gesture was inhibited. This is just another "just-so story", but at least it fits the fuzzy picture we have of ourselves at both the macro-level of imitation and the more micro-level of neural activity. That's at least my optimistic take on neuropsychology. The science is in its infancy, so I don't think we should be so quick to throw the baby (useful clues) out with the bathwater (conjectures being conflated with facts).

      (Continue below)

      Delete
    4. (Continued)

      "Thinking might be "privatized verbal behaviour" (if that means talking to yourself). But before you can start talking to yourself, you need to start talking. (Skinner never had an account of that.) But once you're talking, why talk to yourself? If you are the source of the questions as well as the answers, why not just act on them, and do what needs doing, rather than playing internal peekaboo with yourself?"

      "Privatized verbal behavior", in my view, doesn't exactly mean talking to yourself. I feel like that phrasing presupposes a homunculus to explain something, when what we're really attempting to explain is the homunculus itself. Maybe language evolved as a useful tool for sharing information. But when language ability (aligning the same arbitrary vocalizations with the same meanings in the same population) became the norm, it might have been more useful to avoid giving information to some members (cheaters, enemies, members who engage in parasitic behaviors, etc) and so privatizing language might have been part of the causal story that lead to thinking in language in private. Just like Skinner, I have no account of how language started exactly. As for, why "play internal peakaboo"? I would say it's not you talking to yourself, it's the brain talking in private (thinking) about you which might create the effect that you seem to be talking to yourself. And even if it sounds strange, I feel like the explanation is warranted to be strange considering evolution doesn't think ahead and, when introspecting, one does seem to really talk to oneself in a stupidly redundant manner. It might seem ridiculous to posit a sort of talking to oneself, but isn't that really what it feels like? When I walk into a conference and they have fresh coffee that I know I want, I seem to actually tell myself “oh nice! there’s fresh coffee!” when I clearly was well-aware of it. If our private conversation were always vocalized, we’d seem completely insane to others. Do we all not neurotically replay in private both sides of an argument after it occurs (unless I’m crazy)? So, why avoid an explanation that posits a sort of “internal peakaboo” when that seems to be what we actually do?

      “(Btw, coining new words is not bad, as long as it solves or at least points out problems. What makes "weasel words" bad is that they cover up problems, or purport to have solved them when they haven't. We'll be talking about "lexicalization" later in the course.)”

      Right. I think the issue is that experts coin new words as temporary placeholders for problems, but then, as time goes on, the word is misunderstood (taken to name a real thing rather than a gap in our knowledge) and it turns into a weasel word, "dark matter" and "dark energy" being the most poignant examples. It pointed out problems initially, but people, through their interactions, eventually come to believe it is the solution to the problem it was initially pointing to.

      Thanks for the reference to the play… too bad it’s in London! I would have considered actually seeing theatre for the 1st time since my 7th grade field-trip. I guess not.

      Delete
    5. Pitfalls of Introspection

      1. Yes, the brain is where the causing is happening: but (as Fodor notes), where does not tell you how.

      2. I already knew before I knew about mirror neurons that when you do that it's the same as when I do this. It's not clear how much more I know about the how, once I learn that mirror neurons are active during both.

      3. How reaching or pointing could lead to a grunt is an easy one. How miming a cat on a mat can lead to "the cat is on the mat" is not quite so easy...

      4. I can understand why I wouldn't want to share a trade secret out loud. But why would I want to tell it to myself? I already know.

      5. When "it's the brain talking in private (thinking)," who's it talking to (and why)?

      6. Yes, thinking does often feel like verbalizing silently, but that doesn't explain what it's for (any more than imagery explains how we remember or recognize). But maybe the thinking is not really the silent verbalizing, but what generates the silent verbalizing. And the verbalizing may just be a symptom of our language-evolved brain (with the language's main adaptive value being for communicating with others, not verbalizing to oneself). (Inner speech is just an acoustic imagery theory.)

      Delete
      1. Our understanding might be so weak and at such a macro-level that WHERE more often than not doesn't give us any inkling as to the HOW (the actual mechanism in sufficient detail to make us feel that the phenomenon has been explained, which I would say is equivalent to knowing how to build a T3)... but what other avenues do we have for connecting the phenomenon of thinking to its physical instantiation than attempting to make it fit within our causal framework, in which timing and location are the basic variables governing HOW things act? How can we really dispense with WHEN and WHERE in an attempt to make sense of anything? I understand the pitfalls of naive or simplistic interpretations of neuroimaging results (or at least I think I do), but should we question the basic principles of physicalism when it comes to cognition just because some people have made the mistake of taking a correlation for an explanation?

      2. Right. I guess this is the fuzziness of attempting to reverse-engineer anything. Take the flying example. Did we learn anything about how flight occurs if we can't make sense of how air movement and wing shapes work in tandem to create lift? I'd say yeah… we learned that wings might have something to do with it. Mirror neurons might clue us in about the overall pattern of activity that allows us to relate (Theory of Mind) and/or imitate. We knew we could imitate and we knew we could relate to other organisms, but how? Well, we don't know. But we have a clue that the pattern of overlapping activity might have something to do with it. It could have turned out, after all, that the activity when we see someone perform a particular action and the activity when we perform that action ourselves did not overlap in any consistent way. Looking for answers in mirror neurons might be like looking for one's keys only under the lit area of the street (and this same argument applies to neuroimaging in general), but are there any better alternatives?

      3. Not sure if that's implied by the theory that gestural communication was inhibited once arbitrary sounds and meanings were connected through the initial middleman of gestural communication. I could be wrong, but I understood it as a way for natural selection to "see" the fitness benefits of having a mechanism that associates arbitrary sounds with specific information transfer. I don't think (or at least I hope) that advocates believe this explains how we get from simple associations between arbitrary sounds and their respective "meanings" to the type of understanding exemplified by "the cat is on the mat". So, yeah, I think you're right. The difference is more than substantial. I just can't take difficulties in practice as a valid argument for impossibility in principle. Should we abandon the WHEN and WHERE of weather activity just because we have such a poor understanding of how the trillions of easy-to-understand interactions add up to the difficult-to-understand global weather events? I don't see how the brain somehow escapes the physicalist framework, no matter how complex and difficult the problem might be.

      Delete
    7. 4 and 5 and 6


      That’s a great question, but isn’t it the case that we do seem to tell ourselves things we know already? The seeming nonsensicalness of this phenomenon is perplexing, but it seems to be The Hard Problem in disguise rearing its head. If I’m not mistaken, we’re definitely at odds in our interpretation of the HP, so my conjectures here might not make any sense to you. We’ll see.

      If one sees function as providing the effect of phenomenological consciousness (someone feeling), then belief in feeling being perplexing accounts for feeling (IMO, feeling is thinking you're feeling, with both [you] and [feeling] being abstracted from the world… [you] being the organism and [feeling] being the thing the organism responds to which it can't quite fit into any category other than the vague category of [feelings]). In the same way, you're not talking to yourself, but simply talking silently about the world. If someone were with you, you'd vocalize it and thus allow the other person to relate, but no one is, so the brain keeps churning, converting stimuli into a linguistic interpretation. To me, it seems like the purpose of language is to convert personal knowledge into a shared coding system, whereby knowledge can be shared. Talking to yourself could just be talking with inhibited motor actions. We will simultaneously feel like senders and receivers because the same activity that our brains undergo when hearing someone else saying "oh nice! there's fresh coffee!" occurs when perceiving fresh coffee first-hand, so then the very next linguistic interpretation of that would be "who's noticing the fresh coffee? I am". Wouldn't the same issue of "why are you telling yourself something you already know?" occur when positing the HP? Why can't the organism simply do the right thing with the right stuff without subjectively feeling anything? Well, I'd say it can't because feeling (or rather the mystery of feeling) is a by-product of complex categorization. In the same manner, the by-product of believing you're talking to yourself occurs from complex categorization of one's own thoughts after the thought has occurred. In case my take isn't confusing enough, I believe you don't notice you're talking to yourself until you're talking to yourself about talking to yourself. And talking to yourself is thinking, so it can be rephrased as "I believe you don't notice you're thinking until you're thinking about thinking". It's all just access to content.

      So, when you ask "why tell myself?" or "what is it for?" I'd answer: you're simply making a linguistic interpretation that might prove useless if no one's around to vocalize it to, or useless if you don't privately draw implications from that proposition (There is fresh coffee) combined with another related proposition (My friend, who told me she's showing up late to the conference, said she was showing up late because she was passing by Tim Hortons to get coffee). I have absolutely no idea how we combine ideas to make reasonable inferences (I should call her to tell her not to bother buying coffee), but it seems like language (and complex categorization) is needed for that purpose. When "Oh nice! There's fresh coffee!" proves useless, it is useless only in retrospect, in the same manner that diverting our attention to an inconsequential moving object is useless only in retrospect.

      Delete
    8. Talking to Yourself

      Whenever knowing when and where the brain correlates of an activity or capacity occur gives us an idea about how to generate that capacity causally, all my reservations are refuted. Trouble is, it hasn't happened yet. Just more when and where -- and then a lot of shape-reading in the clouds.

      To get from gestural imitation and gestural communication to vocal language, you first have to get to gestural language, which is not just gestural imitation and gestural communication.

      Talking to yourself is a good rehearsal strategy and memory aid -- just like practicing a swimming stroke -- once you have language. But that's surely not the adaptive power of language, nor the reason it evolved. And it's not clear that all thinking is verbal (though probably it's all verbalizable).

      Whatever thinking is, it's a separate question why it's felt. (And the feeling of "ouch" in an amphioxus is surely not "a by-product of complex categorization.")

      And why we verbalize thoughts to ourselves is a separate question from why it feels like something to think, or to verbalize.

      (Now I suggest you continue in a later thread rather than continuing in this one.)

      Delete
  4. “A necessary step for speech evolution was the transfer of gestural meaning, intrinsic to gesture itself, to abstract sound meaning (Rizzolatti & Craighero, 2004)".

    I found their ideas about the two roots of semantics very interesting. If I understand correctly, the more ancient system would be a semantic root unifying the action and the motor representation. Then, abstract semantics (verbal language) started to arise, with an association between the abstract meaning of the action and its corresponding verbal association.

    For instance (to help kid-sib), in the ancient system, hearing the word "walk" would activate the motor representation of the leg (thus, what is necessary to do the walking action). Then, with the emergence of the echo-neuron system, hearing the word "walk" would activate the speech-related motor centre. As the authors stated, the functional role of this echo-neuron system is not yet clear.

    This hypothetical "transition from intrinsic-gestural to abstract meaning" could have led to changes in cognition, especially regarding the role of computation in the brain. In my opinion, cognition became less computation-like with the emergence of abstract language.

    As we have repeated many times in class, computation is symbol manipulation, in which the symbols are arbitrary. I do think that the rise of language has allowed humans to attribute more and more abstract meaning to their environment. For instance, humans instinctively learned to run from a lion. Such behavior could be described in purely computational terms: if I see the stimulus "lion", then I do the action "run".
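    (Here is a minimal sketch of that "purely computational" reading, just to make the arbitrariness of the symbols concrete; the stimuli and actions in the table are hypothetical placeholders of my own, not anything from the paper. Nothing in the rule depends on what "lion" means; the shape of the symbol is arbitrary, which is exactly the point about computation.)

    # Arbitrary stimulus symbols mapped to arbitrary response symbols by a lookup rule.
    RESPONSES = {
        "lion": "run",
        "fruit": "eat",
    }

    def react(stimulus):
        # return the canned action for a recognized symbol, otherwise do nothing
        return RESPONSES.get(stimulus, "ignore")

    print(react("lion"))   # -> run
    print(react("cloud"))  # -> ignore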

    However, as abstract meaning started to emerge (via extensive gestural and then vocal communication between peers), the stimulus "lion" may have gradually been associated with more complex responses than just "run", such as pain, fear, danger, and death (certainly the most abstract of all!). We gained a greater understanding of our environment, something that we could never have achieved if humans had been solitary animals.

    To summarize, I think that the transition to abstract language has made cognition less and less about computation. But if cognition has become less about computation, what has it become "more about"? That I do not know!


    ReplyDelete
    Replies
    1. In language as well as computation (which is a part of language), the shapes of the symbols (in the case of language: words) are arbitrary. That's true whether the language is spoken or gestural. Speech is not more abstract than gesture. But imitation (gesture, pantomime) is not language. Gestural language is something else, but it could well have evolved before spoken language.

      R & C use "language" and "abstract" rather vaguely: Actions are actions. Names for actions (or objects, or events, or features or states) are symbols (words). The actions or objects are the words' referents. The referents can be concrete, like "push" or "chair" or more abstract, like "evaluate" or "beauty." There is also a kind of abstraction occurring if an iconic or imitative gesture is taken to stand for its referent, in which case the imitative similarity becomes irrelevant. (This is Saussure's "arbitrariness" of the sign [symbol].)

      Don't you think a transition from imitative/communicative gesture (pantomime) to arbitrary gesture would be a more direct transition to language than a direct transition from imitative gesture to sound?

      All language is "abstract" (compared to pantomime!). And meanings can be less or more abstract. But they are all abstract. The "abstract" just means picking out some features and ignoring others.

      "Association" is a bit of a weasel word. A word has to be connected somehow to ("grounded" in) its referent: "apples" to apples. Then a word can have connotations (like, dislike, fear, memories.) Those may be more like associations.

      The most abstract thing of all may be "thing."

      I don't see why becoming more abstract means becoming less (or more) computational. Perhaps the first step of treating an imitative gesture as arbitrary can be seen as a step toward computation (because symbol shapes are arbitrary in relation to their referents).

      Delete
  5. Last semester, I took a course called "Human Cognition and the Brain". In this class, we were told how different functions (e.g. auditory, visual, working memory, etc.) can be localized in different areas of the brain. Many of these studies were done on individuals who had suffered severe brain damage (such as a stroke) or on monkeys. The motivation behind these studies was partially driven by surgeons operating on epileptic patients; the benefit of knowing the localized functions of the brain is that when surgeons make lesions in a patient's brain to greatly reduce the number of seizures, they can minimize the loss of function after the operation.

    But after learning that cognition can be defined as "how we do what we do", I have realized that the course I took last fall was inappropriately named. Learning about the anatomy of the brain and its somewhat localized functions has not taught me anything about cognition. I have simply learned that these structures exist in the brain.

    This paper is no exception. I have simply learned that mirror neurons exist in specific locations in the brain and will fire under certain conditions. Rizzolatti & Craighero allude to the deeper questions that are more relevant to understanding cognition, but, in my opinion, fail to answer them. For example, they state, "mirror neurons represent the neural basis of a mechanism that creates a direct link between the sender of a message and its receiver". Brain imaging techniques will only detail the temporal firing of these neurons and will tell us only that: when the neurons fire. What I would really like to know is, if these neurons represent a mechanism of communication, how exactly does this mechanism work? How does the receiver use mirror neurons to interpret the message being sent?

    ReplyDelete
  6. Causation or correlation?

    The question I kept in mind while reading this article (thanks to Fodor, whose piece I read first) was this: does mirror-neurons research provide a causal explanation for cognition? Specifically, does it explain our capacity to learn by imitation?

    The article asserts (based on psychological experiments) that humans learn by imitation: “[w]hen observers see a motor event that shares features with a similar motor event present in their motor repertoire, they are primed to repeat it” (180). So if we watch someone do something before we attempt it ourselves, we will probably do it better. Does mirror neuron research explain how we learn by imitation? At the very least, this research demonstrates that “the basic circuit underlying imitation coincides with that which is active during action observation” (181). In other words, when we observe an action, certain areas of our brain are activated in a certain order. And when we imitate the action, the same brain areas are activated in the same order. For example, in Nishitani & Hari’s study, people looked at pictures of other people making verbal and non verbal shapes with their lips. Some participants were asked to immediately imitate the lip form after seeing it. The researchers discovered that “[t]he activation sequence during imitation of both verbal and nonverbal lip forms was the same as during observation” (181).

    Is this a causal explanation of how we learn by imitation? Or is the fact that the circuit activated when we observe an action coincides with the one activated when we imitate that action mere correlation? On the basis of the brain research alone, we don’t know if we are looking at causation or correlation.

    So while the mirror neuron research may have provided a causal explanation for how we learn by imitation, we won't know until we try to integrate what the research shows us into a reverse-engineered robot. People trying to build a robot that will pass T3 (i.e., a robot that has all the capacities of a human being) should try to build a robot whose (robotic) brain has the same activation sequences when it observes an action as when it imitates that action (like the human brain). And if the robot that passes T3 has a "brain" that works this way, then we can assert that there is more than mere correlation here. But until then, we will not know.

    ReplyDelete
    Replies
    1. Well, we knew in advance that we could imitate. And we also knew that the brain must be the cause of that capacity, not just a correlate of it. But the question is whether finding neurons that fire when we imitate explains, or even contributes to explaining, how the brain causes our capacity to imitate.

      Neural correlations are extremely seductive. They are of course very important and helpful clinically -- guiding diagnosis, surgery and rehab. But they can't tell us much about the causal mechanism of our cognitive capacity. Just where and when some of it happens, not how or why.

      Delete
  7. “..The incredibly confusing organization of Broca’s area in humans, where phonology, semantics, hand actions, ingestive actions, and syntax are all intermixed in a rather restricted neural space (see Bookheimer 2002), is probably a consequence to this evolutive trend..”

    The first query I had while reading this paper was about the dependence of the mirror-neuron system on visual information. Most of the experiments discussed in the first two thirds of the paper involved visual observation and imitation, i.e. seeing an image or video, imitating it or not, and watching the neuroimaging results. I would be very interested to see how the mirror-neuron system works (or is impaired) in people who are blind. The way that we learn and "understand" things seems to me almost inseparable from imitation; even when you are learning from reading, you are trying to reproduce the knowledge you read on a test or something. Obviously people who are blind are capable of doing almost everything, and they also have a mirror-neuron system. In fact, according to this paper (http://www.jneurosci.org/content/29/31/9719.full), the mirror-neuron system can be activated by supramodal sensory representations of actions. I think an interesting area of study, especially in light of the connection of the mirror-neuron system to Broca's area outlined in this paper, would be mirror-neuron experiments with participants who have lesions in Broca's area and (if they exist) blind people with lesions in Broca's area. I suspect that not all of their mirroring capacities would be impaired, and it might give more evidence towards the purpose of the mirror-neuron system and its role in cognition.

    ReplyDelete
    Replies
    1. I'd expect the "mirror system" to be possible in any sensory modality where there is an analogue between input and output, whether gesture or vocalization. A chameleon could even have one for colour...

      Delete
  8. This first reading, about the mirror-neuron system, is a turn from the other readings we have done (pleasantly similar to material from other classes); the following reading, though, shatters the excitement with a very valid question: "what has anything about localization in the brain really taught us?" I'll take on the question that has been put out in the comments about whether finding this mirror-neuron system could help us reverse-engineer the T3 capacities we have been talking about.

    T3 refers to "total indistinguishability in robotic (sensorimotor) performance capacity", and from this paper and the studies reviewed within it, we can notice that this sensorimotor capacity is intrinsically related to the mirror-neuron system, in that the "basic circuit underlying imitation coincides with that which is active during action observation" (181). Does this tell us that the T3 model mechanism should use this same sensorimotor capacity to develop the capacity to imitate? By this I mean having the mirror-neuron capacity (programmed) where the robotic sensorimotor capacity lies (this might just be a naïve statement). But then the boundary between T3 and T4 becomes blurry: are we still at T3, or falling into the indistinguishability in internal structure and function characteristic of T4?
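    (To make the "programmed mirror capacity" idea concrete, here is a toy sketch of a controller in which one and the same internal action code is used both to classify an observed movement and to drive the robot's own motor output. The action codes and feature vectors are invented placeholders of mine, not anything from the paper or from robotics practice; the sketch also shows how little such a shared code buys on its own, since the hard part, mapping raw observation of someone else's body onto one's own motor code, is simply assumed in the recognize step.)

    # A shared "action code" used for both recognition and production.
    ACTION_CODES = {
        "grasp": [1.0, 0.0, 0.5],
        "point": [0.2, 1.0, 0.1],
    }

    def recognize(observed_features):
        # pick the stored action whose code is closest to the observed movement
        def dist(name):
            return sum((a - b) ** 2 for a, b in zip(ACTION_CODES[name], observed_features))
        return min(ACTION_CODES, key=dist)

    def execute(action_name):
        # the very same code stands in for the robot's own motor command
        return ACTION_CODES[action_name]

    seen = [0.9, 0.1, 0.4]            # features extracted from watching another agent
    print(execute(recognize(seen)))   # "mirroring": perceive and reproduce via one code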

    Knowing how we do what we do, via the "perfect" T3 model, will not increase my chances of survival; it would satisfy personal (and mankind's) curiosity, but it is not some kind of philosopher's stone. Even if taking the mirror-neuron system into consideration (in our T3 robot) opened the door to the T3 we have been looking for, the hurting of animals would not be worthwhile (unless it proved beneficial in a matter of life or death).

    ReplyDelete
    Replies
    1. It's not easy to design a robot that can imitate the movements of another robot (and so far it can only be done in a limited way, with tricks). But telling a roboticist to do it using mirror neurons doesn't help...

      Delete
  9. I think it is misleading to focus so much on the neurons themselves, as though there were something special (almost magical) about them that would explain how we can relate the movements of other bodies to that of our own. What I keep from the original experiment is that the neural processes which are involved in recognizing a movement and those involved in producing it intersect somewhere and that somewhere is what we call mirror neurons. It also suggests that our ability to perceive a movement is related to our ability to perform it. To that extent, this experiment is in line with insights from continental philosophy according to which perception is not a passive process but one that is conditioned by the way we actively use our bodies.

    “...intransitive meaningless movements produce mirror-neuron system activation in humans, whereas they do not activate mirror neurons in monkeys.”
    Again, the focus here should not be the mirror-neurons themselves but the suggestion that our ability to recognize movements outside of their context is not shared by our close cousins. This is very interesting because this ability is crucial in our ability to understand symbols and symbolic behaviour.

    All in all, I’d say that the experiments involving mirror neurons have interesting things to say about the way perception and behaviour interact. The behaviour of the neurons may allow us to make unsuspected connections, but the neurons themselves have no explanatory power and do not in themselves provide any mechanism (except perhaps for a tiny link in a long chain).

    P.S.: From reading above, I feel like I have to say something about whether mirror neurons could help us in any way in building T3. I think the way it does so is marginal. Like I said above, all this really tells us is that perception and behaviour intersect, and maybe even, to some extent coincide (i.e. you perceive what you can do, the rest looks confused or overwhelming). My mechanism should therefore allow perception and behaviour to develop correlatively. I.e. I am still a long way from T3.

    ReplyDelete
  10. Over the past weeks, as we have discussed how cognitive scientists are trying to pass the Turing Test, the focus has been on reverse-engineering cognition. If we follow the idea of weak equivalence, then by creating a computer or a robot that can pass the Turing Test, we will have understood the fundamental aspects of cognition even though we have not emulated the architecture of the system we're trying to understand: our brain.

    Rizzolatti et al’s research contributes our understanding of cognition in a different way. Instead of trying to reverse engineer cognition, they are taking the model we already have for cognition and trying to break it apart in order to understand cognition. This approach definitely has its strong points. For one, it is more likely to satisfy supporters of strong equivalence, who see it as necessary to match the structure as well as the function in order to understand cognition. Moreover, this approach is more relevant in some ways, because these scientists are dealing with exactly the stuff we’re made of. Their research can also be viewed as two-fold, since it not only explores how the brain works from a cognitive perspective but also explores theories of brain evolution. The biggest pitfall of this type of research is that it overwhelms us with minute details that can be difficult to string together, leaving messy piles of information. The present neuroimaging techniques only provide relatively crude views of the physical processes in the brain, and therefore do not help us to understand the exact mechanisms that are occurring while we engage in different cognitive tasks. Until we can develop techniques that are more precise, we are not getting at the core question of cognition: how do we do what we do? (Sidenote: reverse engineering does not necessarily address this question either. Instead, it is trying to find one way we do what we do.) There is also the ethical issue of animal testing to consider, and whether the information we are accumulating through testing is useful or relevant enough to justify the pain inflicted on animals.

    ReplyDelete
    Replies
    1. "they are taking the model we already have for cognition"

      But what model is that?

      And it sounds more like little toy bits of T4, skipping T3 altogether (even though T3 is the most important part of T4).

      "reverse engineering does not necessarily address this question either. Instead, it is trying to find one way we do what we do"

      Well, surely one way is better than no way. (And there may be many different ways to make a toy fragment, but do you really think there are that many ways to make a T3 robot?)

      As to the agony we inflict on animals to find out these little tidbits -- you can already guess what I would say about that! https://www.psychologytoday.com/blog/animal-emotions/201501/doing-the-right-thing-interview-stevan-harnad

      Delete
  11. Rizzolatti and Craighero (2004) mention two of their hypotheses: 1) mirror-neuron (MN) activity mediates imitation; 2) MNs are at the basis of action understanding (172) [I will be focusing on the latter]. The explanation given for how #2 arises is: "the mirror system transforms visual information into knowledge" (ibid.).
    To “prove” their 2nd hypothesis they studied the MNs in monkeys through various tasks.
    The first study by Kohler et al.:
    PART1- two conditions
    a) monkey sees a person tear a piece of paper and hears the rip
    b) monkey only hears the piece of paper rip
    Results: 15% of the F5 MNs were active in both conditions.
    The MNs that were activated by the sound alone as well as by sight (i.e., active in condition b as well as in condition a) were named audio-visual MNs.
    PART2- four conditions (I will be using the same example as above- tearing paper)
    a) monkey sees and hears tear of paper
    b) monkey hears tear of paper
    c) monkey sees a person tear a paper
    d) monkey performs the tearing of a paper, based on having experienced one of conditions a, b, or c
    Main Result: the same MNs were activated for condition a and d
    The second study by Umilta et al. used four conditions:
    a) experimenter places food on the table (full vision)
    b) experimenter places food on the table behind a screen (hidden)
    c) experimenter places invisible food [pretends to grasp something] on the table (full vision)
    d) experimenter places invisible food [pretends to grasp something] on the table behind a screen (hidden)
    NOTE: MNs are recorded when the experimenter reaches in to grab the food (or the invisible food) that was left there
    Results: More than 50% of the MNs active in condition a were also active in condition b. In condition d, however, the MNs were not active.

    Through these two studies, Rizzolatti and Craighero concluded that MNs, as a mechanism, are able to produce action understanding. I am very confused about how they arrived at this conclusion.
    Firstly, understanding wasn't given a proper definition, so I assume I am meant to take it in the dictionary sense.
    Second, why are we looking at understanding solely at the localised level of F5 MNs? (Fodor would harshly dismiss this, both at the level of F5 MN localisation and of localisation in general.)
    Third, more than 50% of the same MNs were active… so what happened to the rest? And why are we automatically attributing this to understanding (even if we compare it to condition d)?
    Fourth, how can we assume that these MNs which have fired are actually related to having knowledge? (This is regarding the Rizzolatti and Craighero quote stated above.)

    ReplyDelete
    Replies
    1. continued...

      I chose to explain Rizzolatti and Craighero's reasoning about action understanding through the studies with monkeys because it becomes a lot more complex with humans; however, it still follows the same kind of principles.

      When Rizzolatti and Craighero explain the auditory modality in humans, they speculate that a kind of audio-visual MN code evolved for non-object-directed actions (e.g., the onomatopoeia mnyam-mnyam to refer to a mouth moving with food in it) (185). I find this surprising because, given the Kohler et al. PART 1 study [stated above], I would personally assume that onomatopoeias would work through both conditions a and b, rather than just the former (condition a), because we hear the word alone without an action. However, in any case, only 15% of the F5 MNs showed correlated activity for object-directed actions between conditions a and b. I am now probably making the mistake of being sucked into localisations and their meanings, so I will drop this idea.
      Furthermore, they speculate that through this evolution some kind of echo-neuron (EN) system arose, whereby humans are able to attach spoken verbal material (such as "eat") to the activation of motor MNs (which have coded that mnyam-mnyam is the same as the mouth moving with food in it) (186). With these two (speculative) evolved systems they go on to explain the rise of semantics [the study of meaning through words, phrases, symbols and overall language]. In conclusion, they admit that understanding semantics solely through these evolved systems is insufficient (187). They nevertheless keep to their hypothesis that during speech acquisition a process occurred which gave meaning to sound (that being their speculative evolutionary audio-visual MNs and ENs).
      What I find problematic in all of this is that even if we do one day discover that these MNs and (now hypothetical) ENs exist, how does this help us with understanding cognition? We are simply drawing out metaphors which may not be able to actually explain how this given process occurs or has occurred. (For example, in the detailed sense, using the hypothetical explanation of ENs: the ENs allowed for a certain kind of verb-to-action (verb "eat" to action "eat") attachment, BUT how? What exactly changed to allow for this attachment process?)

      Delete
    2. The Pooh-Pooh Theory

      Gesture mirror neurons, active when I do something and when someone else does something. Fine.

      Echo neurons, active when I say something and when I hear someone else say something. Fine.

      But the notion that words and language are short-circuited imitative ("bow-wow") or accompanying ("yo-he-ho") vocal gestures is simplistic nonsense, laid to rest two centuries ago:
      http://mentalfloss.com/article/48631/6-early-theories-about-origin-language

      (This will all come up again, more seriously, when we reach the weeks on language.)

      Delete
  12. "How do correlations between brain activity or structure and T3 activity help explain the causal mechanism generating the T3 capacity?"

    Modern day phrenology at its best. Do neuroscientists and experimental psychologists do any explanatory work, or do they just inform us that human behavior occurs in our head?

    I think (with unimaginable exertion) that augmenting neural activity and seeing what happens gets us closer to seeing what we really are. Rizzolatti et al. aren't just saying "monkey see, monkey do."

    "Actions belonging to the motor repertoire of the observer are mapped on his/her motor system. Actions that do not belong to this repertoire do not excite the motor system of the observer and appear to be recognized essentially on a visual basis without motor involvement." (Rizzolatti, 2004)

    For an action to be "seen," or thought about, the action has to be "performed" in some capacity, even if that just means going through the steps, or neural steps, required to implement a behavior. Imitation means (correct me if I'm wrong) that a creature must be able to perform the behavior in order to talk about it.

    Researching our capacity to understand another creature's intentions is the basis of usage-based grammar. Usage-based grammar suggests that we first have a capacity to engage with another and share their object of attention, and then we glue words onto those objects. We follow a caretaker's eyes and pay attention to the sounds they make. Before there are words, there are objects that both the speaker and listener engage with. Rizzolatti et al.'s experiments show us that there are also behaviors that both a speaker and listener, or actor and spectator, engage with.

    ReplyDelete
    Replies
    1. There is some interesting stuff on "motor imagery" and how it can help athletes train and recover from injury.

      Delete
  13. "In the second study (Nishitani & Hari 2002), the authors asked volunteers to observe still pictures of verbal and nonverbal (grimaces) lip forms, to imitate them immediately after having seen them, or to make similar lip forms spontaneously. During lip form observation, cortical activation progressed from the occipital cortex to the superior temporal region, the inferior parietal lobule, IFG (Broca’s area), and finally to the primary motor cortex. The activation sequence during imitation of both verbal and nonverbal lip forms was the same as during observation. Instead, when the volunteers executed the lip forms spontaneously, only Broca’s area and the motor cortex were activated.
    Taken together, these data clearly show that the basic circuit underlying imitation coincides with that which is active during action observation. […]"

    In general, it might be due to my unfamiliarity with the different regions of the brain or types of neurotransmitter, or their loci, but here is just one example (though there are many) where I am completely lost in the neuroscience jargon. How exactly does the above-mentioned study "clearly show that the basic circuit underlying imitation coincides with that which is active during action observation"? Isn't the point that mirror neurons are activated equally when the action is performed vs. observed, and that this is exactly what did not happen in the study?

    ReplyDelete
    Replies
    1. There are different correlates when you see or make lip movements to speak and when you just move your lips the same way...

      Delete
  14. 4a response. February 2nd

    Last class’ discussion of Searle’s paper left me puzzled about a particular definition of the words “thinking” and “understanding”. To me, thought and understanding are distinctly different (And I doubt that anyone in the class, or Searle for that matter, would equate the two words, so I must just be in need of clarification).
    My naïve hypothesis is that thought is a computational process that can lead to the output/outcome/result that is understanding. Thus, in Searle’s CRA, to first of all substitute in the word “thinking” for “cognition” in the first tenet of strong AI (cognition is computation) is a leap that I cannot make sense of.
    It seems plausible to me, as I mentioned, that understanding is not computation but is in fact the result of computation – it is the synthesis, the dynamic interpretation of a computation.
    This idea was reinforced upon reading Rizzolatti’s paper on mirror-neurons. The mechanism for action understanding that exists in humans is significant because it demonstrates a precise neural analog for understanding.

    I’ll try to draw a clear link:
    Input: My piano teacher playing a specific scale on the piano in front of me.
    Internal state/computation: Visual system taking in the information and activation of other sensory neuronal networks
    Output*/Result: Activation of neurons specialized in playing the piano and in movement of my hands and fingers. Precise activation of neurons representative of my understanding and internalization of the scale being played.
    *I think output is the wrong word because “out” implies an external, visible result, whereas many “results” might be internal, as is the case with the activation of mirror neurons.
    So here once more, I am tempted to argue that neuronal processes are computations that give rise to understanding.

    Finally, if such mechanisms exist in other primates, what does that say about the human mind compared to an ape’s mind? If they, too, are capable of action understanding, what (if anything) distinguishes our mentality from theirs?

    ReplyDelete
    Replies
    1. Would you not agree that perceiving that the cat is on the mat, saying that the cat is on the mat, meaning that the cat is on the mat, hearing that the cat is on the mat, understanding that the cat is on the mat, believing that the cat is on the mat, knowing that the cat is on the mat, and wishing that the cat (were) on the mat all have something in common, something we can safely call "cognition"? And that it feels like something to be in any of those cognitive states?

      You can imitate your piano teacher playing a scale. (We knew that.) How?

      Apes may not be able to imitate piano playing, but they can imitate (and emulate) a lot of other things apes (and people) do.

      Delete
    2. I agree that each of those is a cognitive state, yet each has a unique feeling to it.

      For everyone, this Friday (January 30th)'s episode of the podcast "Invisibilia" features a woman with "mirror-touch". Have a listen!
      http://www.npr.org/programs/invisibilia/

      Delete
  15. “Thanks to this mechanism, actions done by other individuals become messages that are understood by an observer without any cognitive mediation”
    Rizzolatti is essentially stating that in order to explain action understanding, we must understand mirror neurons, because they are at its base. I don't quite understand how we got to this point; mirror neurons are just another form of sensory coding. There is nothing about looking at them that can explain how humans generate semantics from actions. They are an adaptive mechanism that allows us to perform certain actions more easily in the future. With respect to cognition, they do not tell us anything about how we generate understanding, only that their activity is correlated with purposeful action that is rewarded.

    “Vygotski explained that the evolution of pointing movements was due to attempts of children to grasp far objects. It is interesting to notice that, although monkey mirror neurons do not discharge when the monkey observes an action that is not object directed, they do respond when an object is hidden, but the monkey knows that the action has a purpose.”
    If we really think that language and understanding have evolved through mirror neurons, then how come monkeys are not able to understand pointing as a means of communication? An article by Tomasello titled Why Don't Apes Point? (2006) claims that apes as a group are very poor at comprehending pointing. From Rizzolatti's article, though, it is very clear that they do have mirror neurons and that these discharge when an object is hidden but the monkey knows the action has a purpose. Pointing at objects should activate their mirror-neuron system, and yet there is no evidence that it does. I believe this is evidence that could potentially discredit the theory that mirror neurons underlie action understanding. If apes understand through mirror neurons, they should be able to understand the purpose of pointing, just as children do.

    I also believe it would greatly benefit this article to include a section on how the mirror-neuron system develops. If we were able to test how active children's mirror-neuron systems are, and whether the system is present at birth, we could understand a lot more about it as a whole. If it develops throughout childhood and slowly matures, then we would have more reason to believe it is related to understanding, because the child slowly starts picking up on more things in the environment.

    ReplyDelete
    Replies
    1. I'm sure apes could imitate an ape or person pointing. What they have trouble with is figuring out why (i.e., that they are trying to draw attention to a thing they are pointing at). (But I think they can learn it.) (Mirror neurons are a non-theory of mirror movement recognition, not a non-theory of pointing.)

      Why would you want to find out how the child's mirror neuron system develops when you can just study how its mirror action recognition capacity develops?

      Delete
  16. Although, in and of themselves, neuro-localization studies may not currently be able to causally explain cognition, I believe that with future developments they may be useful.

    My understanding of cognition, of all mental processes, is that they are emergent consequences of the underlying neurobiology. They occur (and are experienced and felt) because of the cellular workings of the brain. If researchers can elucidate any of these processes and connections, then they are potentially contributing to an understanding of cognition. Neuroscience research, like the study we read by Rizzolatti and Craighero, is helpful for this in the long run, if still somewhat simplistic in our times. As the technology develops, and precise neuronal firing can be causally related to cognitive processes, we will be well on the way to explaining and understanding cognition.

    To explicitly answer the two questions posed earlier in response to Dia's post:

    Regarding the T3 question, I would argue "no." At this point, the localization data only tell us where certain neurons are active during certain cognitive processes. The information obtained so far would not allow us to reverse-engineer the specific T3 capacities associated with mirror-neuron firing. To reiterate what I stated in the prior paragraph, I do think that once the technology is sufficiently advanced, similar research might help us reverse-engineer those capacities.

    Regarding the animal question, no, I don't think so. All we will learn from these studies is crude localization. I don't see this as a benefit that outweighs the costs and consequences of animal research.

    Additionally, an interesting note on this topic is the subject of Mirror-touch synesthesia. See the Wikipedia entry here: http://en.wikipedia.org/wiki/Mirror-touch_synesthesia

    "Mirror-touch synesthesia is a condition which causes individuals to experience the same sensation (such as touch) that another person feels. For example, if someone with this condition were to observe someone touching their cheek, they would feel the same sensation on their own cheek. Synesthesia, in general, is described as a condition in which a stimulus causes an individual to experience an additional sensation."

    It is a form of synesthesia which is probably dependent on mirror neurons.

    ReplyDelete
    Replies
    1. Would you be satisfied with an explanation of how a car works that says it's an "emergent consequence" of its structure and of what you can observe going on in there?

      How "advanced" does observation of "emergence" have to get in order to become a causal explanation?

      Delete
  17. “Mirror neurons represent the neural basis of a mechanism that creates a direct link between the sender of a message and its receiver. Thanks to this mechanism, actions done by other individuals become messages that are understood by an observer without any cognitive mediation” (pg. 183)

    Rizzolatti uses this generalized metaphor to explain the origins of language. Rizzolatti claims that mirror neurons are the mechanism enabling semantic communication between two humans. This broad statement insinuates that mirror neurons produce understanding.

    The pattern I see across all of the studies Rizzolatti includes in this article (and in most neuroimaging studies I've ever read) is the unfounded jump from methods and data to explaining broad aspects of cognition. For example, earlier in the article, Rizzolatti poses the question, "Is the understanding by humans of actions done by monkeys based on the mirror-neuron system?" This question hardly seems useful. It is an oversimplification of the problem that glosses over the intricacies of human understanding and attempts to reduce it all to the function of mirror neurons. Rizzolatti is guilty of the same thing that Jerry Fodor identifies in scientific journalism. Everyone is eager to connect the big stuff that makes us feel human to underlying biological mechanisms.

    Specifically, in the above quote, Rizzolatti claims that the mirror neurons in our brain take outside information and create understanding. Certainly, if asked, Rizzolatti himself wouldn’t agree that it is so simple, however, when framed this way, even the most complicated neuroimaging studies can be misinterpreted both by scientists and the public. Firstly, mirror neuron systems cannot be looked at as isolated mechanisms - they interact with many other complex processes that all contribute to communication. I like to think that this is widely agreed upon these days, maybe more so than in 2004 when this was published. Neuroscience has generally shifted away from modular understandings of brain function, which helps prevent the oversimplification that Rizzolatti tends towards. Secondly, the idea of “understanding” needs to be separated entirely from our conversation about mirror neurons. Neuroscience gets in trouble when it tries to attribute grand characteristics of “consciousness” and “understanding” to biological mechanisms. Until we can reverse engineer a brain and evade the other-minds-problem to check out exactly what the brain is capable of, we will never know whether these feelings can be attributed to specific mechanisms like mirror neurons.

    ReplyDelete
    Replies
    1. Until further notice, mirror neuron activity is correlated with either my making a movement or my seeing you make the same movement. How I can recognize or understand that it's the same movement -- or, for that matter, how the mirror neurons can detect that it's the same movement -- is in no way explained.

      Now how anyone can pick up a piece of non-explanation like that, and imagine that it explains action understanding, empathy or language understanding is a bit beyond me...

      But be careful: it's 10 years later, and I'm not sure people are that much wiser...

      Delete
  18. “This new capacity should have led to (and derived from) the acquisition of an auditory mirror system, developed on top of the original audio-visual one, but which progressively became independent of it” (Rizzolatti & Craighero, 2004).
    Rizzolatti and Craighero are describing the development of speech acquisition in humans, which they suggest occurred thanks to improved imitation capacities. This most likely occurred when humans became able to generate the sounds that originally accompanied a specific action without doing the action. This makes sense, as some apes can use gestures to signal a desire to perform an action, such as obtaining food. This type of communication is also preserved in humans, as when babies point or reach for something they want or need. Before they can produce verbal language, young children can already signal that they need something using gestural language. Rizzolatti and Craighero also propose that there may be two developmental roots to the semantics of human language: one closely related to the action mirror-neuron system, and one based on the echo mirror-neuron system. They cite Pulvermueller's study, in which the researchers compared EEG activations while subjects listened to face- and leg-related action verbs and found that words describing leg actions evoked strong in-going currents at dorsal brain areas, close to the cortical leg area, whereas those of the "talking" type elicited stronger currents at inferior brain areas, next to the motor representation of the face and mouth. I do not think they adequately support their two-roots proposal in the paper, but it does make intuitive sense. The newer root may be what separates us humans from our closest primate relatives, and may have allowed the complexity of human language to develop.

    ReplyDelete
  19. The nervous system is far too complex to reverse-engineer entirely. Yet if mirror neurons are as important to the evolution of the human species as this text seems to indicate ("we are fully convinced (for evidence see next section) that the mirror neuron mechanism is a mechanism of great evolutionary importance through which primates understand actions done by their conspecifics"), then to reverse-engineer the brain as closely as possible it would be crucial to recreate mirror neurons in a computerized T3. However, reading R & C's paper in no way shows us "how we do what we do" or even whether mirror neurons are specifically required to do these things. Moreover, if we wanted to reverse-engineer the brain, this paper would in no way show us how to integrate mirror neurons. I agree with Reginald Oey: this paper is indeed a series of experiments that simply point to a brain region and report what the data seem to tell the authors this region does. Just one example: "in the case of the inferior parietal region, it is very plausible that the mirror activation corresponds to areas PF and PFG, where neurons with mirror properties are found in the monkeys." But it teaches me nothing about cognition itself. The goal of reverse-engineering the brain is to understand how and why it works, not only what and when; this paper therefore does not help us with our main objective.

    ReplyDelete
  20. My thoughts on this article shall be on the optimistic side. The paper goes through experiment after experiment, showing us that mirror neurons do indeed exist. When we observe another person doing actions that we recognize, neurons are also active in the motor region of our brain. I believe these relatively recent findings are actually quite important for understanding human cognition. Having this knowledge about the human brain will allow us to go that much further in creating a T3 model of the TT. Having an internal system in place that attempts to mimic observed behaviour covertly, allowing the model/robot to learn this way, seems like a necessary component if we are ever to get to a T3 model. I don't believe this knowledge was around two decades ago.

    "how the mirror neurons can detect that it's the same movement -- is in no way explained." (Harnad, a couple comments above)

    It is true that the above is not explained, or known at all. But it is a question that could feasibly be answered in years to come. Last year's Nobel Prize in Medicine, http://www.nobelprize.org/nobel_prizes/medicine/laureates/2014/press.html, was awarded for a great finding, and perhaps one day a Nobel Prize could be awarded for understanding "how neurons can detect similar movements". And once we have that, moving on to questions of empathy, language understanding, etc. may not seem so far-fetched.

    ReplyDelete
  21. I find the theory according to which language evolved from gestures thanks to the mirror-neuron system rather convincing. The fact that the human mirror-neuron system responds to pantomimes provides a good basis for the human ability to understand reference. If mirror-neuron firing simultaneously with an observed action means understanding it, then the fact that human beings understand intransitive actions (pure predicates) paves the way to understanding what a word refers to without being able to see the referent. Human beings do not need a concrete anchoring in reality to understand what a gesture means; it might work the same way for words.
    But if an individual is able to guess the meaning of an action, then he/she is also able to understand the meaning the person performing the action intended to put into his/her gesture. Therefore, mirror neurons could also help explain the theory of mind and why we attribute mental states to other individuals. An example of this would be that a monkey, whose mirror neurons only respond to object-related actions, when observing an individual pointing at an object, will look at the pointing finger, while a human being will look at what is being pointed to. The human being knows the other individual's intention to show something; the monkey does not.

    ReplyDelete
    Replies
    1. Understanding Understanding

      Why should we conclude that "mirror-neuron firing simultaneously with an observed action means understanding it"? And how much of what we talk about (and can talk about) is just referring to gestures we can make? What's the gesture for "The cat is on the mat"? And how did the words "The cat is on the mat" come to refer to that gesture? Doesn't "The cat is on the mat" refer to the cat being on the mat, rather than to a gesture? Isn't saying "The cat is on the mat" just an oral gesture?

      Language is not "mirror-neuron firing simultaneously with an observed action."

      Delete
  22. Given that I have a particular interest in language development (and have taken several linguistics classes), the most surprising aspect of this paper was the following: "the mirror-neuron system represents the neurophysiological mechanism from which language evolved." There is concrete evidence to support this claim. First, speech evolved mostly from gestural communication, which is known to activate the mirror-neuron system. Second, hand/arm and speech gestures are linked and must, at least in part, share a common neural substrate. Lastly, there is evidence that humans possess an "echo-neuron system". I can't help but feel skeptical, however, about the claim that language evolved out of the mirror-neuron system. This system reminds me of the behaviorist perspective in that it considers only the input and the output. That is, it "sees" an action (input) and understands how it would "produce" that same action (output). But verbal language is much more complex than "gestural communication". As I understand it, gestural communication is communication through gestures at a lower-order level, such as those produced by monkeys. This does not include American Sign Language, which is a fully developed language. Perhaps I'm misunderstanding something, but I don't see how such a complex behaviour can arise from such a simplistic system.

    ReplyDelete
    Replies
    1. Pantomime to Propositions

      The part that the simplistic mirror-neuron view of language skips over is how we got from gestural imitation and communication (pantomime) to gestural language (propositions). More about this in later weeks.

      Delete
  23. “The activation of IFG was particularly strong during listening of mouth actions, but was also present during listening of actions done with other effectors. It is likely, therefore, that, in addition to mouth actions, in the inferior frontal gyrus there is also a more general representation of action verbs. Regardless of this last interpretation problem, these data provide clear evidence that listening to sentences describing actions engages visuo-motor circuits subserving action representation.”

    I want to make sure I'm understanding the concept here: motor areas are activated most when listening to descriptions of actions made with the mouth (speech) versus actions involving the limbs. The authors then use this experiment (among several others) as evidence that mirror neurons play a crucial role in how speech came about. This is part of the idea of an "echo-neuron system", which is "a system that motorically "resonates" when the individual listens to verbal material". Is the major difference between a motor neuron and an echo neuron that the modality is different – one involves speech sounds, while the other involves physical actions pertaining more to the hands and arms? Wouldn't the speech aspect still be part of the motor neurons, since we need to physically manipulate our vocalizers to produce the sounds?

    ReplyDelete
    Replies
    1. We can imitate hand or body movements from seeing them made.

      We can also imitate mouth movements from seeing them made.

      It gets more powerful when we can imitate mouth movements from hearing them made.

      (We can also imitate (some) hand movements from hearing them made -- for example, hitting a table top.)

      No doubt that imitation was needed on the road to language. But none of this imitation is language.

      Delete
  24. In their review paper, Rizzolatti and Craighero describe a class of visuomotor neurons that discharge both when a primate performs an action and when it sees an identical or related action being performed. They go on to summarize the results of several studies in an effort to support their view that these mirror neurons not only facilitate action understanding and imitation learning in humans but also play a role in language perception. I have a bit of an issue with the claim that mirror neurons facilitate action understanding. Action understanding has multiple levels, and being able to recognize an action is on a completely different level than comprehending the intentions behind it. In this paper it seems like by 'action understanding' Rizzolatti and Craighero actually just mean 'action recognition,' which is sort of misleading.

    “Each time an individual sees an action done by another individual, neurons that represent that action are activated in the observer’s premotor cortex. This automatically induced motor representation of the observed action corresponds to that which is spontaneously generated during active action and whose outcome is known to the acting individual.” (p.172)

    Is this not just a fancy way of saying that we recognize another person's action because seeing/hearing it sets off the same neurons that are activated when we do it ourselves? In the experiment by Umilta et al. that the authors use as 'proof' that mirror neurons facilitate action understanding, the final part of the action was hidden behind a screen. Apparently the fact that more than half of the mirror neurons still discharged without the full visual representation of the action (which was just grasping something behind a screen) is sufficient evidence that mirror neurons 'understand' actions. I feel like there's a lot more to understanding than knowing that the 'goal' of a hand gesture is to grasp an object behind a screen. In my opinion grasping is one of many goals associated with a single action. Was this person grasping the block so he or she could throw it across the room out of anger? Was this person picking up the block so he or she could build a tower with a younger sibling? There are a lot of reasons we do things, and I don't think mirror neurons alone are sufficient to encode our intentions.

    Rizzolatti and Craighero go on to describe an echo-neuron system that results in the activation of specific speech-related motor areas when an individual listens to verbal stimuli. They say that this echo system might mediate speech perception. Speech perception is the process by which the sounds of language are heard, interpreted and understood. Claiming that mirror neurons have this capability seems like a major stretch to me. If mirror neurons don't understand actions on a level significantly higher than recognition, why would they be able not only to recognize all of the phonemes that make up verbal stimuli but also to associate them with the specific meaning they have in the specific context they were heard in? I think the proposal that the echo-neuron system only mediates the imitation of verbal sounds is a lot more realistic. It's reasonable that an echo-neuron system could facilitate the recognition of phonemes, but I'd want to see more evidence supporting its capacity to do anything more complex than that.

    Basically, although the existence of this class of neurons is fascinating, I think there’s A LOT more to cognition than the mirror neuron system and I would disagree with anyone who stated that mirror neurons are crucial to cognition.

    ReplyDelete
    Replies
    1. Miming and Meaning

      "Understanding" actions is certainly not the same as understanding the meaning of words or sentences.

      If someone lunges at me, I understand I'm being attacked and need to duck.

      And I can imitate lunging.

      But that's a long way from understanding the meaning of the word "lunging" (whether in English or in sign language) -- or the meaning of the three sentences (propositions) above.

      BC: "I don’t think mirror neurons alone are sufficient to encode our intentions." I agree. If they "encode" anything at all, it's what we can mime, not what we can mean (verbally). Ditto for echo neurons (vocal imitation).

      But the existence of echo neurons near gesture mirror neurons may be evidence for the gestural origins of language.

      Delete
    2. This helped me clarify that there is in fact a difference between understanding the meaning of an action and understanding the meaning of, say, the words you'd use to describe that action.

      I've been thinking about the implications of what you said about the existence of echo neurons near gesture mirror neurons as evidence for the gestural origins of language (because the transition from gestures to vocalizations is generally troubling to me). My understanding of this is still really shaky, but are you suggesting that people had echo neurons and gesture neurons next to each other back in the day when language was still evolving, and that these might have facilitated a faster transition from gestures to vocalizations? (i.e., Bobby makes the gestures he would normally make to communicate a proposition but happens to make a certain vocalization at the same time. If Bobby does this repeatedly, Freddie's gesture mirror neurons and echo mirror neurons will become accustomed to being co-activated, and the vocalization will be one step closer to being associated with the meaning in the absence of the gestures (thus freeing the hands).) Or am I completely off base?

      Delete
  25. "Each time an individual sees an action done by another individual, neurons that represent that action are activated in the observer's premotor cortex. This automatically induced, motor representation of the observed action corresponds to that which is spontaneously generated during active action and whose outcome is known to the acting individual. Thus, the mirror system transforms visual information into knowledge."

    When I read this, some alarm bells went off in my head. It seems to me that there are some leaps in this description of mirror learning. For example, suppose I've never kicked a ball before and I see someone kick a ball. According to MN theory, mirror neurons corresponding to that action fire when I see it performed. Then, when I kick a ball for the first time, I learn how to do it faster since there's already some basis for my actions in those neurons that fired when I saw the ball kicked. But how could those 'kick' neurons fire unless I already understood something about what happens when a ball is kicked, i.e., that such and such muscles are used in a certain order? Wouldn't I need to understand that visually in order for my mirror neurons to fire in the first place? What information do they provide that is any different from the visual-system neurons that process kicking? I know that they fire again when I kick, and the visual neurons don't, but how is 'visual+mirror -> motor+mirror' a stronger learning mechanism than 'visual -> motor'?
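
    Here is a minimal, purely hypothetical sketch (my own toy numbers and function, not anything from the paper) of the only kind of advantage I can see such pre-activation providing: if observing kicks somehow left a shared "action weight" partly pre-shaped toward the right motor command, then later motor learning would simply start with a head start.

      # Hypothetical toy sketch: assume observation pre-shapes a shared "action
      # weight" partway toward the target motor command (an assumption, not
      # something the paper demonstrates).
      def train(weight, target, lr=0.1, steps=50):
          """Nudge a single 'motor weight' toward a target command; return the error history."""
          errors = []
          for _ in range(steps):
              error = target - weight
              weight += lr * error
              errors.append(abs(error))
          return errors

      target_command = 1.0                        # stand-in for the motor pattern for "kick"
      from_scratch = train(0.0, target_command)   # 'visual -> motor': no prior
      pre_shaped = train(0.6, target_command)     # 'visual+mirror -> motor+mirror': assumed head start

      print(f"error after 10 steps, from scratch: {from_scratch[9]:.3f}")
      print(f"error after 10 steps, pre-shaped:   {pre_shaped[9]:.3f}")

    But the sketch only shows that a head start helps; it says nothing about how mere observation could produce that head start, which is exactly the part that seems unexplained.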




    ReplyDelete
    Replies
    1. Not to defend the mirror-neuron non-explanation, but the mapping of the motor homunculus onto the sensory representation of animate motion could be largely innate, rather than based on learning.

      Delete
  26. Clearly there is a lot of frustration over how little the discovery of mirror neurons does to explain the actual mechanisms of language, imitation, etc. that they are part of. It's a well-warranted one, too, given how often we see headlines on popular websites like "I Fucking Love Science" proclaiming that scientists have unraveled the mystery of a cognitive process simply by correlating a BOLD signal location with a subject's response. However, I believe the blame here lies with the interpretation of these experiments, not the experiments themselves. As far as causal mechanisms go, localization of function is a key first step. After all, the only working hypothesis we have is that all cognitive abilities result from some organization of neurons/glia. Before we can propose mechanisms we need to identify the properties of these organizations, and for that we need to know the sum of the properties of the individual neurons that make them up.

    In the case of mirror neurons, they show certain response properties that help support some hypotheses and contest others. In other words, they are data from which we can form our ideas about causal mechanisms, not the causal mechanisms themselves. The authors form a rather nice theory from their data at that, proposing that the cross-modal and imitative responses of mirror neurons could have been part of a link between motor and verbal semantic mapping, and contributed to the formation of language:

    “From this follows a clear neurophysiological prediction: Hand/arm and speech gestures must be strictly linked and must, at least in part, share a common neural substrate”


    Whether or not this theory holds any ground has yet to be tested, but the authors' approach seems reminiscent of Hebb's when tackling memory, or of Marr's when tackling vision. They all attempted to bridge the gap between neuron and function, and if the latter two were able to provide great insight into the field through their studies, I don't see why Rizzolatti's attempt to do the same should be met with such criticism.

    ReplyDelete
    Replies
    1. You're right that something can be and is learned from such findings (especially about links between gesture and vocalization as a possible clue to the origins of language). And maybe neural data will some day provide a clue to causal mechanisms. But to date it's still true that they have not (except for simple systems like reflexes). And the temptation to find patterns in the clouds instead of pushing on toward seeking T3 causal mechanisms persists...

      Delete
  27. In a way, this reading nicely sums up my disillusionment with neuroscience, which eventually led me to switch majors. The authors make a great number of claims about action understanding and imitation, and they emphasize how sound those claims are, but they give no explanation of how action understanding or imitation actually involves mirror neurons.

    When it comes to these two topics, the authors seem content to deal with correlations instead of mechanisms, and I simply don't find much value in this. The brain is far too complex for us to assume that just because neurophysiological events are correlated with behaviors, they play a role in generating those behaviors.

    "In this review we present data on a neurophysiological mechanism-the mirror-neuron mechanism-that appears to play a fundamental role in both action understanding and imitation."

    I don't believe that this should be considered a mechanism for imitation or action understanding unless there's some harder evidence than what is presented. As far as I can tell, these are all just correlates. I also have concerns about terms like "action understanding", which I think unnecessarily lump cognition in with physical responses. I'm not certain why they are not using terms like "action tracking", since no causal mechanism has been shown to induce understanding in humans.

    ReplyDelete