Sunday 19 January 2014

PSYC 538/740 Seminar Syllabus

Psychology PSYC 538/740, Winter 2015:
Categorization, Communication and Consciousness

Time: Wed 8:30-11:30
Place: BURN 1B23
Instructor: Stevan Harnad
Office: Stewart W7/3m
Skype: sharnad 
Google+hangout: amsciforum@gmail.com
E-mail: harnad@uqam.ca (don’t use my mcgill email address because I don’t check it regularly)

Open to students interested in Cognitive Science from the Departments of Linguistics, Philosophy, Psychology, Computer Science, or Neuroscience.

Overview: What is cognition? Cognition is whatever is going on inside our heads when we think, enabling us to do all the things we can do -- to learn and to act adaptively, so we can survive and reproduce. Cognitive science tries to explain the internal mechanism that generates that know-how. The brain is the natural place to look for the explanation, but that’s not enough. Unlike the mechanisms that generate the capacities of other bodily organs such as the heart or the lungs, the brain’s capacities are too vast, complex and opaque to be read off by direct observation or manipulation. The brain can do everything that we can do. Computational modeling and robotics try, alongside behavioral neuroscience, to design and test mechanisms that can also do everything we can do, thereby explaining how the brain does them. The challenge of the celebrated "Turing test" is to design a model that can do everything we can do, to the point where we can no longer tell apart the model’s performance from our own. The model not only has to generate our sensorimotor capacities -- the ability to do everything with the objects and organisms in the world that we are able to do with them -- but it must also be able to produce and understand language, just as we do. What is language, and what was its adaptive value such that we are the only species on the planet that has it? And what is consciousness? We are not the only conscious organisms, but what is consciousness for? What is its function, its adaptive value?

Objectives: This course will outline the main challenges that cognitive science, still very incomplete, faces today, focusing on the capacity to learn sensorimotor categories, to name and describe them verbally, and to transmit them to others, concluding with cognition distributed on the Web.

0. Introduction

What is cognition? How and why did introspection fail? How and why did behaviourism fail? What is cognitive science trying to explain, and how?

1. The computational theory of cognition (Pylyshyn, Turing) 
What is (and is not) computation? What is the power and scope of computation? What does it mean to say (or deny) that “cognition is computation”?


Readings:

1a. Pylyshyn, Z (1989) Computation in cognitive science. In MI Posner (Ed.) Foundations of Cognitive Science. MIT Press 

1b. Harnad, S. (2009) Cohabitation: Computation at 70, Cognition at 20, in Dedrick, D., Eds. Cognition, Computation, and Pylyshyn. MIT Press  http://eprints.ecs.soton.ac.uk/12092/

2. The Turing test
What’s wrong and right about Turing’s proposal for explaining cognition?


Readings: 

2a. Turing, A.M. (1950) Computing Machinery and Intelligence. Mind 49: 433-460 http://cogprints.org/499/  

2b. Harnad, S. (2008) The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence. In: Epstein, Robert & Peters, Grace (Eds.) Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Springer  http://eprints.ecs.soton.ac.uk/12954/

3. Searle's Chinese room argument (against the computational theory of cognition)
What’s wrong and right about Searle’s Chinese room argument that cognition is not computation?


Readings:

3a. Searle, John. R. (1980) Minds, brains, and programs. Behavioral and Brain Sciences 3 (3): 417-457  

3b. Harnad, S. (2001) What's Wrong and Right About Searle's Chinese Room Argument? In: M. Bishop & J. Preston (eds.) Essays on Searle's Chinese Room Argument. Oxford University Press. http://cogprints.org/1622/

4. What about the brain?
Why is there controversy over whether neuroscience is relevant to explaining cognition?


Readings:  

4a. Rizzolatti G & Craighero L (2004) The Mirror-Neuron System. Annual Review of Neuroscience 27: 169-92  

4b. Fodor, J. (1999) "Why, why, does everyone go on so about the brain?" London Review of Books 21(19) 68-69.  http://www.lrb.co.uk/v21/n19/jerry-fodor/diary

5. The symbol grounding problem
What is the “symbol grounding problem,” and how can it be solved? (The meaning of words must be grounded in sensorimotor categories.)


Readings:

5. Harnad, S. (2003) The Symbol Grounding Problem. Encyclopedia of Cognitive Science. Nature Publishing Group. Macmillan.   http://eprints.ecs.soton.ac.uk/7720 

[Google also for other online sources for “The Symbol Grounding Problem”]

6. Categorization and cognition
That categorization is cognition makes sense, but “cognition is categorization”? (On the power and generality of categorization.)


Readings:

6a. Harnad, S. (2005) To Cognize is to Categorize: Cognition is Categorization, in Lefebvre, C. and Cohen, H., Eds. Handbook of Categorization. Elsevier.   http://eprints.ecs.soton.ac.uk/11725/

6b. Harnad, S. (2003) Categorical Perception. Encyclopedia of Cognitive Science. Nature Publishing Group. Macmillan. http://eprints.ecs.soton.ac.uk/7719/

7. Evolution and cognition
Why is it that some evolutionary explanations sound plausible and make sense, whereas others seem far-fetched or even absurd?


Readings: 

7a. Confer, Jaime C., Judith A. Easton, Diana S. Fleischman, Cari D. Goetz, David M. G. Lewis, Carin Perilloux, and David M. Buss (2010) Evolutionary Psychology Controversies, Questions, Prospects, and Limitations. American Psychologist 65 (2): 110–126 

7b. Bolhuis JJ, Brown GR, Richardson RC, Laland KN (2011) Darwin in Mind: New Opportunities for Evolutionary Psychology. PLoS Biol 9(7)

8. The evolution of language
What’s wrong and right about Steve Pinker’s views on language evolution? And what was so special about language that the capacity to acquire it became evolutionarily encoded in the brains of our ancestors – and of no other surviving species – about 300,000 years ago? (It gave our species a unique new way to acquire categories, through symbolic instruction rather than just direct sensorimotor induction.)


Readings: 

8a. Pinker, S. & Bloom, P. (1990). Natural language and natural selection. Behavioral and Brain Sciences 13(4): 707-784.  http://pinker.wjh.harvard.edu/articles/papers/Pinker%20Bloom%201990.pdf 

8b. Blondin Massé et al  (2012) Symbol Grounding and the Origin of Language: From Show to Tell. In: Origins of Language. Cognitive Sciences Institute. Université du Québec à Montréal, June 2010. http://eprints.ecs.soton.ac.uk/21438/

9. Chomsky and the poverty of the stimulus
A close look at one of the most controversial issues at the heart of cognitive science: Chomsky’s view that Universal Grammar has to be inborn because it cannot be learned from the data available to the language-learning child.


Readings:

9a. Pinker, S. Language Acquisition. In L. R. Gleitman, M. Liberman, and D. N. Osherson (Eds.), An Invitation to Cognitive Science, 2nd Ed. Volume 1: Language. Cambridge, MA: MIT Press. http://users.ecs.soton.ac.uk/harnad/Papers/Py104/pinker.langacq.html 

9b. Pullum, G.K. & Scholz BC (2002) Empirical assessment of stimulus poverty arguments. Linguistic Review 19: 9-50 http://www.ucd.ie/artspgs/research/pullum.pdf

10. The mind/body problem and the explanatory gap
Once we can pass the Turing test -- because we can generate and explain everything that cognizers are able to do -- will we have explained all there is to explain about the mind? Or will something still be left out?


Readings: 

10a. Dennett, D. (unpublished) The fantasy of first-person science. http://ase.tufts.edu/cogstud/papers/chalmersdeb3dft.htm 

10b. Harnad, S. (unpublished) On Dennett on Consciousness: The Mind/Body Problem is the Feeling/Function Problem. http://cogprints.org/2130 

10c. Harnad, S. (2002)  Doing, Feeling, Meaning and Explaining   

10d. Harnad, S. & Scherzer, P. (2008) Spielberg's AI: Another Cuddly No-Brainer. Artificial Intelligence in Medicine 44(2): 83-89 http://eprints.ecs.soton.ac.uk/14430/ 

10e. Harnad, S. (2012) Alan Turing and the “hard” and “easy” problem of cognition: doing and feeling. [in special issue: Turing Year 2012] Turing100: Essays in Honour of Centenary Turing Year 2012, Summer Issue

11. Distributed cognition and the World Wide Web
Can a mind be wider than a head? Collective cognition in the online era: the Cognitive Commons.


Readings: 

Clark, A. & Chalmers, D. (1998) The Extended Mind. Analysis 58(1) http://www.cogs.indiana.edu/andy/TheExtendedMind.pdf 

Dror, I. & Harnad, S. (2009) Offloading Cognition onto Cognitive Technology. In Dror & Harnad (Eds): Cognition Distributed: How Cognitive Technology Extends Our Minds. Amsterdam: John Benjamins  http://eprints.ecs.soton.ac.uk/16602/

X. For Psyc 740 grad students only:

       Readings:
Chalmers, D.J. (2011) "A Computational Foundation for the Study of Cognition".  Journal of Cognitive Science 12: 323-57 

Harnad, Stevan (2012) The Causal Topography of Cognition. Journal of Cognitive Science. 13(2): 181-196 [commentary on: Chalmers, David: “A Computational Foundation for the Study of Cognition”] 

Chalmers, D.J. (2012) "The Varieties of Computation: A Reply to Commentators". Journal of Cognitive Science, 13:211-48.

12. Overview

Drawing it all together.

Evaluation:


1. Blog skywriting -- quote/commentary on all 24 readings: 30 marks


2. Class discussion -- (do more skywritings if you are shy to speak in class): 20 marks

3. Midterm -- 6 online questions (about 250 words for each answer): 10 marks

4. Final -- 8 online integrative questions (about 500 words for each answer): 40 marks


Use your gmail account to register to comment, and either use your real name or send me an email to tell me what pseudonym you are using (so I can give you credit).

Every week, everyone does at least one blog comment on each of that (coming) week’s two papers. In your blog comments, quote the passage on which you are commenting (italics, indent). Comments can also be on the comments of others. 

Make sure you first edit your comment in another text processor, because if you do it directly in the blogger window you may lose it and have to write it all over again. Also, check how many comments have been made, and if they are close to 50, go to the overflow comments, because blogger only allows 50 in each batch. (Each paper has room for a first 50 and then an overflow 50.) 


Also do your comments early in the week or I may not be able to get to them in time to reply. (I won't be replying to all comments, just the ones where I think I have something interesting to add. You should comment on one another's comments too -- that counts -- but make sure you're basing it on having read the original skyreading too.)

For samples, see summer school: http://turingc.blogspot.ca



Saturday 11 January 2014

Opening Overview Video of Categorization, Communication and Consciousness


(Opening Overview Comment Overflow) (50+)

(Opening Overview Comment Overflow) (50+)

The blogger software only accepts 50 comments, so when skywriting reaches 50, please switch to the overflow comments link, otherwise your comment will not appear. (Always check if your comment appears after you have posted it.)

1a. Pylyshyn, Z (1989) Computation in cognitive science.

Pylyshyn, Z (1989) Computation in cognitive science. In MI Posner (Ed.) Foundations of Cognitive Science. MIT Press 
Overview: Nobody doubts that computers have had a profound influence on the study of human cognition. The very existence of a discipline called Cognitive Science is a tribute to this influence. One of the principal characteristics that distinguishes Cognitive Science from more traditional studies of cognition within Psychology is the extent to which it has been influenced by both the ideas and the techniques of computing. It may come as a surprise to the outsider, then, to discover that there is no unanimity within the discipline on either (a) the nature (and in some cases the desirability) of the influence and (b) what computing is -- or at least what its essential character is, as this pertains to Cognitive Science. In this essay I will attempt to comment on both these questions. 


Alternative sources for points on which you find Pylyshyn heavy going. (Remember that you do not need to master the technical details for this seminar, you just have to master the ideas, which are clear and simple.)


Milkowski, M. (2013). Computational Theory of Mind. Internet Encyclopedia of Philosophy.


Pylyshyn, Z. W. (1980). Computation and cognition: Issues in the foundations of cognitive science. Behavioral and Brain Sciences, 3(01), 111-132.

Pylyshyn, Z. W. (1984). Computation and cognition. Cambridge, MA: MIT press.

(1a. Comment Overflow) (50+)

(1a. Comment Overflow) (50+)

1b. Harnad, S. (2009) Cohabitation: Computation at 70, Cognition at 20

Harnad, S. (2009) Cohabitation: Computation at 70, Cognition at 20, in Dedrick, D., Eds. Cognition, Computation, and Pylyshyn. MIT Press 



Zenon Pylyshyn cast cognition's lot with computation, stretching the Church/Turing Thesis to its limit: We had no idea how the mind did anything, whereas we knew computation could do just about everything. Doing it with images would be like doing it with mirrors, and little men in mirrors. So why not do it all with symbols and rules instead? Everything worthy of the name "cognition," anyway; not what was too thick for cognition to penetrate. It might even solve the mind/body problem if the soul, like software, were independent of its physical incarnation. It looked like we had the architecture of cognition virtually licked. Even neural nets could be either simulated or subsumed. But then came Searle, with his sino-spoiler thought experiment, showing that cognition cannot be all computation (though not, as Searle thought, that it cannot be computation at all). So if cognition has to be hybrid sensorimotor/symbolic, it turns out we've all just been haggling over the price, instead of delivering the goods, as Turing had originally proposed 5 decades earlier.

(1b. Comment Overflow) (50+)

(1b. Comment Overflow) (50+)

2a. Turing, A.M. (1950) Computing Machinery and Intelligence

Turing, A.M. (1950) Computing Machinery and Intelligence. Mind 49: 433-460 

I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words. The new form of the problem can be described in terms of a game which we call the "imitation game." It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B. We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"





1. Video about Turing's work: Alan Turing: Codebreaker and AI Pioneer 
2. Two-part video about his life: The Strange Life of Alan Turing: Part I and Part 2
3. Le modèle Turing (video, in French)

(2a. Comment Overflow) (50+)

(2a. Comment Overflow) (50+)

2b. Harnad, S. (2008) The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence

Harnad, S. (2008) The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence. In: Epstein, Robert & Peters, Grace (Eds.) Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Springer 


This is Turing's classical paper with every passage quote/commented to highlight what Turing said, might have meant, or should have meant. The paper was equivocal about whether the full robotic test was intended, or only the email/penpal test, whether all candidates are eligible, or only computers, and whether the criterion for passing is really total, lifelong equivalence and indistinguishability or merely fooling enough people enough of the time. Once these uncertainties are resolved, Turing's Test remains cognitive science's rightful (and sole) empirical criterion today.

(2b. Comment Overflow) (50+)

(2b. Comment Overflow) (50+)

3a. Searle, John. R. (1980) Minds, brains, and programs

Searle, John. R. (1980) Minds, brains, and programs. Behavioral and Brain Sciences 3 (3): 417-457 

This article can be viewed as an attempt to explore the consequences of two propositions. (1) Intentionality in human beings (and animals) is a product of causal features of the brain. I assume this is an empirical fact about the actual causal relations between mental processes and brains. It says simply that certain brain processes are sufficient for intentionality. (2) Instantiating a computer program is never by itself a sufficient condition of intentionality. The main argument of this paper is directed at establishing this claim. The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality. These two propositions have the following consequences: (3) The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program. This is a strict logical consequence of 1 and 2. (4) Any mechanism capable of producing intentionality must have causal powers equal to those of the brain. This is meant to be a trivial consequence of 1. (5) Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain. This follows from 2 and 4. 



(3a. Comment Overflow) (50+)

(3a. Comment Overflow) (50+)

3b. Harnad, S. (2001) What's Wrong and Right About Searle's Chinese RoomArgument?

Harnad, S. (2001) What's Wrong and Right About Searle's Chinese RoomArgument? In: M. Bishop & J. Preston (eds.) Essays on Searle's Chinese Room Argument. Oxford University Press.



Searle's Chinese Room Argument showed a fatal flaw in computationalism (the idea that mental states are just computational states) and helped usher in the era of situated robotics and symbol grounding (although Searle himself thought neuroscience was the only correct way to understand the mind).

(3b. Comment Overflow) (50+)

(3b. Comment Overflow) (50+)

4a. Rizzolatti G & Craighero L (2004) The Mirror-Neuron System

Rizzolatti G & Craighero L (2004) The Mirror-Neuron System. Annual Review of Neuroscience 27: 169-92 

A category of stimuli of great importance for primates, humans in particular, is that formed by actions done by other individuals. If we want to survive, we must understand the actions of others. Furthermore, without action understanding, social organization is impossible. In the case of humans, there is another faculty that depends on the observation of others’ actions: imitation learning. Unlike most species, we are able to learn by imitation, and this faculty is at the basis of human culture. In this review we present data on a neurophysiological mechanism—the mirror-neuron mechanism—that appears to play a fundamental role in both action understanding and imitation. We describe first the functional properties of mirror neurons in monkeys. We review next the characteristics of the mirror-neuron system in humans. We stress, in particular, those properties specific to the human mirror-neuron system that might explain the human capacity to learn by imitation. We conclude by discussing the relationship between the mirror-neuron system and language. 




(4a. Comment Overflow) (50+)

(4a. Comment Overflow) (50+)

4b. Fodor, J. (1999) "Why, why, does everyone go on so about thebrain?"

Fodor, J. (1999) "Why, why, does everyone go on so about the brain?" London Review of Books 21(19) 68-69. 

I once gave a (perfectly awful) cognitive science lecture at a major centre for brain imaging research. The main project there, as best I could tell, was to provide subjects with some or other experimental tasks to do and take pictures of their brains while they did them. The lecture was followed by the usual mildly boozy dinner, over which professional inhibitions relaxed a bit. I kept asking, as politely as I could manage, how the neuroscientists decided which experimental tasks it would be interesting to make brain maps for. I kept getting the impression that they didn’t much care. Their idea was apparently that experimental data are, ipso facto, a good thing; and that experimental data about when and where the brain lights up are, ipso facto, a better thing than most. I guess I must have been unsubtle in pressing my question because, at a pause in the conversation, one of my hosts rounded on me. ‘You think we’re wasting our time, don’t you?’ he asked. I admit, I didn’t know quite what to say. I’ve been wondering about it ever since.



See also:

Grill-Spector, K., & Weiner, K. S. (2014). The functional architecture of the ventral temporal cortex and its role in categorization. Nature Reviews Neuroscience, 15(8), 536-548.

ABSTRACT: Visual categorization is thought to occur in the human ventral temporal cortex (VTC), but how this categorization is achieved is still largely unknown. In this Review, we consider the computations and representations that are necessary for categorization and examine how the microanatomical and macroanatomical layout of the VTC might optimize them to achieve rapid and flexible visual categorization. We propose that efficient categorization is achieved by organizing representations in a nested spatial hierarchy in the VTC. This spatial hierarchy serves as a neural infrastructure for the representational hierarchy of visual information in the VTC and thereby enables flexible access to category information at several levels of abstraction.


(4b. Comment Overflow) (50+)

(4b. Comment Overflow) (50+)

5. Harnad, S. (2003) The Symbol Grounding Problem

Harnad, S. (2003) The Symbol Grounding Problem. Encyclopedia of Cognitive Science. Nature Publishing Group. Macmillan.   

or: Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1), 335-346.

or: https://en.wikipedia.org/wiki/Symbol_grounding

The Symbol Grounding Problem is related to the problem of how words get their meanings, and of what meanings are. The problem of meaning is in turn related to the problem of consciousness, or how it is that mental states are meaningful.


If you can't think of anything to skywrite, this might give you some ideas:
Taddeo, M., & Floridi, L. (2005). Solving the symbol grounding problem: a critical review of fifteen years of research. Journal of Experimental & Theoretical Artificial Intelligence, 17(4), 419-445.
Steels, L. (2008) The Symbol Grounding Problem Has Been Solved. So What's Next?
In M. de Vega (Ed.), Symbols and Embodiment: Debates on Meaning and Cognition. Oxford University Press.
Barsalou, L. W. (2010). Grounded cognition: past, present, and future. Topics in Cognitive Science, 2(4), 716-724.
Bringsjord, S. (2014) The Symbol Grounding Problem... Remains Unsolved. Journal of Experimental & Theoretical Artificial Intelligence (in press)

(5. Comment Overflow) (50+)

(5. Comment Overflow) (50+)

6a. Harnad, S. (2005) To Cognize is to Categorize: Cognition is Categorization

Harnad, S. (2005) To Cognize is to Categorize: Cognition is Categorization, in Lefebvre, C. and Cohen, H., Eds. Handbook of Categorization. Elsevier.  

We organisms are sensorimotor systems. The things in the world come in contact with our sensory surfaces, and we interact with them based on what that sensorimotor contact “affords”. All of our categories consist in ways we behave differently toward different kinds of things -- things we do or don’t eat, mate-with, or flee-from, or the things that we describe, through our language, as prime numbers, affordances, absolute discriminables, or truths. That is all that cognition is for, and about.



(6a. Comment Overflow) (50+)

(6a. Comment Overflow) (50+)

6b. Harnad, S. (2003b) Categorical Perception.

Harnad, S. (2003b) Categorical Perception. Encyclopedia of Cognitive Science. Nature Publishing Group. Macmillan.


Differences can be perceived as gradual and quantitative, as with different shades of gray, or they can be perceived as more abrupt and qualitative, as with different colors. The first is called continuous perception and the second categorical perception. Categorical perception (CP) can be inborn or can be induced by learning. Formerly thought to be peculiar to speech and color perception, CP turns out to be far more general, and may be related to how the neural networks in our brains detect the features that allow us to sort the things in the world into their proper categories, "warping" perceived similarities and differences so as to compress some things into the same category and separate others into different categories.



Pullum, Geoffrey K. (1991). The Great Eskimo Vocabulary Hoax and other Irreverent Essays on the Study of Language. University of Chicago Press.

(6b. Comment Overflow) (50+)

(6b. Comment Overflow) (50+)

7a. Confer et al (2010) Evolutionary Psychology Controversies, Questions, Prospects, and Limitations

Confer, Jaime C., Judith A. Easton, Diana S. Fleischman, Cari D. Goetz, David M. G. Lewis, Carin Perilloux, and David M. Buss (2010) Evolutionary Psychology Controversies, Questions, Prospects, and Limitations. American Psychologist 65 (2): 110–126 DOI: 10.1037/a0018413

Evolutionary psychology has emerged over the past 15 years as a major theoretical perspective, generating an increasing volume of empirical studies and assuming a larger presence within psychological science. At the same time, it has generated critiques and remains controversial among some psychologists. Some of the controversy stems from hypotheses that go against traditional psychological theories; some from empirical findings that may have disturbing implications; some from misunderstandings about the logic of evolutionary psychology; and some from reasonable scientific concerns about its underlying framework.  This article identifies some of the most common concerns and attempts to elucidate evolutionary psychology’s stance pertaining to them. These include issues of testability and falsifiability; the domain specificity versus domain generality of psychological mechanisms; the role of novel environments as they interact with evolved psychological circuits; the role of genes in the conceptual structure of evolutionary psychology; the roles of learning, socialization, and culture in evolutionary psychology; and the practical value of applied evolutionary psychology. The article concludes with a discussion of the limitations of current evolutionary psychology.



(7a. Comment Overflow) (50+)

(7a. Comment Overflow) (50+)

7b. Bolhuis JJ et al (2011) Darwin in Mind: New Opportunities for Evolutionary Psychology

Bolhuis JJ, Brown GR, Richardson RC, Laland KN (2011) Darwin in Mind: New Opportunities for Evolutionary Psychology. PLoS Biol 9(7): e1001109.







Evolutionary Psychology (EP) views the human mind as organized into many modules, each underpinned by psychological adaptations designed to solve problems faced by our Pleistocene ancestors. We argue that the key tenets of the established EP paradigm require modification in the light of recent findings from a number of disciplines, including human genetics, evolutionary biology, cognitive neuroscience, developmental psychology, and paleoecology. For instance, many human genes have been subject to recent selective sweeps; humans play an active, constructive role in co-directing their own development and evolution; and experimental evidence often favours a general process, rather than a modular account, of cognition. A redefined EP could use the theoretical insights of modern evolutionary biology as a rich source of hypotheses concerning the human mind, and could exploit novel methods from a variety of adjacent research fields.

(7b. Comment Overflow) (50+)

(7b. Comment Overflow) (50+)

8a. Pinker, S. & Bloom, P. (1990). Natural language and natural selection

Pinker, S. & Bloom, P. (1990). Natural language and natural selection. Behavioral and Brain Sciences 13(4): 707-784. 

Many people have argued that the evolution of the human language faculty cannot be explained by Darwinian natural selection. Chomsky and Gould have suggested that language may have evolved as the by‐product of selection for other abilities or as a consequence of as‐yet unknown laws of growth and form. Others have argued that a biological specialization for grammar is incompatible with every tenet of Darwinian theory ‐‐ that it shows no genetic variation, could not exist in any intermediate forms, confers no selective advantage, and would require more evolutionary time and genomic space than is available. We examine these arguments and show that they depend on inaccurate assumptions about biology or language or both. Evolutionary theory offers clear criteria for when a trait should be attributed to natural selection: complex design for some function, and the absence of alternative processes capable of explaining such complexity. Human language meets this criterion: grammar is a complex mechanism tailored to the transmission of propositional structures through a serial interface. Autonomous and arbitrary grammatical phenomena have been offered as counterexamples to the position that language is an adaptation, but this reasoning is unsound: communication protocols depend on arbitrary conventions that are adaptive as long as they are shared. Consequently, language acquisition in the child should systematically differ from language evolution in the species and attempts to analogize them are misleading. Reviewing other arguments and data, we conclude that there is every reason to believe that a specialization for grammar evolved by a conventional neo‐Darwinian process.





(8a. Comment Overflow) (50+)

(8a. Comment Overflow) (50+)

8b. Blondin Massé et al (2012) Symbol Grounding and the Origin of Language: From Show to Tell

Blondin Massé et al (2012) Symbol Grounding and the Origin of Language: From Show to Tell. In: Origins of Language. Cognitive Sciences Institute. Université du Québec à Montréal, June 2010.



Organisms’ adaptive success depends on being able to do the right thing with the right kind of thing. This is categorization. Most species can learn categories by direct experience (induction). Only human beings can acquire categories by word of mouth (instruction). Artificial-life simulations show the evolutionary advantage of instruction over induction, human electrophysiology experiments show that the two ways of acquiring categories still share some common features, and graph-theoretic analyses show that dictionaries consist of a core of more concrete words that are learned earlier, from direct experience, and the meanings of the rest of the dictionary can be learned from definition alone, by combining the core words into subject/predicate propositions with truth values. Language began when purposive miming became conventionalized into arbitrary sequences of shared category names describing and defining new categories via propositions.

(8b. Comment Overflow) (50+)


9a. Pinker, S. Language Acquisition

Pinker, S. Language Acquisition. In L. R. Gleitman, M. Liberman, and D. N. Osherson (Eds.), An Invitation to Cognitive Science, 2nd Ed. Volume 1: Language. Cambridge, MA: MIT Press.

The topic of language acquisition implicates the most profound questions about our understanding of the human mind, and its subject matter, the speech of children, is endlessly fascinating. But the attempt to understand it scientifically is guaranteed to bring on a certain degree of frustration. Languages are complex combinations of elegant principles and historical accidents. We cannot design new ones with independent properties; we are stuck with the confounded ones entrenched in communities. Children, too, were not designed for the benefit of psychologists: their cognitive, social, perceptual, and motor skills are all developing at the same time as their linguistic systems are maturing and their knowledge of a particular language is increasing, and none of their behavior reflects one of these components acting in isolation.
        Given these problems, it may be surprising that we have learned anything about language acquisition at all, but we have. When we have, I believe, it is only because a diverse set of conceptual and methodological tools has been used to trap the elusive answers to our questions: neurobiology, ethology, linguistic theory, naturalistic and experimental child psychology, cognitive psychology, philosophy of induction, theoretical and applied computer science. Language acquisition, then, is one of the best examples of the indispensability of the multidisciplinary approach called cognitive science.

Harnad, S. (2008) Why and How the Problem of the Evolution of Universal Grammar (UG) is Hard. Behavioral and Brain Sciences 31: 524-525

Harnad, S (2014) Chomsky's Universe. -- L'Univers de Chomsky. À bâbord! Revue sociale et politique 52.

(9a. Comment Overflow) (50+)


9b. Pullum, G.K. & Scholz BC (2002) Empirical assessment of stimulus poverty arguments

Pullum, G.K. & Scholz BC (2002) Empirical assessment of stimulus poverty arguments. Linguistic Review 19: 9-50 




This article examines a type of argument for linguistic nativism that takes the following form: (i) a fact about some natural language is exhibited that allegedly could not be learned from experience without access to a certain kind of (positive) data; (ii) it is claimed that data of the type in question are not found in normal linguistic experience; hence (iii) it is concluded that people cannot be learning the language from mere exposure to language use. We analyze the components of this sort of argument carefully, and examine four exemplars, none of which hold up. We conclude that linguists have some additional work to do if they wish to sustain their claims about having provided support for linguistic nativism, and we offer some reasons for thinking that the relevant kind of future work on this issue is likely to further undermine the linguistic nativist position.

(9b. Comment Overflow) (50+)


10a. Dennett, D. (unpublished) The fantasy of first-person science


Extra optional readings:
Harnad, S. (2011) Minds, Brains and Turing. Consciousness Online 3.
Harnad, S. (2014) Animal pain and human pleasure: ethical dilemmas outside the classroom. LSE Impact Blog, 13 June 2014


Dennett, D. (unpublished) The fantasy of first-person science
"I find it ironic that while Chalmers has made something of a mission of trying to convince scientists that they must abandon 3rd-person science for 1st-person science, when asked to recommend some avenues to explore, he falls back on the very work that I showcased in my account of how to study human consciousness empirically from the 3rd-person point of view. Moreover, it is telling that none of the work on consciousness that he has mentioned favorably addresses his so-called Hard Problem in any fashion; it is all concerned, quite appropriately, with what he insists on calling the easy problems. First-person science of consciousness is a discipline with no methods, no data, no results, no future, no promise. It will remain a fantasy."

Dan Dennett's Video (2012)



Week 10 overview: [video]

and also this (from week 10 of the very first year this course was given, 2011): [video]

(10a. Comment Overflow) (50+)


10b. Harnad, S. (unpublished) On Dennett on Consciousness: The Mind/Body Problem is the Feeling/Function Problem

Harnad, S. (unpublished) On Dennett on Consciousness: The Mind/Body Problem is the Feeling/Function Problem

The mind/body problem is the feeling/function problem (Harnad 2001). The only way to "solve" it is to provide a causal/functional explanation of how and why we feel...








(10b. Comment Overflow) (50+)


10c. Harnad, S. (2011) Doing, Feeling, Meaning and Explaining

Harnad, S. (2011) Doing, Feeling, Meaning and Explaining

It is “easy” to explain doing, “hard” to explain feeling. Turing has set the agenda for the easy explanation (though it will be a long time coming). I will try to explain why and how explaining feeling will not only be hard, but impossible. Explaining meaning will prove almost as hard because meaning is a hybrid of know-how and what it feels like to know how.

(10c. Comment Overflow) (50+)


10d. Harnad, S. & Scherzer, P. (2008) Spielberg's AI: Another Cuddly No-Brainer.

Harnad, S. & Scherzer, P. (2008) Spielberg's AI: Another Cuddly No-Brainer. Artificial Intelligence in Medicine 44(2): 83-89

Consciousness is feeling, and the problem of consciousness is the problem of explaining how and why some of the functions underlying some of our performance capacities are felt rather than just “functed.” But unless we are prepared to assign to feeling a telekinetic power (which all evidence contradicts), feeling cannot be assigned any causal power at all. We cannot explain how or why we feel. Hence the empirical target of cognitive science can only be to scale up to the robotic Turing Test, which is to explain all of our performance capacity, but without explaining consciousness or incorporating it in any way in our functional explanation.




(10d. Comment Overflow) (50+)


10e. Harnad, S. (2012) Alan Turing and the “hard” and “easy” problem of cognition: doing and feeling.

Harnad, S. (2012) Alan Turing and the “hard” and “easy” problem of cognition: doing and feeling. [in special issue: Turing Year 2012] Turing100: Essays in Honour of Centenary Turing Year 2012, Summer Issue



The "easy" problem of cognitive science is explaining how and why we can do what we can do. The "hard" problem is explaining how and why we feel. Turing's methodology for cognitive science (the Turing Test) is based on doing: Design a model that can do anything a human can do, indistinguishably from a human, to a human, and you have explained cognition. Searle has shown that the successful model cannot be solely computational. Sensory-motor robotic capacities are necessary to ground some, at least, of the model's words, in what the robot can do with the things in the world that the words are about. But even grounding is not enough to guarantee that -- nor to explain how and why -- the model feels (if it does). That problem is much harder to solve (and perhaps insoluble).

(10e. Comment Overflow) (50+)


11a. Clark, A. & Chalmers, D. (1998) The Extended Mind.

Clark, A. & Chalmers, D. (1998) The Extended Mind. Analysis 58(1)



Where does the mind stop and the rest of the world begin? The question invites two standard replies. Some accept the demarcations of skin and skull, and say that what is outside the body is outside the mind. Others are impressed by arguments suggesting that the meaning of our words "just ain't in the head", and hold that this externalism about meaning carries over into an externalism about mind. We propose to pursue a third position. We advocate a very different sort of externalism: an active externalism, based on the active role of the environment in driving cognitive processes.

(11a. Comment Overflow) (50+)


11b. Dror, I. & Harnad, S. (2009) Offloading Cognition onto Cognitive Technology

Dror, I. & Harnad, S. (2009) Offloading Cognition onto Cognitive Technology. In Dror & Harnad (Eds): Cognition Distributed: How Cognitive Technology Extends Our Minds. Amsterdam: John Benjamins



"Cognizing" (e.g., thinking, understanding, and knowing) is a mental state. Systems without mental states, such as cognitive technology, can sometimes contribute to human cognition, but that does not make them cognizers. Cognizers can offload some of their cognitive functions onto cognitive technology, thereby extending their performance capacity beyond the limits of their own brain power. Language itself is a form of cognitive technology that allows cognizers to offload some of their cognitive functions onto the brains of other cognizers. Language also extends cognizers' individual and joint performance powers, distributing the load through interactive and collaborative cognition. Reading, writing, print, telecommunications and computing further extend cognizers' capacities. And now the web, with its network of cognizers, digital databases and software agents, all accessible anytime, anywhere, has become our “Cognitive Commons,” in which distributed cognizers and cognitive technology can interoperate globally with a speed, scope and degree of interactivity inconceivable through local individual cognition alone. And as with language, the cognitive tool par excellence, such technological changes are not merely instrumental and quantitative: they can have profound effects on how we think and encode information, on how we communicate with one another, on our mental states, and on our very nature. 

(11b. Comment Overflow) (50+)


**X1. Chalmers (2011) "A Computational Foundation for the Study of Cognition"

Chalmers, D.J. (2011) "A Computational Foundation for the Study of Cognition".  Journal of Cognitive Science 12: 323-57

[This is for Grad Students taking the course.]



Computation is central to the foundations of modern cognitive science, but its role is controversial. Questions about computation abound: What is it for a physical system to implement a computation? Is computation sufficient for thought? What is the role of computation in a theory of cognition? What is the relation between different sorts of computational theory, such as connectionism and symbolic computation? In this paper I develop a systematic framework that addresses all of these questions.

Justifying the role of computation requires analysis of implementation, the nexus between abstract computations and concrete physical systems. I give such an analysis, based on the idea that a system implements a computation if the causal structure of the system mirrors the formal structure of the computation. This account can be used to justify the central commitments of artificial intelligence and computational cognitive science: the thesis of computational sufficiency, which holds that the right kind of computational structure suffices for the possession of a mind, and the thesis of computational explanation, which holds that computation provides a general framework for the explanation of cognitive processes. The theses are consequences of the facts that (a) computation can specify general patterns of causal organization, and (b) mentality is an organizational invariant, rooted in such patterns. Along the way I answer various challenges to the computationalist position, such as those put forward by Searle. I close by advocating a kind of minimal computationalism, compatible with a very wide variety of empirical approaches to the mind. This allows computation to serve as a true foundation for cognitive science.
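Chalmers's implementation condition (the causal structure of the system mirrors the formal structure of the computation) can be illustrated in miniature. The two-state automaton, the "physical" transition table, and the `implements` helper below are my own hypothetical sketch, not Chalmers's formalism: a mapping from physical states to computational states counts as an implementation when mapping-then-stepping agrees with stepping-then-mapping for every physical state.

```python
# Hypothetical two-state toggle automaton: abstract state -> next abstract state.
abstract_fsa = {"S0": "S1", "S1": "S0"}

# Hypothetical physical dynamics: physical state -> next physical state.
physical_transitions = {"low_voltage": "high_voltage",
                        "high_voltage": "low_voltage"}

# Candidate interpretation: physical state -> abstract state.
interpretation = {"low_voltage": "S0", "high_voltage": "S1"}

def implements(physical, mapping, fsa):
    """Check that the mapping commutes with the dynamics: for every physical
    state p, stepping the automaton from mapping[p] must give the same
    abstract state as mapping the physical successor of p."""
    return all(fsa[mapping[p]] == mapping[physical[p]] for p in physical)

print(implements(physical_transitions, interpretation, abstract_fsa))  # True
```

The commuting-diagram check is what blocks trivial "everything computes everything" interpretations: an arbitrary relabeling of physical states will generally fail it, since the physical causal transitions must track the automaton's formal transitions.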

(**X1. Comment Overflow) (50+)


**X2. Harnad 2012 "The Causal Topography of Cognition"

Harnad, Stevan (2012) The Causal Topography of Cognition. Journal of Cognitive Science 13(2): 181-196 [commentary on: Chalmers, David: “A Computational Foundation for the Study of Cognition”]


[This is for grad students taking the course]

The causal structure of cognition can be simulated but not implemented computationally, just as the causal structure of a furnace can be simulated but not implemented computationally. Heating is a dynamical property, not a computational one. A computational simulation of a furnace cannot heat a real house (only a simulated house). It lacks the essential causal property of a furnace. This is obvious with computational furnaces. The only thing that allows us even to imagine that it is otherwise in the case of computational cognition is the fact that cognizing, unlike heating, is invisible (to everyone except the cognizer). Chalmers’s “Dancing Qualia” Argument is hence invalid: Even if there could be a computational model of cognition that was behaviorally indistinguishable from a real, feeling cognizer, it would still be true that if, like heat, feeling is a dynamical property of the brain, a flip-flop from the presence to the absence of feeling would be undetectable anywhere along Chalmers’s hypothetical component-swapping continuum from a human cognizer to a computational cognizer -- undetectable to everyone except the cognizer. But that would only be because the cognizer was locked into being incapable of doing anything to settle the matter simply because of Chalmers’s premise of input/output indistinguishability. That is not a demonstration that cognition is computation; it is just the demonstration that you get out of a premise what you put into it. But even if the causal topography of feeling, hence of cognizing, is dynamic rather than just computational, the problem of explaining the causal role played by feeling itself – how and why we feel – in the generation of our behavioral capacity – how and why we can do what we can do – will remain a “hard” (and perhaps insoluble) problem.

(**X2. Comment Overflow) (50+)


**X3. Chalmers Response "The Varieties of Computation: A Reply to Commentators"

Chalmers, D.J. (2012) "The Varieties of Computation: A Reply to Commentators". Journal of Cognitive Science, 13:211-48.

Publication of this symposium, almost twenty years after writing the paper, has encouraged me to dig into my records to determine the article’s history. The roots of the article lie in a lengthy e-mail discussion on the topic of “What is Computation”, organized by Stevan Harnad in 1992. I was a graduate student in Doug Hofstadter’s AI laboratory at Indiana University at that point and vigorously advocated what I took to be a computationalist position against skeptics. Harnad suggested that the various participants in that discussion write up “position papers” to be considered for publication in the journal Minds and Machines. I wrote a first draft of the article in December 1992 and revised it after reviewer comments in March 1993. I decided to send a much shorter article on implementation to Minds and Machines and to submit a further revised version of the full article to Behavioral and Brain Sciences in April 1994. I received encouraging reports from BBS later that year, but for some reason (perhaps because I was finishing a book and then moving from St. Louis to Santa Cruz) I never revised or resubmitted the article. It was the early days of the web, and perhaps I had the idea that web publication was almost as good as journal publication. 

5 Computational sufficiency 

We now come to issues that connect computation and cognition. The first key thesis here is the thesis of computational sufficiency, which says that there is a class of computations such that implementing those computations suffices to have a mind; and likewise, that for many specific mental states there is a class of computations such that implementing those computations suffices to have those mental states. Among the commentators, Harnad and Shagrir take issue with this thesis.

Harnad makes the familiar analogy with flying, digestion, and gravitation, noting that computer simulations of these do not fly or digest or exert the relevant gravitational attraction. His diagnosis is that what matters to flying (and so on) is causal structure and that what computation gives is just formal structure (one which can be interpreted however one likes). I think this misses the key point of the paper, though: that although abstract computations have formal structure, implementations of computations are constrained to have genuine causal structure, with components pushing other components around.

The causal constraints involved in computation concern what I call causal organization or causal topology, which is a matter of the pattern of causal interactions between components. In this sense, even flying and digestion have a causal organization. It is just that having that causal organization does not suffice for digestion. Rather, what matters for digestion is the specific biological nature of the components. One might allow that there is a sense of “causal structure” (the one that Harnad uses) where this specific nature is part of the causal structure. But there is also the more neutral notion of causal organization where it is not. The key point is that where flying and digestion are concerned, these are not organizational invariants (shared by any system with the same causal organization), so they will also not be shared by relevant computational implementations.

In the target article I argue that cognition (and especially consciousness) differs from flying and digestion precisely in that it is an organizational invariant, one shared by any system with the same (fine-grained) causal organization. Harnad appears to think that I only get away with saying this because cognition is an “invisible” property, undetectable to anyone but the cognizer. Because of this, observers cannot see where it is present or absent—so it is less obvious to us that cognition is absent from simulated systems than that flying is absent. But Harnad nevertheless thinks it is absent and for much the same reasons.

Here I think he does not really come to grips with my fading and dancing qualia arguments, treating these as arguments about what is observable from the third-person perspective, when really these are arguments about what is observable by the cognizer from the first-person perspective. The key point is that if consciousness is not an organizational invariant, there will be cases in which the subject switches from one conscious state to another conscious state (one that is radically different in many cases) without noticing at all. That is, the subject will not form any judgment (where judgments can be construed either third-personally or first-personally) that the states have changed. I do not say that this is logically impossible, but I think that it is much less plausible than the alternative.



Harnad does not address the conscious cognizer’s point of view in this case at all. He addresses only the case of switching back and forth between a conscious being and a zombie; but the case of a conscious subject switching back and forth between radically different conscious states without noticing poses a much greater challenge. Perhaps Harnad is willing to bite the bullet that these changes would go unnoticed even by the cognizer in these cases, but making that case requires more than he has said here. In the absence of support for such a claim, I think there remains a prima facie (if not entirely conclusive) case that consciousness is an organizational invariant.