Commentary on:

The cognitive functions of language

by Peter Carruthers

Language Isn't Quite That Special

Joanna J. Bryson

University of Bath
Department of Computer Science
Bath BA2 7AY, United Kingdom
http://www.cs.bath.ac.uk/~jjb


Abstract:

Language isn't the only way to cross modules, nor is it the only module with access to input as well as output. The reason we don't generally work across modules is that doing so is generally a bad idea -- it leads to combinatorial explosion in search and planning. Language is special in that it is a very good vector for memetics, and so tends to be associated with the catalogue of useful cross-module concepts we acquire culturally. Further, language is indexical, so it can facilitate computationally expensive operations. I argue that current evidence cannot distinguish Carruthers' model from the above, but I propose an experiment that may.

Carruthers has provided an excellent review of the various ways in which language can affect cognitive function, and has described some of the most exciting new data currently available on the subject. In this commentary I take a position largely sympathetic to Carruthers' view of the general architecture of mind, and to his ambition of creating a `suitably weakened' version of Dennett's Joycean machine hypothesis. However, my position on the role of language is weaker still than Carruthers'. I do not believe language is the only way to `cross' cognitive modules. Nor do I believe current evidence can distinguish my view from Carruthers'. At the end of this commentary I therefore propose some experiments, extending those of Hermer-Vazquez et al. (1999) described in Carruthers' section 5.2, which might help distinguish the two theories.

Carruthers takes the view that modules of mind are necessarily fully disjoint, except where they are combined by language. In fact, modules may frequently interact, provided the individual modules are able to exploit each other's processing. Arguing this point requires a clear definition of `module'. I believe the most critical feature of modularity is that individual modules support, and are supported by, specialised representations. During both evolution (for the species) and development (for the individual), preferred synaptic organisations are learned for recognising regularities that are useful for controlling actions and achieving goals. These regularities may arise from the external environment, as communicated by the senses, or from neighbouring neural systems -- from the neuron's perspective there is no difference. This view of the development of modularity in the individual is similar to that of Bates (1999) and, to some extent, to that of Karmiloff-Smith (1992), whose work emphasises the developmental aspects of skill specialisation. At the species level, it echoes Livesey's (1986) account of the evolution of brain organisation.

What are modules for? From a computational perspective, the answer is clear: modularity combats combinatorial explosion by focusing computational resources on a fertile but strictly limited substrate of potentially useful ideas (Minsky, 1985; Bryson, 2001). This explains why cross-modular processing is unusual -- it is generally a bad idea. Evolution and/or learning have shown the agent that the fertile solutions for this problem (the one addressed by a given module) lie here (within that module). Searching other modules takes time and working memory, and is unlikely to be productive. Nevertheless, animal behaviour requires modules to interact, and probably not simply at `centralised' switching locations such as the basal ganglia (though that probably happens; see e.g. Gurney et al., 1998). A common example is the visual system, where bidirectional channels of visual processing seem to flow through much of the cortex, with various regions specialised to particular parts of sensory processing and expectation setting (Rao and Ballard, 1997). These regions qualify as modules by my criteria, because they have discrete representational maps dedicated to different levels of visual abstraction.
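
To make the combinatorial point concrete, here is a minimal illustrative sketch (my own, not Carruthers' nor any published model): if search is restricted to the operators of a single module, the number of states examined grows with that module's branching factor alone; pooling the operators of several modules multiplies the branching factor, and the cost of search explodes with depth. The module counts and branching factors below are purely hypothetical.

    # Illustrative only: worst-case state counts for uninformed forward search.
    def nodes_searched(branching_factor, depth):
        """Upper bound on states examined when searching to a given depth."""
        return branching_factor ** depth

    # Hypothetical numbers: three modules, each offering 5 applicable actions,
    # and a plan 6 steps deep.
    within_one_module = nodes_searched(5, 6)          # 15,625 states
    across_three_modules = nodes_searched(3 * 5, 6)   # 11,390,625 states

    print(within_one_module, across_three_modules)

Even with only three modules, the cross-modular search here is several hundred times more expensive; with a realistic number of modules the difference becomes prohibitive.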

So modules communicate without language. I would also argue that, elegant as the argument for the LF cycle is, language is unlikely to be the only module capable of perceiving its own output. First, it seems unlikely that language is composed of only one module, any more than vision is. Second, we know from the `mirror neuron' work of Rizzolatti et al. (2000) that non-human primates have at least one module that is used for both sensing and action.

What makes language special, then? It may be just two things. First (returning to the Joycean machine; Dennett, 1991), language is tightly associated with cultural knowledge. One of the big advantages of having a massive number of processors operating simultaneously is that difficult computational work gets done quickly, provided the processors can communicate their results. This is just what humans do. Thus if some individuals do manage to find cross-modular strategies that are useful, those strategies are likely to be collected in our culture and passed on linguistically. Things that look cross-modular might even be newly developed skills; it would be hard to know to what extent they leverage native abilities, and that extent may well vary between individuals. The beautiful study of Spelke and Tsivkin (2001) appears to demonstrate that, for large-magnitude approximate number values, most people utilise their innate magnitude-estimation module. However, it does nothing to show whether we ever truly learn to exploit the small exact-number module, or whether, as we learn to count, we develop a new module which remains heavily dependent on our lexicon.

The second special thing about language is that it is indexical. That is, it is a compact way to refer to a concept; by compact I mean that it takes less working memory, and carries less of the qualia and of the emotional and motivational baggage, than the full concept does. This property of symbols has been demonstrated by Boysen et al. (1996) with chimpanzees. Two chimpanzees are presented with two bowls containing different numbers of pieces of fruit, and one subject is asked to select the bowl the other chimp will receive. Chimps are selfish, so the goal of every subject is to point at the smaller bowl, but the subjects are incapable of this indirection in the face of the fruit itself -- they always point at the larger bowl, which they themselves want. However, they can do the task when the bowls contain numerals, which they have already been trained to understand. Hauser (1999) discusses this task and its relationship to self-inhibition further.
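
A toy illustration of the indexical point (my own analogy, not drawn from the target article or the Boysen et al. study): a symbol behaves like a small key into a catalogue of rich representations, so it can be held and manipulated cheaply while the full representation, with all its sensory detail and motivational valence, stays out of working memory. All names and numbers below are invented for the example.

    import sys

    # Hypothetical "full concept": rich perceptual detail plus motivational valence.
    full_concept = {
        "kind": "bowl of fruit",
        "pieces": 9,
        "appearance": [0.0] * 10000,   # stand-in for sensory detail
        "valence": "highly desirable",
    }

    # The symbol is just a compact index into a catalogue of such concepts.
    catalogue = {"NINE": full_concept}
    symbol = "NINE"

    # The symbol is tiny compared with even one component of the full concept,
    # so it is cheap to hold on to while other processing goes on.
    print(sys.getsizeof(symbol), sys.getsizeof(full_concept["appearance"]))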

This conciseness of symbols such as words may explain the results of Hermer-Vazquez et al. (1999). Language allows the concept of `left' or `right' to be retained in working memory (or even the phonological loop) while the other processing needed to combine navigational information is carried out. We already know that learning new navigational strategies, even in rats, requires a resource that is also implicated in episodic memory recall: the hippocampus (McClelland et al., 1995; Bannerman et al., 1995).

There is no reason, with the possible exception of parsimony, to prefer the account presented here to Carruthers', or his to mine. However, the question might be resolved using chimpanzees or bonobos. Kuhlmeier and Boysen (2002) have already conducted experiments on chimpanzees' ability to combine cues in navigation, exploiting their capacity to map from a small scale model of their enclosure to the enclosure itself. Unfortunately, their chimps do not have access to any symmetric spaces suitable for the Hermer-Vazquez et al. (1999) experiment, but perhaps such a space could be constructed. If `left' and `right' happen to be concepts adult chimps acquire, they may be able to solve the task. If apes can learn symbols for `left' and `right', the experiment might best be conducted by one of the ape-language labs. If my indexicality argument holds, then it may be that only apes who have learned symbols for these concepts can successfully complete the task. If it is in fact the phonological loop that is critical, then only chimps able to carry or gesture the symbol may be able to navigate successfully.

Acknowledgements

Thanks to Valerie Kuhlmeier for discussion and Will Lowe for proof reading.

Bibliography

Bannerman, D. M., Good, M. A., Butcher, S. P., Ramsay, M., and Morris, R. G. M. (1995).
Distinct components of spatial learning revealed by prior training and NMDA receptor blockade.
Nature, 378:182-186.

Bates, E. (1999).
Plasticity, localization and language development.
In Broman, S. and Fletcher, J. M., editors, The changing nervous system: Neurobehavioral consequences of early brain disorders, pages 214-253. Oxford University Press.

Boysen, S. T., Berntson, G., Hannan, M., and Cacioppo, J. (1996).
Quantity-based inference and symbolic representation in chimpanzees (Pan troglodytes).
Journal of Experimental Psychology: Animal Behavior Processes, 22:76-86.

Bryson, J. J. (2001).
Intelligence by Design: Principles of Modularity and Coordination for Engineering Complex Adaptive Agents.
PhD thesis, MIT, Department of EECS, Cambridge, MA.
AI Technical Report 2001-003.

Dennett, D. C. (1991).
Consciousness Explained.
Little Brown & Co., Boston, MA.

Gurney, K., Prescott, T. J., and Redgrave, P. (1998).
The basal ganglia viewed as an action selection device.
In The Proceedings of the International Conference on Artificial Neural Networks, Skövde, Sweden.

Hauser, M. D. (1999).
Perseveration, inhibition and the prefrontal cortex: A new look.
Current Opinion in Neurobiology, 9:214-222.

Hermer-Vazquez, L., Spelke, E. S., and Katsnelson, A. (1999).
Sources of flexibility in human cognition: Dual-task studies of space and language.
Cognitive Psychology, 39(1):3-36.

Karmiloff-Smith, A. (1992).
Beyond Modularity: A Developmental Perspective on Cognitive Change.
MIT Press, Cambridge, MA.

Kuhlmeier, V. A. and Boysen, S. T. (2002).
Chimpanzees (Pan troglodytes) recognize spatial and object correspondences between a scale model and its referent.
Psychological Science, 13(1):60-63.

Livesey, P. J. (1986).
Learning and Emotion: A Biological Synthesis, volume 1 of Evolutionary Processes.
Lawrence Erlbaum Associates, Hillsdale, NJ.

McClelland, J. L., McNaughton, B. L., and O'Reilly, R. C. (1995).
Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory.
Psychological Review, 102(3):419-457.

Minsky, M. (1985).
The Society of Mind.
Simon and Schuster Inc., New York, NY.

Rao, R. P. N. and Ballard, D. H. (1997).
Dynamic model of visual recognition predicts neural response properties in the visual cortex.
Neural Computation, 9(4):721-763.

Rizzolatti, G., Fogassi, L., and Gallese, V. (2000).
Cortical mechanisms subserving object grasping and action recognition: A new view on the cortical motor functions.
In Gazzaniga, M. S., editor, The New Cognitive Neurosciences, chapter 38, pages 538-552. MIT Press, Cambridge, MA, second edition.

Spelke, E. S. and Tsivkin, S. (2001).
Language and number: A bilingual training study.
Cognition, 78(1):45-88.
