Gazzaniga’s Modules

January 3, 2012

I’ve been reading Michael Gazzaniga’s 2009 Gifford Lectures, now published as Who’s in Charge? Free Will and the Science of the Brain. I can’t say that I think he’s untied the knot of the free will problem, but the book contains some interesting observations about split brain patients, brain scans, and modules. Most of this post is about modules, but Gazzaniga’s remarks about brain scans deserve top billing.

These remarks come in the last and most useful chapter, which focuses on the problems of bringing neuroscience into court. Gazzaniga provides a long list of such problems, and anyone who is interested in neuroscience and the law should certainly read it.

The general theme is this. Extracting statistically significant conclusions from brain scans is an extremely complex business. One thing that has to be done to support meaningful statements about the relation between brain regions and our mental abilities is to average scans across multiple individuals. This kind of averaging is part of what is used to generate the familiar pictures of brain regions “lighting up”.

But in any court proceeding, the question concerns the brain of one individual only. Brain scans of normal, law-abiding individuals often differ quite noticeably from the average of scans of people doing the same task. So, inferences from the fact that an individual’s scan differs from the average to conclusions about that individual’s abilities, proclivities, or degree of responsibility are extremely difficult and risky.
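
To make the worry concrete, here is a toy numerical sketch of my own (not anything from the book; the numbers are invented): even when every subject is a perfectly normal performer of the task, any single subject’s activation can sit well away from the group average that the familiar published pictures summarize.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented "activation" values for one brain region across 20 healthy
# subjects, all performing the same task normally.
subjects = rng.normal(loc=1.0, scale=0.5, size=20)

group_mean = subjects.mean()
group_sd = subjects.std(ddof=1)

# The group average is what the familiar "lighting up" pictures summarize.
print(f"group mean activation: {group_mean:.2f} (sd {group_sd:.2f})")

# Yet any one normal subject can sit far from that average...
individual = subjects.min()
z = (individual - group_mean) / group_sd
print(f"one normal individual: {individual:.2f} (z = {z:.2f})")
# ...so "this defendant's scan differs from the average" is, by itself,
# weak evidence about that person's abilities or responsibility.
```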

The individual differences in brain scans show that our brains are not standard-issue machines. Brains that are wired differently can lead to actions that have the same practical result across a wide variety of circumstances. This implies that there is a limit to how specialized each part of our brains can be.

But what about modularity? Don’t we have to think of ourselves as composed of relatively small sub-networks that are dedicated to their given tasks?

Here is where things get interesting; for in Gazzaniga’s book, there seem to be two concepts of “module” at work, although the distinction between them is not clearly drawn.

The first arises out of some observations that have been known for a long time, but are not often referred to. (They’re on pp. 32-33 of Gazzaniga’s book.) One of these is that while our brains are 2.75 times larger than those of chimpanzees, we have only 1.25 times more neurons. So, on average, our neurons are more distant from each other. What fills the “extra” space is connections among neurons; but if the same degree of connectivity among neurons were maintained over the extra distance, there would have to be many more miles of connecting lines (axons) than there actually are. So, in us, the degree of connectivity is, on average, less than that in chimps. There are still groups of close-lying neural cells that are richly connected, but the connections of one group to another are sometimes relatively sparse. We have thus arrived, by a sort of physiological derivation, at modules.
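
A back-of-envelope version of this argument, using only the two ratios just cited (the uniform-spacing simplification is mine, not the book’s):

```latex
% Ratios cited in the passage: brain volume ~2.75x, neuron count ~1.25x.
\[
  \frac{V_{\text{human}}}{V_{\text{chimp}}} \approx 2.75,
  \qquad
  \frac{N_{\text{human}}}{N_{\text{chimp}}} \approx 1.25
  \quad\Longrightarrow\quad
  \text{volume per neuron: } \frac{2.75}{1.25} = 2.2\times .
\]
% If neurons are spread roughly uniformly, typical spacing grows as the
% cube root of volume per neuron:
\[
  2.2^{1/3} \approx 1.3 ,
\]
% i.e. neurons sit roughly 30% farther apart, so each connection must be
% roughly 30% longer. That is the passage's point: keeping the chimp's
% degree of connectivity at these distances would demand many more miles
% of axon than our brains actually contain.
```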

It must be noticed, however, that this explanation of the existence of modules says nothing about what kinds of functions the small, relatively well-connected groups might be performing. So, this explanation gives us no reason for supposing that there are “modules” for anything so complex as – to take a famous case – detecting people who cheat on social rules. There is good evidence that we have a well-developed ability to detect cheaters, and that this ability fails to extend to similar problems that are not phrased in terms of social rules. But it is another question whether there is one brain part dedicated to this task, or whether we can do it because we have several small groups of neurons, each of which does a less complicated task, and whose combined results enable us to detect cheaters with ease.

Modularity reaches its apogee when Gazzaniga introduces the “interpreter module”. The job of this item is to rationalize what all the other modules are doing. It is the module that is supposed to provide our ability to make up a narrative that will present us – to others, and to ourselves – as reasonable actors, planning in accord with our established desires and beliefs, and carrying out what we have told ourselves we intend to do.

According to the interpreter module story, we can see this inveterate rationalizer at work in many ways. It reveals itself in startlingly clear ways in cases of damaged brains. Some of the patients of interest here are split brain patients; others have lost various abilities due to stroke or accidents. Some parts of their brains receive less than the normal amount of input from other parts. Their interpreter modules have incomplete information, and the stories they concoct about what their owners are doing and why are sometimes quite bizarre.

But people with intact brains can be shown to be doing the same sort of rationalizing. For example, Nisbett and Wilson (1977) had people watch a movie. For one group the circumstances were normal; for another they were the same except for a noisy power saw running in the hall outside. Participants were asked to rate several aspects of the movie, such as how interesting it was and how likely it was to affect other viewers. Then they were asked whether the noise had affected their ratings. In fact, there was no significant difference between the ratings of the undistracted group and those of the group exposed to the noise. But a majority of the latter group believed that the noise had affected their ratings.

While false beliefs about our own mental processes are well established, I am suspicious of the “interpreter” story. The interpreter is called a “module” and is supposed to explain how we bring off the interesting task of stitching together all that goes on, and all that we do, into a coherent (or, at least, coherent sounding) narrative. But we might be doing any of very many things, under many different circumstances. To make a coherent story, we must remain consistent, or nearly so, with our beliefs about how the physical and social worlds operate. We must anticipate the likely reactions of others to what we say we are doing. So, according to the interpreter module story, this “module” must have access to an enormous body of input from other modules, and be able to process it into (at least the simulacrum of) a coherent story.

To me, that sounds an awful lot like a homunculus – a little person embodied in a relatively tightly interconnected sub-network, which takes in a lot of inputs and reasons its way to a decently plausible output that gets expressed through the linguistic system. That image does nothing to explain how we generate coherent, or approximately coherent, narratives; it just gives a name to a mystery.

It would be better to say that we have an ability to rationalize – to give a more or less coherent verbal narrative – over a great many circumstances and actions. Our brains enable us to do this. Our brains have parts with various distances between them; somehow, the combined interaction of all these parts results in linguistic output that passes, most of the time, as coherent. We wish we understood how this combined interaction manages to result in concerted speech and action over extended periods of time; but, as yet, we don’t.

[Michael S. Gazzaniga, Who’s in Charge? Free Will and the Science of the Brain, The Gifford Lectures for 2009 (New York: HarperCollins, 2011). Nisbett, R. E. & Wilson, T. D. (1977) “Telling More Than We Can Know: Verbal Reports on Mental Processes”, Psychological Review 84:231-259; this paper describes many other cases of false belief about our mental processes. The physiological comparison between humans and chimpanzees, and its significance, are referenced to Shariff, G. A. (1953) “Cell counts in the primate cerebral cortex”, Journal of Comparative Neurology 98:381-400; Deacon, T. W. (1990) “Rethinking mammalian brain evolution”, American Zoologist 30:629-705; and Ringo, J. L. (1991) “Neuronal interconnection as a function of brain size”, Brain, Behavior and Evolution 38:1-6.]

