Gazzaniga’s Modules

January 3, 2012

I’ve been reading Michael Gazzaniga’s 2009 Gifford Lectures, now published as Who’s In Charge? Free Will and the Science of the Brain. I can’t say that I think he’s untied the knot of the free will problem, but the book contains some interesting observations about split brain patients, brain scans, and modules. Most of this post is about modules, but Gazzaniga’s remarks about brain scans deserve top billing.

These remarks come in the last and most useful chapter, which focuses on problems in bringing neuroscience into court. Gazzaniga provides a long list of such problems, and anyone who is interested in neuroscience and the law should certainly read it.

The general theme is this. Extracting statistically significant conclusions from brain scans is an extremely complex business. One thing that has to be done to support meaningful statements about the relation between brain regions and our mental abilities is to average scans across multiple individuals. This kind of averaging is part of what is used to generate the familiar pictures of brain regions “lighting up”.

But in any court proceeding, the question is about the brain of one individual only. Brain scans of normal, law-abiding individuals often differ quite noticeably from averages of scans of people doing the same task. So, inferences from an individual’s showing a difference from average in a brain scan to conclusions about that individual’s abilities, proclivities, or degree of responsibility are extremely difficult and risky.

The individual differences in brain scans show that our brains are not standard-issue machines. Brains that are wired differently can lead to actions that have the same practical result across a wide variety of circumstances. This implies that there is a limit to how specialized each part of our brains can be.

But what about modularity? Don’t we have to think of ourselves as composed of relatively small sub-networks that are dedicated to their given tasks?

Here is where things get interesting; for in Gazzaniga’s book, there seem to be two concepts of “module” at work, although the distinction between them is not clearly drawn.

The first arises out of some observations that have been known for a long time, but are not often referred to. (They’re on pp. 32-33 of Gazzaniga’s book.) One of these is that while our brains are 2.75 times larger than those of chimpanzees, we have only 1.25 times more neurons. So, on average, our neurons are more distant from each other. What fills the “extra” space is connections among neurons; but if the same degree of connectivity among neurons were maintained with the extra distance, there would have to be many more miles of connecting lines (axons) than there actually are. So, in us, the degree of connectivity is, on average, less than that in chimps. There are still groups of close-lying neural cells that are richly connected, but the connections of one group to another are sometimes relatively sparse. We have thus arrived, by a sort of physiological derivation, at modules.
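The arithmetic here can be made concrete with a back-of-the-envelope sketch. The two ratios are the ones Gazzaniga cites; the cube-root step rests on my own simplifying assumption that neurons are spread roughly uniformly through the brain’s volume, which the sources do not claim in so many words.

```python
# Back-of-the-envelope sketch of the human/chimp comparison.
# The two ratios are from Gazzaniga (citing Shariff 1953, Deacon 1990,
# Ringo 1991); the uniform-spacing assumption is an illustrative
# simplification, not part of the original argument.

volume_ratio = 2.75   # human brain volume / chimp brain volume
neuron_ratio = 1.25   # human neuron count / chimp neuron count

# Neurons per unit volume: human density is less than half of chimp density.
density_ratio = neuron_ratio / volume_ratio

# If neurons are spread roughly uniformly, average spacing scales as
# density ** (-1/3), so human neurons sit roughly 30% farther apart.
spacing_ratio = density_ratio ** (-1 / 3)

print(f"density ratio (human/chimp): {density_ratio:.2f}")   # ~0.45
print(f"spacing ratio (human/chimp): {spacing_ratio:.2f}")   # ~1.30
```

The point of the sketch is only that greater average spacing makes full connectivity increasingly expensive in wiring, which is the pressure toward locally dense, globally sparse organization.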

It must be noticed, however, that this explanation of the existence of modules does not say anything about what kind of functions the small, relatively well connected groups might be performing. So, this explanation does not contribute any reason for supposing that there are “modules” for anything so complex as – to take a famous case – detecting people who cheat on social rules. There is good evidence that we have a well developed ability to detect cheaters, and that this ability fails to extend to similar problems that are not phrased in terms of social rules. But it is another question whether there is one brain part that is dedicated to this task, or whether we can do it because we have several small groups of neurons, each of which does a less complicated task, and whose combined result enables us to detect cheaters with ease.

Modularity reaches its apogee when Gazzaniga introduces the “interpreter module”. The job of this item is to rationalize what all the other modules are doing. It is the module that is supposed to provide our ability to make up a narrative that will present us – to others, and to ourselves – as reasonable actors, planning in accord with our established desires and beliefs, and carrying out what we have told ourselves we intend to do.

According to the interpreter module story, we can see this inveterate rationalizer at work in many ways. It reveals itself in startlingly clear ways in cases of damaged brains. Some of the patients of interest here are split brain patients; others have lost various abilities due to stroke or accidents. Some parts of their brains receive less than the normal amount of input from other parts. Their interpreter modules have incomplete information, and the stories they concoct about what their owners are doing and why are sometimes quite bizarre.

But people with intact brains can be shown to be doing the same sort of rationalizing. For example, Nisbett and Wilson (1977) had people watch a movie. For one group, the circumstances were normal, for another the circumstances were the same except for a noisy power saw in the hall outside. Participants were asked to rate several aspects of the movie, such as interest, and likelihood of affecting other viewers. Then they were asked whether the noise had affected their ratings. In fact, there was no significant difference in the ratings between the non-distracted group and the group exposed to the noise. But a majority of those in the latter group believed that the noise had affected their ratings.

While false beliefs about our own mental processes are well established, I am suspicious of the “interpreter” story. The interpreter is called a “module” and is supposed to explain how we bring off the interesting task of stitching together all that goes on, and all that we do, into a coherent (or, at least, coherent sounding) narrative. But we might be doing any of very many things, under many different circumstances. To make a coherent story, we must remain consistent, or nearly so, with our beliefs about how the physical and social worlds operate. We must anticipate the likely reactions of others to what we say we are doing. So, according to the interpreter module story, this “module” must have access to an enormous body of input from other modules, and be able to process it into (at least the simulacrum of) a coherent story.

To me, that sounds an awful lot like an homunculus – a little person embodied in a relatively tightly interconnected sub-network, that takes in a lot of inputs and reasons to a decently plausible output that gets expressed through the linguistic system. That image does nothing to explain how we generate coherent, or approximately coherent, narratives; it just gives a name to a mystery.

It would be better to say that we have an ability to rationalize – to give a more or less coherent verbal narrative – over a great many circumstances and actions. Our brains enable us to do this. Our brains have parts with various distances between them; somehow, the combined interactions of all these parts results in linguistic output that passes, most of the time, as coherent. We wish we understood how this combined interaction manages to result in concerted speech and action over extended periods of time; but, as yet, we don’t.

[Michael S. Gazzaniga, Who’s In Charge? Free Will and the Science of the Brain, The Gifford Lectures for 2009 (New York: Harper Collins, 2011). Nisbett, R. E. & Wilson, T. D. (1977) “Telling More Than We Can Know: Verbal Reports on Mental Processes”, Psychological Review 84:231-259. This paper describes many other cases of false belief about our mental processes. The physiological comparison between humans and chimpanzees, and its significance, are referenced to Shariff, G. A. (1953) “Cell counts in the primate cerebral cortex”, Journal of Comparative Neurology 98:381-400; Deacon, T. W. (1990) “Rethinking mammalian brain evolution”, American Zoologist 30:629-705; and Ringo, J. (1991) “Neuronal interconnection as a function of brain size”, Brain, Behavior and Evolution 38:1-6.]

Thinking About Modules

November 21, 2011

In a recent Wall Street Journal review article, Raymond Tallis expresses dissatisfaction with what he calls “biologism” – the view that nothing fundamental separates humanity from animality. Biologism is described as having two “cardinal manifestations”.

The first is that the mind is the brain, or its activity. This view is held to have the consequence that one of the most powerful ways to understand ourselves is through scanning the brain’s activities.

The second manifestation of biologism is the claim that “Darwinism explains not only how the organism Homo sapiens came into being (as, of course, it does) but also what motivates people and shapes their day-to-day behavior”.

Tallis suggests that putting these ideas together leads to the following view. The brain evolved under natural selection, the mind is the (activities of the) brain, our behavior depends on the mind/brain, therefore the mind and our behavior can be explained by evolution. A further implication is claimed, namely, that “The mind is a cluster of apps or modules securing the replication of genes that are expressed in our bodies”. Studying the mind can be broken down into studying (by brain scans) the operation of these modules.

Tallis laments the wide acceptance of this way of looking at ourselves. He affirms that brain activity is a necessary condition of all of our consciousness, but holds that “many aspects of everyday human consciousness elude neural reduction”.

But how could aspects of our consciousness elude neural reduction, if everything in our consciousness depends on the workings of the brain? Tallis answers: “For we belong to a boundless, infinitely elaborated community of minds that has been forged out of a trillion cognitive handshakes over hundreds of thousands of years. . . . Because it is a community of minds, it cannot be inspected by looking at the activity of the solitary brain.”

This statement, however, is not an answer to the question of how aspects of our consciousness can elude neural reduction. It explains, instead, why we cannot understand facts about societies by looking at a solitary brain, and why we cannot reconstruct the evolutionary history of our species by looking at the brain of one individual. But the question about elusiveness of neural reduction concerns the consciousness of individuals. It’s about how individual minds work, and what gives rise to each person’s behavior.

Aside from rare cases of feral children, individuals grow up in societies. Even so, their motivations and behavior depend on their individual brains. Individuals must have some kind of representation of societal facts and norms in their own brains, if those brains are to produce behaviors that are socially appropriate and successful. At present, alas, we do not understand what form those representations take, nor how they are able to contribute, jointly with other representations, to intelligent behavior. But the question of how the individual mind works is a clear one, and the search for an answer is one of the most exciting inquiries of our time.

Despite my dissatisfaction with Tallis’s account, I am sympathetic to some of his doubts about reduction of motivation and behavior to the operations of modules. The true source of the problem, however, is not our attention to the mind/brain of solitary individuals.

The real problem is, instead, uncritical acceptance of modules. The modular way of looking at things does not follow from Tallis’s two cardinal manifestations. They say, in sum, that whatever we think and whatever we do depends on the activities of a brain that developed under principles of Darwinian evolution. They do not say one word about modules. They do not imply any theory of how the evolved brain does what it does.

These remarks are in no way a denial of modules, and in some cases, there is very good reason to accept them. But, even accepting that there are many modules, it does not follow that for any given motivation or behavior, X, there is a module that is dedicated to providing X – i.e., that functions to provide X and does not do anything else. Moreover, it is clear that our evolved brain allows for learning. If we learn two things, they may be related, and if we recognize a relation among things that we had to learn in the first place, there cannot be a module for recognizing that relation.

Caution about introducing modules for specific mental or behavioral features that may interest us is compatible with supposing not only that there are many modules, but even with supposing that the operation of several modules is required for everything we do. That’s because plurality of modules carries with it the possibility of variability in how they are connected. Such variability may depend on genetic differences, developmental differences, and/or differences in learning. In any case of combined action of several modules, therefore, there will be no simple relation between a motivation or a behavior and a single module, nor any simple relation between a motivation or behavior and a collection of modules.

So, even granting Tallis’s two cardinal manifestations and a commitment to extensively modular brain organization, we cannot expect any simple relation to hold between some ability that interests us and the operation of a module dedicated to that ability. So, I agree with Tallis that we should be suspicious of facile “discoveries” of a module for X, where X may be, e.g., an economic behavior or an aesthetic reaction. But I think that the complexities that lie behind this suspicion are to be found in the complexity of the workings of individual brains. Our social relations with others provide distinctive material for us to think about, but they will not explain how we do our thinking about them.

[Raymond Tallis, “Rethinking Thinking”, The Wall Street Journal for November 12-13, 2011, pp. C5 and C8. Readers of _Your Brain and You_ will be familiar with reasons for regarding sensations as effects of, rather than the same thing as, neural activities; but this kind of non-reducibility is not relevant to the issues discussed in this post. They will also be aware of reasons for saying that we do not presently understand how individual minds work.]