
Sunday, August 15, 2010

Materialist Conceptions of Mind, Part II: Emergentism and the Explanatory Gap

In my last post I explained why I found a simple identification between mental phenomena and brain processes to be untenable. But such identification of mind and brain is not the only option for the materialist—and so even if my case against identificationism is sound, it doesn’t follow that one must give up on materialism. One alternative to simple identification is emergentism—the idea that consciousness is an emergent property of the brain.


This view holds (in the words of John Searle, the most prominent defender of emergentism) “that brain processes cause consciousness but that consciousness is itself a feature of the brain”—a feature that emerges because of something about the distinctive structure and organization and activity of the brain. It is this alternative I want to turn to now.

To get at the idea of what emergentism maintains, it may help to start with an example. On my desk, I have a framed picture of my kids—taken just over four years ago when my daughter was an infant. My son is leaning towards her, laughing while he looks at her. He’s wearing an armadillo t-shirt and is holding a green plastic cup. My daughter is wearing a red bodysuit and looks like she is boxing the air with her little fists.

This same picture is one I've uploaded onto my computer, and it is now part of my slideshow screensaver. When this picture appears on my computer screen, I can rightly say, “That is the same picture as the one in the frame on my desk.” And I can rightly say that because what I am referring to as the picture is something abstracted from (in one case) pixels and underlying hardware controlled by computer programming and (in the other case) ink distributed on photo paper. What I mean by "the picture" is this something that can emerge from each underlying physical substrate—something that both of these very different physical substrates have in common.

And what is it that they have in common? It has something to do with organization. In the case of the framed photo, dots of ink in different colors are arranged on paper in a pattern that produces a holistic visual effect on the viewer. The image on my screen is an arrangement of illuminated pixels rather than dots of ink, but the pattern into which those pixels are organized is the same. And so we have two very different physical substrates that each succeed in generating identical images (identical in kind, that is; they are not numerically the same thing).

In each case, the picture of my children is an emergent property of the underlying physical substrate—it is a feature of the physical system that is produced by that system as a whole by virtue of what is true of the parts, but which is not a feature of any of the parts.
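
To make the substrate/pattern distinction concrete in a different medium, here is a minimal programming sketch (my own illustration, nothing drawn from Searle or anyone else discussed here): the same abstract pattern is realized once as characters lit up on a "screen" and once as a set of ink-dot coordinates, and the shared pattern can be recovered from either substrate.

```python
# Illustrative sketch: one abstract pattern, two "substrates".
# The "picture" is what the two realizations share.

BITMAP = [
    ".XX.",
    "X..X",
    "X..X",
    ".XX.",
]

def render_on_screen(bitmap):
    """Substrate 1: rows of lit 'pixels' joined into a display string."""
    return "\n".join(bitmap)

def render_as_ink(bitmap):
    """Substrate 2: the set of (row, col) positions where ink is laid down."""
    return {(r, c)
            for r, row in enumerate(bitmap)
            for c, ch in enumerate(row)
            if ch == "X"}

def pattern_of_screen(screen):
    """Abstract the pattern back out of the screen substrate."""
    return {(r, c)
            for r, row in enumerate(screen.split("\n"))
            for c, ch in enumerate(row)
            if ch == "X"}

# The substrates are very different objects (a string of characters
# vs. a set of coordinates), but the pattern abstracted from each is
# identical.
assert pattern_of_screen(render_on_screen(BITMAP)) == render_as_ink(BITMAP)
```

Neither the string nor the coordinate set is, by itself, the picture; the picture is the organization the two have in common.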

Arguably, the emergent property in either case doesn’t really “emerge” in the absence of an observer who has the capacity to find in the similar organizational structures of the two physical substrates a shared meaning. In other words, at least in some cases, emergence requires a subject who is capable of meaning-attributions. Without that observer, we have an arrangement of ink dots or of illuminated pixels, but we simply don’t have an image of my children. That requires someone to find meaning in the pattern—and neither a piece of photo paper with ink on it nor a computer can do that. This point, although not one I will pursue at the moment, may prove to be of great significance for thinking clearly about the emergence of consciousness.

Now, it is not necessary for emergence that there be multiple and distinct physical substrates that are somehow capable of giving rise to the same kind of property. I choose this example for two reasons. First, the fact that there are two very different physical substrates helps to isolate the emergent property and identify it as distinguishable from the substrate which produces it. Second and more importantly, this example is useful for highlighting some of the advantages of emergentism over identificationism with respect to consciousness.

The most obvious advantage, highlighted by this example, is that emergentism makes room for multiple realizations of consciousness. That is, different sorts of physical systems—human brains and, say, “positronic” ones—can both give rise to this thing we call consciousness.

Second, the example makes clear that an emergent property is not to be identified with the underlying physical system that causes it, even if it is causally explained or accounted for in each particular case by the more basic properties and structural features of that system. And because of this fact, one does not run into the sorts of problems that identificationism poses with respect to relational properties.

So, to go back to the picture of my children: I am intimately familiar with this image even though, in the case of its instantiation on my computer, I know very little about the underlying mechanisms that produce it. Since the image on the computer screen is caused by but distinct from the physical substrate that produces it, no problem arises from this difference in relational properties. Put simply, one can be perfectly familiar with one property of a thing without being at all familiar with its other properties. And so, if consciousness is an emergent property of brain processes, the fact that I am not familiar with any of the other properties of the underlying brain processes poses no difficulty at all. Likewise, the fact that scientific investigation of a bat’s brain can’t tell us what it is like to be a bat causes less trouble here, because a mode of inquiry that describes one range of properties possessed by a thing might not be able to tell us everything there is to know about that thing. Some properties might simply be inaccessible to that mode of inquiry.

Now before turning to the challenges faced by emergentism, let me say a few words about what emergentism claims—and what it doesn’t claim—about consciousness. Here, I want to specifically stress that Searle, although the philosopher most commonly associated with emergentism, rejects some very important materialist theories of mind which are emergentist (at least according to the account of emergence offered above).

To be precise, Searle is a staunch opponent of the kind of functionalist account of mind that, for decades, was almost normative in cognitive science research. Functionalism, in its broadest terms, identifies mental states with functional ones—where a functional state is one that is possessed by a physical system when that physical system responds to a certain range of inputs with a predictable range of outputs. A vending machine has a functional state insofar as, when you insert the right amount of money and push the right button, a can of Coke will tumble out. It has a “functional organization” or “pattern of causal relations” (to borrow Searle’s description).
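
For readers who like the idea spelled out, here is a minimal sketch (my own, purely illustrative; the price and product name are invented) of a functional state in this sense: a machine characterized entirely by how it maps a range of inputs onto a range of outputs.

```python
# Illustrative sketch: a "functional state" as a mapping from inputs
# to outputs. Nothing here says what the machine is made of; it is
# characterized entirely by its pattern of causal relations.

class VendingMachine:
    PRICE = 150  # cents (an arbitrary illustrative figure)

    def __init__(self):
        self.credit = 0

    def insert_coin(self, cents):
        # Input: money going in.
        self.credit += cents

    def press_button(self, selection):
        # Input: a button press. Output: a can, if enough money is in.
        if self.credit >= self.PRICE:
            self.credit -= self.PRICE
            return f"a can of {selection}"
        return None

machine = VendingMachine()
machine.insert_coin(100)
machine.insert_coin(50)
print(machine.press_button("Coke"))  # -> "a can of Coke"
```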

The most interesting functional states, from a cognitive science standpoint, are those that computers possess by virtue of their programming. A computer program is, basically, the imposition of a specific functional organization onto a computer’s hardware. When a particular program (Microsoft Word, say) is running, various inputs (keys punched on the keyboard) reliably produce certain outputs (letters appearing consecutively from left to right across the screen). Of course, different programmers can generate similar functional states in different ways, and can do so on different hardware. So the same functional state might be produced on a PC with Microsoft Word, or on a Mac with WordPerfect.
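
And here, continuing in the same illustrative vein (again my own sketch, with invented function names), is multiple realization of a single functional state: two implementations that differ under the hood but respond to every input with the same output.

```python
# Two different "programs" realizing one and the same functional state:
# identical input-output behavior, different underlying machinery.

def echo_typed_keys_v1(keys):
    """Builds the display left to right, one key at a time."""
    display = ""
    for key in keys:
        display += key
    return display

def echo_typed_keys_v2(keys):
    """Same functional state, realized by different internal steps."""
    return "".join(reversed([k for k in reversed(keys)]))

# For every input the outputs agree, so functionally the two programs
# are indistinguishable, whatever their internal differences.
for text in ["hello", "functional state", ""]:
    assert echo_typed_keys_v1(list(text)) == echo_typed_keys_v2(list(text))
```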

The most popular developed form of functionalism is the theory that mental states are akin to computer programs—that is, mental states just ARE the functional organization or software of the brain.

Searle calls this view “Strong AI,” and he has attacked it again and again—most famously with his “Chinese Room” thought experiment. The thought experiment asks us to imagine someone who is isolated in a room and is handed Chinese characters from the outside. He then consults some rather complex instructions that tell him what to do when he receives such-and-such characters. Following these instructions, he puts together a set of characters and hands them out of the room. It turns out that what he is receiving are questions asked in Chinese, and what he returns are answers. The point is that no matter how sophisticated the instructions for what symbolic outputs to provide in response to which symbolic inputs, the man in the room cannot be said to understand Chinese—because the instructions (the “program”) merely indicate how to correctly manipulate the symbols. They don’t say what the symbols mean. Put another way, a program can offer syntax but not semantics. But consciousness has semantic content—in fact, that’s what qualia are. And so any account that can’t explain such content can’t explain consciousness.
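
A toy version of the thought experiment can even be written down (this sketch and its tiny “rulebook” are my own hypothetical illustration, not Searle’s): the program pairs input symbols with output symbols correctly while containing nothing that grasps what the symbols mean.

```python
# Illustrative Chinese Room in miniature: the "rulebook" pairs input
# symbols with output symbols. The program manipulates the strings
# correctly without anything in it that grasps what they mean:
# syntax without semantics.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def man_in_the_room(slip_of_paper):
    # He matches shapes against the rulebook; the meanings given in
    # the comments above are invisible from inside the room.
    return RULEBOOK.get(slip_of_paper, "请再说一遍。")  # "Please say that again."

print(man_in_the_room("你好吗？"))
```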

But here’s the thing: the functional state of a system IS an emergent property of that system—it’s a property that emerges out of how the whole is organized. What Searle’s Chinese Room analogy demonstrates is that it isn’t enough to say that consciousness is an emergent property of brain processes. We need to ask what kind of emergent property it is and how it emerges—and this account has to track onto what we know first-hand about consciousness.

And although Searle is convinced that consciousness IS an emergent property, he has not offered any such account. That’s not his aim, because he doesn’t think he is in a position to do so. Rather, his aim is to spark a research program. He thinks cognitive scientists have been barking up the wrong tree—that their working model for understanding what consciousness IS just doesn’t work, and that as a result their attempts to explain consciousness are really explaining something else entirely (our ability to perform complex calculations, perhaps).

So, to summarize: the emergentist thinks that something about neurological systems—their constitutive elements, their organization, the interactions of the parts—gives rise to or produces on a holistic level this thing we call consciousness. But while one emergent property—the functional organization of the brain—can explain the brain’s capacity to respond to certain inputs (a hot stove scalding a hand) with appropriate outputs (the hand jerking quickly away), or its capacity to perform complex calculations, the functional organization alone is insufficient to account for the content of consciousness.

The problem, of course, is that neuroscientists do not at present have any account of how neurological systems can do this—a fact that most are willing to admit. Sandra Menssen and Thomas Sullivan, in The Agnostic Inquirer, offer some choice quotes from current neuroscience texts that are rather striking in this regard. For example, one of the standard textbooks in the field, Cognitive Neuroscience: The Biology of the Mind, puts it this way: “Right from the start we can say that science has little to say about sentience. We are clueless on how the brain creates sentience.”

Neuroscientists have had considerable success in tracking correlations between neurological events and conscious states—and then in describing the correlated neurological events in growing detail. They can do this, first of all, because their subjects can communicate their conscious states to researchers. Scientists can ask their subjects what they are feeling, sensing, etc., as those subjects’ brains are being probed using MRI technology or other exploratory equipment. To a lesser extent they can also track correlations because they can reasonably posit that their subjects are undergoing certain conscious states based on their own subjective experience of consciousness (they can assume that their research subject is having a certain kind of subjective experience because they’ve just flashed a bright light in the subject’s eyes and because the researchers know what their own subjective experience is when that happens to them).

But although they have been able to track correlations between brain states and conscious states in this way, we might well ask whether they could have made any progress at all in this project in the absence of either subjective reports from their subjects or conclusions based on attributing to their subjects what they find in their own consciousness (through introspection). The answer seems to be no. The reason is that there is nothing about the MRI images or other data that by itself gives any clue as to what the corresponding contents of consciousness should be. There is a gulf between what neuroscientists are looking at and describing (the brain processes) and the correlated conscious states with which we are all familiar.

Could this explanatory gap be one that more scientific study will eventually close? Will we, eventually, be able to understand how neurological events can generate this thing we call consciousness? Many scientists express this hope, and many naturalists rest their ontology on it. They say, in effect, “Scientists have explained many mysteries that previously had been thought to be inexplicable in scientific terms. Just give them time, and they’ll explain consciousness, too.” Searle clearly has this hope—but he thinks the hope can be realized only once scientists aren’t being misdirected by philosophical “accounts” of consciousness that really deny the existence of the data to be explained.

But others think that there is a difference in kind between the mysteries that science has unraveled in the past and the present mystery of consciousness—a difference that makes this explanatory gap more than merely contingent. In effect, the view is this: the nature of neurological systems is such that a scientific understanding of them, no matter how complete, cannot account for consciousness.

The argument for this view traces back at least to Leibniz, who offers the following brief argument in The Monadology:

One is obliged to admit that perception and what depends upon it is inexplicable on mechanical principles, that is, by figures and motions. In imagining that there is a machine whose construction would enable it to think, to sense, and to have perception, one could conceive it enlarged while retaining the same proportions, so that one could enter into it, just like into a windmill. Supposing this, one should, when visiting within it, find only parts pushing one another, and never anything by which to explain a perception. Thus it is in the simple substance, and not in the composite or in the machine, that one must look for perception.
In short, Leibniz thinks the physical organs that might be held responsible for consciousness are mechanical systems—but nothing about mechanistic explanation is in principle capable of accounting for our inner perceptual experience. We find a similar argument advanced in more detail by the great 19th-century German philosopher Hermann Lotze:

…out of all combinations of material conditions the origin of a spiritual condition of the soul never becomes analytically conceivable; or, more simply expressed, if we think of material elements in such a way as to predicate of them nothing which does not belong to the notion of matter, if we simply conceive of them as entities in space which are moveable and may call each other into motion by their power; if we, finally, imagine these motions of one or many elements as varied or combined as we please, there never comes a time when it is self-evident that the motions last produced may no longer remain motions but must be transformed into sensations. A materialism, therefore, which assumed that a spiritual life could spring out of simply physical conditions or motions of bodily atoms would be an empty assumption, and, in this form, has hardly ever been advocated in earnest.
Of course, Lotze was writing before this position was widely and persistently advocated in earnest by a range of thinkers in the 20th century—but his argument has continued to crop up, most recently (in an implicit form) in Chalmers’ zombie thought experiment—the point of which seems to be that there is nothing about an arrangement of physical bodies “pushing on each other,” no matter how complex the system of pushes, that implies consciousness. It is for this reason, I think, that Chalmers is convinced we can always imagine such a system existing but lacking consciousness (a “zombie”). Since nothing about the physical system, if it possesses only physical properties, implies consciousness, it is possible for such a physical system to exist without consciousness.

Chalmers’ solution is one that Lotze was well aware of more than a century before Chalmers proposed it with much fanfare. In fact, here is what Lotze says immediately after his rejection of a simple mechanistic account of consciousness:

The materialistic views which have really had adherents have proceeded from the premise that what we call matter is really better than it externally appears. It contains in itself the fundamental peculiarity out of which the spiritual conditions may develop just as well as physical predicates—extension, impenetrability, etc.—are developed out of another fundamental peculiarity. From this results the new attempt, out of the reciprocal operations of these psychical elementary forces to elucidate all the elements of the spiritual life just as its bodily life is derived from the reciprocation of the physical elementary forces of its constituents.
Lotze goes on to challenge this “Chalmerian” view on the grounds that it cannot account for the unity of consciousness—but let me leave this aside for now. The point that Lotze wants to make—a point echoed by Chalmers more than a century later—is that there is nothing about purely mechanistic explanation that renders consciousness “analytically conceivable” in terms of it.

Menssen and Sullivan offer their own analogy for getting at this explanatory disconnect. Here is how they put the point:

Your child has a pull toy that features a little man in a box whose head pops in and out of the box as the toy is pulled along. You wonder, why does the head pop in and out? You examine the toy and see that the wheels are affixed to an axle with a rise in the middle; the little man sits on the rise, so his head goes up and down with each revolution of the wheels. Now your friend comes in and asks, ‘Why does the man’s head pop in and out?’ So you explain. And your friend says, ‘I understand all that, but why does the head pop in and out when the toy is pulled along?’ The question is bizarre: if your friend really understood everything you have said, it makes no sense to continue to ask why the head pops in and out.
This “making no sense to keep asking why once the explanation is understood” is what Lotze has in mind when he speaks of a phenomenon being “analytically conceivable” in relation to a particular kind of explanation—the explanation just shows us how the phenomenon in question is brought about. And this, Menssen and Sullivan maintain, is a feature of any genuine causal explanation. In their terms, “If a putative explanation of a phenomenon is a genuine causal explanation, then if you grasp the explanation in relation to the phenomenon, it cannot reasonably be asked: ‘But why does the phenomenon occur?’”

They follow their articulation of this principle with the following crucial claim: “No matter how much is said about the nervous system, as long as what is said is confined to statements of fundamental physics and chemistry, you will always be able to ask ‘But why does that produce consciousness?’”

The contention here is not only that current mechanistic explanations fall short of accounting for consciousness, but that “more of the same” sort of explanation won’t close the gap—because the problem lies with the kind of explanation being offered, rather than with the amount of detail involved.

To see this point, consider an analogy my friend and colleague John Kronen likes to employ (one that dates him—and me, since I am able to appreciate it). Suppose someone comes upon Samantha Stephens wiggling her nose to miraculous effect. She wiggles, and the vacuum flies out of the closet and cleans the house all by itself. She wiggles, and her poor husband Darrin materializes in the living room, blinking in surprise. Suppose someone came along and said, “Oh, I see! No mystery here. These events are explained by the wiggling of her nose.” Well, we wouldn’t be satisfied.

Now suppose that the person took to studying Samantha’s nose-wiggles and began to observe and record correlations between how she wiggles her nose and what happens. A long wiggle to the left followed by two short ones to the right precedes every instance of inanimate objects moving on their own; two short left wiggles followed by two short right wiggles precede every instance of teleportation, etc. Would we now be inclined to say, “Oh, now I get it!”? Of course not. And no matter how detailed the study of the patterns of nose movements—no matter how perfect the correspondence between distinctive sorts of nose wiggles and distinctive events—we would be no closer to having an explanation of how Samantha Stephens does what she does. Nose wiggles are analytically disconnected from flying objects and teleportation, such that they have no capacity to close the explanatory gap.

The claim, in effect, is that physical brain events bear the same relation to consciousness. They are analytically disconnected in such a way that it is not possible to close the explanatory gap.

Of course, it is one thing to say this, another for it to be true. But here is the problem. If someone were to ask why Samantha’s nose-wiggles are analytically disconnected from flying objects so as to be incapable by themselves of providing an adequate explanation of the latter, I would be hard pressed to offer anything other than, “Well, think about her nose wiggles. Think about flying objects. They have nothing to do with each other.” The sense of disconnect here is so intuitively evident that, in the absence of some astonishingly unexpected explanation that succeeds in establishing a connection, one is justified in assuming that “more of the same” won’t narrow the explanatory gap. We need to look past her nose and introduce some further element that can make the connection.

But, of course, defenders of materialist conceptions of consciousness think brains and minds have everything to do with each other—and so it may well be the case that what we have here is (once again) a basic dichotomy of intuitions. Those who find the explanatory gap argument persuasive have an intuitive and immediate sense of the distinctness of consciousness and mechanistic processes—and this intuitive sense entails that in the absence of a causal explanation that succeeds in closing the explanatory gap, the presumptive standpoint will be that the gap can’t be closed by that kind of explanation.

This is where I am positioned. And because I am positioned as I am, no materialist account of consciousness will be convincing in the absence of an explanation that actually closes the explanatory gap. But for those with different basic intuitions, the situation may be very different.

So what does all of this mean? Do I think that scientists should stop trying to explain consciousness in terms of the brain? No. But it does mean that unless and until they succeed, those like myself—those who see the disparity between brain processes and conscious states as being as enormous (more enormous, actually) as that between nose-wiggles and self-propelled vacuums—won’t believe it until we see it. For us, given where we stand intuitively, the burden of proof rests on the materialist to show that the explanatory gap can be closed by a deeper study of the brain alone.

In the meantime, we’ll conduct our own inquiries—looking for something more, some additional element, that can bridge the gulf between mechanistic explanations and the phenomenon of consciousness, and so explain the correlations that scientists have shown to exist between the two.

That different people, with different basic intuitions, are pursuing different theories and attempting to find ways to substantiate them (especially to those who stand on the other side of the intuitive gap) can, it seems to me, only be a good thing—although there are, of course, plenty of people on both sides of the divide who think that what those on the other side are attempting is absurd, pointless, and worthy of nothing but mockery.