Sunday, August 15, 2010

Materialist Conceptions of Mind, Part II: Emergentism and the Explanatory Gap

In my last post I explained why I found a simple identification between mental phenomena and brain processes to be untenable. But such identification of mind and brain is not the only option for the materialist—and so even if my case against identificationism is sound, it doesn’t follow that one must give up on materialism. One alternative to simple identification is emergentism—the idea that consciousness is an emergent property of the brain.


This view holds (in the words of John Searle, the most prominent defender of emergentism) “that brain processes cause consciousness but that consciousness is itself a feature of the brain”—a feature that emerges because of something about the distinctive structure and organization and activity of the brain. It is this alternative I want to turn to now.

To get at the idea of what emergentism maintains, it may help to start with an example. On my desk, I have a framed picture of my kids—taken just over four years ago when my daughter was an infant. My son is leaning towards her, laughing while he looks at her. He’s wearing an armadillo t-shirt and is holding a green plastic cup. My daughter is wearing a red bodysuit and looks like she is boxing the air with her little fists.

This same picture is one I've uploaded onto my computer, and it is now part of my slideshow screensaver. When this picture appears on my computer screen, I can rightly say, “That is the same picture as the one in the frame on my desk.” And I can rightly say that because what I am referring to as the picture is something abstracted from (in one case) pixels and underlying hardware controlled by computer programming and (in the other case) ink distributed on photo paper. What I mean by "the picture" is this something that can emerge from each underlying physical substrate—something that both of these very different physical substrates have in common.

And what is it that they have in common? It has something to do with organization. In the case of the framed photo, dots of ink in different colors are arranged on paper in a pattern that produces a holistic visual effect on the viewer. In the case of the screen image, illuminated pixels take the place of dots of ink, but the pattern into which those pixels are organized is the same. And so we have these two very different physical substrates that each succeed in generating identical images (that is, identical in kind; they are not numerically the same thing).
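(For readers who find a sketch in code clearer than prose, here is a minimal illustration of the point. The pattern and the two rendering functions are invented for the example; nothing hangs on the details.)

    # A minimal sketch: the "picture" is the shared organization,
    # not the ink or the light that happens to realize it.
    PATTERN = [
        "X.X",
        ".X.",
        "X.X",
    ]

    def render_as_ink(pattern):
        # Substrate 1: ink on photo paper ("*" = dot of ink, " " = blank paper).
        return "\n".join(row.replace("X", "*").replace(".", " ") for row in pattern)

    def render_as_pixels(pattern):
        # Substrate 2: a screen ("#" = illuminated pixel, "." = dark pixel).
        return "\n".join(row.replace("X", "#") for row in pattern)

    # Two physically different outputs that are identical in kind: both
    # realize the same pattern, and that shared organization is what we
    # refer to as "the same picture."
    print(render_as_ink(PATTERN))
    print(render_as_pixels(PATTERN))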

In each case, the picture of my children is an emergent property of the underlying physical substrate—it is a feature of the physical system that is produced by that system as a whole by virtue of what is true of the parts, but which is not a feature of any of the parts.

Arguably, the emergent property in either case doesn’t really “emerge” in the absence of an observer who has the capacity to find in the similar organizational structures of the two physical substrates a shared meaning. In other words, at least in some cases, emergence requires a subject who is capable of meaning-attributions. Without that observer, we have an arrangement of inkblots or of illuminated pixels, but we simply don’t have an image of my children. That requires someone to find meaning in the pattern—and neither a piece of photo paper with ink on it nor a computer can do that. This point, although not one I will pursue at the moment, may prove to be of great significance for thinking clearly about the emergence of consciousness.

Now, it is not necessary for emergence that there be multiple and distinct physical substrates that are somehow capable of giving rise to the same kind of property. I choose this example for two reasons. First, the fact that there are two very different physical substrates helps to isolate the emergent property and identify it as distinguishable from the substrate which produces it. Second and more importantly, this example is useful for highlighting some of the advantages of emergentism over identificationism with respect to consciousness.

The most obvious advantage, highlighted by this example, is that emergentism makes room for multiple realizations of consciousness. That is, different sorts of physical systems—human brains and, say, “positronic” ones—can both give rise to this thing we call consciousness.

Second, the example makes clear that an emergent property is not to be identified with the underlying physical system that causes it, even if it is causally explained or accounted for in each particular case by the more basic properties and structural features of that system. And because of this fact, one does not run into the sorts of problems that identificationism poses with respect to relational properties.

So, to go back to the picture of my children, I am intimately familiar with this image even though, in the case of its instantiation on my computer, I know very little about the underlying mechanisms which produce it. Since the image on the computer screen is caused by but distinct from the physical substrate that produces it, there is no problem that arises from this difference in relational properties. Put simply, one can be perfectly familiar with one property of a thing without being at all familiar with other properties. And so, if consciousness is an emergent property of brain processes, the fact that I am not familiar with any of the other properties of the underlying brain processes poses no difficulty at all. Likewise, that scientific investigation of a bat’s brain can’t tell us what it’s like to be a bat doesn’t cause the same degree of trouble, because a mode of inquiry that can describe one range of properties possessed by a thing might not be able to tell us everything there is to know about the thing. Some properties might be inaccessible to that mode of inquiry.

Now before turning to the challenges faced by emergentism, let me say a few words about what emergentism claims—and what it doesn’t claim—about consciousness. Here, I want to specifically stress that Searle, although the philosopher most commonly associated with emergentism, rejects some very important materialist theories of mind which are emergentist (at least according to the account of emergence offered above).

To be precise, Searle is a staunch opponent of the kind of functionalist account of mind that, for decades, was almost normative in cognitive science research. Functionalism, in its broadest terms, identifies mental states with functional ones—where a functional state is one that is possessed by a physical system when that physical system responds to a certain range of inputs with a predictable range of outputs. A vending machine has a functional state insofar as, when you insert the right amount of money and push the right button, a can of Coke will tumble out. It has a “functional organization” or “pattern of causal relations” (to borrow Searle’s description).

The most interesting functional states, from a cognitive science standpoint, are those that computers possess by virtue of their programming. A computer program is, basically, the imposition of a specific functional organization onto a computer’s hardware. When a particular program is running (Microsoft Word, say), then various inputs (keys punched on the keyboard) reliably produce certain outputs (letters appearing consecutively from left to right across the screen). Of course, different programmers can generate similar functional states in different ways, and can do so on different hardware. So the same functional state might be produced on a PC with Microsoft Word, or on a Mac with WordPerfect.
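(Again, for those who like a sketch in code: here is a minimal, invented illustration of a functional state. The two vending machines below differ in their internal workings, but because they respond to the same range of inputs with the same range of outputs, the functionalist counts them as occupying the same functional state.)

    # A minimal sketch with invented details: functionalism identifies the
    # state with the input-output pattern, not with the mechanism behind it.

    def vending_machine_a(cents_inserted, button):
        # Realization 1: explicit branching logic.
        if cents_inserted >= 100 and button == "cola":
            return "can of Coke"
        return "nothing"

    def vending_machine_b(cents_inserted, button):
        # Realization 2: a table lookup; different internals, same behavior.
        outcomes = {(True, "cola"): "can of Coke"}
        return outcomes.get((cents_inserted >= 100, button), "nothing")

    # Same functional state, differently realized:
    for machine in (vending_machine_a, vending_machine_b):
        assert machine(100, "cola") == "can of Coke"
        assert machine(50, "cola") == "nothing"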

The most popular developed form of functionalism is the theory that mental states are akin to computer programs—that is, mental states just ARE the functional organization or software of the brain.

Searle calls this view “Strong AI,” and he has attacked it again and again—most famously with his “Chinese Room” thought experiment. The thought experiment asks us to imagine someone who is isolated in a room and has Chinese characters given to him from the outside. He then consults some rather complex instructions that tell him what to do when he receives such-and-such characters. Following these instructions, he puts together a set of characters and hands them out of the room. It turns out that what he is receiving are questions asked in Chinese, and what he returns are answers. The point is that no matter how sophisticated the instructions for what symbolic outputs to provide in response to which symbolic inputs, the man in the room cannot be said to understand Chinese—because the instructions (the “program”) merely indicate how to correctly manipulate the symbols. They don’t say what the symbols mean. Put another way, a program can offer syntax but not semantics. But consciousness has semantic content—in fact, that’s what qualia are. And so, any system that can’t explain such content can’t explain consciousness.
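(The syntax/semantics point can also be put in code. The following is my own minimal sketch, not Searle’s formulation, and the rule-book entries are invented: a “room” that produces fluent-looking replies purely by matching symbol strings, with nothing anywhere in the program that represents what the symbols mean.)

    # A minimal sketch: pure symbol manipulation. The program treats the
    # strings as opaque tokens; no part of it represents their meaning.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",        # invented entries; to the program
        "你叫什么名字？": "我没有名字。",    # these are just character strings
    }

    def chinese_room(incoming):
        # Follow the instructions: match the incoming characters against the
        # rule book and hand out whatever characters it dictates. At no point
        # does anything here "know" that the input was a question.
        return RULE_BOOK.get(incoming, "请再说一遍。")

    print(chinese_room("你好吗？"))  # fluent-looking output, zero understanding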

But here’s the thing: the functional state of a system IS an emergent property of that system—it’s a property that emerges out of how the whole is organized. What Searle’s Chinese Room analogy demonstrates is that it isn’t enough to say that consciousness is an emergent property of brain processes. We need to ask what kind of emergent property it is and how it emerges—and any such account has to square with what we know first-hand about consciousness.

And although Searle is convinced that consciousness IS an emergent property, he has not offered any such account. That’s not his aim, because he doesn’t think he is in a position to do so. Rather, his aim is to spark a research program. He thinks cognitive scientists have been barking up the wrong tree—that their working model for understanding what consciousness IS just doesn’t work, and that as a result their attempts to explain consciousness are really explaining something else entirely (our ability to perform complex calculations, perhaps).

So, to summarize: the emergentist thinks that something about neurological systems—their constitutive elements, their organization, the interactions of the parts—gives rise to or produces on a holistic level this thing we call consciousness. But while one emergent property—the functional organization of the brain—can explain the brain’s capacity to respond to certain inputs (a hot stove scalding a hand) with appropriate outputs (the hand jerking quickly away), or its capacity to perform complex calculations, the functional organization alone is insufficient to account for the content of consciousness.

The problem, of course, is that neuroscientists do not at present have any account of how neurological systems can do this—a fact that most are willing to admit. Sandra Menssen and Thomas Sullivan, in The Agnostic Inquirer, offer some choice quotes from current neuroscience texts that are rather striking in this regard. For example, one of the standard textbooks in the field, Cognitive Neuroscience: The Biology of the Mind, puts it this way: “Right from the start we can say that science has little to say about sentience. We are clueless on how the brain creates sentience.”

Neuroscientists have had considerable success in tracking correlations between neurological events and conscious states—and then in describing the correlated neurological events in growing detail. They can do this, first of all, because their subjects can communicate their conscious states to researchers. Scientists can ask their subjects what they are feeling, sensing, etc., as those subjects’ brains are being probed using MRI technology or other exploratory equipment. To a lesser extent they can also track correlations because they can reasonably posit that their subjects are undergoing certain conscious states based on their own subjective experience of consciousness (they can assume that their research subject is having a certain kind of subjective experience because they’ve just flashed a bright light in the subject’s eyes and because the researchers know what their own subjective experience is when that happens to them).

But although they have been able to track correlations between brain states and conscious states in this way, we might well ask whether they could have made any progress at all in this project in the absence of either subjective reports from their subjects or conclusions based on attributing to their subjects what they find in their own consciousness (through introspection). The answer seems to be no. And the reason is that there is nothing about the MRI images or other data that by itself gives any clue as to what the corresponding contents of consciousness should be. There is this gulf between what neuroscientists are looking at and describing (the brain processes) and the correlated conscious states with which we are all familiar.

Could this explanatory gap be one that more scientific study will eventually close? Will we, eventually, be able to understand how neurological events can generate this thing we call consciousness? Many scientists express this hope, and many naturalists rest their ontology on it. They say, in effect, “Scientists have explained many mysteries that previously had been thought to be inexplicable in scientific terms. Just give them time, and they’ll explain consciousness, too.” Searle clearly has this hope—but he thinks the hope can be realized only once scientists aren’t being misdirected by philosophical “accounts” of consciousness that really deny the existence of the data to be explained.

But others think that there is a difference in kind between the mysteries that science has unraveled in the past and the present mystery of consciousness—a difference that makes this explanatory gap more than merely contingent. In effect, the view is this: the nature of neurological systems is such that a scientific understanding of them, no matter how complete, cannot account for consciousness.

The argument for this view traces back at least to Leibniz, who offers the following brief argument in The Monadology:

One is obliged to admit that perception and what depends upon it is inexplicable on mechanical principles, that is, by figures and motions. In imagining that there is a machine whose construction would enable it to think, to sense, and to have perception, one could conceive it enlarged while retaining the same proportions, so that one could enter into it, just like into a windmill. Supposing this, one should, when visiting within it, find only parts pushing one another, and never anything by which to explain a perception. Thus it is in the simple substance, and not in the composite or in the machine, that one must look for perception.
In short, Leibniz thinks that the physical organs which might be thought responsible for consciousness are mechanical systems—but there is nothing about mechanistic explanation which is in principle capable of accounting for our inner perceptual experience. We find a similar argument advanced in more detail by the great 19th-century German philosopher Hermann Lotze:

…out of all combinations of material conditions the origin of a spiritual condition of the soul never becomes analytically conceivable; or, more simply expressed, if we think of material elements in such a way as to predicate of them nothing which does not belong to the notion of matter, if we simply conceive of them as entities in space which are moveable and may call each other into motion by their power; if we, finally, imagine these motions of one or many elements as varied or combined as we please, there never comes a time when it is self-evident that the motions last produced may not longer remain motions but must be transformed into sensations. A materialism, therefore, which assumed that a spiritual life could spring out of simply physical conditions or motions of bodily atoms would be an empty assumption, and, in this form, has hardly ever been advocated in earnest.
Of course, Lotze was writing before this position was widely and persistently advocated in earnest by a range of thinkers in the 20th Century—but his argument has continued to crop up, most recently (in an implicit form) in Chalmers’ zombie thought experiment—the point of which seems to be that there is nothing about an arrangement of physical bodies “pushing on each other,” no matter how complex the system of pushes, that implies consciousness. It is for this reason, I think, that Chalmers is convinced we can always imagine such a system existing but lacking consciousness (a “zombie”). Since nothing about the physical system, if it possesses only physical properties, implies consciousness, it is possible for such a physical system to exist without consciousness.

Chalmers’ solution is one that Lotze was well aware of more than a century before Chalmers proposed it with much fanfare. In fact, here is what Lotze says immediately after his rejection of a simple mechanistic account of consciousness:

The materialistic views which have really had adherents have proceeded from the premise that what we call matter is really better than it externally appears. It contains in itself the fundamental peculiarity out of which the spiritual conditions may develop just as well as physical predicates—extension, impenetrability, etc.—are developed out of another fundamental peculiarity. From this results the new attempt, out of the reciprocal operations of these psychical elementary forces to elucidate all the elements of the spiritual life just as its bodily life is derived from the reciprocation of the physical elementary forces of its constituents.
Lotze goes on to challenge this “Chalmerian” view on the grounds that it cannot account for the unity of consciousness—but let me leave this aside for now. The point that Lotze wants to make—a point echoed by Chalmers more than a century later—is that there is nothing about purely mechanistic explanation that renders consciousness “analytically conceivable” in terms of it.

Menssen and Sullivan offer their own analogy for getting at this explanatory disconnect. Here is how they put the point:

Your child has a pull toy that features a little man in a box whose head pops in and out of the box as the toy is pulled along. You wonder, why does the head pop in and out? You examine the toy and see that the wheels are affixed to an axle with a rise in the middle; the little man sits on the rise, so his head goes up and down with each revolution of the wheels. Now your friend comes in and asks, ‘Why does the man’s head pop in and out?’ So you explain. And your friend says, ‘I understand all that, but why does the head pop in and out when the toy is pulled along?’ The question is bizarre: if your friend really understood everything you have said, it makes no sense to continue to ask why the head pops in and out.
This “making no sense to keep asking why once the explanation is understood” is what Lotze has in mind when he speaks of a phenomenon being “analytically conceivable” in relation to a particular kind of explanation—the explanation just shows us how the phenomenon in question is brought about. And this, Menssen and Sullivan maintain, is a feature of any genuine causal explanation. In their terms, “If a putative explanation of a phenomenon is a genuine causal explanation, then if you grasp the explanation in relation to the phenomenon, it cannot reasonably be asked: ‘But why does the phenomenon occur?’”

They follow their articulation of this principle with the following crucial claim: “No matter how much is said about the nervous system, as long as what is said is confined to statements of fundamental physics and chemistry, you will always be able to ask ‘But why does that produce consciousness?’”

The contention here is that not only do current mechanistic explanations fall short of accounting for consciousness, but that “more of the same” sort of explanation won’t close the gap—because the problem lies with the kind of explanation being offered, rather than with the amount of detail involved.

To see this point, consider an analogy my friend and colleague John Kronen likes to employ (one that dates him—and me, since I am able to appreciate it). Suppose someone came upon Samantha Stephens wiggling her nose to miraculous effect. She wiggles, and the vacuum flies out of the closet and cleans the house all by itself. She wiggles, and her poor husband Darrin materializes in the living room, blinking in surprise. Suppose someone came along and said, “Oh, I see! No mystery here. These events are explained by the wiggling of her nose.” Well, we wouldn’t be satisfied.

Now suppose that the person took to studying Samantha’s nose-wiggles and began to observe and record correlations between how she wiggles her nose and what happens. A long wiggle to the left followed by two short ones to the right precedes every instance of inanimate objects moving on their own; two short left wiggles followed by two short right wiggles precede every instance of teleportation, etc. Would we now be inclined to say, “Oh, now I get it!”? Of course not. And no matter how detailed the study of the patterns of nose movements—no matter how perfect the correspondence between distinctive sorts of nose wiggles and distinctive events—we would be no closer to having an explanation of how Samantha Stephens does what she does. Nose wiggles are analytically disconnected from flying objects and teleportation, such that they have no capacity to close the explanatory gap.

The claim, in effect, is that physical brain events bear the same relation to consciousness. They are analytically disconnected in such a way that it is not possible to close the explanatory gap.

Of course, it is one thing to say this, another for it to be true. But here is the problem. If someone were to ask why Samantha’s nose-wiggles are analytically disconnected from flying objects so as to be incapable by themselves of providing an adequate explanation of the latter, I would be hard-pressed to offer anything other than, “Well, think about her nose wiggles. Think about flying objects. They have nothing to do with each other.” The sense of disconnect here is so intuitively evident that, in the absence of some astonishingly unexpected explanation that succeeds in establishing a connection, one is justified in assuming that “more of the same” won’t narrow the explanatory gap. We need to look past her nose and introduce some further element that can make the connection.

But, of course, defenders of materialist conceptions of consciousness think brains and minds have everything to do with each other—and so it may well be the case that what we have here is (once again) a basic dichotomy of intuitions. Those who find the explanatory gap argument persuasive have an intuitive and immediate sense of the distinctness of consciousness and mechanistic processes—and this intuitive sense entails that in the absence of a causal explanation that succeeds in closing the explanatory gap, the presumptive standpoint will be that the gap can’t be closed by that kind of explanation.

This is where I am positioned. And because I am positioned as I am, no materialist account of consciousness will be convincing in the absence of an explanation that actually closes the explanatory gap. But for those with different basic intuitions, the situation may be very different.

So what does all of this mean? Do I think that scientists should stop trying to explain consciousness in terms of the brain? No. But it does mean that unless and until they succeed, those like myself—those who see the disparity between brain processes and conscious states as being as enormous as (indeed, more enormous than) that between nose-wiggles and self-propelled vacuums—won’t believe it until we see it. For us, given where we stand intuitively, the burden of proof rests on the materialist to show that the explanatory gap can be closed by nothing other than a deeper study of the brain.

In the meantime, we’ll conduct our own inquiries—looking for something more, some additional element, that can bridge the gulf between mechanistic explanations and the phenomenon of consciousness, and so explain the correlations that scientists have shown to exist between the two.

That different people, with different basic intuitions, are pursuing different theories and attempting to find ways to substantiate them (especially to those who stand on the other side of the intuitive gap) can, it seems to me, only be a good thing—although there are, of course, plenty of people on both sides of the divide who think that what those on the other side are attempting is absurd, pointless, and worthy of nothing but mockery.

23 comments:

  1. Hi, All-

    The Leibniz and Lotze quotes were interesting, but they obviously labored under a very mechanical image of our inner workings (motions! windmills!), and had no notion of the electrical, informational, and other rapid-fire processes we know of today. So it would have been easier to repudiate the whole idea as preposterous. But neither is really an argument- they seem more like statements of intuition.

    It certainly is understandable that- "matter is really better than it externally appears"- seems like a daft idea. The Egyptians thought the brain was just a lot of snot, after all. But now we know it is quite a bit better than it appears. Whether it accounts for the whole ball of wax may be open, but it already accounts for a very large proportion of mental operations.

    "They follow their articulation of this principle with the following crucial claim: “No matter how much is said about the nervous system, as long as what is said is confined to statements of fundamental physics and chemistry, you will always be able to ask ‘But why does that produce consciousness?’”"

    Well, that yet again seems like a rather bold claim. We may well get there. Note that we don't know how the brain works yet. So these kinds of claims are speculative, easily as speculative as my counter claim that it will all become clear once we know enough about the brain.

    "In the meantime, we’ll conduct our own inquiries—looking for something more, some additional element..."

    Ah- best of luck. But what kind of an exploration will this be? I think this is a deep question, since the mode of inquiry says a lot. I get the sense that you will be mining your intuitions more and more deeply, delving into the depths of your soul. This is problematic in several ways, including- our naive intuitions about the brain have already been blown away (in part) by neuroscience, our intuitions have been documented to be wrong in many other instances, and our intuitions tend to lead in narcissistic directions, which is where the study of the mind, as you quote and explain so extensively, has been frozen for millennia. So, while I hope to not descend to mockery, it seems to me that there are serious difficulties with your project.

    Let me add one more analogy- an airplane. The passengers in flight have a radically different experience and view of its properties than those on the ground looking up at it. With quite a bit of intellectual imagination, observers from each perspective can get their minds around the perspective of the other, (if they even conceive that there is such an alternate perspective), but it takes some work and extra knowledge beyond just how it feels to be in one's own position.

    Some understanding of what the airplane is doing and how it works would be helpful ingredients in bridging the two perspectives. For instance, neither the people on the ground nor the people on the plane know what is going on in the cockpit, which is an essential part of the operation, as are other invisible communications, etc. Perhaps then the question becomes.. who is my co-pilot?! (That was just a little joke- sorry.)

  2. Eric

    I fear I will lapse into repetition on this topic, but I can't help myself (where's one's will when one needs it?)

    First, the problem with all the thought experiments used in this debate is that they crank out exactly the intuition you take into them. A materialist does not find the Chinese room puzzling. The man does not understand Chinese, nor does an individual neuron in a Chinese speaker. The brain does though, if understanding is a materialist phenomenon, and so by analogy does the room. So it goes for zombies, colourblind scientists, windmills and bats. I am puzzled as to why they are employed at all, as they serve only to clarify our own intuitions or prejudices, yet are presented all too often as arguments in favour of an anti-materialist position. To put it as plainly as I can, is there anything in the Chinese Room scenario that stops me assuming the whole room is conscious, other than the intuition that consciousness isn't like that?

    Second, in the absence of the explanation, and I agree, it's still to be delivered by science, is one approach or intuition better than another? Probably not, so long as neither gets in the way of active curiosity and investigation. I think it's unfair to argue science hasn't got anywhere with consciousness yet. Incredible progress has been made. Yes, there's still mystery to hide behind, but it's disingenuous to argue that mystery isn't shrinking by the day. (Consider if you will investigations of patients who claim to be blind but show under testing evidence of sightedness. That tells me some pretty interesting things about what consciousness is that we didn't used to know).

    Finally, you may be able to help me clarify what is meant by emergence. Would you say, for instance, that the relationship between a circle's diameter and circumference emerges from its being a locus of points equidistant from its centre?

    Thanks for these posts by the way. I love this stuff and you are a tremendously clear communicator.

    Bernard

  3. What an impressive title you have for this post.

  4. Burk and Bernard,

    “Ah- best of luck. But what kind of an exploration will this be? I think this is a deep question, since the mode of inquiry says a lot. I get the sense that you will be mining your intuitions more and more deeply, delving into the depths of your soul.” (Burk)

    “Second, in the absence of the explanation, and I agree, it's still to be delivered by science, is one approach or intuition better than another? Probably not, so long as neither gets in the way of active curiosity and investigation.” (Bernard)

    “This is where I am positioned. And because I am positioned as I am, no materialist account of consciousness will be convincing in the absence of an explanation that actually closes the explanatory gap. But for those with different basic intuitions, the situation may be very different.” (Eric)

    Burk, you note that Eric is coming at this from an “intuition” which I actually believe is better understood as a “faith” or world-view perspective, while you write as if you are coming at it from a “scientific” view or a view that is “only” based upon “reason.” As Eric notes above, we are all coming at this from a certain “faith” or intuitive perspective. I would caution you that your own view derives also from the well-spring of your imagination, your faith or world-view perspective.

    Bernard, I would challenge you that a materialist perspective might indeed get in the way of active curiosity and investigation because all one need do is read the major figures in the field and it is clear that only a materialist (of some type) view of things is even considered. Yes, on blogs and in many informal settings other views are considered, but they are hardly considered in the literature or in formal settings. How is that open, curious, or effective investigation? Maybe I’m wrong. If you do know of some well known leaders in this field that seriously hold or consider views like Eric’s, I would love to know who they are.

    Also, I don’t think Eric was saying that, “science hasn't got anywhere with consciousness yet…” but was simply pointing out the rather obvious fact that this explanatory gap, which is huge, remains.

    Finally, and Eric hopefully can shed some light here, but the way you (Bernard) are interpreting the Chinese room scenario doesn’t make sense to me at all. You seem to be moving the core problem back one step (from the man to the room), which does nothing to solve it. Please help me to understand your point and why you think it doesn’t raise a problem for the strict materialist.

  5. Let me take up Bernard's Chinese room argument, which is very interesting. Do we "understand" English? When we are "looking" for a word, that word usually "pops" into our head in time. But sometimes it doesn't. Sometimes it is on the tip of our tongue, but doesn't arrive. We are quite oblivious to the mechanisms behind our so-called understanding of English. Our ideas translate (usually) effortlessly into English sentences, but then we have to go back and edit them because what came out wasn't really what we meant, or sometimes we even have to make up a new word to express a thought. Sometimes, it can be quite laborious indeed.

    This is all to say that we have this Chinese room within us, where proper translations of thoughts and incoming language pop out on a routine basis, but the inner workings of that room are just like Searle presents.. unacquainted with the total picture of our thoughts. Sometimes a stroke can shut this room down, even though our mind otherwise is quite functional.

    When we are learning a new language, there is a dynamic going on where we seek to join concepts with language symbols, getting the coding more and more precise. It is obviously not a conscious process underneath, but one where we beat our heads against a wall practicing, until some unconscious process lays the pattern down reliably enough that this inner Chinese room operates with sufficient accuracy and speed that we can say we "understand" this language. Which is to say, we get symbols out of it that express our thoughts, and we can get external symbols translated into conceptual thoughts.

    Lastly, there is the issue of joining concepts with language in general. This is where the Chinese room analogy is lacking, since it is only a symbol-to-symbol translator, not a symbol to thought translator. Our "understanding" of a language extends beyond the ability to translate it to another symbol stream, but more importantly includes the ability to integrate the symbols from language into a larger mental reality that is partly language-agnostic (though also language-influenced).

    This mental model includes reality, cultural fantasy, personal fantasy, .. everything we have in our heads, which is richly inter-related in a network database slowly built up over childhood and beyond, which in turn provides the superset to which we can match language-borne concepts or other perceptions and novel thoughts we might have. Which is my model of how we "understand" things. This accords with how each person "understands" things differently, because each person has a different internal database, however much we strive to regiment learning so that everyone "knows" the same things.

  6. I've got only a couple of minutes, so let me limit myself to one comment, relating to Bernard's observation that what lessons we take from various thought experiments seems to be a function of the intuitions we bring into them.

    I think that, in a way, this is exactly right. But what it highlights is the function that thought experiments serve (and what we CAN'T expect them to do). As I understand thought experiments, they are "intuition pumps" (in fact, some philosophers call them that explicitly, which I think is helpful). That is, thought experiments help us to see WHAT our intuitions ARE.

    This can be very useful in helping us to coordinate our belief systems so as to achieve maximal consistency. Of course, consistency of beliefs isn't a guarantor of truth, but it is a necessary condition for it.

    But intuition pumps can also help in a different way: they can help to uncover the roots of disagreement. That is, they can help us to see where the differences really lie between competing voices in a debate (and, by implication, where there are points of agreement). While some thought experiments pump out the same intuitive judgment among virtually all those exposed to them, others "invite" a certain intuitive conclusion--and while many "take up" the invitation (they say, "Yes, that's my intuition about the case"), others don't. Both lessons are important for better understanding the roots of disagreement.

  7. Hi Eric

    Yes, I agree with that, it's a good point. Indeed I must admit that grappling with these very intuition pumps has helped me clarify the issues.

    What I should have said more clearly is that it is something of a debating trick to refer to these pumps as generating facts, and I suppose I was gently calling you on this use in your first consciousness post.

    Your argument, elegantly put, can be re-expressed as 'consciousness just doesn't feel physical to me.' Mine, on the same terms, is 'I can imagine how it might be physical.' I think it becomes troublesome when we start claiming the other side has some serious logical challenges based only on intuition pumps we know full well can be interpreted either way.

    For me the crucial point is that despite the objections science is making good progress.
    I'll pursue this point when I get time.

    Bernard

  8. Darrell

    I agree with you that enquiry must remain open minded, and if there is indeed some conspiracy against this then it is a grave mistake. A good example might be the reluctance brain science once showed to engaging with eastern traditions of meditation, when it turns out these are deeply promising subjects when it comes to exploring certain mental processes.

    I would also agree that my reframing of the Chinese Room puzzle simply shifts the problem. That of course is the whole point. It's what the puzzle does. The only way you can respond 'gee, yes, there is something more to consciousness' is by excluding the possibility in advance that consciousness is a purely physical phenomenon. If you don't, then the option that a machine smart enough to engage in convincing conversation (as it is under most framings of this problem) is indeed also conscious, remains an open one and the scenario establishes nothing new.

    Burk, often the Chinese Room enthusiasts go beyond a simple translation device (in which case your explanation is exactly right) to one that can participate in convincing conversation. I suspect that in order to pass such a Turing test consciousness is probably necessary, because in the final analysis this is what consciousness is going to turn out to be, a method by which complex information is processed and responded to.

    Bernard

  9. In the Chinese Room experiment as stated, the person in the room is said to actually answer questions asked in Chinese – supposedly on a number of topics. Then, it seems to be stating the obvious that the whole system (person, room, interface, and so on) displays actual knowledge of Chinese and of the domain about which it is able to answer questions and, moreover, at some semantic level (depending on the kind of questions). Saying that the person does not know Chinese is the same as saying that the CPU of a computer does not know what programs running on it are doing. In the same manner a sophisticated robot could display real intelligence. I don't know what it implies about consciousness though – in fact, the experiment seems unrelated to consciousness altogether.

    A word about intuitions. There is a lot of talk about our basic intuitions differing (re: Searle vs Dennett), thus sending us in different directions. This is certainly true, up to a point. But, knowing our intuitions are often incorrect, we don't have to believe them. For instance, I certainly have a strong intuition that there is some unified self that is me. However, considering the troubling results of many experiments (in particular with split brain patients) I am perfectly willing to accept (at least on an intellectual level) that this is not the case and that this probably inescapable intuition is wrong.

    Bernard – You have intrigued me with Hofstadter's use of Gödel's theorem and I found the book a couple of days ago in a used book store. I didn't have time to read all of it (I think it's too long) but there is lots of stimulating stuff. I like the idea of consciousness being what happens when high-level self-referential symbols start bouncing around in the brain. Far from a complete explanation though.

  10. I wonder how the difficulty of explaining consciousness compares to what has been in the past the difficulty of explaining life. It seems to me that life was until recently as much a mystery as consciousness appears to be now. I suppose arguments (maybe similar to what we see for consciousness) were put forward to “prove” that life could not be reduced to, say, chemistry and physics – but we now know it can and this is no longer controversial. I don't know enough about the history of philosophy to decide if the parallel is valid or not and so this is as much a question as a comment. But if the comparison has some validity, there are certainly lessons to apply to the case at hand. One of them is to be patient: figuring out the biochemistry of life was hard, took a long time and demanded a number of unanticipated discoveries. Another is that counting out naturalistic explanations is always a risky bet.

    On a personal note, I'll be driving through Oklahoma tomorrow. So, Eric, greetings from the North!

  11. Hi all,

    One reason I enjoy the materialist perspective is that which JP advanced. It has worked very well in the past when confronting deep mysteries. The intuition that consciousness is a different kind of thing and will need to be studied differently is interesting, and instinctively appealing. Against that is the very real possibility of scientific progress.

    An example. Eric raised the case of what it's like to be a bat. It doesn't seem to me impossible that science could answer this. Take for example the change blindness phenomenon. We see here qualia, whatever they are, coming and going as what the viewer reports perceiving differs from what is being shown. Next step then would be to examine (technology allowing) the patterns of connections that distinguish a person who is seeing a brown door from one who knows they are seeing a brown door. This would tentatively establish a neural characteristic for the generation of visual qualia. From there, examine the structure of the bat's brain and see whether it is capable of achieving the same type of information distribution. If it turns out it can't, then although aspects of what qualia are remain unresolved, one of the philosopher's favourite mysteries regarding consciousness has fallen. We would have good cause to believe something crucial about what it is like to be a bat, and that would be that its experiences come without qualia.

    For this reason, I resist the idea that science will never be able to get at what qualia are. It's being approached from many angles, so shaping and sharpening our intuitions. I like this idea that intuitions are the starting point for investigation, rather than a final word on anything.

    Bernard

  12. Bernard,

    With respect to the following:"Your argument, elegantly put, can be re-expressed as 'consciousness just doesn't feel physical to me.' Mine, on the same terms, is 'I can imagine how it might be physical.' I think it becomes troublesome when we start claiming the other side has some serious logical challenges based only on intuition pumps we know full well can be interpreted either way."

    While I readily concede that my position on emergentism is rooted in certain very basic intuitions of mine, intuitions that I become clear about in part on the basis of thought experiments, I see my case against identificationism as a bit different in form--and hence the point of disagreement as lying in something other than intuitive difference.

    While I reference a couple of thought experiments in the discussion of identificationism, their function (at least for me, whether or not this was the purpose to which Jackson and Nagel put them) is not to tap intuitions but rather to direct our attention to a specific phenomenon so as to make possible an ostensive definition of consciousness (a definition achieved by “pointing,” either literally or metaphorically). On the basis of the definition of consciousness thus arrived at, I challenge identificationism (but not materialism generally) on the grounds that the relationship between consciousness thus defined and “brain processes” as conventionally defined by materialists does not meet the logical conditions of identity.

    Now I think the dominant challenge to this argument is that, in effect, my definition is somehow inherently confused—that I THINK I’m referring to one thing but I’m really referring to something else, that what I think I’m referring to isn’t really there, and that while the logic of identity doesn’t hold between the SUPPOSED referent of “consciousness” and brain processes, it does hold between the ACTUAL referent and brain processes.

    In other words, when it comes to identificationism I think something other than a difference in basic intuitions is at work—but I’m still not sure about this and need to think more about it.

    I do want to stress, however, that the referent I am trying to call attention to for the term "consciousness" is the same one that Searle is trying to call attention to--and Searle does not for a minute think it is non-physical. So I don't THINK a sense of non-physicality is preemptively built into the referent. Rather, it emerges (for me) after what I have identified introspectively is compared to physical systems. THAT's when the intuition "This thing just can't be explained by a physical system ALONE" arises for me. And it seems clear that many do not share this intuition. So at this point the disagreement does seem to turn on basic intuitive differences.

    And let me stress again (simply because it is easy to lose sight of this point) that in my argument against identificationism, I am rejecting "identification" in only a very strict technical sense--not in the looser sense often employed by materialists when they say that consciousness is a brain process.

    In the technical sense, denying that A=B does not imply that A isn't a property of B, or that it isn't the way B appears from a certain perspective, or that it isn't wholly caused by B.

    I suspect that many critics of this anti-identificationism argument are resisting the argument because they think the conclusion is intended to be stronger than it really is. As such, many of the objections to it proceed by, in effect, attempting to show how consciousness might be the way that a brain state looks from the perspective of an internal monitoring system, or an emergent property of the whole. But the anti-identificationism argument is wholly consistent with one of those things being the case.

    Maybe the following will be helpful. Searle, in his Chinese room thought experiment, is trying to show that effective symbol manipulation in a system does not by itself generate understanding of the semantic content of those symbols. As such, he aims to show that the functional organization of the brain cannot be equated with consciousness. Critics of Searle can and do suggest, in response, that the Chinese room AS A WHOLE might be said to understand Chinese, even if the man in it does not.

    But notice what is being said here: This response is saying that the understanding of Chinese might be an emergent property of the whole system even if none of the parts (including the man in the room) exhibit such understanding--but no one who says this would be likely to say that the understanding of Chinese just IS the room.

    My argument against identificationism is intended to reject, in effect, the view that the understanding of Chinese just IS the Chinese room. But I don't think the argument against THAT view enables us to reject the view that the understanding of Chinese is to be identified with some emergent property of the room, such as its functional organization. For that, more is required.

    (And it may well be correct that Searle's thought experiment, which is supposed to provide this something more in response to those who would equate consciousness with the brain's functional organization, only succeeds in tugging free—from those who have them—certain intuitions which cannot be consistently maintained alongside the view that functional organization alone generates semantic content.)

  14. Thanks Eric

    I appreciate the time you take on these replies. They get me thinking, a great luxury.

    I shall think more about The Chinese Room in the light of what you say. I suspect I do hold the view you reject, that understanding Chinese just is the room, in the sense that the room, moving through an unimaginably complex series of physical states during the time period in which it responds to a particular conversational gambit, is exactly what we mean by understanding Chinese. That it is a physical phenomenon. My first instinct is that the apparent problem stems from our difficulty in understanding what we mean by consciousness. We sort of think we know what it is until we attempt to get the detail in place, in which case the definition dissolves.

    An example: when I look out my window I am confident the ocean scene before me is a rich image of visual qualia. But, if I am asked to identify details within that picture, I find they are not available. I cannot tell you how many seagulls are perched upon the lamppost, nor the colour of the car parked across the road. Not without re-attending to exactly these details. There is something illusory about qualia in this way; we fool ourselves about them.

    This is where science comes in I think, to tease out what we really mean when we speak of things like consciousness and qualia. I think that once these are in place, arguments like that from Dianelos which you formalised can be properly assessed. Will it really be true, in the case of consciousness, that something cannot be a phenomenon if it is itself a precondition for that phenomenon? Well, I don't think we know what we mean by the precondition in this case, because we don't have a good enough handle on what we mean by consciousness. (Do you think dolphins have consciousness, for example? Is their consciousness a precondition for phenomena? And what evidence do you use to make this judgement?)

    Clearly I'm not sure about any of this, but it does fascinate me. I just keep getting pulled back to the feeling that until we see where the science leads us, we're rather whistling in the wind on this.

    Bernard

    Hey Eric! This isn't about the current post but I was wondering if you would do a post about animal suffering. I was reading your book today, specifically the chapter on evil, and I think it is a great chapter, but I wondered how animal suffering fits with it. What's the point of their suffering? Do they have souls that will be redeemed? I know you don't know the answers but I am interested in your thoughts.

  16. Bernard,

    “I would also agree that my reframing of the Chinese Room puzzle simply shifts the problem. That of course is the whole point. It's what the puzzle does. The only way you can respond 'gee, yes, there is something more to consciousness' is by excluding the possibility in advance that consciousness is a purely physical phenomenon. If you don't, then the option that a machine smart enough to engage in convincing conversation (as it is under most framings of this problem) is indeed also conscious…”

    But it is not supposed to be a puzzle or a trick. It is an analogy or an example that makes us reflect on the matter of semantics being different than syntax. There is a huge difference between a machine that can manipulate symbols and what we perceive is happening when a person or a group of very different people understand what a poem is communicating, for instance. Or, do you not think there is such a difference? Do you really believe that a machine that could be programmed (for instance, if the input is “how are you?” the programmed output is “I’m fine.”) to “engage” in conversation- is the same as what might go through your mind when you reflect on whether you are truly “fine” or not? For instance, think of the times you have answered, “I’m fine,” while inside maybe you were hurting deeply. Does that difference make sense?

    Plus, there is something deeper going on here than just excluding possibilities in advance (which, by the way, is also what the naturalist is doing). The very problem is raised WHEN WE DO consider the naturalist perspective and try to make it work. The Chinese Room problem is only a problem for the strict materialist, and is why the conundrum exists.

    “one reason I enjoy the materialist perspective is that which JP advanced. It has worked very well in the past when confronting deep mysteries.”

    But has it worked out well? It has not bridged the explanatory gap under discussion, and it has not solved anything as to the origins of life, only the evolution of life, which is entirely compatible with theism. It has not solved what we might call the “big” questions of life and meaning.

    Perhaps I'm wrong, but it seems you are again conflating “science” with a philosophical perspective (materialism/naturalism). Science has indeed solved many mysteries, but such had little to do with the relatively recent philosophical perspective called naturalism or materialism.

  17. JP,

    “I suppose arguments (maybe similar to what we see for consciousness) were put forward to “prove” that life could not be reduced to, say, chemistry and physics – but we now know it can and this is no longer controversial.”

    But life is also consciousness, right? In fact, isn't that, in a sense, part of how we define “life”? When a person is in a state where no conscious activity is detected, but they are either being kept alive artificially or because heart and lungs are still working, is that truly being “alive”? I wonder, then, whether “life” can be reduced to chemistry and physics, and I would guess not; thus this very conversation.

    I would argue that “science” and especially philosophical naturalism has not come close to answering the true mysteries of life, such as the very ones under discussion. This is not to say anything negative about science at all, only to recognize its limits.

  18. Bernard,

    “I like this idea that intuitions are the starting point for investigation, rather than a final word on anything.”

    This is to assume that “final” words are not also intuitions, which they are—in my view anyway. The fact that the earth is a certain distance from the sun is only the “final” word in the small area of knowledge called “distance,” but it is hardly the final word when we begin to connect all of our knowledge into greater webs of significance. When we begin to do that, we are doing philosophy, and all such final words are, I would say, more than intuitive: they are at least faith-based or world-view based. I would suggest that “final” words are always philosophical words that holistically sum up meaning and significance regarding the knowledge we possess.

    “This is where science comes in I think, to tease out what we really mean…”

    “I just keep getting pulled back to the feeling that until we see where the science leads us…”

    Again, it sounds to me like you believe we all have these opinions, intuitions, and “feelings” about certain things, but the moment the “hard” science steps in and reveals the “facts” and “evidence” about such things, we will all have the “final” word. In reality, though, this is really the narrative of philosophical naturalism, which is a faith-based way of seeing the world, and not “science.” I guess I’m just wondering if you detect this almost hidden premise in your responses.

    I don’t think science will ever tease out what we really “mean” because science is simply a method, an employment, a tool, that is always being wielded within a philosophy. I think, rather, that it is our philosophies that lead us and help encompass the whole, which of course should include science.

    As an aside, I am still interested to know if you and JP would consider the possibility that your materialism or philosophical naturalism is simply a projection, a construct of meaning, but one that is maybe not true in any objective way—a way that corresponds with physical reality or the way the world really “is.”

  19. Hi Darrell

    As always, apologies if I only get to some of your points in this reply. I'm unsure what length of reply constitutes an impoliteness within the context of this blog. You ask good questions and I know my answers are at times inadequate. I don't pretend to be doing much more than muddling through.

    The Chinese room first. Yes, I am saying exactly that. I, like a machine, can engage in a non-conscious conversation, giving small, inconsequential, unconsidered responses to things I am barely hearing. My contention is that both human and machine can do this: syntax without semantic content, if I understand these terms correctly.

    Humans, and maybe machines too in time, can also grapple consciously with communication, as per your example of choosing to say I am fine when I am not. The difference, perhaps (and so at least worthy of investigation), may simply be one of complexity. To feel one thing but choose to say another requires, it seems to me, referencing a great number of states and possibilities. So maybe this interaction of millions if not billions of neurons, a process too complex to possibly be grasped in the first person, is dealt with by the brain at a metaphorical level. Consciousness becomes this metaphor, and the difference between syntax and semantics becomes only one of physical complexity. I am not saying this is the case, but if it is logically possible then the Chinese Room accommodates the materialist position well.
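
    To make this concrete, here is a minimal sketch (in Python, purely illustrative; the rules and names are all invented) of the kind of rote responder I have in mind. It produces passable replies by surface pattern matching alone, manipulating symbols with no meaning attached to them:

        # A rote responder: canned replies keyed to surface patterns.
        # It manipulates symbols without attaching meaning to them;
        # syntax without semantics, in the Chinese Room sense.
        RULES = {
            "how are you": "I am fine, thanks.",
            "nice weather": "Yes, lovely day.",
        }

        def rote_reply(utterance: str) -> str:
            """Return a canned reply for any recognized surface pattern."""
            text = utterance.lower()
            for pattern, reply in RULES.items():
                if pattern in text:
                    return reply
            return "Is that so?"  # all-purpose filler; no understanding required

        print(rote_reply("How are you today?"))  # prints: I am fine, thanks.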

    I don't think science will lead us to final truths, and if I have suggested this I have made an error. But I think scientific investigation is an excellent way of testing our intuitions about qualia and I've tried to give a couple of examples of this. As an agnostic, I think we can get along perfectly well without the final word on anything, embracing our social responsibility to negotiate a shared understanding of life's meaning.

    And yes, I absolutely would entertain the possibility that my view is simply a construction that fits badly with objective truth. I can see no way of knowing this isn't so. Hence I embrace pragmatism, and acknowledge fully that others will pragmatically arrive at conclusions quite different from my own. I have genuine respect for your beliefs, and do not wish for one moment to suggest I think they are foolish or even wrong. Because they are different from my own, I enjoy teasing out these differences, primarily as a way of modifying my own beliefs, but also as a way of deepening my understanding of others.

    As always, thanks for engaging. There is far too little communication across the trenches, so to speak.

    Bernard

  20. Hi Darrell,

    But life is also consciousness [...]

    I was simply trying to see if there are similarities between the problem of consciousness we are now trying to solve and the problem of life as it was seen in the past. When I say that the latter has been solved, I am not implying that consciousness has. Think of simpler life forms: insects, trees, bacteria. What I mean by “solved” is that it is pretty much settled that life in this sense can be explained by biochemistry (though maybe not its origin). I don't think there is any significant gap here.

    Whether the two problems were seen similarly, I don't know. Maybe you know enough of the history of philosophy to tell. But I would expect that the problem of life was seen as just as intractable as consciousness may seem now. Maybe it was supposed that Life (capital L) was some kind of stuff (élan vital?) that needed to be added to matter to make it go. I think we can say now that this is not the case. But I certainly don't mean to say that all things related to life (e.g., consciousness) are solved.

    As for the other questions that are left unanswered, I feel just like Bernard. I appreciate this conversation a lot, and it's a shame I cannot cover more ground (especially now that I have little time; the ground I have been covering these last few days is literally geographical).

    Let me try a few short answers.

    My take on the Chinese room is that the system is intelligent, knows Chinese, and answers at the semantic level. I wouldn't say it is conscious, however (it's essentially a digital computer, which the brain is not). Bernard seems to differ on this.

    To your last question: of course, my materialistic views may be wrong. I don't know that I would express this the way you do, but I certainly don't exclude it. However, I must add that my reading of the evidence pushes me strongly toward that position.

    As a bonus answer: I am open to the possibility that we may not be able to know all of reality. In fact, I don't have a strong leaning either way. However, if there is such an unreachable reality, I would be very surprised to find that it has any similarity at all to anything we know (as the theistic account assumes). I would expect it to be extremely strange (much more so than, say, quantum mechanics) and counterintuitive.

    Must run now.

  21. Bernard,

    “The Chinese room first. Yes, I am saying exactly that. I, like a machine, can engage in a non-conscious conversation, giving small, inconsequential, unconsidered responses to things I am barely hearing. My contention is that both human and machine can do this: syntax without semantic content, if I understand these terms correctly.”

    But no one was saying a machine could not respond as programmed, syntax without semantic content, or that humans often respond almost blindly, by rote as it were, without any semantic content (that we are aware of, anyway). But that was not the issue or point. The point was that humans, as opposed to machines, CAN understand semantics, nuance, irony, sarcasm, humor, and all the many complexities of the symbols (language) we manipulate. All a computer or machine can ever do is manipulate symbols, and how slowly or how quickly those symbols are manipulated could never produce a sense (for a machine) of humor, irony, sarcasm, or any nuance whatsoever. In my view, this gap is infinite. Could one program a sense of irony into a computer? Even if, for example, we could program a computer to know that replying “Oh, how nice” to the observation that it was raining on one’s wedding day is ironic, do we really believe the computer would have a sense of the “ironic”?
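
    To put the point concretely, here is a minimal sketch (in Python, purely illustrative; the rule table is invented) of the best a program could do: apply the label “ironic” whenever a stored pattern matches, with no felt sense of irony behind the label:

        # An irony "detector": it maps situation/utterance pairs to the
        # label "ironic" by lookup. The label is just another symbol to
        # the program; nothing here has a sense of the ironic.
        IRONY_RULES = [
            # (situation keyword, utterance keyword) pairs deemed ironic
            ("raining", "how nice"),
        ]

        def label_irony(situation: str, utterance: str) -> bool:
            """Return True when the pair matches a stored 'ironic' pattern."""
            s, u = situation.lower(), utterance.lower()
            return any(sk in s and uk in u for sk, uk in IRONY_RULES)

        # The "right" answer comes out, with no experience of irony behind it.
        print(label_irony("It was raining on their wedding day", "Oh, how nice"))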

    Further, how would “complexity” change this dynamic? This dynamic has nothing to do with speed or quantity of information. It is a sensibility one can learn, and only in part, by experiencing it when it happens to you. We’ve all had that sudden, unexpected sense of irony, humor, or whatever. Do you see this gap (the gap we’ve been discussing) between the physical phenomenon of neurons firing (or micro-chips firing) and what is called consciousness?

  22. Hi Darrell

    Yes, you have it exactly right. I believe it may turn out there is no such gap. So, the idea that no machine, no matter how complex, could ever experience humour, is an assertion that may in time fall over. I don't say it will, but I hold the possibility is both open and examinable. This would require discovering the way in which the machinery of the brain produces such things as the feeling we call humour.

    Were I a brain scientist, I'd be particularly interested in how memories are stored, the link between language and memory, the difference between the way we report on qualia and the information we can pull from our qualia, and the differences in both learning behaviour and brain architecture between humans and closely related animals. Scientists are indeed digging around in all these areas.

    Bernard

  23. Bernard,

    "...I believe it may turn out there is no such gap. So, the idea that no machine, no matter how complex, could ever experience humour, is an assertion that may in time fall over. I don't say it will, but I hold the possibility is both open and examinable."

    Well, I commend your faith. Given that I think this gap infinite, and that it is a complete category mistake even to entertain bridging it, I must admit I am impressed with the depth of your faith; it makes mine (in God) seem small, I think.

    I have certainly enjoyed the conversation and will continue to try to understand where such a faith might spring from. I appreciate your thoughtful responses and the help in sorting these things out a bit. I look forward to Eric’s other posts on this subject.
