A key issue that comes up repeatedly on this blog also strikes me as crucial for understanding the philosophical method—and so seems a fitting topic both for students of mine who are about to dig into my philosophy of religion course and for regular readers of this blog. The issue is this: When is it reasonable to trust our intuitions? Put another way, when is it appropriate to make use of an intuitive judgment as a premise in an argument, thereby treating it as a reason to believe a conclusion?
Of course, all of us agree that our intuitions can be mistaken. But does it follow that we are never warranted in making use of them? Is it even possible to refuse to make use of our intuitions—or is it, rather, the case that all of us inevitably appeal to intuitive judgments (but, perhaps, are mostly unaware that we are doing so, because the judgments seem so obvious to us that we don’t even notice we’re assuming them)?
I suspect the latter is true, especially when we are wrestling with philosophical questions—questions which, typically, cannot be answered based on sensory observation alone. In my own experience, everyone who engages in philosophical discussions and debates has intuitions that they are making use of—but not everyone recognizes their own intuitive presuppositions. In a sense, our intuitive starting points operate as the lenses through which we look at our world. Since we’re looking through them, they often become invisible to us. Part of what philosophers strive to do (with greater or lesser degrees of success) is to explicate what these starting points are. And in many cases the most valuable outcome of philosophical debate is that participants come away from it more fully aware of their own intuitive starting points than they were before—as well as more aware of how looking through those “lenses” colors their experience.
Of course, many of the premises that philosophers make use of in their arguments are drawn from observation. Sometimes they are observations of the most general kind (for example, the first of Aquinas’ “Five Ways”—his initial arguments for the existence of something with God-like properties—begins with the premise that there are things that undergo change). And philosophers will also make use of principles that are matters of logic (for example, the principle that something cannot be both the case and not the case at the same time in the same way, or the principle that if A and B are the identical thing, then everything that is true of A is also true of B).
But often enough, a premise in a philosophical argument will be neither of these things. Instead, it will be something that, while neither a matter of logic nor based on observation, just seems right (at least to the philosopher advancing the argument). In some cases the premise is thought to be self-evident. For example, Leibniz appeals at several points in his philosophical arguments to what he calls “The Principle of Sufficient Reason”—roughly, the principle that for everything that is the case, there is a reason why it, rather than something else, is the case. He treats this as a “first principle”—a self-evident starting point for reasoning about things.
And Leibniz isn’t alone. Richard Dawkins, in The God Delusion, offers an argument against the existence of God that depends on a principle Dawkins is only partly explicit about. The principle runs roughly as follows: “In order for an instance of organized complexity to be adequately explained by an intelligent designer, the intelligent designer—whether material or immaterial—must be at least as complex as that which is being explained.”
This principle isn’t a matter of logical necessity, and it certainly isn’t a matter of empirical observation (how many immaterial intelligent designers have we observed so as to ascertain that they consistently display the property of being at least as complex as what they have designed?). So why does Dawkins accept this principle? Because it just seems right to him. In other words, he has a strong intuition that it is true.
Often, philosophers rely on thought experiments whose most important function is to serve as “intuition pumps”—that is, their purpose is to help us get clear on what our intuitions are. In other words, the purpose of these thought experiments is to help us pinpoint what “just seems right” to us and to make these assumptions explicit—in part so that we can make deliberate use of them in our subsequent reasoning rather than relying on them implicitly without noticing that that’s what we’re doing; in part because only once we are conscious of our intuitive starting points can we make them available for critical scrutiny.
And this leads to my next point. If reliance on intuitions is inevitable—but our intuitions are fallible—we are faced with an important question: When should we trust our intuitions and make use of them, and when shouldn’t we? Put another way, when is an intuition a good reason for me to believe something, and when isn’t it?
At this point I think it’s important to distinguish between two kinds of intuitions. First, your mind may leap ahead of your plodding intellect to a conclusion that, in a sense, you believe “intuitively.” But in such cases, the intuition presents itself as a kind of research project: You have a sense (an “intuition”) that the body of evidence (or the rules of logic, or the basic doctrines of a belief system) supports this conclusion—but you still need to do the work of showing that it does. And once you take the time and effort to pursue that work, your intuition might be vindicated or undermined. In either case, you no longer believe it intuitively.
That’s not the kind of intuition I want to focus on here. Rather, I want to focus on the kind of intuition that serves as a foundation for thinking and critical reflection. I have in mind beliefs that just seem right to us in themselves, in which we have strong confidence, but which we don’t believe on the basis of other things. The point is that we all have such intuitive starting points. But having them is no guarantee of their truth…and yet it would be impossible, I contend, to operate in the world without trusting these intuitions at least some of the time. So when do we, and when don’t we, trust them?
Now I don’t think I can, in a blog post, provide a fully satisfying answer to this question and then show that it’s the right one. But I do want to sketch out an answer that I find compelling (based on my intuitions?)—in part so that others can better understand my perspective, and in part to stimulate discussion.
So, when is an intuition of mine a “good reason”—that is, when is it appropriate for me to make use of that intuition in my reasoning, reaching conclusions based on it, making decisions in the light of those conclusions, etc.?
Let me begin the sketch of my answer by making two suggestions. First, I want to suggest that the worth of a reason can be specific to a particular reasoner in a particular context, such that what is a good reason for me to reach a certain conclusion, given my circumstances, may not be a good reason for you in your circumstances. This, I think, is going to be a characteristic feature of intuitions: That an intuition of mine is a good reason for me here and now does not imply that it must be a good reason for you—and so, if you don’t share this intuition, I am not warranted in regarding you as unreasonable.
In this respect, rock-bottom intuitions are different from, say, logical principles. If you deny the principle of noncontradiction, it may be entirely appropriate to call you irrational. If you consistently refuse to accept the clear implications of the most meticulous empirical observations consistently corroborated by the most highly trained researchers, I might be justified in calling you irrational. But if you don’t accept Dawkins’ intuitive principle about complexity or Leibniz’s Principle of Sufficient Reason, I’m not convinced that a judgment of irrationality is going to be appropriate. Put another way, there are some things about which reasonable people can disagree—and intuitions are among them.
My second suggestion is this: While an intuition can be a good reason for me even though it is something I could be wrong about, it isn’t always one. The fallibility of my basic intuitions imposes important constraints on when I can legitimately make use of them in my reasoning and when I cannot.
Because intuitions are fallible, I don’t think one should hold to them fanatically. One should, in other words, be open to evidence and arguments that might refute them (or that might shake one’s intuitive judgment enough that they no longer seem so intuitively right). But in the absence of such evidence or arguments, if I have a strong intuition that something is the case, then I may treat it as a premise in my thinking when the implications of doing so, should my intuition prove mistaken, are benign (or are no more pernicious than the implications of setting the intuition aside). However, when the implications of trusting my intuition are not benign (or are less benign than the implications of setting the intuition aside), I am not warranted in making use of the intuition as if it were a reliable premise.
To put this more succinctly, intuitions can face evidential “defeaters” (evidence that counts against the truth of the intuition) and pragmatic ones (practical circumstances which make it too risky to trust the intuition).
Let me focus a bit more on the latter. Whether acting on a mistaken intuition has benign or malignant implications may vary according to context—in one situation it may be entirely harmless to trust an intuition even should it prove to be wrong, while in another context the costs of trusting the very same intuition (should the intuition prove mistaken) are grave. But we also need to consider opportunity costs: what benefits are lost if one refrains from trusting an intuition and the intuition proves to be sound? In some cases, there are costs or benefits that emerge regardless of the intuition’s truth—that is, there may be benefits to trusting the intuition even if the intuition proves false (or costs to trusting it even if it should happen to be true).
But hovering over all such pragmatic assessment of intuitions is the difficult fact that it relies on an evaluative framework of some kind. When you say that the costs of trusting an intuition should it prove to be mistaken are high, you are making a value judgment about the consequences of believing an intuition in error. How do we determine what is the right evaluative framework to use? Moral intuitions? You see the problem, I hope.
But instead of pursuing this problem here, let me explore more deeply how pragmatic assessment of intuitions might work by considering an example—a case in which mistakenly trusting one’s intuitions is (at least within my evaluative framework) not benign. Having recently finished reading Bernard Beckett’s short novel, Genesis, I immediately think of a particular scenario from that book—and it seems a particularly fitting one because it potentially poses pragmatic challenges to some of my own intuitions, ones that we’ve been talking about in connection with my recent series of posts on materialist conceptions of mind.
(It’s also fitting because it might serve as free advertising for Bernard’s novel, which in addition to being thought-provoking on a philosophical level also earns the high praise of having kept me up significantly past my bedtime).
At the heart of the story is the relationship between two characters—one human, one android. The human, Adam Forde, has violated the laws of the Republic in which he lives in a way that makes him a focal figure in the Republic’s internal turmoil. Neither executing him nor letting him go is a safe option from the standpoint of the authorities, and so they pursue a compromise: he is locked away with an experimental android prototype, Art, for the purpose of exposing it to stimulation that will facilitate its cognitive development.
Let’s suppose Adam knows a fair bit about Art’s internal circuitry (I don’t think this is true of Adam in the novel, but let’s assume it). Suppose, furthermore, that based on this knowledge he has a strong intuitive sense that nothing in that circuitry could account for the presence of consciousness.
He might, of course, have the same intuition about the human brain: nothing about the physical system of the brain can, by itself, account for the existence of this thing called consciousness. But, like me, he'd also know that he is conscious, and that his conscious states are demonstrably correlated with brain states. Perhaps he reconciles these facts with his intuition about the inability of a physical system alone to explain consciousness by positing that there is something about the brain which attunes it to some non-physical reality. Although he has no idea what this non-physical reality is like, his strong intuitive sense that a physical system alone cannot account for the consciousness he's so intimately acquainted with leads him to believe there must be some mysterious additional component at work—but that there's also something about the brain’s unique properties that makes the connection with this non-material element possible. (We would have to suppose, furthermore, that he lacks a different intuition that some people do seem to have—namely, that if there is some essentially “spiritual” or non-physical reality, it couldn’t interact with a physical one, at least not in the way necessary to generate consciousness as we know it).
Of course, if all of this is true of Adam, then he might also think the same things about Art’s circuitry—that is, he might believe that any appropriately structured physical system could do the same work that the brain does, such that there is no reason in principle why an artificial intelligence could not be created. But we’ve supposed he knows a fair bit about Art’s circuitry—and not only is it unlike the biological circuitry of the brain, but its design is of a sort that (based, perhaps, on a thought experiment similar to Searle’s “Chinese Room”) Adam has a strong intuitive sense cannot do the sort of consciousness-generating work a brain can do. And so, Adam’s intuitions lead to the conclusion that, at best, all that Art’s circuitry can do is mimic the behavior of a conscious being.
Now it may be that after prolonged interaction with Art, Adam accumulates a body of data that really challenges these underlying intuitions. Perhaps his interaction with Art has a flow to it that just doesn’t fit with the hypothesis that Art is merely mimicking consciousness. The nuances of their exchanges just seem too hard to “fake.” And so, eventually, he reaches a point at which his intuitions have been defeated by a body of experiential evidence. If so, it would no longer be reasonable for him to invoke his original intuitions as premises—he’ll be forced to conclude that one or another of them must be set aside, since taken together they imply a conclusion that he has strong evidential grounds for disbelieving. His intuitions have suffered evidential defeat.
(This, by the way, seems to be what actually happens to Adam in the story.)
But in the first days of his imprisonment with Art, his intuitions will not yet have been defeated in this way. At that point, it may be reasonable for him to trust them—but that depends on the pragmatic assessment of the associated costs. And those costs are a matter of context. My circumstances are quite unlike those that Adam faces. Among other things, I'm not sharing living quarters with an artificial intelligence that behaves as if it is conscious. And that difference might matter a great deal for whether Adam is warranted in trusting his intuitions.
In fact, in this case I think it would be unreasonable for him to accept his intuitions, even though they haven’t been evidentially defeated—because operating as if they are true (and hence as if Art is not a conscious being) is more costly, should the intuitions prove mistaken, than operating as if they are false.
Here’s my thinking. If Adam operates as if Art is not conscious and he's mistaken, there is a real cost of considerable moral significance—as I understand it, the cost would be a failure to treat a conscious being with the dignity that such a being deserves. To treat a conscious being as if it were an unconscious one is to objectify it. In a sense, this is what distressed me about Adam’s early treatment of Art in the novel: in their first days together, Adam is happy to act on the assumption that Art is no more conscious than a toaster (although there is some evidence that, even at this early stage, Adam isn’t entirely confident in this assumption—he's drawn into conversation with the android, but then, almost as if to rebuke himself for acting as if this were a conscious being, he strikes out against it in an act of physical violence).
One wonders whether this early treatment by Adam may have played an important role in Art’s subsequent development—if you’re conscious, the experience of being objectified, treated like a thing, creates both wounds and needs that can have long-term negative repercussions, ones that often propagate outwards onto others. (If you want to know more about Art’s development, read the book.)
On the other hand, were Adam to operate as if Art is a conscious being and turn out to be mistaken, the costs don’t seem comparable. And so, in this case, there is a pragmatic reason for Adam to set aside his intuitions, even if they have not been evidentially defeated—because they have been pragmatically defeated instead.
It is, by the way, this sort of thinking that I believe underlies a comment that philosopher Peter Singer once made to me. (Peter Singer is a renowned moral philosopher who is most famous for having written Animal Liberation, a book that helped to launch the animal rights movement). Back in the '90s Singer visited the university where I was teaching, and the philosophy department had a dinner for him at the home of one of our department members—a vegetarian dinner, of course. During the meal I asked him what he thought about eating snails.
Now, Singer’s argument for vegetarianism hinges on his case for saying that if a non-human animal has interests, its interests should weigh as heavily in our moral deliberations as the comparable interests of a human. Since pigs and cows and chickens all have interests—as evinced by their capacity to suffer—their interests need to be given the same moral weight as ours. My question to Singer was therefore really a question about whether he thought snails (and other animals with neurological systems far simpler than those of pigs and chickens, etc.) met this criterion of being interest-bearers, a condition that Singer (unlike certain environmental philosophers) doesn’t think can be met in the absence of consciousness—leading him to conclude that plants do not have interests.
Singer’s answer? “I don’t know if snails have interests, but I give them the benefit of the doubt.”
"The children of God should not have any other country here below but the universe itself, with the totality of all the reasoning creatures it ever has contained, contains, or ever will contain. That is the native city to which we owe our love." --Simone Weil
Wednesday, August 25, 2010