
Wednesday, September 14, 2011

Cherry Picking Problems

Greta Christina thinks religious progressives have a cherry-picking problem. Since I launched this blog with a response to her infamous post on atheist anger, and since her cherry-picking essay is getting some attention, it seems fitting for me to say something about this so-called problem. And I seem to be uniquely situated to do so. After all, not only am I a self-professed religious progressive, but I also have experience picking cherries.

Back in college I had a summer job at a fruit orchard in Orondo, Washington. At the height of the cherry harvest, I was pulled away from trimming apple trees and spreading paraquat so that I could help pick cherries. And I must admit that I did have something of a cherry-picking problem.

The thing is, when you pick cherries at an orchard, you need to be fast. One of my co-workers, Rippin' Rod Ripley, had his name for a reason: He ripped those cherries from the tree so fast that he was actually able to earn enough for a few weeks of hotels, booze, and whores when the season was over (Rippin' Rod was a Vietnam vet who lived in a trailer on the orchard grounds during the fruit-picking season, but was homeless once the season was over and the money ran out).

You need to be fast--if not as fast as Rippin' Rod, then at least fast enough so that you aren't losing the orchard money. But you also need to be picky. Some cherries are split. You don't want too many of them in your bucket, or that whole batch becomes a lower grade. More serious are the cherries that have mold on them. If those go in the bucket, they can ruin the cherries around them.

And you don't just need to be fast and picky. You also need to be skillful. You see, when you pluck a cherry from the tree, it's very easy to leave the cherry stem behind. It's hooked onto the branch at least as strongly as it is to the fruit, so that if you just rip a bunch of cherries down off a branch into your bucket, chances are it'll be a bucket full of stemless cherries.

But consumers want their cherries with stems. I'm not sure why. Maybe it's so that they won't lose out on opportunities to be entertained (titillated?) by those rare individuals (such as my wife) who can pop the stem in their mouth, work it around for a minute with their tongue, and spit it out again with a perfect little knot tied in it. Yes, my wife can do that. It's part of why I married her.

In any event, back to cherry-picking. While I could be fast, it was at the cost of selectivity and skill. And while I could be selective and skillful, it made me really slow. I never did get the knack that Rippin' Rod displayed out there among the trees, this ability to rip cherries from the trees at a frenetic pace and end up with buckets of pretty, unsplit, mold-free, stem-sporting cherries.

So, I confess to having a cherry-picking problem. But here's the thing: This problem has absolutely nothing to do with my religious progressivism. In fact, back when I was picking cherries at that orchard, I wasn't a religious progressive at all. That was the time in my life when I got sucked in by a more fundamentalist form of Christianity. I was recently "born again." I occasionally spoke in tongues and convinced myself I wasn't just making $#!t up. While at the orchard, I went with a friend most Sundays to a small Seventh Day Adventist church with a pastor whose mantra was "Just believe!"

Oh, wait.

Just glanced back at Greta Christina's post. Looks as if this "cherry picking" language is a metaphor. Whoowee, am I embarrassed.

Well, taken in those terms her argument's a bit more interesting. But all kidding aside, I think my earlier conclusion still stands: I do have a cherry-picking problem. We all do. So does she. It's called "the problem of the criterion." The problem arises whenever we consider our judgments about our standards of judgment. By what standards do we decide that we have the right standards? How can we be critical of our own starting points without either implicitly abandoning those starting points in favor of different ones which we then use to critique the earlier starting points (but then we're still not critiquing the starting points we have now) or relying on our starting points and so begging the question?

But this problem isn't a problem with my religious progressivism. Instead, my progressivism is an attempt to pursue a solution to the problem--a Hegelian solution, if you will.

I think her framing of the problem as a special problem for religious progressives is rooted in the same misconceptions about the nature of progressive religion that Sam Harris falls prey to when he calls religious moderates "failed fundamentalists" who betray both faith and reason equally. I talk about Harris's mischaracterization in Chapter 1 of Is God a Delusion?, and much of what I say there is applicable to the cherry-picking challenge.

But here I want to pursue a slightly different line of response. Greta Christina, like Harris, is assuming a foundationalist epistemology, and in effect arguing that if you "cherry pick" the Bible you are rejecting the religious fundamentalist's foundational source of evidence--namely, Scripture--in favor of the standards you're using to distinguish the "bad" biblical cherries from the keepers. And these standards, she argues, are either the same ones that serve as the epistemological foundations for naturalism, or they're standards that are clearly unreliable ("what's in my heart").

And so Greta Christina concludes, in effect, that if you're prepared to "pick cherries" you have in effect abandoned the foundations of faith (blind allegiance to The Holy Book) and so need different ones--and the only tenable ones are the foundations that scientists rely on. And as soon as you limit yourself to those foundations you pretty much have to throw out God and the Bible altogether. And so you should be an atheist rather than a religious progressive. (I just had to go back and correct an interesting typo at the start of this paragraph--I'd mistakenly written "Great Christian").

But this whole argument is premised not only on an implicit embrace of a foundationalist epistemology (as opposed, say, to a more Hegelian approach). It is also premised on an understanding of the Bible drawn from Biblical fundamentalism (in which Scripture serves the same foundational evidential role that, for example, sense experience serves for the empiricist), and on a sweeping dismissal of alternative foundations (moral intuition, mystical experience, "first principles of the intellect," etc.). That dismissal implicitly relies on an indiscriminate lumping together of all of these things into the same "what's-in-my-heart" subjective category--a move that, while defensible if you assume the naturalist's view of what counts as epistemically foundational, is not defensible if you want to avoid begging the question about what can and cannot qualify as foundational. We need a way to decide among foundations that doesn't beg the question, which means we need to replace epistemic foundationalism with something more nuanced--like the Hegelian dialectical method I favor.

In fact, I think religious progressives in general have made just this kind of shift, even if they've never heard of Hegel. They have lost faith in foundationalism--or, more broadly, they've lost their confidence in the human capacity to "get it right" with respect to evidential foundations. Often, this loss of faith starts with the foundations posited by fundamentalist religious traditions--but their skepticism extends further than that. It's not that they reject the conclusions that scientists reach based on their foundations; rather, the lived experience of progressives suggests to them that what science can offer based on those foundations is far too limiting, that there is something deep and profound beneath the empirical surface that science explores. To rule this out in advance, to rely on an epistemology that makes any such "something more" eternally inaccessible should it prove to be there, seems as bad as the Biblical fundamentalist's insistence that the Bible offers unassailable foundations from which to build a system of beliefs and a way of life.

And as progressives look at their religious traditions, inherited stories, holy books, and oral teachings, they begin to see that viewing them according to a foundationalist lens is a very modern distortion of what is going on, a misconstrual of what the tradition had been up to until Cartesian Foundationalism got a hold of it. What they see in, say, the Bible, is not some singular "source of evidence" a la sense perception, but an evolving worldview, a holistic way-of-seeing the totality of human experience which, while it presupposes standards for evaluating beliefs, is constantly refining those very standards and methods in the light of lived engagement with the world.

The picture of God in the Bible is not univocal, but evolving. The ethical notions are not univocal, but evolving. There is a direction, a trajectory, that suggests viewing the Bible as a record of the growth and development of a way of seeing a reality that's more mystery than clarity, but which impresses itself upon us and transforms us no matter what standards of evidence we happen to be working with. No matter how inadequate our interpretive scheme, the reality that lies behind experience has a way of breaking through, producing cracks and fissures, exposing inadequacies.

And the religious progressive sees promise in the process. The progressive sees a trajectory that this evolution is taking, and so steps with a hopeful spirit into the evolving tradition to be part of its further evolution in the light of lived human experience. And in taking on that tradition and modifying it in the light of experience, the proper metaphor is not "picking cherries" at all. A better metaphor--derived from my recent history--would be road-testing a new bicycle.

You take it out on the road and, in the light of what you experience, you adjust the seat a little bit upward. You move the handlebars a bit. You decide that the saddle isn't right for your butt, and so you buy a different one. The tires are perhaps the wrong kind for the gravelly roads on which you have to ride--making you too susceptible to a flat. So you get "armadillo" tires. When you start out the process, you don't even really know what you should be looking for. Does the disturbing numbness in your unmentionables mean you've got the wrong seat, or just that your body needs to adjust to riding? What about the ache in your lower back? The very standards by which you decide what changes to make are part of what you discover over the course of road-testing the bike.

If someone watching this process were to say, "Hey, you're cherry-picking!", you'd probably scratch your head in befuddlement. If they went on to say that you were just being a hypocrite for not trading in the bike for a car--well, you get the idea. The "picking cherries" metaphor presupposes that we have in place a set of foundational standards of judgment (freedom from splits and mold, etc.) that are clearly distinct from the cherries being selected on the basis of those standards. But when we're talking about a holistic worldview, there is no such ready dichotomy between the "correct" standards of judgment and that which is being judged. The road-testing metaphor is far more helpful here.

And you need a vehicle in order to do a road test. It's not something you can dispense with. Even Greta Christina has her metaphorical "vehicle"--her metaphysical naturalism with the evidentiary standards it implies (and the ones it rejects). While there might be a time and a place for telling someone that their bike is a piece of crap and they should consider a new one rather than keep tuning up and replacing parts on the one they have, the mere fact that someone is pursuing such refinements isn't evidence that the bike should be thrown out. The person who tunes up their bike is not thereby a hypocrite. And just because a bike needs more frequent maintenance than a minivan doesn't mean that people should give up riding bicycles. 

I could develop these ideas more precisely in terms of the Hegelian dialectic and its implications. But I already intend to offer a more rigorous treatment of this Hegelian approach as it relates to religion and science, and since I'm getting ready to leave for a conference in the morning, I'll leave matters at that for now. If you want more detail, you can always check out the online video of my University of Tulsa lecture, in which I respond to this species of challenge if not to Greta Christina's specific formulation of it. Or you can look at this post, in which I describe the progressive approach to the Bible.

Friday, September 9, 2011

The Hegelian Dialectic: Escaping the Problem of the Criterion

My last post was a reposting of an archived essay on evidentialism--one which I replayed partly as a springboard for considering some related issues I've been thinking about recently, issues having to do with traditions of belief and the epistemic foundations on which they rely. Part of what I want to do in considering these issues is provide a deeper portrait of the Hegelian dialectical method, since I find myself strongly influenced by it. In this post I want to say a bit about Hegel's dialectic as a solution to the problem of the criterion. In subsequent posts, I may have more to say about Hegel's method as it bears on science, religion, and their intersections.

In the previous evidentialism post, I introduced a version of the "problem of the criterion." In a nutshell, the problem is this: If one is to form one's beliefs in accord with the weight of the evidence (which even the greatest fideist thinks we should do in many contexts, even if not in all of them), one needs to determine what counts as evidence and what doesn't. But how do you decide what counts as evidence?

Consider any belief of the form, "This sort of thing is good evidence on which to base beliefs." For any such belief we can ask, "On what basis do we believe that?" In attempting to answer this question, we find ourselves quickly faced with the prospects of circular reasoning, infinite regress, or dogmatism in one's beliefs about what counts as evidence.

In posing this problem, it is not my intention to imply or suggest that the problem is insoluble and that all of us are therefore in the same epistemic boat...whether our epistemic starting point is shared sense experience or every pronouncement proceeding from the mouth of Rush Limbaugh. I'm not one of those people who defends the reasonableness of Christianity by insisting that since anything can count as one's evidential foundation, Christians are fully in their epistemic rights to treat every sentence of the Bible as so foundational that the "biblical evidence" trumps all other sources of evidence (including reason and sense experience).

But I do think the problem of the criterion reveals something important: all of us, no matter who we are or what we believe, are unavoidably bound to have starting points--starting points which are not themselves believed on the basis of evidence, but which, in many cases, serve as the basis for deciding what counts as evidence and what does not. These starting points might take the form of a narrow set of "foundational beliefs," or we might start with something much bigger--a broad network or web of mutually reinforcing beliefs where none are clearly "foundational," since each is defended in the light of others (a kind of circle). In the latter case, each member of the network may be supported on the basis of the "evidence" offered by other members of the network, but the network itself is not. And so the network is our starting point.

In either case, there is a givenness built into our belief system--and there are therefore things that we believe in a way that cannot meet the strict demand imposed by the evidentialist. But this doesn't mean that we are doomed to dogmatism, with the only remaining question being which starting points we will hold to dogmatically.

The question is how we escape mere dogmatism in relation to the criteria we depend on for assessing our beliefs. And here is where I am powerfully influenced by Hegel, who thought the enlightenment had failed to attend with sufficient care to the problem sketched out above, and hence had fallen back into the very dogmatism it was criticizing. The enlightenment, Hegel thought, was simply not being skeptical enough. What is remarkable is that, if one follows through on the Hegelian alternative to dogmatism--an alternative that carries the skepticism down to the level of one's own starting points--one isn't reduced to radical skepticism. One is, rather, led to a critical and progressive appropriation of tradition.

Hegel's solution to the problem of dogmatism, given the inescapability of starting points, was to adopt one's starting points in an experimental and provisional way. One has to start somewhere--so do so. Start somewhere. Better yet, start where you are--with ordinary consciousness embedded in the social ideas of your age and culture. But be conscious of what your starting points are, and live them out critically. In other words, attempt to engage the world--cognitively and practically--from the starting points you happen to have, and see how well they work out. The result will be an evolutionary process to which Hegel gives the name "dialectic," and the goal will be a progressive disclosing or revelation of the "Absolute" (ultimate reality or Truth with a capital "T").

This Hegelian experimentation with one's starting points should not be confused with the sense of "experiment" that we have in mind when we think about the work of scientists. Scientific experiments take place within the context of a set of methodological assumptions, ideas about what counts as evidence and what does not, etc. The experiments test the adequacy of a hypothesis, not the adequacy of the methodology used to test the hypothesis. What Hegel is recommending is that, in the course of doing the former, we also do the latter. Or perhaps it is better to say that we are inevitably doing the latter as well, although we may do it poorly if we are insufficiently attentive to the lessons that speak to the fruits of our methodology-defining premises.

What this shows is that Hegel's dialectical "method" for approaching an understanding of the "Absolute" may not be properly described as a method at all. Perhaps we might call it a meta-method, the path to ultimately getting our method right. But Hegel thinks that perfecting our method is inseparable from the quest for the Absolute, so inseparable that, in effect, to perfect one's method is to achieve one's goal: One knows how to properly get in touch with the Absolute only once one is properly in touch with the Absolute.

It is also important to keep in mind that Hegel does not have in mind simply testing the methodology of science against its knowledge-expanding fruits. He has in mind all our ways of understanding ourselves in relation to the world, all the ways we attempt to "know" and "understand" who we are, what is out there, how we should live, etc.  For example, our foundational conception of the subject-object relationship leads us to distinguish between the "facts out there" and the "subjective projections" (values, desires) of the self who is engaging with those facts. Implicit assumptions about that relationship help to define the scientific method and its understanding of what counts as evidence and what doesn't, but similar assumptions permeate every aspect of our engagement with our world: ethical, aesthetic, religious, introspective. Even our notions about the differences between art and science--their goals and achievements--are a function of implicit presuppositions of this kind.

Or take the method of doing epistemology--studying knowledge--that prevailed (thanks largely to Kant) just before Hegel muscled his way onto the philosophical scene. Here's how Charles Taylor describes it in his monumental book, Hegel:
Hegel starts off the introduction to PhG [Taylor's shorthand for The Phenomenology of Spirit] attacking those who begin with a critique of our faculty of knowledge as a tool we use to get at reality or a medium through which reality appears to us. It is not just that this makes the problem of knowledge insoluble, since ex hypothesi we cannot get at reality as it is in itself, untouched by our tool, or unreflected in our medium. It is also that this approach assumes that the absolute, what is to be known, is something which is quite distinct from our knowledge of it, that 'the absolute stands on one side and that knowledge, though it is on the other side, for itself and separated from the absolute, is nevertheless something real' (PhG 65). This, Hegel points out, is to prejudge the issue; something he is less willing to do in this case since he wants to come to a conclusion diametrically opposed to this common assumption.
Put in concrete terms, consider the tendency to put "subjective projections" of our wishes and longings on one side, and dispassionate empirical investigation on the other, and to regard the latter as giving us knowledge of objective reality while regarding the former as mostly an impediment to such knowledge--unless what we want to know about is human subjectivity itself (in which case we need to engage in a dispassionate empirical investigation of our wishes and longings, making them into an object of study while trying to free ourselves from the very things we are studying so that we can know what they are objectively like). For Hegel, this way of dichotomizing things makes deep assumptions about the ultimate nature of reality--that is, about the Absolute--assumptions we are not in an epistemic position to make.

This is just an example. There are myriad other foundational assumptions that impact a lived engagement with reality, a reality that doesn't just have an empirical dimension but other dimensions as well. In every part of life, from our love affairs to our efforts to find meaning, our activities presuppose basic starting points--and in all of life, the business of living out those starting points becomes the testing ground for their adequacy.

Hegel understood the dialectical process to be one of continual and ongoing refinement in the light of what he called "contradictions"--that is, deep inadequacies in one's holistic understanding of the world-in-relation-to-self that emerge in the process of living out that holistic understanding. Metaphorically speaking, by approaching the world with a square hole, one soon discovers that not every peg is square, because the effort to push the peg into the hole causes the casing around the hole to crack.

What is called for in the light of this "cracking"? Well, often enough what we do, in effect, is throw out the square hole and pick up a round one. We abandon our original "thesis" in favor of an "antithesis". But then what happens? Our round hole cracks as we start shoving square pegs into it. Insofar as some pegs do fit into the square hole, simply discarding the square hole in favor of a round one proves to be a mistake. Likewise, it would be a mistake to ignore the apparently round pegs, or to insist (with a dogmatism that undercuts Hegel's experimental approach) that these apparently round pegs must eventually be explicable in terms of an underlying square-peggishness that, with sufficient diligence, will eventually be exposed to allow easy passage through the square hole.

In other words, it would be anathema to the Hegelian approach to simply dismiss, given one's own allegiance to squareness, those who explore the value of introducing rounded contours into their openings. Of course, it would also be inappropriate to dismiss those who aren't willing to give up the squareness approach quite yet. The cracks might indicate, not the need for adding rounded contours, but the need for some other modification that preserves the commitment to straight lines and right angles. A contradiction might well lead to divergent paths of exploration, disparate experiments.

One of the things that became clear for me as I struggled my way through Hegel's Phenomenology of Spirit is that "what comes next" in the dialectical evolution isn't necessarily mandated by what came before and the nature of the "contradictions" it generated--even though Hegel sometimes writes as if this is what's going on, as if the "next stage" he explicates is the only one that makes sense. But if you look for what it is that requires the specific moves he decides to make, in many cases you'll be looking for a long time with little hope of success. Instead, it seems clear that "what comes next" is the result of a creative, almost artistic act. Some new developments and directions take hold because, to put it simply, they're proposed by better artists.

In sum, the Hegelian dialectic--his solution to the problem of the criterion (or the version of it he addressed)--involves experimentally modifying predecessor ways-of-seeing-and-interpreting in an ongoing, evolutionary way. And this process does not merely proceed on an individual level, but also on a communal one: communities live out a holistic vision collectively (discovering things through communal engagement that might be missed in merely personal life), and then pass on the lessons to the next generation, who continue the process in an evolving tradition.

Wednesday, September 7, 2011

From the Archives: Evidentialism and Theistic Belief

Because I start talking about evidentialism in my philosophy of religion class today, I wanted to reprise the following post which I composed a year ago, and which stimulated quite a lot of discussion at the time. Also, I think it has bearing on an issue I want to blog about in the near future, having to do with the fundamentalist tendency to treat the Bible as "evidentially basic" in the way that most of us treat our senses as evidentially basic--and why I think that the most radical cases of "deconversion" followed by "evangelical" atheism often begin from such a fundamentalist starting point. And so, without further ado, here it is again: "Evidentialism and Theistic Belief":

One of the most important grounds for challenging religious belief, especially theism, has involved invoking “evidentialism,” that is, the doctrine that one ought never to believe any claim, C, on insufficient evidence. Taken together with the premise that there isn’t sufficient evidence in support of the claim that God exists (a premise denied by not a few theists—but that’s another matter), evidentialism entails that belief in God is illegitimate.

Probably the clearest and most uncompromising articulation of this doctrine was offered by William Kingdon Clifford in the 19th Century. In his essay “The Ethics of Belief,” Clifford argued that all of our beliefs have the potential to affect what we do in ways that, should our beliefs prove false, can be very harmful. As such, belief is not a private affair but a public one, and we have a solemn responsibility to extend belief only to “truths which have been established by long experience and waiting toil, and which have stood in the fierce light of free and fearless questioning.” Clifford thus concludes that “it is wrong always, everywhere, and for any one, to believe anything upon insufficient evidence.”

Formulated in such strident and sweeping terms, evidentialism does not appear to be a tenable doctrine. Among other things it would mean that it’s wrong for children to take anything on trust from teachers or parents—and this seems a recipe for the impossibility of children ever arriving at a state in which they could actually engage in the sort of investigative activity that Clifford demands.

Furthermore, this kind of sweeping evidentialism falls prey to the following problem: In order to operate in accord with the evidentialist principle, we need to know what counts as evidence. But how do we arrive at the correct belief concerning what counts as evidence? What evidence do we use?

This is a variant on the problem of the criterion—basically, the problem of ascertaining in a non-circular way what criteria we should use for deciding what to believe. The problem can be summarized as follows: for any proposed criteria for deciding what to believe, one might ask, “Why should I believe that these criteria are the ones I should use?” If we answer this question by invoking the criteria in question, then we have begged the question. If we answer this question by appeal to different criteria, we can then ask the same question about them.

The difficulty that this poses for evidentialism can be emphasized with an example. Assume that “C” is the following proposition: “My senses are reliable.” A sweeping evidentialism would insist that we not believe this on insufficient evidence. But what is to count as sufficient evidence? How could we test the reliability of our senses without appeal to the very senses we are not supposed to trust until their reliability has been established?

This, of course, is the path to a radical skepticism which precludes having any beliefs at all. But, of course, it is impossible to actually live a human life—to make choices, to act on those choices, to form and maintain relationships, to work, etc.,—without having any beliefs. While Clifford rightly notes that our beliefs imply certain actions, it is equally true that our actions presuppose beliefs. Whenever we act, we act as if something is the case. Radical skepticism is therefore pragmatically impossible for anyone who actually gets on with the business of living (as Hume famously noted).

And in this business of living, isn’t it the case that we trust our senses unless and until we have good reason not to—unless they generate inconsistencies of various sorts? More broadly, doesn’t the business of living routinely require that we trust a range of propositions unless and until there is evidence that calls that trust into question? To put this point a bit more formally, the business of living requires that, with respect to some propositions, our “default” position should be belief (that is, we should operate as if the proposition is true unless we are confronted with reasons not to). Evidentialism, by contrast, insists that the default position for every claim C is disbelief or agnosticism. Put bluntly, then, one cannot be a faithful evidentialist of this sweeping sort and still lead a human life.

But a defender of evidentialism could seek to salvage the doctrine by making some distinctions. For example, there is a difference between what we might call “actual belief”—where you are convinced that something is the case, that you've got the truth—and “presumptive belief”—where you are merely operating as if something is the case. The criminal justice system in the US (and elsewhere) operates on the presumption of innocence. That is, if there is insufficient evidence to establish guilt, the accused is treated as innocent. But everyone has heard stories of juries that quietly harbor the view that the accused is “guilty as hell” while delivering an acquittal. The jurors don’t “actually” believe that the accused is innocent in such cases, but they operate in accord with the presumption of innocence (at least in their role as jurors--they might in their private lives take steps to make sure that their children are kept away from the accused).

A modified evidentialist could hold that what living a human life requires is not that we actually believe that our senses are reliable, but that we presumptively do so. We operate as if our senses are reliable unless and until we encounter sufficient reason not to. And so the modified evidentialist could say that one ought never to actually believe any claim C on insufficient evidence, but that one should be free to presumptively believe a range of things, within certain limits, so long as there is no evidence against these things.

Several issues arise, however, in relation to this modified principle. First, there is the question of the limits within which presumptive belief (if not actual belief) is taken to be legitimate. Here there will be room, I think, for considerable dispute—especially in connections with “meaning bestowing beliefs about the transcendent” (religious beliefs). The evidentialist challenge to theism, then, will have to take a position in this disputed territory, making a case for the view that the class of propositions for which presumptive belief is appropriate excludes belief in God. But if this is what the evidentialist wants to argue, there are a number of important opposing arguments. Since I will be covering a number of these arguments later in my philosophy of religion class (when discussion pragmatic arguments for religious belief), I won't take up this version of evidentialism in detail at this time.

But there is another way of construing the evidentialist challenge to theism that I do want to explore a bit further here. Perhaps the evidentialist challenge to theism is merely targeting “actual” belief, while accepting the legitimacy of presumptive belief. In this case different concerns arise. First of all, as a challenge to theism it is radically attenuated insofar as it leaves space for the epistemic legitimacy of people organizing their lives around a presumptive belief in God.

But second, there is the question of the extent to which our actual beliefs are in our control. When it comes to the reliability of my senses, for example, I don’t just presumptively believe their deliverances. I really believe them. Of course, I could say something along the following lines: “Although living a human life requires that I presumptively believe the deliverances of my senses, I affirm that I have no compelling reason to actually believe them.” But saying this doesn’t make it the case that my belief here is merely presumptive. It remains an actual belief. Converting it into a merely presumptive belief seems beyond my power.

What is not beyond my power is lifting up, alongside this actual belief in the reliability of my senses, the judgment that such actual belief exceeds what is warranted by the evidence and as well as what is demanded by the pragmatic requirements of living (which only requires presumptive belief). Pairing this judgment with my actual belief might produce a kind of "functional equivalent" of mere presumptive belief.

But in that case, we would still need to ask, “On what basis should I believe this judgment about my actual belief in the reliability of my senses?” Is this belief about my belief itself an actual one or is it a presumptive one? And if it is a presumptive one, does that mean we are to act as if our belief is merely presumptive even though it is actual? This will be possible only if there is a discernible pragmatic difference between actually believing C and presumptively believing C. In other words, the behavior you exhibit when you behave as if C is true is different from the behavior you exhibit when by actually believe C is true. But in that case, in what sense are you behaving as if C is true when you presumptively believe C?

Assuming these difficulties can be overcome (and I think they can), acknowledging that what we actually believe is often out of our control forces evidentialists to truncate their evidentialist principle even further. They can no longer say that, on insufficient evidence, presumptive belief can be okay but actual belief never is. Rather, they'll have to say that actual belief can be okay too, so long as we pair that belief with a clear recognition of our fallibility with respect to it—a recognition which serves to generate the functional equivalent of mere presumptive belief.

But now, an evidentialist challenge to theism that makes room for presumptive belief will have to make room for actual belief as well—so long as that belief is paired with fallibilism. In other words, if we follow this path, we no longer have an evidentialist opposition to theism at all. What we have is a challenge to fanaticism.

I suspect very many theists would be very happy to accept that conclusion--because it isn't really a case against theism at all, but rather a case for favoring moderate forms of theism over fanatical ones. And what this shows, I think, is that in order really to have an evidentialist case against theism, the evidentialist would need to argue that presumptive belief in God is illegitimate—in other words, that it is wrong to live as if there is a God.

I think an argument of this sort can be made with respect to what I call (in my book, following Plutarch) “the god of superstition.” And there may be other specific forms of theism for which a case can be made that presumptive belief is illegitimate. But I think it would be very difficult indeed to make this case with respect to every species of theism--for example, the species which conceives of God as that whose existence would (borrowing language from my book) fulfill "the ethico-religious hope" (the hope that, in some fundamental way, reality is on the side of the good).

Tuesday, September 14, 2010

Evidentialism and Theistic Belief

One of the most important grounds for challenging religious belief, especially theism, has involved invoking “evidentialism,” that is, the doctrine that one ought never to believe any claim, C, on insufficient evidence. Taken together with the premise that there isn’t sufficient evidence in support of the claim that God exists (a premise denied by not a few theists—but that’s another matter), evidentialism entails that belief in God is illegitimate.


Probably the clearest and most uncompromising articulation of this doctrine was offered by William Kingdon Clifford in the 19th century. In his essay “The Ethics of Belief,” Clifford argued that all of our beliefs have the potential to affect what we do in ways that, should our beliefs prove false, can be very harmful. As such, belief is not a private affair but a public one, and we have a solemn responsibility to extend belief only to “truths which have been established by long experience and waiting toil, and which have stood in the fierce light of free and fearless questioning.” Clifford thus concludes that “it is wrong always, everywhere, and for any one, to believe anything upon insufficient evidence.”

Formulated in such strident and sweeping terms, evidentialism does not appear to be a tenable doctrine. Among other things, it would mean that it’s wrong for children to take anything on trust from teachers or parents—and this seems a recipe for making it impossible for children ever to arrive at a state in which they could actually engage in the sort of investigative activity that Clifford demands.

Furthermore, this kind of sweeping evidentialism falls prey to the following problem: In order to operate in accord with the evidentialist principle, we need to know what counts as evidence. But how do we arrive at the correct belief concerning what counts as evidence? What evidence do we use?

This is a variant on the problem of the criterion—basically, the problem of ascertaining in a non-circular way what criteria we should use for deciding what to believe. The problem can be summarized as follows: for any proposed criteria for deciding what to believe, one might ask, “Why should I believe that these criteria are the ones I should use?” If we answer this question by invoking the criteria in question, then we have begged the question. If we answer this question by appeal to different criteria, we can then ask the same question about them.
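The regress just described can be put schematically. (This is my own rough sketch of the structure of the problem, not a formalization drawn from any particular source; the symbols are purely illustrative.)

```latex
% Let K_1 be a proposed criterion for deciding what counts as evidence.
% The question "Why should I believe K_1 is the right criterion?" must be
% answered by appeal to some criterion K. There are only two options:
%   (a) K = K_1: the justification invokes the very criterion at issue,
%       which begs the question; or
%   (b) K = K_2, with K_2 distinct from K_1: the same question now arises
%       for K_2, and iterating this option yields an infinite regress:
\[
  K_1 \;\Leftarrow\; K_2 \;\Leftarrow\; K_3 \;\Leftarrow\; \cdots
\]
% Neither circularity (a) nor regress (b) supplies a non-question-begging
% justification for the criterion with which we began.
```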

The difficulty that this poses for evidentialism can be emphasized with an example. Assume that “C” is the following proposition: “My senses are reliable.” A sweeping evidentialism would insist that we not believe this on insufficient evidence. But what is to count as sufficient evidence? How could we test the reliability of our senses without appeal to the very senses we are not supposed to trust until their reliability has been established?

This, of course, is the path to a radical skepticism which precludes having any beliefs at all. But it is impossible to actually live a human life—to make choices, to act on those choices, to form and maintain relationships, to work, and so on—without having any beliefs. While Clifford rightly notes that our beliefs imply certain actions, it is equally true that our actions presuppose beliefs. Whenever we act, we act as if something is the case. Radical skepticism is therefore pragmatically impossible for anyone who actually gets on with the business of living (as Hume famously noted).

And in this business of living, isn’t it the case that we trust our senses unless and until we have good reason not to—unless they generate inconsistencies of various sorts? More broadly, doesn’t the business of living routinely require that we trust a range of propositions unless and until there is evidence that calls that trust into question? To put this point a bit more formally, the business of living requires that, with respect to some propositions, our “default” position should be belief (that is, we should operate as if the proposition is true unless we are confronted with reasons not to). Evidentialism, by contrast, insists that the default position for every claim C is disbelief or agnosticism. Put bluntly, then, one cannot be a faithful evidentialist of this sweeping sort and still lead a human life.

But a defender of evidentialism could seek to salvage the doctrine by making some distinctions. For example, there is a difference between what we might call “actual belief”—where you are convinced that something is the case, that you've got the truth—and “presumptive belief”—where you are merely operating as if something is the case. The criminal justice system in the US (and elsewhere) operates on the presumption of innocence. That is, if there is insufficient evidence to establish guilt, the accused is treated as innocent. But everyone has heard stories of juries that quietly harbor the view that the accused is “guilty as hell” while delivering an acquittal. The jurors don’t “actually” believe that the accused is innocent in such cases, but they operate in accord with the presumption of innocence (at least in their role as jurors--they might in their private lives take steps to make sure that their children are kept away from the accused).

A modified evidentialist could hold that what living a human life requires is not that we actually believe that our senses are reliable, but that we presumptively do so. We operate as if our senses are reliable unless and until we encounter sufficient reason not to. And so the modified evidentialist could say that one ought never to actually believe any claim C on insufficient evidence, but that one should be free to presumptively believe a range of things, within certain limits, so long as there is no evidence against these things.

Several issues arise, however, in relation to this modified principle. First, there is the question of the limits within which presumptive belief (if not actual belief) is taken to be legitimate. Here there will be room, I think, for considerable dispute—especially in connection with “meaning-bestowing beliefs about the transcendent” (religious beliefs). The evidentialist challenge to theism, then, will have to take a position in this disputed territory, making a case for the view that the class of propositions for which presumptive belief is appropriate excludes belief in God. But if this is what the evidentialist wants to argue, there are a number of important opposing arguments. Since I will be covering a number of these arguments later in my philosophy of religion class (when discussing pragmatic arguments for religious belief), I won't take up this version of evidentialism in detail at this time.

But there is another way of construing the evidentialist challenge to theism that I do want to explore a bit further here. Perhaps the evidentialist challenge to theism is merely targeting “actual” belief, while accepting the legitimacy of presumptive belief. In this case different concerns arise. First of all, as a challenge to theism it is radically attenuated insofar as it leaves space for the epistemic legitimacy of people organizing their lives around a presumptive belief in God.

But second, there is the question of the extent to which our actual beliefs are in our control. When it comes to the reliability of my senses, for example, I don’t just presumptively believe their deliverances. I really believe them. Of course, I could say something along the following lines: “Although living a human life requires that I presumptively believe the deliverances of my senses, I affirm that I have no compelling reason to actually believe them.” But saying this doesn’t make it the case that my belief here is merely presumptive. It remains an actual belief. Converting it into a merely presumptive belief seems beyond my power.

What is not beyond my power is lifting up, alongside this actual belief in the reliability of my senses, the judgment that such actual belief exceeds both what is warranted by the evidence and what is demanded by the pragmatic requirements of living (which require only presumptive belief). Pairing this judgment with my actual belief might produce a kind of "functional equivalent" of mere presumptive belief.

But in that case, we would still need to ask, “On what basis should I believe this judgment about my actual belief in the reliability of my senses?” Is this belief about my belief itself an actual one or is it a presumptive one? And if it is a presumptive one, does that mean we are to act as if our belief is merely presumptive even though it is actual? This will be possible only if there is a discernible pragmatic difference between actually believing C and presumptively believing C. In other words, the behavior you exhibit when you behave as if C is true is different from the behavior you exhibit when you actually believe C is true. But in that case, in what sense are you behaving as if C is true when you presumptively believe C?

Assuming these difficulties can be overcome (and I think they can), acknowledging that what we actually believe is often out of our control forces evidentialists to truncate their evidentialist principle even further. They can no longer say that, on insufficient evidence, presumptive belief can be okay but actual belief never is. Rather, they'll have to say that actual belief can be okay too, so long as we pair that belief with a clear recognition of our fallibility with respect to it—a recognition which serves to generate the functional equivalent of mere presumptive belief.

But now, an evidentialist challenge to theism that makes room for presumptive belief will have to make room for actual belief as well—so long as that belief is paired with fallibilism. In other words, if we follow this path, we no longer have an evidentialist opposition to theism at all. What we have is a challenge to fanaticism.

I suspect many theists would be happy to accept that conclusion--because it isn't really a case against theism at all, but rather a case for favoring moderate forms of theism over fanatical ones. And what this shows, I think, is that in order really to have an evidentialist case against theism, the evidentialist would need to argue that presumptive belief in God is illegitimate—in other words, that it is wrong to live as if there is a God.

I think an argument of this sort can be made with respect to what I call (in my book, following Plutarch) “the god of superstition.” And there may be other specific forms of theism for which a case can be made that presumptive belief is illegitimate. But I think it would be very difficult indeed to make this case with respect to every species of theism--for example, the species which conceives of God as that whose existence would (borrowing language from my book) fulfill "the ethico-religious hope" (the hope that, in some fundamental way, reality is on the side of the good).