Tuesday, March 29, 2011

Robert Merrihew Adams on the Primacy of Pretheoretical Ethical Beliefs

My graduate seminar has started looking at sections of Robert Merrihew Adams' book, Finite and Infinite Goods: A Framework for Ethics; and this afternoon (over coffee, of course) I was reading a passage from Chapter 2 of that book which speaks to several recurring themes on this blog. So I thought I'd quote the passage here and ask what people think.

First, a bit of context. The passage occurs at the conclusion of a section in which Adams defends the work of the metaethical naturalist, Richard Boyd, against a particular line of attack. Boyd seeks to develop a naturalistic account of moral realism (the view, roughly, that there are objective moral truths, that in calling X good or bad, right or wrong, one can be saying something that is true of X). Although Adams ultimately rejects naturalistic moral realism in favor of supernaturalism, he is unimpressed by those who challenge it on the grounds that ethical propositions are inadequately subject to empirical testing to qualify as objectively true or false.

In the lead-up to the passage, Adams spends some time defending the pursuit of "reflective equilibrium," that is, the task of developing our theoretical understanding of things through a kind of dialectic between our strongest and most confident pretheoretical beliefs and the theories we build from them--a dialectic that gravitates towards a state of optimal mutual support. "No cognitive enterprise," Adams insists, "can get off the ground without initial reliance on many confident, pretheoretical beliefs; and a large proportion of those beliefs remain more deeply entrenched than the theoretical superstructure erected on them." As an example, he notes our strong, pretheoretical belief that when you drop a rock, it falls towards the earth. And he notes that "a development of physical theory that would advise us to give that up completely would almost certainly discredit itself rather than the pretheoretical belief."

But, Adams points out, we have evaluative and normative beliefs that we hold to with just as much pretheoretical confidence as the belief that stones fall when you drop them. He uses the example of the wickedness of torturing children. Adams then notes the universal and justified respect that modern people have for science and its methods--a respect that has led many theorists to attempt to extend its empirical methods beyond the sphere of science. "But it is important," he continues, "to distinguish between the successes of modern physical sciences, which are great and uncontroversial, and the successes of modern empiricist, science-inspired epistemology, which are arguably much less impressive, and certainly much more controversial." The passage I want to share here continues from this point, as follows:

If a pretheoretical ethical belief held confidently by all of us were irreconcilable with well-established principles of physics, that would be a severe problem for ethical theory; and if there were too many such cases, it could certainly call into question the acceptability of the method of reflective equilibrium in ethics. But if initial reliance on confident pretheoretical ethical beliefs fails to conform with an epistemology of exclusive reliance on inference to best explanation, or with some other empiricist epistemology, that is a much less serious problem. Nor do we need an elaborate theory to justify going with pretheoretical belief rather than empiricist epistemology, if such a choice is forced upon us...

Adams then turns from epistemology to metaphysics, making a similar point with relation to physicalism:

(Physicalism) is not a theory in physics, nor more generally a result established by modern physical science. It is rather a metaphysical theory, inspired by enthusiasm for science. It holds, roughly, that all the facts there are can in principle be described in the vocabulary, and explained by the laws, of modern physics, in some ideal development thereof. It is obvious that such a theory is issuing large promissory notes to be paid by future developments, and hence is highly speculative. It is attended by much-debated and, I believe, grave and unresolved difficulties. Given the strength of our confidence in many pretheoretical evaluative and normative beliefs, and the pervasiveness of their role in our thinking, I believe that physicalism has much more need to be found compatible with them than they have to be found compatible with it. This is not an argument against physicalism; that's business for another occasion. It is rather an argument for not allowing physicalist worries to undermine ethical beliefs.

Thoughts?

Friday, March 25, 2011

From the Archives: Evangelicals and Premarital Sex

A short, provocative essay by Julie Clawson at the Century Blog, Reading the Bible, sex and all, reminded me of a blog post from last year on evangelical attitudes towards sexuality. Since that post didn't generate much discussion at the time, I thought I'd repost it now in light of Clawson's essay, and ask readers what they think.

A Facebook friend recently posted a New Yorker article, "Red Sex, Blue Sex," by Margaret Talbot, that offers some interesting observations and insights about evangelical sexual attitudes and practices.

Drawing mainly from the book, Forbidden Fruit, by UT-Austin sociologist Mark Regnerus, Talbot calls attention to a number of important facts about evangelicals and sex. Among the highlights:

  • "(R)eligion is a good indicator of attitudes toward sex, but a poor one of sexual behavior, and...this gap is especially wide among teen-agers who identify themselves as evangelicals."
  • "(E)vangelical teen-agers are more sexually active than Mormons, mainline Protestants, and Jews. On average, white evangelical Protestants make their 'sexual début'—to use the festive term of social-science researchers—shortly after turning sixteen."
  • "(E)vangelical Protestant teen-agers are significantly less likely than other groups to use contraception."
  • "More than half of those who take (abstinence) pledges—which, unlike abstinence-only classes in public schools, are explicitly Christian—end up having sex before marriage, and not usually with their future spouse." (But, it turns out, they often wait a bit longer to have sex than evangelicals who don't take such pledges.)
  • "(C)ommunities with high rates of pledging also have high rates of S.T.D.s." (Some room for dispute about cause and effect here).

While these facts are interesting and important, the article links them to a broader observation about differences between conservative evangelicals and liberals that I found especially intriguing:
    Social liberals in the country’s “blue states” tend to support sex education and are not particularly troubled by the idea that many teen-agers have sex before marriage, but would regard a teen-age daughter’s pregnancy as devastating news. And the social conservatives in “red states” generally advocate abstinence-only education and denounce sex before marriage, but are relatively unruffled if a teen-ager becomes pregnant, as long as she doesn’t choose to have an abortion.
Put another way, evangelicals tend to take a strong stand against premarital sex, even to the point of discouraging sex education programs designed to protect teens from the more adverse consequences of failing to practice abstinence--but when their teen daughter comes home pregnant, they take it in stride so long as she "takes responsibility." More sexually progressive liberals are far more accepting of premarital sex--being more concerned with discouraging "unsafe" sex than with discouraging sex as such--but when their daughter comes home with a bun in the oven, they are devastated.

Essentially, evangelicals demand abstinence before marriage but aren't any better than the rest of us at practicing it. And while there is every indication that they are judgmental and condemnatory towards "fornication" in the abstract, when it comes to their own children they are less inclined to make their pregnant daughters feel like dung for what they've done, focusing instead on present responsibility.

So what, exactly, is going on here? With respect to how evangelicals differ from liberals in their response to unwanted teen pregnancy, I suspect that several things are going on. First of all, for more sexually progressive families in contemporary America, the sexual choices of teens are regulated more by prudential considerations--fear of unwanted pregnancy, fear of STD's, worry over not being emotionally ready for that level of intimacy and the personal entanglements that sex creates--and less by concepts of moral duty. The danger posed by teen sex is not so much that it threatens virtue as that it threatens prospects for future flourishing. The announcement of an unwanted pregnancy is thus accompanied by a sense of shattered hopes for the child's future. This is what makes the announcement so "devastating."

Of course, evangelical families may feel many of the same things. But the response of devastation is tempered by other factors. First of all, there is the fact that if teens anticipate a parental response that is too harsh, they may opt for abortion in order to avoid it. Realization of this fact, coupled with a strong pro-life stance, may have shaped evangelical culture so as to encourage a more temperate and forward-looking reaction to unwanted teen pregnancy.

Furthermore, like all Christians, evangelicals embrace a doctrine of grace, one that acknowledges the ubiquity of sin and insists that we forgive it rather than beat people over the head about it. So they are in the habit of speaking out vociferously against policies that seem to endorse or minimize some sin or other, but when one of their own commits this sin, the appropriate response is to forgive and look forward.

But I'm pretty sure something else is going on here as well. An unwanted teen pregnancy is not just a threat to an adolescent's future, but to the virtue of both the girl and the boy--a fact that puts something else in the forefront, something significant enough to eclipse many of the concerns about a young person's future. The fact is that, given the way evangelical Christianity understands "virtue" in connection with sex, it follows that virtue can be (at least partially) restored after the fact by "taking responsibility" for the consequences of premarital sex--that is, by getting married and raising the child.

There's substantial biblical endorsement of the idea that the impropriety of premarital sex can at least be partially erased if the couple marries. But what is most important here is to understand why this is possible. The reason, I think, lies in the patriarchal norm--a norm which has it that a woman "belongs" to her husband in such a way that he has exclusive proprietary claim to her sexuality.

This norm explains why, if a man has sex with a woman who "belongs" to someone else (either through marriage or pledge), the problem is so grave that (according to the authors of Deuteronomy) it requires the execution of the man in cases where the woman was clearly unwilling, or of both man and woman if there is even a chance that the woman was willing--as indicated by the fact that she was in an urban area and didn't scream loudly enough to be discovered and rescued (see Deut. 22:22-27).

But if the woman does not belong to another man, then instead of death the penalty is a fine paid to the father and a requirement to marry the woman--and this is called for even in cases in which the woman was raped (Deut. 22:28-29). The horror of being married off to one's rapist didn't seem to bother the writer(s) of Deuteronomy, since the perceived crime of rape was that a man was taking what didn't rightly belong to him--a crime that could be erased if his victim came to belong to him through marriage.

On this view of things, a teenage boy's sexual virtue lies in sexually claiming only that woman who rightly belongs to him. A girl's virtue lies in giving her sexuality freely only to the man to whom it belongs. And marriage defines which woman belongs to which man.

Although contemporary evangelicals would probably not be inclined to marry their daughters off to rapists, the patriarchal conception of marriage as the proper context for sexual expression persists--and insofar as it does, much of what is perceived to be wrong with premarital sex can be neutralized by subsequent choices. This helps to explain why evangelicals are inclined to focus less on the past sin of fornication and more on what happens next.

But none of this explains why evangelicals are not only no less likely to engage in premarital sex than others, but arguably even more likely to do so. Let me reflect on this issue for a bit.

Human sexuality is potent stuff. The urgency of sexual desire is so intense that even "good little Christian" teens have a hard time resisting it altogether. And we live in a culture in which adolescents who are attracted to each other have substantial opportunity to interact and explore their mutual attraction. Not even evangelical parents are typically prepared to enforce draconian rules against contact with the opposite sex (assuming their kids are straight--which raises an entirely different issue).

After all, such rules would so conflict with the norms of the broader culture that they would not only trigger resentment but defection. Put simply, if the broader culture allows me to flirt and date and follow my instincts for romance but my conservative Christian parents don't because they're afraid it might lead to a prohibited end--then there's a good chance I'll rebel against my parents and their values. A prohibition against sex is one thing. A prohibition against living the American way of life is another.

American teens may swallow the former, but they're not likely to happily embrace the latter. And so it's the former that becomes the expectation for evangelical teens. So long as they don't cross the line--so long as it's just kissing on the park bench--they've done nothing wrong. And so they kiss on the park bench, and their sexual longings are enflamed. They find a moment alone, and the kissing continues in private. So long as they keep their clothes on, everything's okay. So they do, fondling each other madly through their clothes. Then under the clothes. But, of course, the clothes stay on, so they're safe.

They don't plan for sex, because they have a rule against that. But they push against the boundaries of the rule. And somehow they're convinced they'll be able to resist. But all the while they're becoming at home with one incremental stage of intimacy after another. They linger at each stage--far longer than their secular friends are inclined to linger--thereby ensuring two things: first, that they become entirely comfortable with each other at that level of intimacy, entirely trusting; second, that they are fully immersed in the maximal sexual urgency that such a step is capable of producing. And so when the urgency drives them forward to the next incremental step, there is no sudden flare of distrust, no fear of boundaries violated or in danger of being violated. And there certainly is no talk of going to get a condom.

In short, by taking it slowly they may actually make the culmination of their sexual explorations more inevitable than if they moved more quickly. If they moved more quickly there'd be a greater chance of triggering fears that could derail the progress of their intimacy. But what makes them take it so slowly is the very same thing that ensures that, when they finally have sex, it's unprotected: their moral inhibition against having sex. It isn't strong enough to stop the fire of adolescent desire, at least in the absence of rules segregating the sexes except under conditions of strict supervision. But it is strong enough to ensure that when they finally take that last step, it's an unexpected stumble.

For months, perhaps, they've flirted with the edge of actual sex but resisted it. They don't expect that this time will be any different. But there are forces at work that conspire together to make that stumbling step into sex almost inevitable--not just the power of sexual desire itself, but certain other impulses that are uniquely bound up with evangelical Christianity itself.

Evangelical Christianity in America is not just about biblical inerrancy or accepting Jesus into your heart. It's also characterized by views concerning family. In fact, a former colleague of mine, Betty DeBerg, has argued in Ungodly Women that American evangelicalism has had, as one of its driving impulses from the beginning, the aim of preserving traditional Victorian-era family values against a variety of so-called threats.

For such idealization of an earlier sexual era to succeed, I think there has to be an important streak of romanticism to it. Only if the Victorian-era marriage is seen through rose-tinted lenses can it endure compelling challenges based on its inequity towards women and its restrictiveness with respect to human liberty. To sell restrictive gender roles to the women who are most trammeled by them, it helps to wrap them up in chivalry and lace. When a suitor asks a father for the daughter's hand in marriage, the blatant patriarchy of it--the fact that men are deciding the fate of a woman as if it were some kind of commercial exchange--can be more readily swallowed if it's defined as a romantic gesture.

Girls must be taught from a young age to swoon over such objectification, or they'll rise up in rebellion against it. Likewise, they must be trained early on to idealize the selfless wife who quietly works in the shadows to lend her husband the strength to achieve great things. The saying, "Behind every successful man there's a good woman," must describe a role that each young woman aspires towards--because she sees it as a badge of honor. She must be conditioned to see the world through her husband's eyes, to identify his achievements as her own, so that her own subservience becomes viewed as the means to her own success. For this to work, empathy for her husband's desires must be so great that she confuses them with her own. That level of self-subordination needs to be sold early or it won't stick. And to sell it early requires that an aura of romantic idealization be wrapped around the whole affair.

And the same must be done with respect to the prohibitions on premarital sex, which cannot be seen as just outmoded constraints on individual liberty. In a post-sexual revolution era, in which so much that we see and hear through the media is infused with sexuality, the restraint on sexual expression must be romanticized. The reason for restraint has to be rooted in the almost sacramental grandeur that sex can achieve between a husband and a wife. Premarital sex is then seen as debasing something of exquisite beauty. Within the bonds of marriage, the sexual act can serve as a crucible of love that unites husband and wife, deepening their connection to one another in ways that will hold them together throughout their lives. To pursue sex outside of that context is to trivialize it.

A part of me is inclined to think there is a fair bit of truth in these latter ideas about sex and its context. I certainly do think that sex can be trivialized, and that located within the context of a stable, enduring relationship, it can acquire a meaning and value that it would not have apart from that context.

But there's a problem with selling this romanticized foundation for premarital sex taboos as a basis for discouraging Joey and Susy from having sex after months of romance and passionate intimacy. The problem, simply put, is that when you've built up your intimacy slowly, achingly over time, what you've forged together does not seem even remotely trivial. Joey and Susy are awash in love, practically bursting with it. They want to melt into each other, to become one. They gaze into each other's eyes and know that they are meant to be together forever, that this is what all the songs and stories and movies are about, that here is something unbelievable, wonderful beyond compare: true love.

If anything, they are less inclined than their secular peers to be realistic about what tomorrow will bring. Since what they are feeling has the scent of a sacramental act, the ugly realities of facing the morning after are more readily lost behind the idealized image of marital love that they have been fed since childhood.

One of the most important sexual inhibitions in the evangelical toolkit--Susy's fear that Joey will lose all respect for her--might have been operative up until that moment when they finally stumble into sex. But in that moment of passion, when they are lost in the romantic ideal of their eternal union, what she sees in Joey's eyes reassures her utterly that he cannot disdain her for what she's about to give. Perhaps, later, the fear of his contempt will surge back up--perhaps even poisoning their relationship. But in this moment it's gone.

In its place, for her, is a vivid awareness of his desire, which has now reached the level of agony. And she has been trained well, all these years, for her future role as the good wife--a role that requires her to subordinate her own wishes to those of her true love, to see the world through his eyes and to identify with his achievements. And so, in that stumbling moment of passion, her own fears about pregnancy, which otherwise might rise up to halt the tide of desire, are subordinated. All that exists is his desire--but because it is for her, giving in to it becomes a way of reclaiming herself. He wants her so badly. He wants her so badly. Her patriarchal training ensures that this is the ultimate validation of herself: this young man's whole self is straining towards, longing for, desiring her. In this moment she is the most valuable thing in his world.

This narrative probably doesn't account for every case of premarital sex among teenage evangelicals. But I suspect it is an important piece of the puzzle.

Thursday, March 24, 2011

Human Decency Across Ideological Divides: The Case of Penn Jillette

Penn Jillette (of the entertaining magic duo "Penn and Teller") wrote a blurb for Dawkins' The God Delusion in which he called it "true like ice, like fire." Beyond this, I've not had any exposure to his thoughts on religion--just a vague knowledge that he's an atheist. But when Robin Parry over at Theological Scribbles posted a video message (below) from Jillette, calling it "refreshing," I was intrigued enough to check it out.

I find Jillette's message moving and thought-provoking--and it touches in various ways on themes that are central to this blog. So I thought I'd repost it here and solicit thoughts/reaction from my readers.

In his blurb for Dawkins' book, Jillette says, "If this book doesn't change the world, we're all screwed." I can't say I agree with that, but I do think that if we can't find ways to understand and appreciate each other across the divisions created by different understandings of the world (and ways of living in it), we are all screwed. It may be for this very reason that I don't hold out much hope for the new atheist books changing the world in a positive way. In this brief video, Penn Jillette models for me--in a way that Dawkins, at least in the pages of The God Delusion, does not--the capacity for the kind of understanding and appreciation that we all need to have.

And he expresses this capacity despite being convinced that he has the truth--and that, by implication, the man who proselytized him is wrong ("I know there's no God," he says quite emphatically at one point in the video). Jillette has convictions--he thinks he knows--on matters about which I am pretty sure there is nothing like knowledge to be had. And yet I don't find in him--at least in this personal moment--the kind of dangerous ideological conviction (with its concomitant in-groups and out-groups) that I too frequently hear coming from Sam Harris at one pole and Pat Robertson or John Piper (who likes to excommunicate by tweet) at the other.

While resisting the allure of certainty about our worldviews--either by being agnostic in one form or another, or by holding "lightly" to one's beliefs in the way that Allen Stairs describes (a lightness that, I think, depends on a consciousness of one's fallibility)--seems to be one way of avoiding dangerous ideological division among people, I wonder if Jillette in this video message is modeling another way. And if so, how should we characterize it?

Monday, March 21, 2011

On Agnosticism and Faith

I’d like to think a bit about agnosticism, and in so doing pick up on some themes that have been brushed up against in comment threads in earlier posts. Specifically, I want to explore what it means to be an agnostic about God. The term “agnostic” was coined by Thomas Huxley in the 19th Century, but I’m not mainly interested in pinning down what he meant by it. Rather, I want to think about how the term is used today—and I expect it is used in some different ways.

This issue is interesting to me for a number of reasons, not least of them being that my wife is a self-described “genetically Catholic, Episcopalian agnostic” (who happens to attend a Lutheran church).

Etymologically, “agnostic” literally means “without knowledge.” But on that definition, many self-professing theists would qualify as agnostics. As so many of my students put it, “Belief in God is a matter of faith, not knowledge.” If agnosticism is to qualify as an authentic alternative to theism and atheism, what kind of alternative is it, given that no small number of theists confess to lacking knowledge about God?

Here, it may be helpful to zero in on what they do claim to have in relation to God, namely faith. Perhaps the agnostic is one who has neither faith nor knowledge when it comes to God. The difficulty here, of course, is that “faith” is a difficult concept, used in different ways—and often without any clear meaning or usage in mind at all.

(More often than not, when my students profess to believe something “on faith” and I ask them what that means, they stare at me blankly. “Do you mean,” I ask, “that you don’t have any good reasons to believe it, but you willfully believe it anyway, just because your parents and community will approve of you if you do?” They shift uncomfortably. Eventually I can get an interesting conversation going, but it takes a while--because so many use the term “faith” unreflectively, without any clear sense of what they really mean by it.)

Hence, it seems to me that defining agnosticism in relation to faith merely shifts the problem to a different place. Furthermore, it may be that there are different kinds of agnosticism, and that not every kind of agnosticism involves an absence of faith (or an absence of faith in every sense, since there may be different kinds of faith). As such, it may be better to explore the concepts of agnosticism and faith in tandem.

Since faith and agnosticism both seem in some way related to beliefs about God, it may be helpful to begin with the notion of belief. What does it mean to believe something? Here, I tend to be rather influenced by the pragmatic philosopher William James, who held that a belief’s meaning is given by the difference it makes for what we do and how we live. If this is correct, then to believe X is, most essentially, to behave in certain ways—to operate in the world as if X is true. To believe in the existence of the force of gravity is to operate as if stepping off a precipice (when standing on Earth) will lead to a nasty fall. Those who believe in gravity therefore step off precipices (without parachutes) only when they are suicidal. Those who want to experience floating but believe in gravity will, perhaps, strap on a parachute.

But notice a few things here. First, someone who believes in gravity but wants to die may engage in the same outward behavior (stepping off a precipice) as someone who (inexplicably) does not believe in gravity. The former steps off expecting to die, while the latter steps off expecting to float. The difference lies not in the outward behavior but in aims and expectations.

One might therefore argue that the essence of believing in the existence of the force of gravity lies not in what one does, but in what outcomes one expects—based, perhaps, on what one does, but also based on what one sees happening, etc. This way of understanding belief enables us to distinguish between those who just “go through the motions” of belief—doing the sorts of things that people who believe in gravity (and want to live) do, but who honestly expect to float when they step off cliffs—and those who “really believe it,” that is, who really expect to suffer a nasty fall if they fail to follow the appropriate behavioral protocols.

But defining belief in terms of expectations is problematic, too. Suppose you believe that Marge is secretly in love with you and would appreciate text messages expressing your ardor for her. Suppose, however, that Marge is married and that you are convinced she takes that bond so seriously that she would not respond in any way to amorous texts from people other than her husband, even if they are welcome. And so you don’t expect any discernible response from Marge when you text your little love notes. If you expect anything, it is that your messages will generate a wholly indiscernible internal response, one that will look the same (from your vantage point) as what you would expect to see if some very different things were true about her (for example, that your messages are not welcome but that she thinks the easiest solution is to ignore them).

In other words, if belief is construed in terms of expectations, it needn’t be in terms of what you would expect to observe. And if a belief makes no difference for what you would expect to observe, we are led to Antony Flew’s pragmatic question framed in terms of his invisible gardener fable: If the person who “believes” in the invisible gardener expects the world to look exactly the same as it would were there no gardener, how is believing in the gardener any different from not believing? Here, the most plausible answer (suggested by R.M. Hare in his response to Flew) is that it makes a difference in terms of how the believer operates in the world. This seems right to me: There is a difference between the believer and the skeptic, but the difference lies not in expectations as much as it does in behavioral dispositions. In other words, we’re pushed back to the behavioral understanding of belief.

Now the implication of all of this might be that beliefs come in different kinds—some of which have their primary meaning given by what we expect to experience under a range of conditions, some of which have their primary meaning given by what we are inclined to do, at least given certain motives. But the line here cannot, I think, be neatly drawn. In most cases, to believe something amounts to having both certain expectations and certain behavioral dispositions.

But now consider a somewhat different example—the case of a trust fall. The idea in a trust fall is that someone falls backwards—presumably into the waiting arms of someone else. Suppose you are preparing to do a trust fall. You are standing at the edge of a table, your back to the floor, and Joe is standing behind you on the floor ready to catch you. If you fall backwards into his arms, assuming you have no desire to be hurt, can we say that you believed Joe would catch you? (Let us suppose that Joe is big enough and strong enough to catch you easily if he chooses to.)

We’d surely say you were putting your trust in Joe. But this trust might be automatic (you know Joe well and confidently expect that he will catch you), or a fairly easy decision (you don’t know much of anything about Joe, but you have no special reason to think he’ll let you fall and so you decide to operate as if he will catch you), or a rather anguished one. Perhaps you and Joe have a deeply conflicted relationship, Joe has betrayed you in the past, but he has apologized and seems to want to establish a new and better relationship.

In the first case, it seems natural to say that you believe Joe will catch you. You behave as if he is going to catch you, and you fully expect him to do so—you might even say that you know he will (although you may be wrong about this, in which case you’ll say “I thought I knew” once you pick your bruised self off the floor).

In the second case, it still seems appropriate to say you believe it (but in a weaker sense). You have no reason not to expect him to catch you, and the context of the game and your general sense of human nature lead you to adopt a defeasible presumption that people in this situation will catch you barring any reason not to.

But it also seems appropriate to say that you have faith that Joe will catch you. You can hardly count on Joe’s reliability in this case, since you know nothing about Joe. You are simply extending to Joe a general confidence you have towards people (a confidence that others, who’ve been burned too many times, don’t have). You are optimistic about Joe because you are optimistic about people in general—but you recognize, perhaps, that your optimism isn’t universally shared, that your reasons for optimism are largely anecdotal in character, and that others who lack your optimism also have anecdotal reasons for their reluctance. In other words, you expect Joe to catch you, but you recognize that your reasons for having this expectation are contestable. Nevertheless, you act as if he will.

In the third case, when you fall backwards into Joe’s arms, “belief” that Joe will catch you seems too strong for what is going on. Furthermore the act of falling seems better described as a gesture of faith than a case of having faith. Faith in this case isn’t something you have so much as it is something you are doing. It is something you are doing in the face of uncertainty, when the stakes are substantial. You don’t expect Joe to catch you, but neither do you expect him to let you fall. Your expectations on the matter are conflicted or ultimately silent. But you decide to operate as if he will catch you.

To decide to fall backward is to give Joe a chance to prove his trustworthiness. You might think gestures of this kind are the only way you can begin to test the sincerity of his overtures of friendship. When you let yourself fall, you operate as if he will catch you, but at best what you have is the hope that he will do so.

But under those same conditions of uncertainty, possessing the same absence of any clear expectations about what Joe will do, you might decide not to make that venture of faith. You clearly don’t know what Joe would have done. You don’t claim to know that he wouldn’t have caught you. Your inner expectations are the same as in the third scenario—conflicted or silent—but your behavior is different.

Now clearly, the first and second cases are not cases of agnosticism. But what about the third and the fourth? Here, the most helpful thing might be to distinguish between two dimensions or parameters of belief—one having to do with expectations, the other having to do with behavior. We might then say that in terms of expectations, cases three and four are both instances of agnosticism; but in terms of behavior, case three exemplifies a kind of pragmatic faith and case four exemplifies the absence of such faith. (Case four, however, needs to be distinguished from case five, in which you fully expect that were you to fall, Joe wouldn't catch you, and so you refuse to fall so as to save yourself from the expected pain).

Now I don’t want to go too far with the analogy between believing that Joe will catch you and believing that there is a loving God who transcends the material world and is operating to redeem its evils. And I’ve barely touched in this post on the issue of evidence and the justifiability of belief. My interest here is in providing some talking points for defining notions of “agnosticism” and “faith,” and perhaps distinguishing some different species of each. And so, with that in mind, I’ll stop here and ask what others think.

Robin Parry's Concise Reflection on Christian Universalism

Robin Parry uses the controversy surrounding Rob Bell's new book as a springboard for offering a nice (and concise) overview of Christian Universalism, in the form of debunking seven "myths" about it. If you haven't seen it (and if this controversy interests you), check out his essay, Bell's Hells, in the Baptist Times. Parry is the author of The Evangelical Universalist (under the pseudonym "Gregory MacDonald") and co-editor of Universal Salvation? The Current Debate (in which I have an essay). Both books are well worth checking out.

Sunday, March 20, 2011

Prayer of Confession for Perfectionist Children

Today, after lunch, we had a family meeting. The meeting occurred in lieu of going to church.

We had every intention of going to church this morning. We'd missed last week because of a family trip, and we'll be missing next week because my wife will be running the Dallas Rock 'n Roll half marathon. So we really wanted to make it to church this morning.

But things happen. Some of it this morning involved a harmonica being snatched from an elder brother (and the subsequent drama of trying to get a four-year-old girl to relinquish the stolen toy and apologize for the theft). Some of it, however, involved what was supposed to be a pleasant mother-son run before church, but which took much longer than anticipated (and was less pleasant). My wife and son planned to run to the bagel shop (a little over a mile away), eat breakfast together, and then run back. But as the return journey was about to start, after my wife had told my son she'd be happy to carry his full bottle of orange juice home so that it wouldn't get wasted, my son threw it in the trash.

My wife scolded him for wasting orange juice. Now my son hates to do anything wrong, and responds very strongly when he feels he's done something he shouldn't have done. The wasted orange juice (which he's decided to give up for Lent and so only drinks these days on Sundays, making it a special treat) inspired a sulk that slowed the run into a ponderous walk, which then delayed the return home...which he then felt bad about. In short, he descended into a cycle of self-recrimination which culminated in a hand-written note delivered to my wife. The note said, simply, "All the bad things that happened this morning were my fault."

And I was reminded of my Lenten meditation from last week. I recalled the liturgical Prayer of Confession with which my son is familiar. For those unfamiliar with it, it runs essentially as follows:
We confess that we are in bondage to sin and cannot free ourselves. We have sinned against you in thought, word, and deed, by what we have done and by what we have left undone. We have not loved you with our whole heart. We have not loved our neighbors as ourselves. For the sake of your son, have mercy on us. Forgive us, renew us, and lead us, so that we may delight in your will and walk in your ways, to the glory of your holy name. Amen.

This prayer is often misunderstood, treated as a kind of litany of self-recrimination. But I don't understand it in that way, because the prayer is really about forgiveness, about acknowledging our common human condition, our collective tendency to be less than we can be--and then looking forward (and upward, towards the divine resources that not only help us to forgive ourselves, but also to do better than we knew we could do).

So I thought about how to word that idea in language that would make sense to a very precocious, perfectionist seven-year-old. I wrote something up and gave it to my son during our family meeting. I told him I tried to put the meaning of that prayer of confession we said in church into second-grade language. And he seemed to really appreciate it. He seemed to think it would help him stop beating himself up so much when he did something he shouldn't have done.

Just in case others have children with similar personalities (or have inner children with similar personalities), I thought I'd share this children's version of the prayer of confession--really more a meditation than a prayer--on this blog. Here it is:
We all make bad choices. That's part of being human.

But we can all make better choices, and God helps us do that. We pray for God's help to be better than we thought we could be.

We shouldn't beat ourselves up for making bad choices. God doesn't. God loves and forgives us. So should we.

Instead of being mad at ourselves for our bad choices, we should say, "I can do better," and then try to do better next time, with God's help.

Friday, March 11, 2011

Plagiarism and Forgery--Some Thoughts about Ehrman's FORGED

A number of advance reviews of Bart Ehrman's new book, Forged: Writing in the Name of God–Why the Bible’s Authors Are Not Who We Think They Are, have started coming out, including reviews by some bloggers I follow and appreciate. As expected, the responses to the book have been varied--from those who see it as littered with uncompelling arguments and tricky rhetorical moves, to those who see it as a slightly sensationalist and overly confident repackaging of old research conclusions for a popular audience, to those who see it as making a useful new contribution to that same research, to those who find it a welcome challenge to their faith journey. I'm sure many more find it an unwelcome challenge to their faith journey, but they probably won't read it or review it.

Not being a biblical scholar, I am not qualified to address Ehrman's more historical claims. And having not read the book yet (it's due out later this month), I can't speak to the merits of Ehrman's more philosophical arguments based on those historical claims. But I do think the preliminary discussion of Ehrman's book raises some interesting questions worth discussing on this blog.

For the sake of argument, I want to assume that the broad consensus among (at least the more progressive) biblical scholars is substantially correct: A number of New Testament epistles written as if they were being authored by one of the apostles (notably Peter and Paul) were in fact authored by someone else.  From what I can glean from the advance reviews, what Ehrman does beyond popularizing the scholarly case for this view is to make a case for a more challenging conclusion: This pseudonymous authorship, at the time that it was done, would have amounted to a misleading representation of one's work as someone else's, and would have been perceived as deceptive by readers at the time.

Here's what I take this to mean: the cultural context during which these "pseudepigrapha" were written was such that there was a presumption of accuracy in attributions of authorship, a presumption that was being violated in problematic ways. In other words, the attribution to someone else would not have been perceived at the time as simply a gesture of authorial humility and respect for the one in whose name one was writing. It would have been perceived as something like what we have in mind when we use the term forgery.

To reflect on this claim and its significance, I think it may be helpful to think about forgery in the light of a contrasting offense, one that seems more common today--namely plagiarism. Plagiarism involves passing off someone else's work as one's own, that is, writing as if someone else's ideas and arguments had originated with oneself. As I point out to my students just about every semester, you can plagiarize without deliberately setting out to deceive. If, simply through a kind of reckless disregard for which ideas and word choices originated with others, you write in such a way that it sounds as if the ideas and word choices are yours, you have plagiarized. Because in the absence of giving proper credit, the presumption is that you are sharing your own ideas--a presumption that will, among other things, frame a professor's grading of a student's paper. If you turn in a paper to me and write as if you came up with the ideas and arguments contained therein--because I am presuming that unless you indicate otherwise, this is the case--I will end up crediting you for coming up with those ideas and arguments. If the standard presumption is a false one, then you will be wrongly credited for someone else's work.

And this will be true whether or not you had any deliberate intention to deceive. Such an intent makes the plagiarism worse, of course. In OSU's official documents on the matter, deliberate plagiarism is academic dishonesty, which is a more serious offense than the lesser charge of academic misconduct. Your plagiarism amounts to the latter if you were simply careless about citing sources, if you simply forgot to put things in quotes or introduce borrowed arguments with the appropriate "According to so-and-so."  It's like the difference between murder and manslaughter--both are instances of criminal homicide, but the former involves the intent to kill and the latter involves a reckless disregard for life.

So what does all of this show? First, plagiarism is a function of cultural context. There are expectations in the academy concerning the crediting of sources, and there is a purpose to submitting written work (namely, the purpose of evaluating the intellectual accomplishments of the student) that is thwarted if these expectations are violated. Take away the former and the same document might no longer qualify as plagiarized. Take away the latter, and plagiarism ceases to be as big a deal. If I simply wanted to know something about penguins, and so my aunt wrote me an e-mail telling me all about them, it wouldn't appall me if I discovered that she'd cut and pasted much of the information from the internet without crediting sources. In part, I have no expectation that she will credit her sources when she sends an e-mail of this kind; and in part, it just doesn't matter anyway, since the e-mail's only function is to convey information to me.

Another point to make about plagiarism is this: The actual intentions of the plagiarist don't take it out of the class of things we call plagiarism, even if they do impact the severity of the case. We can, in other words, correctly identify something as plagiarized before we know whether there was any intent to take false credit, or whether the student simply did not know about the academy's expectations or understand their importance. These things are necessary for assessing degrees of culpability, but not for deciding whether a document is plagiarized.

One final point deserves mention as well. That something has been plagiarized does not affect our assessment of its actual merits. If a good idea is plagiarized, it's still a good idea. The problem lies with the fact that the good idea is being credited to someone who doesn't deserve credit. If a sound argument is plagiarized, it remains sound--but its soundness tells us nothing about whether the plagiarist knows how to put together a good argument. If an eloquent turn of phrase is plagiarized, it remains eloquent--but doesn't speak to the plagiarist's eloquence.

With plagiarism thus in place as a comparative foil, let's turn to forgery--which is a kind of inverse (converse?) of plagiarism. To forge a document, at least in the sense Ehrman has in mind, is not to take credit for writing something you didn't write, but to attribute credit for something you did write to someone else. Or, more precisely, it is to write in such a way that it appears as if the author is someone else.

Like plagiarism, forgery seems to be a function of context, including cultural context. If writing as if you were someone else is done in a context where no one supposes that the piece of writing is authored by the person it presents itself as being authored by, then there is no forgery taking place. For example, I write a fair bit of fiction in my free time, and often enough I write in the first person. The story I'm working on now is written in the first person "by" a fifteen-year-old protagonist. I am not, however, a fifteen-year-old boy. So, am I guilty of forgery? What if I were to publish the story under a pseudonym, and chose as my pseudonym the name of my main character? Given that it's a kind of fantasy story and would be sold as young adult fiction rather than as memoir, would I be guilty of forgery then? Clearly not. But as soon as I write a realistic (if rather tawdry and embarrassing) first-person tale about a certain atheist biologist and sell it as memoir under the pen name "PZ Myers" (assuming I could get away with it), I would be guilty of forgery.

As such, it makes a great deal of difference whether, in the time at which the biblical epistles were written, there were an established literary convention in which authors would write in the spirit of someone they admired and then attribute what they wrote to that person. Given such a genre--whose existence would seem to require a clear means of distinguishing letters written within that genre from letters written by the person they seemed to be written by--no one writing within that genre and distributing it as a piece in that genre would be guilty of forgery. But even given the existence of such a genre, a piece of this sort would be a kind of forgery if it were represented as an eponymously authored piece. And in the absence of any such genre, but given a cultural expectation that authors represent themselves as themselves when they write letters, any pseudepigraphic letter would qualify as a forgery.

It appears that at least part of what Ehrman is attempting to argue in his new book is that the last of these conditions prevailed in the culture in which the purportedly pseudepigraphic epistles were composed. If this is right, it seems to be an important point to make, because it changes the moral significance of the authorial attributions: they are forgeries.

Unlike the case of plagiarism, it's hard to imagine forgeries being unintentional. How exactly do you negligently end up attributing authorship of your work to someone else? Perhaps pseudonymously-authored first-person fiction might unintentionally get mislabeled as memoir, but that's not exactly analogous to unintentional plagiarism. In the former case, any negligence would occur post-production, so to speak, and I'm not sure we'd want to call it a forgery at all. Rather, we'd want to simply call it a work of fiction inadvertently passed off as memoir. Only if it is deliberately passed off as memoir would we be inclined to call it a forgery. In short, take away the intention to deceive, and I don't see that something can be called a forgery at all.

This point is important, because it means that if Ehrman is right about the cultural context, then the pseudepigraphic epistles violate the conventional expectation of accurate authorial representation, and therefore were likely to mislead readers of that time. And in the absence of genre pieces that could be erroneously mislabeled, it is hard to envision how the misleading of readers could be anything but intentional deception.

Of course, the deeper motives for such deception might be varied, and some of these motives might be more praiseworthy than others. But the way in which motives play into our assessment of actions is different from the way in which intent does. Contrast the student who says she didn't intend to plagiarize with the student who concedes that she meant to plagiarize, but that she had a really good reason. In the former case, if we believe her we'll label her act a lesser offense. In the latter case, things work differently. We'd want to know what the motives were and whether they gave her a good reason to do what would otherwise be a serious offense. I can think of few motives that would meet this standard--perhaps if she were deliberately plagiarizing as part of a university job to assess the effectiveness of professors' plagiarism detection skills.

So what would motivate someone to forge a letter? Since forgery requires the existence of conventions about accurate authorial ascriptions, conventions that are being violated by the forger, one way to approach this question is to ask what purpose these conventions are meant to serve. Conventions against plagiarism exist in order to give due credit, and avoid giving improper credit to those who haven't earned it. To some extent, conventions against forgery might serve a similar function--if an uncredited ghost writer authors a Sarah Palin memoir, I might inadvertently attribute to Sarah Palin an eloquence and cleverness with language that she may not actually possess. But in such cases of ghost-writing, we have a kind of collusion between the actual author and the one who is credited for authorship: The latter is plagiarizing, and the former is collaborating with the plagiarism.

This is presumably not what Ehrman is concerned is happening with the pseudepigraphic epistles--although some opponents of Ehrman seek to defend the legitimacy of these epistles by, in effect, supposing some sort of innocuous "ghost-writing" analogue (Peter told an educated friend or follower what to write while he was languishing in prison, and the friend went home and wrote it).

So, do the conventions that forgery violates serve functions other than preventing us from inaccurately crediting someone with the merits of written work that isn't theirs? Arguably, one of the most significant purposes of the conventions in question is to discourage a different kind of misattribution of merit. If a particular painter has earned a place of honor that has made his works enormously valuable, then a forger is likely to make considerably more money by bringing a newly discovered work by this painter to the market than by bringing a painting "in the style of" the artist to the market. Here, the prestige of the artist is being "borrowed" through misattribution so as to increase the perceived worth of the work.

And here, of course, is the most obvious motive for the deliberate deception that takes place in the case of forgeries: Someone wants what they have written to carry the kind of weight, to be given the sort of credibility or the kind of hearing enjoyed by the letters of revered figures such as Peter and Paul. And so they write as if they were Peter or Paul, hoping to mislead the communities that honor Peter and Paul.

And why do that? Not to get credit. The motives in such cases are not self-serving ones in the ordinary sense. But they might still be egoistic. I might think that what I have to say is just as significant, just as important, as what Peter or Paul had to say--but no one is giving me the kind of hearing I deserve. But I want to prove to myself just how great my ideas really are, and so I forge a letter--and everyone responds to my ideas as if they came from the mouth of an apostle. What an ego rush!

Or, more plausibly, I might really care about my community and my faction's ideological convictions about what is best for the community. I might think that certain influential voices in my community are dangerously misguided in their views concerning women's equality. My faction's ideology tells me that if their views prevail it will be disastrous. So I put my misogynistic ideology into the mouth of Paul, hoping the misattribution will be believed so that the honor in which Paul is held will spill over onto my ideas for the community's future. It works, and my faction prevails.

What these examples show, I think, is that there are good reasons why communities that attach special reverence to the words of particular leaders might adopt conventional expectations of authenticity in authorial attribution, especially in relation to these revered leaders. That doesn't mean that early Christian communities did adopt such conventions--but it's a point in favor of Ehrman's claim that they did. All else being equal, early Christians would presumably prefer to be able to tell when a letter was actually written by Paul or Peter.

But here is something to keep in mind: Just as the merits of an argument don't change just because the argument was plagiarized, so too with forgery: An idea can be compelling, a poem beautiful, an argument sound, even if it is attributed to someone other than the real author. While Ehrman's points in Forged have bearing on certain legalistic ways of conceiving the authority of Scripture ("if it's in Scripture, it must be profoundly true even if it appears to be banal or misguided"), they do not prevent anyone from finding things of value in the pseudepigraphic texts. No matter who the author, and no matter what the author's motives for attributing authorship to someone else, if the idea is a good one, it's a good one. But in a book where authorship is in question, we cannot decide that an idea is a good one simply on the formal basis that it was presumably authored by someone taken to be a rich source of good ideas.

Wednesday, March 9, 2011

A Lenten Meditation

The other day I was cleaning up my office and found—in a stack of papers—the 2010 Ash Wednesday bulletin from my church. Since Ash Wednesday was only days away, I decided that instead of tossing it (which is what I’d typically do with a year-old church bulletin), I’d leave it on my desk to look through on Ash Wednesday.


And so there it was this morning. I sat down and read through it as a kind of morning devotion: Psalm 51, the extended responsive confession, the prayers. And I found myself pausing over certain key phrases, phrases that jarred me because of a conversation I’d had earlier in the morning, when someone I love had been beating themselves up about their personal failings.

I won’t repeat exactly what I said in response, but it involved the importance of recognizing one’s positive qualities and not exaggerating the negatives. It was a short exchange, but I kept thinking about it on the way to work. I was thinking about the difference between acknowledging honestly when you’ve done something wrong and making the commitment to do better, on the one hand, and defining yourself in terms of your failings, on the other. You can look at your bad choices and say, “I shouldn’t have done that; I’m better than that,” and then commit to doing better in the future. Or you can look at your bad choices and say, “That’s who I am. I’m a miserable failure of a human being.” The former gives you a good name to live up to, inspiring you to reach for the resources that will help you to grow into your best self. The latter imposes a kind of roadblock: You define yourself in terms of your worst moments; it’s who you are. And if it’s who you are, then you don’t have the resources to do better.

These were the thoughts going through my head as I read through last year’s Ash Wednesday bulletin. And my attention was caught by verse 5 of Psalm 51: “Indeed, I was born steeped in wickedness, a sinner from my mother’s womb.” I read and reread the extended prayer of responsive confession, a litany of our offenses “in thought, word, and deed,” our failures to love God and neighbor. And more: “…our self-indulgent appetites…our neglect of human need and suffering, and our indifference to injustice and cruelty…our prejudice and contempt toward those who differ from us…our waste and pollution of your creation, and our lack of concern for those who come after us…”

And I imagined those words becoming a kind of self-flagellation, a litany of verbal self-abuse.

It occurred to me that for many people, that is exactly what these words mean, and that is exactly what the season of Lent is about: beating yourself up for being such a miserable sinner. I recall all the times I’ve witnessed Christians repeat the words, “I confess that I am in bondage to sin and cannot free myself.” Sometimes these are just rote words, repeated without meaning. But sometimes it is something more terrible. They are lashing themselves with the words, as if being in bondage to sin were the same thing as being nothing but a sinner—as if the cage were the only thing there is, rather than that which keeps the dove from taking wing.

But in Psalm 51 the confession of wickedness is preceded by an assurance of God’s compassion and mercy; and the fifth verse, which seems to define the sinner in terms of their sin, is immediately followed, in verse 6, by the following: “Indeed, you delight in truth deep within me, and would have me know wisdom deep within.”

And then the psalm moves forward into the beautiful words so often repeated (sometimes sung) in Christian liturgies:

Create in me a clean heart, O God,
  and renew a right spirit within me.
Cast me not away from your presence,
  and take not your Holy Spirit from me.
Restore to me the joy of your salvation
  and sustain me with your bountiful Spirit.

The Psalm is about cleansing. And for something to be cleansed, there must be more to it than the dirt which is washed away. While Christians insist that we cannot cleanse ourselves, that we need the grace of God, this does not mean that there is nothing beneath the dirt. The words of Psalm 51 insist that sin and wickedness do not have a definitive hold on us. In its own way, at least for me, the psalm evokes the Genesis assurance that deep within we bear the “image of God,” and that despite our failings this is what we most essentially are. In our innermost being there is something precious, something that touches upon the divine—something into which divine truth and wisdom can flow, to cleanse us from the inside out.

The purpose of confession, of recognizing the scope of our failings, is not to justify self-loathing but to inspire the self-transcendence that comes when we open ourselves to that essential connection linking us to the divine. The narrow self that does not love enough, that neglects human suffering, that is indifferent to injustices, that wastes and pollutes the world--this self exists only to the extent that we cling to our sins, only to the extent that we say to ourselves, “This is my essence.” When we do that, how can we perceive cleansing as anything other than self-destruction? When we conceive ourselves too narrowly, telling a story about ourselves that makes no room for the image of God, Lenten confession is reduced to beating oneself up. And so we are forced to choose between denying our sins (hiding behind self-righteous justifications) and hating who we are.

But Lent isn’t about letting go of false self-righteousness in order to hate ourselves more perfectly. It’s about reaching for a third alternative. When the psalmist says, “The sacrifice of God is a troubled spirit; a troubled and broken heart, O God, you will not despise,” there is an assurance: Honestly facing our failings does not require that we hate ourselves. Why should we despise what God does not despise? It is possible to fully confront our most heart-breaking offenses in all their troubling reality, because they are not the end of the story.

If these offenses defined us, then being heart-broken about them wouldn’t be possible at all. It is because we are more than our sins, even our direst sins, that we can weep over them. To let our grief over offenses rise to the level of self-hatred is therefore to reach the wrong conclusion. That we can grieve over what we have done means that we are more than that, that we are greater than that. That we can grieve over our offenses means that self-hatred is misplaced, because there is in us that which rises above our worst acts.

In grieving over our offenses, we give voice to that within us which is of God. But in hating ourselves for our offenses, we blind ourselves to that within us which is of God. Lent is about the former, not about the latter. And because it is about the former, it does not end with grief and confession. It only begins there. Lent is a journey of self-transformation whose starting point is honesty about how we have fallen short. But from that starting point we must go on to say, “I am better than this.” And for Christians this acknowledgment is deeply rooted in our understanding of ourselves as beloved children of God.

Even if our sin is intolerable to God, we are not. If our sin is intolerable to God, it is because we are better than that. Divine forgiveness is nothing more and nothing less than God’s unblinking attention to that which is greater than our sin—greater precisely because it was born in an outpouring of divine love.

Lenten repentance, if it is to reflect this spirit of grace--if it is to be the window to divine grace that it is supposed to be--must be a recognition that we are better than our sin, and that this is what makes our sin so heartbreaking. And to perceive our sin in that way--to see it as God sees it--is to forgive ourselves as God forgives us.

Tuesday, March 8, 2011

Some Thoughts on Universalism: Recognizing the Inadequacy of Pat Responses

I’ve decided that one of the more recent comments on my RD article about the conservative backlash to Rob Bell is worth reflecting on here, simply because it helps to highlight some of the unconvincing lines of thought that are too often unreflectively thrown out there in response to universalism. If Christians (and other theists) are to converse thoughtfully about issues such as universalism, we need to take our thinking deeper than the pat responses allow. And for that to happen, the weaknesses of the pat responses need to be clear. So here is the text of the comment, in which at least some of the pat responses to universalism are put forward:


In reference to the part of your article that discusses "when all means some," one must look at the entire context of a passage in the bible. If you only look at one verse, it is easy to pull things out of context and assume things. When these passages refer to "all," they are talking about all who trust in Him. If everyone on this earth is automatically saved and we can do whatever the heck we want with no consequences, then God would not be a loving God. Just like a parent who disciplines his or her children out of love, so does our Father in heaven. Doesn't it seem like a huge waste of time for those who have a relationship with God to continue if they are going to heaven regardless? Never underestimate the power of God's grace but ever take it for granted either.

There are three points made here. The first has to do with biblical interpretation. On this issue the commenter stresses the importance of reading the Bible holistically and interpreting isolated passages in the light of such holistic reading.

I couldn’t agree more. If the Bible is to be seen as authoritative at all, then I think it must be in these terms, where the plain sense of isolated passages is subordinated to the core messages that emerge through a holistic reading. Such an approach fits well with seeing the Bible as the product of diverse voices writing at many times and places in history, reflecting on their understanding of and experience of the divine in terms of their own cultural lenses. It is by reflecting carefully and critically (in conversation with others) on a cloud of such fallible witnesses that we can begin to see the common themes that lie behind their limited perspectives, and thereby transcend those limitations.

My point in picking out the “universalist texts” in my article about Bell was precisely to highlight the fact that an approach to the Bible that prioritizes a narrow reading of the literal sense of isolated texts is not as such consistent with a confident endorsement of the doctrine of eternal hell—because there are isolated texts whose literal reading seems universalist. Given the complexity of the text—given that there are isolated passages that in their most straightforward sense support universalism, while others support damnation and a few support annihilationism—reaching theological conclusions based on what the text says requires critical thinking in light of the whole, something which is most effectively done in open conversation with those who have a different reading. It is only by respectfully considering the reasons and arguments of those who read the holistic message in different ways that we can reach responsible conclusions about what the whole tells us (if anything) about the eternal fates of human beings. And the fanaticism of “hellists” such as Piper and Taylor impedes just this sort of critical dialogue.

The second and third points raised by this commenter go beyond the matter of biblical interpretation to more philosophical reasoning about universalism and “hellism” in the light of core Christian teachings. The commenter offers, in effect, two arguments. The first is stated as follows: “If everyone on this earth is automatically saved and we can do whatever the heck we want with no consequences, then God would not be a loving God. Just like a parent who disciplines his or her children out of love, so does our Father in heaven.”

The argument here, in brief, tries to spell out the implications of the traditional Christian notion that God is essentially loving in something like the way that good parents are loving (“our Father in heaven”). If God is loving in this way, God would not let us do “whatever the heck we want with no consequences,” because good parents do not let their kids do whatever the heck they want with no consequences. Apparently, however, the alternative to letting your kids do whatever the heck they want with no consequences is to reject them utterly and completely, decisively casting them away from you and into an endless torture chamber.

Excuse the sarcasm—but it is helpful in calling attention to the false dilemma at work in this particular argument. Universalism is in part premised on the recognition that there are alternatives to coddling or “enabling” those you love (protecting them from all the negative consequences of their poor choices) and utterly rejecting those you love. Good parents do neither. As such, if God is like a good parent, God would do neither. So what would God do, given the extraordinary resources that God has available (on traditional theological assumptions)? That, of course, is one of the key questions that Christian debates about universalism and hellism need to grapple with. One cannot rule out the various universalist answers with nothing but a bizarre false dilemma (“bizarre” because the options presented are both ones that a God conceived in the Christian sense would seem to avoid). And there are various universalist answers, since, of the different ways God might respond to beloved but frequently misguided and willful creatures, more than one might be thought to culminate in the salvation of all.

Finally, the commenter asks, “Doesn't it seem like a huge waste of time for those who have a relationship with God to continue if they are going to heaven regardless?” The reasoning here seems to be pragmatic: If all are saved, then no one has any reason to have a relationship with God (because the only reason to have a relationship with God is in order to get into heaven). Hence, we must reject universalism if we want anyone to be motivated to cultivate a relationship with God.

Presented in these terms, the weaknesses of this argument essentially speak for themselves. But just in case it isn’t obvious, let me enumerate the problems here. First, there is the assumption that “getting into heaven” is the only reason anyone could have for continuing a relationship with God. This suggests that there is nothing intrinsically rewarding about nurturing such a relationship in this life, that those who pursue a relationship with God are doing it wholly for future rewards and get no immediate benefits from it.

Really? I could understand this perspective if God is conceived as a tyrant in the sky who rewards the loudest sycophants, or as an unpleasant uncle you might decide to spend time with so you can get written into his will. And there are certainly some people who do conceive of God in something like that way. But those who have had profound experiences of God's presence in their lives don't usually come away with an impression of God as a nasty tyrant or annoying uncle. They are like lovers smitten.

The baffling nature of this perspective really comes out when we begin to reflect on what Christianity has traditionally taken “heaven” to be. It is, simply put, having a relationship with God of the most immediate and powerful kind. Heaven just is intimate loving union with the creator—the beatific vision. And so the commenter’s question really amounts to this: Why should I bother to have a taste of heaven now when I’ll get to enjoy heaven later regardless?

Of course, the most significant challenge to universalism becomes apparent if we restate the question in a different way: Why should I bother to have a relationship with God now when I will eventually come to have the most intimate kind of relationship with God later? Stated in this way, it calls attention to the fact that relationships usually involve the voluntary participation of both parties. But doesn’t this mean that, if heaven consists in having a relationship with God of a particularly intimate and immediate kind, there can be no guarantee that all will come to experience heaven (since whether this happens depends on the free choices of the creature)?

In the theological debate between universalists and hellists, this question raises the most interesting and thorny philosophical issues—issues pertaining to the nature of freedom. Some think that freedom is such that, if we assume that human beings are really free, universalism has to be rejected (or simply held out as a hopeful possibility). Some people treat this position as uncontroversial—but is it?

I think the controversy here can be highlighted by asking a different but related question. Suppose someone is confronted with a standing offer that is never withdrawn. Suppose, furthermore, that rejecting the offer has natural consequences that are negative (because one has a nature such that accepting the offer is the only way to really be satisfied), and that these negative consequences become progressively worse the longer one rejects the standing offer (in the way that thirst or hunger become progressively worse the longer one rejects water and food). And suppose, finally, that the person has every conceivable reason to accept the offer (the person has come to see that accepting the offer is supremely good in every conceivable way) and absolutely no reason to reject it (the person has come to realize that all supposed reasons to reject the offer are utterly vacuous). On these assumptions, can we imagine that a person who is free to do otherwise would reject this offer forever?

In effect, the latter part of That Damned Book (whose actual working title is God’s Final Victory) aims to answer this question in the negative. But while I may have more to say about this issue in later posts, for now I want to highlight the controversial character of the view that divine respect for freedom is incompatible with a guarantee of universal salvation. This view is hardly beyond dispute—in fact, there is a powerful intuitive case for thinking that if people have every reason to choose something, no reason not to, and infinite opportunity to choose it (because it is a standing offer), they will eventually choose what they have every reason to choose and no reason not to choose. The choice will be free but inevitable.

Put simply, there is no pat biblical or philosophical/theological basis for dismissing universalism. Those who want to defend the traditional doctrine of hell need to confront some serious issues, and they should ideally refine their arguments in conversation with those who thoughtfully develop an alternative view.

Monday, March 7, 2011

Advertising, Religion, and the End of the World

Okay, so the title of this very brief blog post is a bit melodramatic--but I'm thinking of it as a working title for a future book that develops the ideas I presented a couple of weeks ago at the AAAS meeting in Washington DC. I've been thinking of offering a summary of that talk here--but a blogger named Brigid has saved me the trouble. She was apparently at the panel in which I gave my talk, and she does an excellent job of summarizing my main line of argument in a post entitled American Wind Power and the Power of Advertising. (My comment on that post has much more to do with the themes of this blog than the original talk did; the talk mentioned religion only in a brief concluding paragraph, as an invitation to further research.)

Thursday, March 3, 2011

New Religion Dispatches Article on Rob Bell

I decided to reflect a bit more deeply on the conservative Evangelical backlash against Rob Bell--a backlash that strikes me as a case study in "fanaticism," at least under one way of understanding that term. The result, Rob Bell Catches Hell from Conservatives, appears in today's Religion Dispatches.