Growing up gay or lesbian in America today isn’t easy—even though it may be less alienating than it used to be, thanks in large measure to progressive efforts to provide support services of various kinds. But now the Texas House of Representatives has taken steps to eliminate such support services at public universities.
More precisely, the Texas House has voted that any public university which funds a student center for sexual minorities must also extend equal funding to a “traditional values” center. But those behind the bill have made it clear that their real desire is to see LGBT (lesbian, gay, bisexual, transgender) centers defunded. The equal-funding bill simply seemed easier to pass, and may well produce the same result.
But what if, instead of encouraging the defunding of LGBT student centers, universities began actually creating “traditional values” centers? What would that look like? If the point is about parity, these centers would have to provide services similar in kind to those provided by LGBT centers. So what would those services be?
Of course, the question reveals the absurdity of the legislation. The purpose of LGBT centers on college campuses is to provide support for a minority group that endures a distinctive set of challenges. Many of these challenges relate to the legacy of growing up with a different sexuality than one’s peers. Adolescence is a time when the yearning for peer acceptance is especially great, and it is precisely at this time in their lives that gays and lesbians come to discover that they don’t have the same romantic and sexual feelings as those around them. More significantly, they discover that they have a sexuality that doesn’t fit into the deeply entrenched cultural norms they have been breathing in like air since they first watched Ariel, the little mermaid, pining for Prince Eric.
Even as our culture wrestles with the issue of same-sex marriage, children are carefully insulated from these debates and the reality of same-sex attraction. The weddings they hear about in their stories, the marriages they see modeled on “child-appropriate” television, are heterosexual ones. Any effort to introduce some alternative into the world of children—even in the form of a book about a pair of male penguins co-parenting a baby penguin at the zoo—is met with strident opposition. And so these alternative images remain exceedingly rare.
The result, of course, is that when gay and lesbian adolescents first become conscious of their same-sex attraction, their cultural paradigm offers no place for it. While their heterosexual peers hardly reflect on the direction of their sexual feelings, gays and lesbians may find it difficult to think about anything else. What they are learning about themselves is so anomalous that—at least in the absence of sufficiently high-profile LGBT support groups—a sense of deep isolation is almost inescapable.
And then, of course, comes the peer abuse. The targets of such abuse are usually identified well before they become sexually active—a fact which exposes the disingenuousness of the conservative pretense that homosexuality is a life choice rather than an innate sexuality. Adolescent abusers know better, quickly identifying the “faggot” in their midst long before this victim has “chosen a homosexual lifestyle.”
My best friend, a gay man, recalls that one of the things he was most grateful for in adolescence was his long legs, which enabled him to run fast. It meant that he was beaten up a bit less often than he might otherwise have been.
And then, of course, there is the broad social rejection. While gays and lesbians immediately experience their sexuality as an unchosen fact about themselves that they cannot change, they are told by their churches, by their communities, often by their parents, that it is a perverse and immoral choice. And because they know that what is being condemned is as inescapable as the color of their eyes, what they experience is not a condemnation of some behavior of theirs that they can change. What they experience is a condemnation of who they are. The occasional Hollywood movie or television program that challenges this condemnation becomes a kind of lifeline—and as they cling to it, they hear the conservative members of their community furiously insisting that the rope be cut.
Alienated, abused, and condemned for who they are, they emerge out of adolescence with a range of challenges that their heterosexual peers do not (if they emerge at all--gay teens are four times as likely to die by suicide as their heterosexual peers). And then they arrive as freshmen on their college campuses, and—perhaps for the first time in their lives—find an established support network in place, one that understands what they have been through and has resources to help them shake off the damaging messages they have received and come to terms with who they are.
Less than a year ago, the life struggles of sexual minorities were brought into sharp focus by a series of highly publicized suicides by young gay men. At least for many, consciousness was raised about the importance of extending special support to those who have been marginalized, abused, and denounced for their sexuality.
Apparently, the majority of the Texas House missed the message. The vote to create “comparable” centers promoting traditional values is hardly coherent. Once the reality of what LGBT centers are about is acknowledged, the requirement for an equivalent “heterosexuality center” would involve the absurd demand to provide support services for those who have endured the “hardship” of having a sexuality that is wholly accepted by their culture, the “struggle” of coming to terms with not being beaten up for who they are, and the challenge of never having been condemned for something about themselves they cannot change.
There is, however, something that opponents of LGBT centers get right. The existence of such centers is an implicit social acknowledgment that the pervasive marginalization of sexual minorities does real harm. And the traditional condemnation of homosexuality is deeply implicated in that marginalization. To recognize the validity of LGBT centers is to call into question the validity of the values that undergird the systematic marginalization of gays and lesbians, in much the same way that centers for racial minorities call into question the validity of racist values.
But this doesn't mean we shouldn't help gays and lesbians overcome the harms of growing up in the midst of intolerance and abuse--any more than it means we shouldn't have African American student centers. It means, rather, that we should honestly confront the potentially inconvenient questions that the need for such centers raises. And the questions become apparent only when we clearly understand why "parity" in these cases makes no sense--why African Americans and sexual minorities need such things as student centers in a way that members of the empowered majority do not. Seen in this light, pursuing legislation requiring “equivalent” centers for the empowered group (or, as the case may be, centers promoting the "traditional values" that perpetuate that group's privileged position) is really a way of trying to silence the inconvenient questions by pretending that parity is coherent when it is not.
"The children of God should not have any other country here below but the universe itself, with the totality of all the reasoning creatures it ever has contained, contains, or ever will contain. That is the native city to which we owe our love." --Simone Weil
Wednesday, April 27, 2011
Critiquing the Texas House--Satirical Version (because the devil hates to be mocked)
The other day, the Texas House of Representatives approved a budget bill requiring that public universities which fund GLBT student centers (that is, centers within student affairs offices that serve gay, lesbian, bisexual, and transgendered students) must extend the same resources to creating an equivalent “traditional values” student center. In attempting to understand the impact of such legislation on college students in Texas, an intrepid imaginary reporter here at The Piety that Lies Between tracked down Billy, a faux student at the University of Texas law school, who is an active member of one of the conservative groups promoting the legislation.
TPTLB: As you see it, what is the purpose of this new legislation?
Billy: Well, it’s clearly unfair that the people with…those…sexualities get to have an entire student services center devoted to them while the rest of us are left out. This legislation calls for parity.
TPTLB: So you want a student office that is specifically devoted to the unique needs of heterosexual students on campus?
Billy: That’s right. It’s only fair.
TPTLB: What kinds of services would such an office offer? I mean, if we’re talking about needs that are unique to heterosexuals, does that mean the center would operate like a kind of campus-based Planned Parenthood?
Billy: NOOOOOOOOOOO!!!!! (There is a long pause while Billy composes himself.) This is supposed to be a traditional values center, not some den of iniquity handing out free love instruction manuals and murdering babies.
TPTLB: I see. So what services would they provide?
Billy: Well, for example, they could bring speakers to campus who would defend traditional values.
TPTLB: Like loving your neighbor? Caring for the poor? That sort of thing?
Billy: Well, I suppose if they wanted to that would be okay. But the point of the center is to stand up for traditional sexual values.
TPTLB: Traditional sexual values? Like limiting sex to the context of marriage?
Billy: Yes, for example.
TPTLB: So does that mean the center you envision would advocate for same-sex marriage?
Billy: What?!? Why the &*$^!#!@ would you think the center would do that?
TPTLB: Well, as it stands, same-sex couples are systematically denied the opportunity to participate in the institution of marriage. This means that gays and lesbians can’t restrict their sexual expression to the context of marriage even if they wanted to.
Billy: What the crud are you talking about? The whole point of the traditional values center is to condemn those %^$#$!@ perverts.
TPTLB: Oh, so what you’re proposing is a special center on campus specially created for the purpose of condemning gays, lesbians, and other sexual minorities. Is your idea that if there’s a center on campus designed to serve the needs of a minority group that faces marginalization and abuse, it’s only fair that there be a center devoted to perpetuating that marginalization and abuse?
Billy: No.
TPTLB: No?
Billy: It’s not about perpetuating marginalization and abuse. It’s about standing up for God’s law.
TPTLB: Oh. So what you want is for the State of Texas to publicly fund religious centers whose sole aim is to promulgate religious teachings which explicitly condemn homosexuality? You think the “no establishment of religion” clause should be repealed from the constitution?
Billy: No! You’re just being a &!#$@. I want comparable centers so that universities aren’t being biased. That’s it.
TPTLB: Well, okay. Then let’s think about what a comparable center would look like. GLBT student centers exist to serve the distinctive needs of sexual minorities. These needs are uniquely tied to the consequences of growing up and living in a society whose norms and expectations make little or no room for their sexuality. Because of their differences, they are more frequently targeted for systematic physical and psychological abuse, euphemistically called "bullying." Some are beaten severely, even to the point of death, just because they’re gay. They experience systemic social marginalization by being excluded from participation in marriage, one of the basic institutions of our society. They are told by religious conservatives that their very sexuality is “intrinsically disordered,” that their very desires (whether they act on them or not) are always “sinful, impure, degrading, shameful, unnatural, indecent and perverted,” and that if they ever act on their romantic impulses by making a loving commitment to another person, they will be making a commitment to sin. As a result of all of this, sexual minorities often suffer from depression and even suicidal impulses. In fact, gay teens are up to four times as likely to kill themselves as their heterosexual peers. In recognition of this reality, universities have created student centers responsive to the distinctive needs of sexual minorities—for counseling services, for advocacy in the face of social prejudice, and for protection from various forms of abuse. What, exactly, would a comparable institution for heterosexuals look like?
Billy: Well, maybe straight people like me who stand up for traditional values are beginning to feel a bit…what was the word you used? Marginalized. I mean, maybe we’re starting to be abused for our beliefs. I wouldn’t be surprised if some good Christians are feeling a touch suicidal because of all the intolerance.
TPTLB: Is there evidence of such...intolerance?
Billy: Well, for example, if I were to walk through UT law school with a shirt on that said, 'Homosexuality is immoral,' if I were to do that, there would be an uproar. People would be upset, and it would be considered out of place and not acceptable to do that. I'd probably get a talking to.** I mean, can you imagine that? A talking to? I bet you none of those gays and lesbians you’re so protective about ever got themselves a talking to! I’m a victim, I tell you! A poor, suffering victim! I NEED A CENTER!
TPTLB: Um, my gay best friend has been beaten for his sexuality. Numerous times, in fact. He’s woken up after being knocked unconscious, his head bloody. After coming out in high school, my gay cousin had his home repeatedly vandalized, including having "fag" scrawled on his driveway. Have you been beaten up for being straight? Had your home vandalized for it?
Billy: You’re missing the point. Those of us who oppose homosexuality are beginning to feel more and more like a minority on college campuses. We aren’t allowed to call homosexuals gross abominations against God without having our views challenged. People actually have the audacity to tell us that our beliefs are wrong and that announcing them by sporting them on our fashion choices is inappropriate! I mean, that's practically censorship! Talk about social marginalization and systematic abuse! We need a publicly funded center on campus to stand behind us, to tell us that our beliefs are right and to stand up for our efforts to drive homosexuals back into the closet! We need a center on campus that prints “Anti-Gay and Proud” T-shirts for us to wear! And it’s high time that our public universities stand up for my right not to get a talking to when I publicly condemn other people for doing things that, because of my heterosexual orientation, I’m not tempted to do! I mean, do you realize how good it makes me feel to fixate on homosexuality as if it were the worst of sins? It feels great, because then avoiding sin is easy! It’s just a matter of avoiding sex with people I’m not even remotely attracted to! Wow! Couldn’t get any easier! I can keep on ignoring the plight of the poor, enjoy my privilege, and yet still feel great about myself because at least I ain’t no homosexual!
TPTLB: Um. Well. And you don’t think there’s any similarity between a center like the one you’re describing and, say, a white supremacist center on campus funded for the sake of creating parity in the face of the African American Student Affairs office?
(At this point Billy is so offended by the comparison that he storms away.)
**The words that appear in italics before the double asterisk are the actual words of Tony McDonald, a law student at UT Austin and an officer of the Young Conservatives of Texas, a group that worked with Republican Representative Wayne Christian on introducing the recent Texas legislation. The hyperbolic elaborations (as well as everything else attributed to Billy) are purely my own invention.
Monday, April 25, 2011
Meanings of the Resurrection
In honor of Easter, I want to reflect a bit on the symbolism of the resurrection story. I don't mean to offer an apologetic for the resurrection. That is, I don't intend to defend the reasonableness of believing in it--which I think is something that, as I mentioned in a comment on an earlier post, can only be done as part of a much broader project of reflecting on the value of holistic interpretations of lived experience. What I offer here is simply a reflection on what, symbolically, the story of resurrection means for Christians.
Like the symbol of the cross, the symbol of the empty tomb is polysemic—that is, it is heavy with a diversity of meanings. In its simplest terms it announces, “Death is not the end!”
Paul was arguably the first to develop a theological elaboration of this meaning. In terms of Paul’s teachings, Jesus’ empty tomb declares that Jesus has forged a pathway through death—past the final end of mortal existence—and established on the far side of that end a new beginning, a new life which has no end. For Paul, Jesus is the “firstfruits” of a general resurrection. And by such a general resurrection he had in mind an awakening from the “sleep” of mortal death, one in which all of us are brought into a new existence freed from the specter of death (see I Corinthians 15:14-22).
In conceiving of this triumph over death as involving a bodily resurrection, the tradition has affirmed the value of embodied existence. The tomb is empty because Jesus' new life was not achieved by abandoning his body but by reclaiming it--but reclaiming it in a redeemed form, that is, redeemed from its limitations, its fragility and propensity for degeneration. The message is that, despite its finitude, despite the evils that assail our material reality, there is an essential goodness to the physical universe, to our bodies and our embodiment, that deserves respect. Disdain or disregard for the physical world is not appropriate.
Taken in relation to the cross, the empty tomb has further meanings. It declares that what is conceived from a terrestrial standpoint as ultimate and total defeat, as final humiliation, is none of these things from the divine standpoint (and hence from the most complete, enveloping, and hence truest standpoint). Crucifixion, after all, was not merely a means of killing that involved intense physical suffering before death. It was also a graphic means of intimidation and a tool of public degradation. Human beings were treated worse than things—not merely as something to be used, but as objects of contempt. The purpose of crucifixion was to express towards a human being the very antithesis of respect.
To have the power to crucify another human being was to have the power to take away their lives in a manner that first stripped them of everything that gives life any value. And it was, at the same time, an act of triumphantly crowing over one’s victim—displaying for all the world to see just how helpless, just how disgraced, one could make another human being (before ultimately turning them into a thing in truth, that is, a corpse).
The empty tomb symbolically represents what such efforts at mortification achieve from God’s ultimate standpoint. We might express it as follows: “Look into the tomb and you begin to see what you’ve accomplished by such exercises of power. The tomb is not merely empty. It has been emptied. In the place of a corpse there is new life, eternal and incorruptible.” The empty tomb erases the pretensions of coercive power to define human worth. It declares that the use of force to degrade and destroy is worse than impotent. It has become the means whereby the intended victim has been exalted, whereby the target for destruction has been made indestructible.
Take another step back, looking at the empty tomb in relation to Jesus’ life and ministry, and we see a related message. In his ministry, Jesus faced human forces that wielded enormous terrestrial power: the power to crucify. As He began to teach—as He preached against the injustices of His age, as He lifted up the poor and rebuked those who profited at their expense—He gradually and cumulatively earned the enmity of the privileged.
And so the power to crucify was turned against Him. And in the face of that power--the power to kill in the most brutal and humiliating way--what did Jesus do? Did He flinch? Did the fear of death--the fear of death imposed by the wielders of secular power--silence Him? Did He stop "preaching truth to power"?
On the contrary, the gospel narrative is a narrative of unflinching and persistent insistence on doing the right thing, saying the right thing, following the path of truth and love, regardless of the costs. And in the face of such a commitment to the good, the threat of death is impotent. For it is the threat of violence that tyrants use to control others. Actual violence is done only to intimidate those who remain alive--or when efforts at intimidation fail. This is the key: killing in the face of unswerving allegiance to the good is an expression of failure. Tyrants kill those who cannot be controlled by the fear of death. Their killing is an expression of their impotence, like the tantrum of a child who is thwarted.
But the vividness of killing--especially for those who fear death--often masks this impotence, this failure. The empty tomb exposes it. It declares that when we live as Jesus did, with a commitment to the good that does not bow to the fear of death, then the good has triumphed over the forces of this world that rely on the threat of death. When people of good will act with unflinching love in the face of the power to crucify, when the power to crucify is utterly stripped of its capacity to change our allegiance to the good by even the tiniest fraction, then death has lost its hold on the living. The tomb is empty.
Wednesday, April 20, 2011
Piper's Fatal Patriarchy
Over on his blog I Think I Believe, Arni Zachariassen posted a video in which conservative evangelical preacher John Piper seeks to address the question of what a wife's submission to her husband is supposed to look like if her husband is abusing her. His answer should make anyone who has studied the dynamics of domestic abuse squirm in distress. Arni notes just how striking Piper's lack of wisdom on this matter is, and offers several incisive critical remarks.
In any event, the post inspired me to write a rather lengthy comment about what I take to be the root cause of Piper's lack of wisdom here. Readers of this blog may want to check out Arni's post both for its own intrinsic interest, and because I think it speaks to an issue that's come up on this blog before and that I want to dwell on more fully in future posts--namely, the idea that serious problems arise when religious communities and their leaders shape their ethics in terms of an uncritically embraced theology, as opposed to having their theology criticized and revised in the light of ethical insight.
Tuesday, April 19, 2011
New Religion Dispatches Feature: A Review of Vincent Bugliosi's Case for Agnosticism
Regular readers of the blog may be interested in my most recent Religion Dispatches essay, Unreasonable Doubt: Vincent Bugliosi Defends Agnosticism (not my title, but it's catchier than what I had). In it, I review what is touted as the agnostic contribution to the God debates: famed prosecutor Vincent Bugliosi's Divinity of Doubt. In a phrase that was edited out of the final version of the essay, I characterize Bugliosi's book in the following terms: he essentially agrees with the criticisms that atheists and theists level against each other, but rejects their defenses of their respective positions. The result, of course, is uncertainty.
But is the case for epistemic uncertainty about God's existence the same as a case for agnosticism? I don't think so. My main aim in the essay is to offer my thoughts on what sorts of arguments should have been taken up in a book that markets itself as a defense of agnosticism. I'm left wishing someone had written that book (or something a bit closer to it than what Bugliosi offers). Maybe someone will. (Bernard?)
One warning, however: This review essay was edited a bit more heavily than the essays I usually submit, probably because I spent less time doing my own tinkering and editing than I usually do (I think I felt some pressure to get the essay submitted the week that Bugliosi's book came out). Fortunately, as usual at Religion Dispatches, the editing was overwhelmingly beneficial: streamlining the prose, eliminating unnecessary qualifying phrases, cutting unnecessary digressions, all the while remaining sensitive to the substance of my argument.
But here's the warning: While the edits that were made to the section on the Kierkegaardian objection to agnosticism make it punchier and more readable, they also remove the "distancing" remarks that make it clear I am not personally endorsing Kierkegaard's view. Too many people close to me are agnostics for me to sincerely believe that agnostics inevitably lack a passional relationship with their world. For reasons I've expressed in an earlier post, I think Kierkegaard's arguments offer some reason to think it can be legitimate to take "leaps of faith" beyond the limits of what we know (although I think a conversation about what constraints should be imposed on such leaps is essential). But I am far from convinced that Kierkegaard demonstrates that taking such a leap on the God-question is required of anyone who wishes to live a fully human life.
In any event, the essay is more open than I would have liked to the sort of misreading that comes out in the first reader's letter (although a part of me wants to ask that reader if he read all the way to the end of the article). I've submitted a clarifying response, but as of this moment it has not yet been posted.
Monday, April 18, 2011
Some Earlier Distinctions Summarized and Applied to Morality
I think it may be helpful to summarize some points from my last “distinctions” post and bring them to bear explicitly on the question of “objective morality.”
Given the distinction between objectivity and subjectivity offered in that earlier post, it should be clear what I mean by the term “objective morality.” Put simply, I have in mind the conjunction of the following two theses: (1) Some moral judgments are true and others are false; (2) What makes a moral judgment true (or false) is never merely the fact that the one making the judgment is in certain subjective states (most notably in possession of certain attitudes and preferences) with respect to what the judgment addresses.
In other words, if the judgment at issue is “Rape is wrong,” the fact that I disapprove of rape is not sufficient to make rape wrong. If rape is wrong (which I am convinced it is) then what makes it wrong is more than the mere fact that I happen to disapprove of it. By implication, the mere fact that someone else happens to approve of it is insufficient to make rape right “for them.” In short, to be an objectivist about morality is to hold that subject S’s approval/disapproval of action A is not sufficient to render A moral/immoral “for S.”
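For readers who like things put schematically, the contrast can be captured as follows (the notation is my own shorthand, introduced purely for illustration, with S a subject and A an action):

\[
\textbf{Simple subjectivism:}\quad \mathrm{Disapproves}(S, A) \;\Rightarrow\; \mathrm{Immoral}_{S}(A)
\]
\[
\textbf{Objectivism:}\quad \neg\big[\,\mathrm{Disapproves}(S, A) \;\Rightarrow\; \mathrm{Immoral}_{S}(A)\,\big]
\]

The objectivist need not deny that disapproval is evidence of wrongness, or even that subjective responses partly constitute it (more on that below); what the objectivist denies is that disapproval alone suffices.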
There are two important points I want to stress about “objective morality” conceived in this way—points that I think it is crucial to keep in mind for the sake of avoiding confusions of various sorts. Both points have been made in other posts, so this is largely an exercise in recapitulation and reframing.
First, to say that morality is objective in the indicated sense is not to say that human subjectivity plays no role in constituting morality. This was part of the point I was hoping to make with my earlier April Fools' Day post about amusement. In order for there to be such a thing as “the funny,” there have to be creatures like us who react to things with amusement. In the absence of such creatures having such subjective responses, nothing would be funny. Funniness exists only in relation to risible beings. (I love the word “risible.”)
But it doesn’t follow that something is funny just in case one is amused by it—that, in other words, being in a subjective state of amusement is sufficient to make it true “for you” that it is funny. It doesn’t follow because it’s possible that Linda Zagzebski is right about emotions: they are “ways of seeing” things in the world (to be amused is to see something as funny; to be offended is to see something as rude) that can fit their intentional objects or not (in something like the way color experiences can fit with what is going on in the physical world—such that when you see something as red, you might be mistaken if, in fact, something has broken down in your color perception mechanisms so that color experiences no longer track the ways in which different objects differentially reflect different wavelengths of light).
(For more on this, see Zagzebski's book, Divine Motivation Theory).
The point is not to argue here that Zagzebski is right about emotional fittingness, but simply to stress that the fact that our subjective states are bound up with moral judgments is not enough to conclude that they aren’t objective in the sense I have in mind.
My second point is that to say morality is objective is different from saying that it is absolute. The former is about whether there is more needed for the truth of a moral judgment than the attitudes and preferences of the one making the judgment. The latter is about whether what is true of something in one context is necessarily true of it in all contexts. As I noted in a comment on my earlier post, even if the boiling point of water varied enormously from case to case, such that it was true of water that it boiled at precisely 100˚C only in very rare but specifiable contexts, it would still be objectively true that it boiled at 100˚C in those contexts.
To think of this distinction in connection with morality, it may be helpful to think of it in connection with a particular ethical theory. I choose one that I do not personally accept, but which has the virtue of being easy to quickly explain: a simple version of preference utilitarianism in its act utilitarian form. Act utilitarianism holds that the right action to perform in any situation is that act which, among all the available courses of action, has the best results for all affected. But what makes the results “best”? For the simple preference utilitarian, the value of an action’s consequences is a function of the actual preferences of the individuals affected. In other words, preference utilitarianism has an entirely subjective standard of value: what is good for me is determined by my preferences; what is good for you is determined by yours, etc.
But the utilitarian is convinced that it is not rational for me, in decision making, to prioritize my good just because it is mine. I must extend equal consideration to the good of all. And your good is what it is based on your preferences, not mine. And this means your good is, for me, an objective fact I must come to grips with: My preferring that you prefer Bellini to Lady Gaga does not make it true that you prefer Bellini to Lady Gaga. And so, what is true about the general good is determined almost entirely apart from my subjective preferences (which only determine what is good for me). And what is right for me to do is whatever maximizes the good of all affected—in other words, whatever does the most to satisfy the most preferences (typically weighted in terms of importance to the person).
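To make this decision procedure fully explicit, here is one rough formalization (the symbols are mine, offered only as an illustrative sketch of the theory just described): let A be the set of available actions, let i range over everyone affected, let s_i(a) measure how fully action a satisfies person i's actual preferences, and let w_i reflect the importance-weighting just mentioned. Then the simple preference act utilitarian holds that the right action is

\[
a^{*} \;=\; \arg\max_{a \in A} \; \sum_{i} w_{i}\, s_{i}(a)
\]

where, crucially, each s_i is fixed by person i's own preferences rather than by the preferences of whoever is making the moral judgment.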
Now as I said, I introduce this theory solely because it is a fairly simple one to understand, and hence one that can be introduced quickly for the sake of applying the absolutist/objective distinction to morality. What I want to do is suppose—purely for the sake of argument—that this form of utilitarianism is correct. If it is correct, what that means is that a judgment such as “John’s lying to Susan about his affair was wrong” is true or false based not merely (or even mainly) on the subjective attitudes of the one making the judgment, but based on the actual effect that John’s lying to Susan had on the welfare of everyone affected, where their welfare is conceived in terms of their actual preferences. And so, “John’s lying to Susan about his affair was wrong” is going to be either objectively true or objectively false, depending on the actual effects of the lie in the specific case.
But it should be clear that, given this version of utilitarianism, the moral status of lying will be highly context-dependent. We will have to look at instances of lying on a case-by-case basis. In one set of circumstances lying may be the thing that does the best job of satisfying the most preferences. In another it may not. But in each case, whether the lie is moral or not will depend on its total impact on preference-satisfactions, not on the approval or disapproval of the one making the judgment. Those who make moral judgments can therefore be mistaken. They can disapprove of what is right and approve of what is wrong—because the truth or falsity of such moral judgments is more than a matter of taste. Even though moral truth is highly contextual on this theory, it remains objective in the indicated sense.
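A toy illustration of this context-dependence, with numbers invented purely for the sake of the example: using the notation sketched above, suppose the importance-weighted preference-satisfaction totals for John's options come out differently in two different situations:

\[
\text{Context 1:}\quad \sum_{i} w_{i}\, s_{i}(\text{lie}) = 9 \;>\; 6 = \sum_{i} w_{i}\, s_{i}(\text{tell the truth})
\]
\[
\text{Context 2:}\quad \sum_{i} w_{i}\, s_{i}(\text{lie}) = 4 \;<\; 8 = \sum_{i} w_{i}\, s_{i}(\text{tell the truth})
\]

On the theory under consideration, lying is objectively right in Context 1 and objectively wrong in Context 2, and in neither case does the verdict depend on whether the one making the judgment happens to approve of lying.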
I should also note how this theory is related to culture. Clearly, culture strongly influences our preferences. As such, cultural context becomes enormously significant for determining what is right and wrong. But it doesn’t follow that morality is determined by culture. If the preference utilitarian theory is right, whole cultures can be mistaken in their moral judgments. For example, a culture might maintain that the enslavement of blacks is morally acceptable—but if the preference-satisfactions enjoyed by the beneficiaries of slavery are outweighed by the thwarting of the slaves’ preferences, the practice would be wrong despite the culture’s endorsement. Put another way, in this theory cultural context plays a role in what is morally true, but culture cannot dictate moral truth.
Even if you reject this species of utilitarianism (as I do, for reasons I won’t get into here), you might still believe that this theory is onto something. You might think (as I do) that the effect of one’s actions on human welfare is part of what makes them right or wrong, and that human preferences are part of what constitutes human welfare (and hence that welfare is partly a function of culture). And if so, then you will think that context—including cultural context—will play a big role in determining what is right or wrong. And so you will not be a moral absolutist. But that doesn’t mean you won’t be an objectivist.
Given the distinction between objectivity and subjectivity offered in that earlier post, it should be clear what I mean by the term “objective morality.” Put simply, I have in mind the conjunction of the following two theses: (1) Some moral judgments are true and others are false; (2) What makes a moral judgment true (or false) is never merely the fact that the one making the judgment is in certain subjective states (most notably in possession of certain attitudes and preferences) with respect to what the judgment addresses.
In other words, if the judgment at issue is “Rape is wrong,” the fact that I disapprove of rape is not sufficient to make rape wrong. If rape is wrong (which I am convinced it is) then what makes it wrong is more than the mere fact that I happen to disapprove of it. By implication, the mere fact that someone else happens to approve of it is insufficient to make rape right “for them.” In short, to be an objectivist about morality is to hold that subject S’s approval/disapproval of action A is not sufficient to render A moral/immoral “for S.”
There are two important points I want to stress about “objective morality” conceived in this way—points that I think it is crucial to keep in mind for the sake of avoiding confusions of various sorts. Both points have been made in other posts, so this is largely an exercise in recapitulation and reframing.
First, to say that morality is objective in the indicated sense is not to say that human subjectivity plays no role in constituting morality. This was part of the point I was hoping to make with my earlier April Fools Day post about amusement. In order for there to be such a thing as “the funny”, there have to be creatures like us who react to things with amusement. In the absence of such creatures having such subjective responses, nothing would be funny. Funniness exists only in relation to risible beings. (I love the word "risible").
But it doesn’t follow that something is funny just in case one is amused by it—that, in other words, being in a subjective state of amusement is sufficient to make it true “for you” that it is funny. It doesn’t follow because it’s possible that Linda Zagzebski is right about emotions: they are “ways of seeing” things in the world (to be amused is to see something as funny; to be offended is to see something as rude) that can fit their intentional objects or not (in something like the way color experiences can fit with what is going on in the physical world—such that when you see something as red, you might be mistaken if, in fact, something has broken down in your color perception mechanisms so that color experiences no longer track the ways in which different objects differentially reflect different wavelengths of light).
(For more on this, see Zagzebski's book, Divine Motivation Theory).
The point is not to argue here that Zagzebski is right about emotional fittingness, but simply to stress that the fact that our subjective states are bound up with moral judgments is not enough to conclude that they aren’t objective in the sense I have in mind.
My second point is that to say morality is objective is different from saying that it is absolute. The former is about whether there is more needed for the truth of a moral judgment than the attitudes and preferences of the one making the judgment. The latter is about whether what is true of something in one context is necessarily true of it in all contexts. As I noted in a comment on my earlier post, even if the boiling point of water varied enormously from case to case, such that it was true of water that it boiled at precisely 100˚C only in very rare but specifiable contexts, it would still be objectively true that it boiled at 100˚C in those contexts.
To think of this distinction in connection with morality, it may be helpful to think of it in connection with a particular ethical theory. I choose one that I do not personally accept, but which has the virtue of being easy to quickly explain: A simple version of preference utilitarianism in its act utilitarian form. Act utilitarianism holds that the right action to perform in any situation is that act which, among all the available courses of actions, has the best results for all affected. But what makes the results “best”? For the simple preference utilitarian, the value of an action’s consequences is a function of the actual preferences of the individuals affected. In other words, preference utilitarianism has an entirely subjective standard of value: what is good for me is determined by my preferences; what is good for you is determined by yours, etc.
But the utilitarian is convinced that it is not rational for me, in decision making, to prioritize my good just because it is mine. I must extend equal consideration to the good of all. And your good is what it is based on your preferences, not mine. And this means your good is, for me, an objective fact I must come to grips with: My preferring that you prefer Bellini to Lady Gaga does not make it true that you prefer Bellini to Lady Gaga. And so, what is true about the general good is determined almost entirely apart from my subjective preferences (which only determine what is good for me). And what is right for me to do is whatever maximizes the good of all affected—in other words, whatever does the most to satisfy the most preferences (typically weighted in terms of importance to the person).
Now as I said, I introduce this theory solely because it is a fairly simple one to understand, and hence one that can be introduced quickly for the sake of applying the absolutist/objective distinction to morality. What I want to do is suppose—purely for the sake of argument—that this form of utilitarianism is correct. If it is correct, what that means is that a judgment such as “John’s lying to Susan about his affair was wrong” is true or false based not merely (or even mainly) on the subjective attitudes of the one making the judgment, but based on the actual effect that John’s lying to Susan had on the welfare of everyone effect, where their welfare is conceived in terms of their actual preferences. And so, “John’s lying to Susan about his affair was wrong” is going to be either objectively true or objectively false, depending on the actual effects of the lie in the specific case.
But it should be clear that, given this version of utilitarianism, the moral status of lying will be highly context-dependent. We will have to look at instances of lying on a case-by-case basis. In one set of circumstances lying may be the thing that does the best job of satisfying the most preferences. In another it may not. But in each case, whether the lie is moral or not will depend on its total impact on preference-satisfactions, not on the approval or disapproval of the one making the judgment. Those who make moral judgments can therefore be mistaken. They can disapprove of what is right and approve of what is wrong—because the truth or falsity of such moral judgments is more than a matter of taste. Even though moral truth is highly contextual on this theory, it remains objective in the indicated sense.
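For readers who like to see a decision procedure spelled out, here is a minimal sketch of the calculation just described--my own toy formalization in Python, not anything drawn from the utilitarian literature. The weights, names, and preference sets are invented purely for illustration:

```python
# A toy model of act preference utilitarianism, on my own simplifying
# assumptions: each preference has a numeric weight (its importance to
# the person who holds it) and a set of available acts that would
# satisfy it. All names and numbers here are illustrative inventions.
from dataclasses import dataclass

@dataclass
class Preference:
    weight: float        # importance of this preference to its holder
    satisfied_by: set    # the available acts that would satisfy it

def total_good(act, preferences):
    """Sum the weighted preference-satisfactions this act would produce."""
    return sum(p.weight for p in preferences if act in p.satisfied_by)

def right_act(acts, preferences):
    """The right act is the available act that maximizes the total good."""
    return max(acts, key=lambda a: total_good(a, preferences))

# John's case: whether lying to Susan is wrong depends on the actual
# preferences of everyone affected, not on the judge's approval.
prefs = [
    Preference(5.0, {"tell_truth"}),  # Susan's strong preference to know
    Preference(2.0, {"lie"}),         # John's preference to avoid a painful scene
    Preference(1.0, {"tell_truth"}),  # a mutual friend's preference for honesty
]
print(right_act(["lie", "tell_truth"], prefs))  # -> tell_truth, in this case
```

Change the weights and the verdict can flip--which is just the context-dependence noted above, with the truth-maker remaining the actual preferences of those affected rather than the attitudes of the one judging.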
I should also note how this theory is related to culture. Clearly, culture strongly influences our preferences. As such, cultural context becomes enormously significant for determining what is right and wrong. But it doesn’t follow that morality is determined by culture. If the preference utilitarian theory is right, whole cultures can be mistaken in their moral judgments. For example, a culture might maintain that the enslavement of blacks is morally acceptable—but if the preference-satisfactions enjoyed by the beneficiaries of slavery are outweighed by the thwarting of the slaves’ preferences, the practice would be wrong despite the culture’s endorsement. Put another way, in this theory cultural context plays a role in what is morally true, but culture cannot dictate moral truth.
Even if you reject this species of utilitarianism (as I do, for reasons I won’t get into here), you might still believe that this theory is onto something. You might think (as I do) that the effect of one’s actions on human welfare is part of what makes them right or wrong, and that human preferences are part of what constitutes human welfare (and hence that welfare is partly a function of culture). And if so, then you will think that context—including cultural context—will play a big role in determining what is right or wrong. And so you will not be a moral absolutist. But that doesn’t mean you won’t be an objectivist.
Sunday, April 17, 2011
Hell, Bell, and Christian Sales Tactics
As most people interested in the Christian universalism-vs-hellism controversy already know, Rob Bell's recent book (and the conservative backlash to it) prompted Time Magazine to devote this week's cover article, "Is Hell Dead?", to the topic. As I was reading the article, I was particularly struck by journalist Jon Meacham's account of what lies behind the "traditionalist" resistance to questioning the doctrine of eternal damnation:
"If heaven, however defined, is everyone's ultimate destination in any event, then what's the incentive to confess Jesus as Lord in this life? If, in other words, Gandhi is in heaven, then why bother with accepting Christ? If you say the Bible doesn't really say what a lot of people have said it says, then where does that stop? If the verses about hell and judgment aren't literal, what about the ones on adultery, say, or homosexuality? Taken to their logical conclusions, such questions could undermine much of conservative Christianity."

Now the second part of this account covers issues I've discussed before. I've talked quite a bit about biblical inerrancy and literalism on this blog, and my recent RD article about the conservative backlash to Bell focuses mostly on the motivations that spring from a failure to distinguish one's own beliefs about God from the truth about God--a confusion (or deliberate blurring of distinctions) that seems to underlie much of the impetus for treating critical questions as anathema.
But what struck me first when reading this passage was the first part--the part which asks why anyone should bother to accept Christ, to confess Jesus as Lord, if it isn't true that all non-Christians roast for eternity in fiery torment of the most horrific imaginable kind. I mean, why should I bother to tuck my kids in at bedtime if failing to do so doesn't mean eternal anguish in the pits of hell? Why eat breakfast if I could skip breakfast and yet still avoid unremitting agony? Clearly, everything I choose to do would be pointless if the alternative to doing it weren't damnation.
Of course I'm being sarcastic. The point is that we do all kinds of things without being threatened with damnation if we don't do them. I tuck my kids in because I love them and because I enjoy tucking them in, not because I'm trying to avoid some bad result (let alone one of eternal duration and ultimate horror). And while this shows that the rhetorical question Meacham poses is only marginally coherent, it doesn't mean that Meacham is wrong to pose it as part of what lies behind traditionalist resistance to universalism. I've heard rhetorical questions of precisely this sort often enough from Christian conservatives to know that there is something in the vicinity of these questions that truly worries them.
Of course, part of what may really worry them is that Christ is being rendered inessential for salvation--which they think undermines Christ's life and sacrifice, trivializing the Incarnation and Atonement. But this worry is clearly misguided, since Christ is hardly made inessential by supposing that the scope of His success in achieving the salvation of humanity is universal. Christian universalists do not hold that all are saved apart from Christ's saving work, but that all are saved because of it.
Perhaps, then, what is made inessential is our subjective response to Christ--what evangelicals have in mind when they speak about "accepting Jesus Christ as Lord and Savior." But this doesn't follow from universalism either. The universalist could believe (and many Christian universalists do believe) that eventually everyone comes to make this subjective response--if not in this life, then at the moment of death or in a future state when the truth becomes clear to them in all its joyous glory (and the universalist might very well hold that this realization occurs only after a period of denial and rejection, during which they arguably suffer the natural consequences of living in alienation from God--a finite hell--and so come to see the intrinsic undesirability of such a condition).
(I won't pursue the free will arguments for eternal rejection here--if you're interested in why I find them unconvincing, buy John's and my book when it comes out, or look at a briefer version of the argument in my article in Universal Salvation? The Current Debate).
In any event, the point is this: Universalism neither entails that Christ is unnecessary for salvation nor that a subjective response of acceptance is unnecessary. It does, however, seem to entail that conversion to Christianity in this life, participation in Christian life, church attendance, etc., are unnecessary for avoiding eternal hell. If Gandhi--who had nice things to say about Jesus but remained a Hindu all his life--is not in hell, then being a Christian in this life is not necessary for avoiding hell.
But this brings me back to my original sarcastically-expressed point about the rhetorical question, "Why bother becoming a Christian if non-Christians are saved?" This question assumes that the only good reason to convert to Christianity in this life is what happens in the next, and more specifically that becoming a Christian in this life is the only way to avoid damnation in the next. But do Christians really believe that? Do they believe that there is nothing positive to be gained in this life from participation in Church life, nothing worthwhile that is gained during our earthly tenure by being a part of a Christian communion, by living with a sense of God's presence, by meditating on the gospel narrative, etc.?
Do any Christians seriously want to say that? If not, then the rhetorical question collapses on itself--because there are all sorts of reasons why someone might "bother" to embrace a Christian life even if the ultimate destiny of those who embrace a secular life, or a Hindu life, or a Buddhist life, is the same in the end.
For all these reasons, the only sense I can make of the rhetorical question so many conservatives ask is this: What they are really worried about (although they may not be fully conscious of this) is that they will be deprived of a tried-and-true sales gimmick that many Christians have been using for centuries in their efforts to swell the ranks of Christian churches. Specifically, the gimmick of making people scared of the consequences of not participating in Christian communities.
This is not a new conclusion for me--and I think I may have made the same basic point more eloquently in a post from a couple of years back, Selling Christianity. Nevertheless, it is a point worth making again. And if this is what is really going on, then Bell may have realized something that conservative evangelicals like John Piper haven't quite caught onto yet: This sales gimmick isn't working anymore.
Rather than being a selling point for participation in Christian life, the doctrine of eternal damnation is increasingly becoming a liability. In our pluralistic world, to cleave to a religion that says everyone else is going to roast is to cleave to something that is hard to see as anything but ugly. And the old theological arguments that try to paint it as something other than ugly, and that try to represent our uneasiness with the doctrine of hell as nothing more than a suspect side-effect of a demonized "enlightenment philosophy" (as if enlightenment philosophy were entirely divorced from the ethical ideas of the Christian culture in which it was born)--well, those arguments are sounding increasingly implausible.
I'm not suggesting that Rob Bell is just a salesman with a better marketing campaign. Rather, I am suggesting that Bell may better represent the values of the emerging generation of evangelicals--a point that finds support in a great recent essay by Rachel Held Evans. If so, then when the conservative establishment rails against Bell with cries of heresy and excommunications by Tweet, what we may be witnessing is a once-privileged group scrambling desperately to cling to a position of authority that is steadily slipping from their grasp.
I don't know if that is true, but I really think it might be.
Friday, April 15, 2011
Toemageddon
I must take a break from my recent series on distinctions to spend a few moments talking about Toemageddon. All you need to know about it is neatly summarized in this report by Jon Stewart from The Daily Show:
Now this news item particularly caught my attention because I had just finished reading a fascinating Smithsonian article about the transition, over the last century, from gender-neutral children's clothing--where "gender neutral" meant frilly dresses for both boys and girls until around age six--to the current patterns of gender-specific clothing. According to the article, pink did not become established as a "girl's" color until the middle of the 20th century. In fact, as the pink/blue convention was being established, there was no clear agreement about which color should be for which gender--some even insisting that boys should be the ones to wear pink because it is the more "masculine" color.
Obviously, this had ruinous effects on the gender identity of boys of earlier generations. Consider this photo of FDR when he was a small child:
Clearly, this goes a long way towards explaining why FDR and his generation of men were all such sissies. I mean, what did the men of that generation accomplish? They were clearly ruined by being put in girls' clothes. Likewise, boys who have joyfully playful moments with their mothers, as we find in the J. Crew ad, can only be harmed by it. Science teaches that a childhood full of affection, delight, and nonjudgmental play turns children (especially boys) into...well, um, FDR may have been an extremely influential President who overcame polio to lead the country out of the Great Depression and through the Second World War, but let's be real: He was a liberal by any Tea-Party-Fox-News standard I know.
And that, ultimately, is why we must stand firm against painted toenails.
Monday, April 11, 2011
Distinctions, Part II: Absolute vs. Objective Truths
In thinking about objectivism in morals, it is important to distinguish objectivism from something with which it is often confused in polemical debates--something that we might call "absolutism." This distinction is best characterized in relation to what absolutism and objectivism are paired against--their contrasting concepts, if you will--which are context-dependence and subjectivism respectively.
We're typically called absolutists or objectivists with respect to propositions in which we predicate something of an object, event, state of affairs, natural kind, etc.--that is, statements of the form "A is a p" or "A has property p." So, consider the following statement: "Water boils at 100˚C" ("Water has the property of boiling at 100˚C"). To be an absolutist about this is to hold that water has this property of boiling at 100˚C regardless of context. It is to say that this is a context-independent truth about water (that, in other words, the boiling point of water is not a function of any other variable). Of course, to say this is to be committed to something that is false. Indeed, we might say that it is objectively false. To be an absolutist about the boiling point of water is to believe things that are objectively false of water, and as such is to fail to believe what is objectively true of it (such as that its boiling point is in part a function of atmospheric pressure).
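To put a number on that context-dependence: the sketch below estimates water's boiling point from ambient pressure using the Antoine equation, with commonly published constants for water--an assumption of mine, imported purely for illustration; nothing in the argument turns on the exact values:

```python
# Water's boiling point as a function of atmospheric pressure, via the
# Antoine equation (pressure in mmHg, temperature in degrees C). The
# constants are commonly published values for water over roughly 1-100 C.
import math

A, B, C = 8.07131, 1730.63, 233.426

def boiling_point_c(pressure_mmhg: float) -> float:
    """Temperature at which water's vapor pressure equals ambient pressure."""
    return B / (A - math.log10(pressure_mmhg)) - C

print(round(boiling_point_c(760.0), 1))  # at sea level: ~100.0
print(round(boiling_point_c(630.0), 1))  # at roughly 1,600 m altitude: ~94.8
```

In each context there is an objectively correct answer about where water boils; what varies from case to case is the context, not the objectivity.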
Of course, I'm getting ahead of myself--but the point is that to be able to say this, "objective" must mean something different from "absolute." And indeed it does. When I say that "Water boils at 100˚C" is true objectively, I am not thereby saying that it is true absolutely. As such, if I say this, it is not an objection to my claim to point out that there are a range of conditions under which water does not boil at 100˚C. Either I will take this as a friendly amendment to my claim (as a more precise characterization of what the objective truth is, but not as a challenge to my main contention, which is that what is at issue is an objective truth), or I will treat it as an annoying failure to realize that I was speaking elliptically (that I was intending to refer implicitly to a set of agreed "standard conditions," and was simply saying, in an abbreviated way, "Water boils at 100˚C under these standard conditions").
So what do I mean when I say that it is objectively true that water boils at 100˚C? I mean that when I attribute the predicate "boils at 100˚C (under standard conditions)" to water, I am saying something of water that is true of water, as opposed to merely saying something about me (that I happen to feel 100˚C-iously towards boiling water, or something to that effect). By contrast, when I say from the swimming pool, "The water is pleasant," I am really just saying something about myself--that I happen to feel comfortable in the water. This latter claim is thus subjective rather than objective. In short, to say that "Water boils at 100˚C" is objectively true is to say that what one is saying is not primarily about oneself, that the truth-maker for the claim is not some wholly subject-relative response to the water. One has, in effect, discovered something about the water, as opposed to merely discovering something about oneself.
But the line here is trickier than it may at first appear. After all, there is a fairly narrow range of water temperatures such that anyone immersed in them would be inclined to call them pleasant. Given the nature of human physiology, anyone who leaped into near-boiling or near-freezing water and declared it pleasant would be lying, kidding, or suffering from a dangerous disability of the nervous system (the sense-of-touch equivalent of blindness). Furthermore, the property of being at 100˚C is the property of corresponding in a certain way to a measuring system created by human beings.
(In fact, it was created using the boiling point of water under standard conditions as the basis for setting the 100˚ mark--just to complicate matters and make me wish I'd chosen a different example. But let us set this aside for now and presume that "Water boils at 100˚C under standard conditions" is not simply an analytic truth, that is, true by definition).
The point is that water's boiling point being 100˚C is arguably a relational fact about how the behavior of water affects certain human artifacts (thermometers) in terms of a human system of measurement (the Celsius system). And one could imagine a much cruder system of measurement being highly effective for certain purposes--one involving immersing body parts in water, and appealing to the pleasantness or unpleasantness (and kind of unpleasantness) that resulted. ("Are you feeling hot or cold? On a scale of 1 to 10, just how badly do you want to get out of the water?") So why isn't "The water is pleasant" treated as stating an objective property of a specific body of water--only much cruder and vaguer, less informative (for those familiar with the relevant measuring system), than "The water is 30˚C"?
Perhaps the thing to say is this: While a temperature measurement is a relational property between the thing being measured and certain human artifacts (measuring tools and systems of measure), the artifacts are carefully designed to precisely and consistently track certain features of the water, and as such are designed to be unaffected (or largely unaffected) by variable features of the individual doing the measuring. There is a (relatively successful) attempt to refer to and track something that exists independent of the human subject--something that was true of water long before humans ever started sticking thermometers into boiling kettles (indeed, well before there were humans in existence). When someone says "The water is pleasant," while this statement does typically tell us something that is true of the water (insofar as human nervous systems are generally calibrated to generate unpleasant sensations outside of a certain fairly narrow range), what it is primarily aimed at telling us is something about how the subject feels (a certain qualitative state that the subject is in). And should it turn out that the person who makes this claim has an unusual physiology--an unusual resistance to hypothermia paired with a neurological resistance to cold temperatures that would set other people to shivering and seeking a quick escape from the water--it would remain true that, for him, the water was pleasant. Why? Because the statement is really about the qualitative state that the subject happens to be in, and as such remains true even if the "objective features of the world" that ordinarily correspond with that qualitative state are not present in this particular case.
To put the point another way, the truth-maker for a subjective statement is something "in the head" of the person making the statement--what makes the statement true or false is whether the person's consciousness is characterized by this subjective qualitative condition or not. By contrast, the truth-maker for an objective statement is something outside the head of the person making the statement--something "in" the object under discussion.
But this distinction leaves something out--something that may be helpfully pointed out if we change our example to one having to do with color. When it comes to color perception, the standard contemporary view is that our experience of color is linked to encountering light of different wavelengths. Our color perception is a fairly (but hardly perfectly) refined tool for tracking different wavelengths of light, and as such might be seen as doing for us something very like what a thermometer and a system of temperature measurement do for us: it gets us in touch with something "out there," tracking changes in the external world with a fair degree of precision.
More significantly, when we say that the ball is blue, we mean to be referring to something out there. That is, we intend to name a property that is possessed by the ball independent of our subjective qualitative states. At the same time, however, there is a qualitative subjective experience that corresponds with the term "blue." We can close our eyes and "picture" what blue is like. According to the dominant contemporary paradigm, this subjective color experience, blue's "quale" (to use a quasi-technical term from philosophy of mind), is "all in our heads" in the sense that it isn't actually a feature of the ball at all. Instead, what is "out there" is a surface that differentially reflects different wavelengths of light, such that more of the "blue" wavelengths are reflected and fewer absorbed. Our eyes have mechanisms for discerning this and communicating it to the visual center of the brain, which in turn somehow (mysteriously) plays a role in creating the subjective color experience with which we are immediately familiar.
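On this tracking picture, a color judgment works something like the following sketch--my own caricature, with band boundaries that are only rough textbook approximations:

```python
# A caricature of color judgment as wavelength-tracking: the judgment
# aims at the dominant reflected wavelength (something "out there"),
# not at the subject's quale. Band boundaries in nanometers are rough
# approximations, chosen only for illustration.

BANDS = [
    (450, 495, "blue"),
    (495, 570, "green"),
    (570, 590, "yellow"),
    (620, 750, "red"),
]

def color_judgment(dominant_wavelength_nm: float) -> str:
    """Map a dominant reflected wavelength to a color term."""
    for low, high, name in BANDS:
        if low <= dominant_wavelength_nm < high:
            return name
    return "something else"

# A ball whose surface preferentially reflects light around 580 nm:
print(color_judgment(580.0))  # -> yellow
```

The input here is a fact about the surface, not a fact about the viewer--which is why, as we'll see, a color judgment can be objectively wrong even when the viewer's color experience is perfectly vivid.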
But let us suppose that Mary has suffered a head injury, and that--while possessed of vivid color experiences like the rest of us--has these experiences in a manner that no longer reliably tracks what is "out there." Her color experiences used to track--and so she learned how to use color language, and for a long time wedded her language usage to her qualitative color experiences with great success: She'd see something, experience it as blue, call it blue, others would agree, and everyone was happy.
But not anymore. (If this way of framing the example seems unnecessarily convoluted to you, it's because you haven't read Wittgenstein--who probably would still be unhappy with my way of putting this example despite my care). Now, when Mary sees the ball, has an immediate color experience, and calls it blue, others say things like, "Um, that's yellow. Do you need to get your eyes checked?" She tries to relearn her language usage--but the next time she sees a "seemingly" blue object and calls it yellow, she's told that it's red.
We can imagine that she gives up, concluding that her color experiences no longer track onto anything objective in the world. But suppose, instead, that--being rather stubborn--she insists that everyone else has got it wrong. She sees that the ball is blue--and so it is. In that case, when the ball in question happens to be preferentially reflecting light in the yellow spectrum, we'd be inclined (well, I'd certainly be inclined) to say that her subjective color experience doesn't fit with the objective reality, and hence that she is objectively mistaken in attributing "blue" to the ball--even if (as may be the case) the subjective color experience she is having is the same one that, before her accident, tracked "blue objects" very well (and--although I'm not sure how we could know this--is the same one that I have when I see blue objects).
The point of all of this is that much of what goes on in color experience is "subjective"--but color judgments are not subjective ones, because the purpose of color judgments is to track something in the world that is independent of the subject--to say something that is true of objects in the world. When my subjective qualitative color experiences, produced during my visual encounter with the external world, "fit" with their intentional object (in the way that Mary's do not), the judgments that follow from them are objectively true. When these subjective color experiences do not fit (as is the case with Mary), then the judgments that follow (in Mary's case because she is too stubborn to give up making such judgments) are objectively false. And Mary's judgment that the ball is blue is objectively false even though it is true of Mary that she is having a subjective "blue" color experience when she looks at the ball. What makes it true that the ball is blue is not that Mary has an experience of this sort, but that the experience "fits" the ball--in something like the way that it would in the case of someone with normally functioning vision, or in something like the way that temperature measurements fit their objects when the measuring equipment and scale are not faulty.
In any event, it should be clear that being objective in this sense is nothing like being absolute--and that it does not preclude subjective experiences of a certain kind being the primary mechanism through which (objective) judgments are reached.
Tuesday, April 5, 2011
Distinctions, Part I: Contrasting the Epistemic Circumstances Underlying Agnosticism and Fallibilism
It seems to me that some of the conversations we are having on this blog could benefit from making some distinctions. As such, I want to devote a couple of posts to simply making some distinctions that I think may prove helpful--although in some cases the distinctions are subtle and hard to make, and so I welcome advice in refining them.
The first distinction I want to make has to do with a pair of contrasting epistemic circumstances that, it seems to me, are often inadequately distinguished. I say this because, as I reflect on what I was doing in Is God a Delusion?, I worry that I was blurring together these contrasting epistemic circumstances myself.
The distinction bears on the relationship between two other contrasting concepts, namely agnosticism and what I like to call fallibilism. These are, if you will, contrasting epistemic attitudes. In roughest terms, to be an agnostic is to withhold belief on a matter, whereas to be a fallibilist is to have a belief but recognize that you could be mistaken, that those who disagree with you could have some or all of the truth, and that it is important to comport yourself accordingly.
What I want to suggest is that these contrasting epistemic attitudes may be correlated with contrasting epistemic circumstances--where agnosticism is (all else being equal) the most fitting response to one while fallibilism (again all else being equal) is the most fitting response to the other. It is the distinction between these underlying epistemic circumstances that I want to try to get at.
The contrasting epistemic circumstances I have in mind are ones that might be faced by reasonable people confronted with a body of evidence. For the sake of sketching out these circumstances, I am going to leave the concepts of “evidence” and “reasonable people” largely unanalyzed. All I will say is that when I speak about "presumptive evidence" below, I mean to use the term in a broad sense so as to include anything that can be propositionally expressed, where that proposition strikes one as clearly ("evidently") true (in the way that propositions which express what one's senses are immediately delivering strike one as clearly true), and its truth supports the truth of some other proposition(s). But this definition is itself couched in terms that require unpacking, and which might be understood in different ways. In short, I am fully aware that these terms are not uncontroversial, but I am going to sidestep these controversies for the sake of focusing on a different issue, while remaining fully conscious of the fact that in order to adequately address the issues raised here, it is likely that we will eventually need to return to these more basic controversies.
Here, then, are the two epistemic circumstances I want to consider:
Epistemic Circumstance 1 (EC1): You confront a body of presumptive evidence that "reasonable people" (however that is to be understood) generally accept, but you recognize that there are different ways of fitting that evidence into a coherent whole—different "stories" we can tell that fit just as well with the given evidence. In other words, we have certain mutually exclusive holistic ways of seeing the evidence, each of which maps onto the evidence just as well. For simplicity, let us assume there are only two such ways of seeing that fit as well onto the evidence, which we will call Worldviews A and B.
Epistemic Circumstance 2 (EC2): You confront a body of presumptive evidence that reasonable people generally accept, as well as certain further “apparent truths,” that is, things you experience as clearly true/self-evident/obvious/hard to deny/intuitively correct. But some of the people you regard as rational don’t find these apparent truths nearly as apparent as you do, and may instead find other things evident which are hardly evident to you. So, within the total body of “evidence” with which you are confronted, some of it is “shared evidence” whereas some of it is “personal evidence.” Now suppose that, as before, Worldviews A and B both map onto the shared evidence (and are the only worldviews you have so far encountered that do this). But now let us suppose, furthermore, that Worldview A maps well onto the conjunction of the shared evidence and your personal evidence, while B doesn’t (accepting B would force you to abandon things that seem clearly right to you). At the same time, Worldview B maps well onto the conjunction of the shared evidence and what is apparently the personal evidence of reasonable people other than you.
These, in brief, are the two contrasting epistemic circumstances I want to consider. They are not meant to be exhaustive. There may even be a kind of continuum between EC1 and EC2--that is, a range of cases that are a bit like both in one way or another, some more like one and some more like the other. As such, they might be seen as ideal types.
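If it helps, the contrast can be caricatured in a few lines of code, with worldviews and bodies of evidence reduced to bare sets of propositions--entirely my own toy model, not a serious piece of formal epistemology:

```python
# EC1 vs EC2 in miniature: say a worldview "fits" a body of evidence
# when it can accommodate every item in it. All sets are toy inventions.

shared_evidence = {"e1", "e2"}
my_personal_evidence = {"e3"}       # apparent truths others don't share
their_personal_evidence = {"e4"}    # apparent truths I don't share

worldview_A = {"e1", "e2", "e3"}    # accommodates shared + my personal evidence
worldview_B = {"e1", "e2", "e4"}    # accommodates shared + their personal evidence

def fits(worldview: set, evidence: set) -> bool:
    return evidence <= worldview    # subset test: every item accommodated

# EC1: on the shared evidence alone, A and B are tied.
print(fits(worldview_A, shared_evidence), fits(worldview_B, shared_evidence))  # True True

# EC2: add my personal evidence, and only A fits--for me.
print(fits(worldview_A, shared_evidence | my_personal_evidence))  # True
print(fits(worldview_B, shared_evidence | my_personal_evidence))  # False
```

In EC1 the tie is genuine; in EC2 it is broken, but only by evidence that others do not share--which is why, as I argue below, fallibilism rather than agnosticism seems the fitting response to EC2.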
Now if you find yourself in EC1, there is a clear sense according to which, from your standpoint, A and B are equally plausible on the evidence. Put another way, there is no reason, on the evidence, for you to favor A over B. If A and B are the only ways of seeing that you've so far encountered that offer a good fit with the evidence, you may have some reason to endorse the disjunctive proposition, "A or B." But the evidence as such favors neither.
This doesn't mean that you won't have pragmatic or personal reasons for operating as if Worldview A is true and not Worldview B. You might find A more hopeful. Or you might like who you are better when you live as if A is true. Or perhaps you’ve grown up with a community that embraces A, and you continue to have a sense of solidarity with that community. Or perhaps you’ve tried to see the world through the lens of B and it just doesn’t sit right with you because of what you identify as mere quirks of personality. Or perhaps it is a combination of these factors. In any event, you recognize these factors as personal and idiosyncratic ones, and you see those who choose to operate as if B is true (or who choose neither, assuming that is pragmatically possible) as being motivated by personal idiosyncratic motives that are no better and no worse than the ones that motivate you.
Put another way, whatever it is that you take to be motivating you to adopt A over B, you don't take that something to be evidence for the truth of A. It is, rather, a practical reason for you to engage in the act of behaving as if A is true, even though you don't think the evidence especially favors A over B. In a real sense, you experience A and B as equally plausible, and you see your own choice of A as nothing more than a story you like to tell yourself. As such, on a theoretic or intellectual level your stance is clearly agnostic (although, on a pragmatic level, you qualify as a kind of pragmatic believer in A).
But now suppose you find yourself in EC2. In this case, in addition to whatever pragmatic and personal reasons might move you towards Worldview A in EC1, you also have the further reason that Worldview A fits with the totality of the evidence available to you in a way that B does not. You have available to you a set of apparently compelling truths that you are striving to account for, and A accounts for them well. B, however, would force you to give up on a subset of the things that seem obviously correct to you. So, in terms of everything that strikes you as evident, A is more defensible than B.
But the very subset of apparent truths you would have to give up were you to accept B is a subset which some people—who otherwise seem like reasonable people to you—do not find intuitively obvious in the way you do. And this gives you some reason to attach less weight to the “personal” evidence than you do to the shared evidence, and so to be less confident than you might otherwise have been about A.
Put simply, looked at purely on its own merits, the personal evidence seems as compelling to you as the shared evidence. But the fact that others do not see the personal evidence in the same way that you do--in addition to being the primary reason why it ends up being put into the "personal" rather than the "shared" category--gives you reason (especially in the face of a general awareness of your own fallibility) to have doubts about the reliability of the personal evidence. But you are also aware that those who don't find the evidence compelling are also fallible--and you have no special reason to think that you are the one who is wrong in this case, rather than them. For all you know, the reason you regard as clearly true what others don't is that you are so situated as to be able to immediately intuit truths that these others can't intuit from where they are situated.
Put in somewhat more technical terms, you have so far not encountered any "defeaters" for your personal evidence--that is, nothing that gives you clear reason to believe either that what seems right to you is false, or that the mechanism whereby it comes to seem right is sufficiently suspect to place no credence in its fruits. All you have, at this point, is the immediate intuitive sense that something is the case, along with the clear awareness that other seemingly reasonable people lack this sense.
In EC1, your reasons for favoring A over B are ones that do not appear to you as evidence for the truth of A, and in this sense are seen by you as nothing but pragmatic reasons to operate as if A is true. But in EC2, your reasons for favoring A over B have the "look and feel" of evidence, that is, they seem to be truths that speak in favor of the truth of A. And this makes your epistemic situation clearly different. It means, among other things, that when you endorse A, it is because A seems right to you in a way that B does not. You favor A over B on the basis of considerations that present themselves to you as evidence for the truth of A and against the truth of B. In this sense, you are not an agnostic on the theoretical level (although you may be a kind of pragmatic agnostic, insofar as you choose to operate as if either A or B were equally plausible for pragmatic reasons).
But the broader features of EC2 may inspire you to adopt an attitude of fallibilism with respect to your endorsement of A. It might also inspire you to reassess the reasonableness of those you had previously taken to be reasonable, insofar as they do not accept what strikes you as clearly true. But at least in general, fallibilism seems a better "fit" with EC2. If so, then while A just seems right to you in a way that B does not, you also know that you are fallible, and you know that some of the evidence you are using in arriving at A is not regarded as veridical by other people who otherwise seem eminently reasonable. This fact alone does not make the evidence seem less veridical to you, but it does motivate an attitude of due caution, a willingness to investigate, to hear opposing arguments and be open to being moved by them if they do amount to "defeaters" of your presumptive evidence. And it also makes you resistant to condemning those who endorse B.
In effect, you are inclined to say, "This is an issue about which it seems that reasonable people can disagree; but in my judgment, based on the considerations that seem convincing to me, A seems true and B false. That others reach the opposite judgment calls for respect, but it doesn't as such require that I abandon my judgment. Rather, it only requires that I hold to the judgment fallibilistically."
A few final remarks. It seems clearly possible for one person to be in EC1 and another in EC2 with respect to the same body of shared evidence. That is, Amy may confront the shared evidence without any substantive personal evidence to add, and so may see A and B as equally plausible on the evidence--while Ben confronts the shared evidence with a supplement of personal evidence, on the basis of which Ben sees A as clearly more plausible than B. In such a case, Amy might decide for pragmatic reasons to operate in terms of A--in which case we might say that both are pragmatic believers in A but Ben is also a theoretic believer (whereas Amy is a theoretic agnostic). But there are a number of alternative permutations here that need to be kept in mind--permutations which can generate theoretic accord and practical divergence, etc.
I will confess that this is a first run at my thinking on this issue, and so it doubtless needs considerable refinement. Thoughts?
The first distinction I want to make has to do with a pair of contrasting epistemic circumstances that, it seems to me, are often inadequately distinguished. I say this because, as I reflect on what I was doing in Is God a Delusion?, I worry that I was blurring together these contrasting epistemic circumstances myself.
The distinction bears on the relationship between two other contrasting concepts, namely agnosticism and what I like to call fallibilism. These are, if you will, contrasting epistemic attitudes. In roughest terms, to be an agnostic is to withhold belief on a matter, whereas to be a fallibilist is to have a belief but recognize that you could be mistaken, that those who disagree with you could have some or all of the truth, and that it is important to comport yourself accordingly.
What I want to suggest is that these contrasting epistemic attitudes may be correlated with contrasting epistemic circumstances--where agnosticism is (all else being equal) the most fitting response to one while fallibilism (again all else being equal) is the most fitting response to the other. It is the distinction between these underlying epistemic circumstances that I want to try to get at.
The contrasting epistemic circumstances I have in mind are ones that might be faced by reasonable people confronted with a body of evidence. For the sake of sketching out these circumstances, I am going to leave the concepts of “evidence” and “reasonable people” largely unanalyzed. All I will say is that when I speak of "presumptive evidence" below, I mean the term in a broad sense, so as to include anything that can be propositionally expressed, where the proposition strikes one as clearly ("evidently") true (in the way that propositions expressing what one's senses are immediately delivering strike one as clearly true), and where its truth supports the truth of some other proposition(s). This definition is itself couched in terms that require unpacking and that might be understood in different ways. In short, I am fully aware that these terms are not uncontroversial, but I am going to sidestep those controversies for the sake of focusing on a different issue, while remaining fully conscious that, in order to adequately address the issues raised here, we will likely need to return to these more basic controversies eventually.
Here, then, are the two epistemic circumstances I want to consider:
Epistemic Circumstance 1 (EC1): You confront a body of presumptive evidence that "reasonable people" (however that is to be understood) generally accept, but you recognize that there are different ways of fitting that evidence into a coherent whole—different "stories" we can tell that fit equally well with the given evidence. In other words, we have certain mutually exclusive holistic ways of seeing the evidence, each of which maps onto the evidence equally well. For simplicity, let us assume there are only two such ways of seeing that fit the evidence this well, which we will call Worldviews A and B.
Epistemic Circumstance 2 (EC2): You confront a body of presumptive evidence that reasonable people generally accept, as well as certain further “apparent truths,” that is, things you experience as clearly true/self-evident/obvious/hard to deny/intuitively correct. But some of the people you regard as rational don’t find these apparent truths nearly as apparent as you do, and may instead find other things evident which are hardly evident to you. So, within the total body of “evidence” with which you are confronted, some of it is “shared evidence” whereas some of it is “personal evidence.” Now suppose that, as before, Worldviews A and B both map onto the shared evidence (and are the only worldviews you have so far encountered that do this). But now let us suppose, furthermore, that Worldview A maps well onto the conjunction of the shared evidence and your personal evidence, while B doesn’t (accepting B would force you to abandon things that seem clearly right to you). At the same time, Worldview B maps well onto the conjunction of the shared evidence and what is apparently the personal evidence of reasonable people other than you.
These, in brief, are the two contrasting epistemic circumstances I want to consider. They are not meant to be exhaustive. There may even be a kind of continuum between EC1 and EC2--that is, a range of cases that are a bit like both in one way or another, some more like one and some more like the other. As such, EC1 and EC2 might be seen as ideal types.
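For readers who find a bit of notation helpful, here is one rough way to summarize the contrast--a Bayesian-flavored sketch of my own, on which nothing below depends. Let E_s stand for the shared evidence, E_p for your personal evidence, and P for how plausible a claim looks from where you stand:

\[
\text{EC1:}\quad P(A \mid E_s) \approx P(B \mid E_s)
\]
\[
\text{EC2:}\quad P(A \mid E_s) \approx P(B \mid E_s), \quad \text{yet} \quad P(A \mid E_s \wedge E_p) \gg P(B \mid E_s \wedge E_p)
\]

In EC1, your total evidence leaves A and B on a par. In EC2, the shared evidence alone leaves them on a par, but your total evidence favors A.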
Now if you find yourself in EC1, there is a clear sense in which, from your standpoint, A and B are equally plausible on the evidence. Put another way, there is no reason, on the evidence, for you to favor A over B. If A and B are the only ways of seeing that you've so far encountered that offer a good fit with the evidence, you may have some reason to endorse the disjunctive proposition, "A or B." But the evidence as such favors neither.
This doesn't mean that you won't have pragmatic or personal reasons for operating as if Worldview A is true and not Worldview B. You might find A more hopeful. Or you might like who you are better when you live as if A is true. Or perhaps you’ve grown up in a community that embraces A, and you continue to have a sense of solidarity with that community. Or perhaps you’ve tried to see the world through the lens of B and it just doesn’t sit right with you, for reasons you identify as mere quirks of personality. Or perhaps it is a combination of these factors. In any event, you recognize these factors as personal and idiosyncratic, and you see those who choose to operate as if B is true (or who choose neither, assuming that is pragmatically possible) as being motivated by personal, idiosyncratic motives that are no better and no worse than the ones that motivate you.
Put another way, whatever you take to be motivating you to adopt A over B, you don't take it to be evidence for the truth of A. It is, rather, a practical reason to engage in the act of behaving as if A is true, even though you don't think the evidence especially favors A over B. In a real sense, you experience A and B as equally plausible, and you see your own choice of A as nothing more than a story you like to tell yourself. As such, on a theoretic or intellectual level your stance is clearly agnostic (although, on a pragmatic level, you qualify as a kind of pragmatic believer in A).
But now suppose you find yourself in EC2. In this case, in addition to whatever pragmatic and personal reasons might move you towards Worldview A in EC1, you also have the further reason that Worldview A fits with the totality of the evidence available to you in a way that B does not. You have available to you a set of apparently compelling truths that you are striving to account for, and A accounts for them well. B, however, would force you to give up on a subset of the things that seem obviously correct to you. So, in terms of everything that strikes you as evident, A is more defensible than B.
But the very subset of apparent truths you would have to give up were you to accept B is one that some people--people who otherwise seem reasonable to you--do not find intuitively obvious in the way you do. And this gives you some reason to attach less weight to the “personal” evidence than to the shared evidence, and so to be less confident about A than you might otherwise have been.
Put simply, looked at purely on its own merits, the personal evidence seems as compelling to you as the shared evidence. But the fact that others do not see the personal evidence the way you do--in addition to being the primary reason it ends up in the "personal" rather than the "shared" category--gives you reason (especially in light of a general awareness of your own fallibility) to have doubts about the reliability of the personal evidence. But you are also aware that those who don't find this evidence compelling are fallible too--and you have no special reason to think that you, rather than they, are the one who is wrong in this case. For all you know, the reason you regard as clearly true what others don't is that you are so situated as to be able to immediately intuit truths that these others cannot intuit from where they stand.
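In the same rough notation as before (again, just an illustrative sketch of my own), we might let r stand for your confidence that the faculty delivering your personal evidence is reliable. Then, by the law of total probability:

\[
P(A) \;=\; r \cdot P(A \mid E_p \text{ veridical}) \;+\; (1 - r) \cdot P(A \mid E_p \text{ not veridical})
\]

Peer disagreement gives you some reason to lower r, and so tempers your confidence in A; but since your disagreeing peers are fallible too, it gives you no reason (so far) to drive r anywhere near zero.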
Put in somewhat more technical terms, you have so far not encountered any "defeaters" for your personal evidence--that is, nothing that gives you clear reason to believe either that what seems right to you is false, or that the mechanism whereby it comes to seem right is sufficiently suspect to place no credence in its fruits (what epistemologists, following John Pollock, call "rebutting" and "undercutting" defeaters, respectively). All you have, at this point, is the immediate intuitive sense that something is the case, along with the clear awareness that other seemingly reasonable people lack this sense.
In EC1, your reasons for favoring A over B are ones that do not appear to you as evidence for the truth of A, and in this sense are seen by you as nothing but pragmatic reasons to operate as if A is true. But in EC2, your reasons for favoring A over B have the "look and feel" of evidence; that is, they seem to be truths that speak in favor of the truth of A. And this makes your epistemic situation clearly different. It means, among other things, that when you endorse A, it is because A seems right to you in a way that B does not. You favor A over B on the basis of considerations that present themselves to you as evidence for the truth of A and against the truth of B. In this sense, you are not an agnostic on the theoretical level (although you may be a kind of pragmatic agnostic, insofar as you might choose, for pragmatic reasons, to operate as if A and B were equally plausible).
Now, the broader features of EC2 may inspire you to adopt an attitude of fallibilism with respect to your endorsement of A. They might instead inspire you to reassess the reasonableness of those you had previously taken to be reasonable, insofar as they do not accept what strikes you as clearly true. But at least in general, fallibilism seems the better "fit" with EC2. If so, then while A just seems right to you in a way that B does not, you also know that you are fallible, and you know that some of the evidence you are using in arriving at A is not regarded as veridical by other people who otherwise seem eminently reasonable. This fact alone does not make the evidence seem less veridical to you, but it does motivate an attitude of due caution: a willingness to investigate, to hear opposing arguments, and to be open to being moved by them if they amount to "defeaters" of your presumptive evidence. And it also makes you resistant to condemning those who endorse B.
In effect, you are inclined to say, "This is an issue about which it seems that reasonable people can disagree; but in my judgment, based on the considerations that seem convincing to me, A seems true and B false. That others reach the opposite judgment calls for respect, but it doesn't as such require that I abandon my judgment. Rather, it only requires that I hold to the judgment fallibilistically."
A few final remarks. It seems clearly possible for one person to be in EC1 and another in EC2 with respect to the same body of shared evidence. That is, Amy may confront the shared evidence without any substantive personal evidence to add, and so may see A and B as equally plausible on the evidence--while Ben confronts the shared evidence with a supplement of personal evidence, on the basis of which he sees A as clearly more plausible than B. In such a case, Amy might decide for pragmatic reasons to operate in terms of A--in which case we might say that both are pragmatic believers in A, but Ben is also a theoretic believer (whereas Amy is a theoretic agnostic). But there are a number of alternative permutations here that need to be kept in mind--permutations which can generate theoretic accord alongside practical divergence, and so on; I sketch a few below.
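To make the space of permutations vivid (my own enumeration, nothing the argument depends on), think of each person as occupying a pair

\[
\langle t,\ p \rangle, \qquad t \in \{\text{believes } A,\ \text{believes } B,\ \text{agnostic}\}, \qquad p \in \{\text{operates on } A,\ \text{operates on } B,\ \text{neither}\}
\]

where t is the theoretic stance and p the pragmatic one, yielding nine positions. Amy above occupies ⟨agnostic, operates on A⟩ and Ben ⟨believes A, operates on A⟩. Theoretic accord with practical divergence would be, for instance, two theoretic agnostics, one of whom operates on A while the other operates on B.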
I will confess that this is a first run at my thinking on this issue, and so it doubtless needs considerable refinement. Thoughts?