That’s not to say I always agree with him. About a month ago he put up a post, "In Search of an Arsonist," that I would likely have commented on—in critical terms—if I hadn’t been grieving my father’s death. The post had to do with the method by which we determine whether something is the product of intelligent design. Randal’s thesis is that we decide that something is the product of intelligent design by ruling out other causes until intelligent agency is all we’re left with.
Sometimes, of course, this is exactly how we proceed. Randal offers the example of forensic investigators who conclude that a fire was arson (and hence the result of intelligent agency) by ruling out other causes. But can we generalize from such cases? Is it always or even usually true that we infer intelligent agency by a kind of process of elimination? More significantly, can we or should we rely on such a process in the effort to infer an intelligent designer behind natural phenomena?
Before tackling these questions, I want to take a slight digression. Specifically, Randal’s arson investigation case is precisely the kind of case commonly invoked by members of the so-called “ID movement” to support their claim that what they are doing is science—that it is methodologically in line with established scientific procedures and so should qualify as science. Is this right?
I’m not sure Randal wants to draw this conclusion. After all, if intelligent agency is best inferred by ruling out other kinds of explanations, then the quest to decide whether phenomena in the natural world are the product of intelligent design might best be pursued by dedicating a discipline to the task of uncovering and testing these other kinds of explanations. In short, we might use Randal’s point as a basis for arguing that science should be “methodologically naturalistic” in something like the way that opponents of the ID movement insist it should be.
But let’s set this concern aside for now. To determine whether the ID movement is pursuing an approach that qualifies as scientific, we need to know how ID theorists actually defend their views. As I understand it, the modern ID movement (as opposed to believers in design or defenders of philosophical arguments from design) grew out of "creation science," and it shares with its predecessor the political aim of getting the God hypothesis into the public school science classroom. But ID's approach is much more sophisticated than what one finds in creation science, setting aside pseudo-scientific arguments for the literal inerrancy of Genesis in favor of modern updates of William Paley’s version of the argument from design. Where the modern updates differ from Paley is not in the basic logical structure of the argument, but rather in their choice of examples of things-that-are-best-explained-by-positing-a-God.
Contemporary ID theorists typically rely on examples taken from two sources: molecular biology and physics. The first version of the modern argument, which might be called the Argument from Irreducible Complexity, relies primarily on the views of biochemist Michael Behe. Put simply, the argument runs as follows: Certain complex biological systems on which organisms rely are said to possess the property of “irreducible complexity”—that is, they are such that, were they to be rendered any simpler by having any of their components removed, they would cease to function altogether and so would confer no adaptive advantage on organisms possessing them. Neo-Darwinian evolutionary theory, it is argued, cannot account for the emergence of such irreducibly complex systems, since evolution explains complex systems in terms of incremental increases in complexity, where each such increase is preserved by the adaptive advantage it supposedly confers. Intelligent design, by contrast, can account for such systems. There is, supposedly, no credible third alternative. Therefore, these systems are best explained by positing an intelligent designer.
Second, we have what’s sometimes called the Fine-Tuning Argument. A set of physical constants are said to possess the property of being “fine-tuned” for the emergence of organized complexity (and hence life). No purely physical theory, it is argued, can adequately account for such fortuitous fine-tuning. Intelligent design can. There is no credible third alternative. Therefore, the fine-tuning of the universe is best explained by positing an intelligent designer.
In each of these cases, what the ID movement offers is an argument, some of whose premises are susceptible to assessment in the light of established scientific methods. But notice what it doesn’t offer: a strategy for positively testing the “intelligent design hypothesis” itself. Instead, what we have is a disjunctive argument in which ID theory is endorsed based on a process of elimination—which is, of course, precisely the mechanism that Randal endorses as the proper one for inferring intelligent agency.
One question we can ask is whether reliance on such a disjunctive argument alone can ever justify one in saying that the conclusion reached was arrived at scientifically. Clearly, scientists can and do make use of this sort of disjunctive reasoning—ruling out known causes for a phenomenon as a way of concluding that some unknown cause is at work. But this is typically a kind of prelude to further scientific work, involving speculation about what the unknown causes might be, and then conducting experimental tests (in some sense repeatable) to determine whether one’s guesses have any merit.
But maybe invocations of intelligent design just can’t work like that, because intelligent design brings things about through agency, and agency is subject to will rather than uniform laws. The argument might go as follows: When a hypothesized cause is mechanistic (to use Hermann Lotze’s language), we can test it—by, paradigmatically, making predictions and seeking to falsify them. But freedom isn’t law-like and so doesn’t allow for that kind of testing. And intelligent design inevitably involves an exercise of freedom. Thus, intelligent design can’t be tested for scientifically, and so can only be rationally embraced in some other way. Perhaps this “other way” is the process-of-elimination approach Randal endorses: If nothing else can explain it, we are left with intelligent agency by default.
If so, we might well ask whether this process-of-elimination approach qualifies as science (i) always, (ii) sometimes (and if so, when and why?), or (iii) never. If it isn’t science, then this just goes to show that intellectual inquiry can and does proceed beyond the boundaries of scientific inquiry, invoking a palette of resources that are still available when science has hit the limits of what it can do with its methods. At stake here is not just the credibility of other methods of inquiry, but the political agenda of the ID movement. If this sort of thing isn’t science, then it shouldn’t be in a science classroom—although it arguably should be part of high school education even so, as part of the philosophy curriculum that high schools shamefully lack.
But the question of whether the process-of-elimination approach to inferring intelligent agency is science needs to be assessed in the light of a deeper question: Is it generally true that we can and do infer intelligent design by elimination of other causes?
I think that, in fact, the situation is much more complex. Consider again the case of the forensic scientists investigating a fire. In this case, we have a certain kind of event (a fire) about which we have considerable experience. On the basis of this experience we have derived a list of “known culprits”—that is, kinds of causes (lightning strike, untended campfire, discarded cigarette, deliberate arson, etc.) which are typically responsible for an event of this kind.
In a situation of this sort, we can systematically rule out the various kinds of causes until we are left with only one—and thus, by process of elimination, arrive at the conclusion that, most probably, the cause was of the remaining kind. I say “most probably” because, even though a rich body of experience tells us that events of this kind are ordinarily produced by causes within this list, there might be unusual kinds of causes that don’t appear on the list. The list is fairly exhaustive, but not completely so.
Some contexts aren’t like this, however. Suppose I’m a space explorer who has recently landed on Planet X. The terrain is uniformly flat in most places, but on my third day I come across a big mound of dirt. After investigating the mound, the ground beneath, and other bits of evidence, I’m able to ascertain that what I’m witnessing is the result of a kind of “dirt-geyser” phenomenon produced when trapped gas pushes up through a silt-filled fissure.
Now I come across another mound of dirt. Upon investigating, I conclude that it is not the effect of a dirt-geyser. But, being new to the planet, I have very little experience with such mounds, and hence very little experience with what might cause them. My list of “known culprits” has one member, and I’ve eliminated it. Presumably, in this case, we can’t reasonably infer intelligent agency on the basis of eliminating all the other known culprits.
What we might say is that the explorer is in the process of creating a known-culprits list for dirt mounds. At that stage of the game, the negative method of determining causes through a process of elimination is unavailable, or in any event untenable. There is just too little that is known about how things work on the planet, and hence no reason to suppose that the list of “known culprits” for dirt mounds even approaches being exhaustive.
Furthermore, there is no reason as of yet for the explorer to suppose that intelligent agency should be included in the list of causes for dirt-mounds on Planet X. The explorer has seen no intelligent denizens on the planet, let alone any who were busy making dirt mounds. This distinguishes our explorer from forensic scientists on Earth who are exploring an unexplained fire, insofar as these scientists know there to be intelligent agents running around and also know that these agents have the means to start fires and sometimes do so.
Of course, this may not be quite right. Suppose our explorer is exploring the planet with a colleague, who is a known practical joker. In that case, the explorer would be well advised to investigate the theory that his colleague created the dirt mound as a joke.
But there’s a difference between appealing to a known sort of intelligent agent—an intelligent agent of a kind known to exist and known to be capable of producing the effect observed—and using observed phenomena as the basis for concluding that a new kind of intelligent agent, one not otherwise observed to exist, in fact does exist. If, after years of study, the Planet X explorer has produced a fairly exhaustive list of causes for dirt mounds—but has never observed any intelligent denizens of the planet—can this explorer really deduce that there must be such denizens if he encounters a dirt mound that cannot be explained by any of the known culprits on his list?
It doesn’t seem so. In fact, it seems that were the explorer to reason in this way, he’d be guilty of a kind of question-begging. What running out of known culprits warrants is the conclusion that there is a heretofore unknown culprit. To assume that the new culprit is an intelligent agent is, in effect, to operate as if the “gap” in one’s list is in fact not a gap at all but is filled by precisely the new kind of intelligent agent one is seeking to establish. The explorer has, in effect, treated the hypothesized new sort of intelligent agent as a member of the known culprits list in order to reach the conclusion that a new sort of intelligent agent should be included in the known culprits list.
But now suppose I’m exploring Planet X and come across an enormous rock in the shape of Justin Bieber’s head. I mean the resemblance is perfect. Of course, I scream in utter terror. Not only are there intelligent beings here, but they clearly wish me ill.
In this case, unlike the dirt-mound case, I immediately infer intelligent agency. I don’t infer this because I have eliminated all non-agent causes from my list of things-that-can-produce-perfect-stone-replicas-of-Justin-Bieber’s-head. Rather, I infer it immediately from the nature of the phenomenon that stands in need of explanation. And I infer it (rightly, I would say) without having ever observed any intelligent agents at work on this planet, without having any idea of what those intelligent agents are like, how they produced the stone head, etc.
The reason I justifiably make this inference is because a sculpture of someone’s head is the kind of thing that, in my experience (and not just mine), is only produced by intelligent agents. Once I rule out my practical-joker colleague as the cause, I might now reasonably add a new kind of intelligent agent to my list of known culprits for things observed on Planet X.
In effect, then, from the above we can identify two distinct ways of arriving at the view that intelligent agency is responsible for some phenomenon of type P: (1) A body of experience teaches us that P’s are typically caused by a range of causes, one of which is intelligent agency; the phenomenon at issue is a P; and all causes other than intelligent agency have been eliminated; (2) A body of experience teaches us that P’s are caused only by intelligent agency, and the phenomenon at issue is a P.
(1) and (2) may not be exhaustive. They wouldn’t be if, for example, we could ever immediately intuit, without a body of experience, that certain phenomena require intelligent agency. I'm inclined to suspect that, in fact, we can do exactly this. But I won't pursue that case here. Instead, I simply want to summarize what I take to be the lessons of the above analysis:
(a) Inferring intelligent agency by a process of elimination is an acceptable approach (arguably a scientific one) in cases where there is a known set of culprits for a given phenomenon, intelligent agency is among the known culprits, and there is reason to suppose that the set of culprits is fairly exhaustive (that is, most phenomena of the given sort are explained by one of the known culprits).
(b) In cases where we have no firm reason to suppose that our set of “known culprits” is fairly exhaustive, the process-of-elimination approach is not acceptable for inferring intelligent agency or any other cause.
(c) If we are asking whether there exists a new kind of intelligent agency that we haven’t seen before, the process-of-elimination approach is question-begging—unless the phenomenon we are seeking to explain is the sort that we justifiably believe on other grounds could only be produced by an intelligent agent. In that case the process-of-elimination approach would operate on known intelligent agents who might have caused the phenomenon, with the inference to an unknown intelligent agent reached when all known intelligent agents have been eliminated.
In place of Randal Rauser’s process-of-elimination strategy for inferring intelligent design, I would therefore offer up (a)-(c). And given (a)-(c), it would take more work than Randal has done to say that the fine-tuning case should be approached in the same way that forensic scientists investigate a possible arson.
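For readers who think more easily in procedural terms, patterns (1)-(2) and lessons (a)-(c) can be sketched as a toy decision procedure. The function name, parameters, and string labels below are my own illustration, not anything from Randal's post:

```python
def infer_cause(known_culprits, list_is_exhaustive, ruled_out):
    """Toy model of the inference patterns discussed above.

    known_culprits: kinds of cause that experience links to this
        kind of phenomenon
    list_is_exhaustive: whether we have reason to think the list
        covers most cases of this kind
    ruled_out: culprits the investigation has eliminated
    """
    # Pattern (2): experience says only intelligent agency produces
    # this kind of phenomenon (the stone-head case). The inference
    # is immediate; no elimination is needed.
    if known_culprits == ["intelligent agency"]:
        return "intelligent agency"

    # Lesson (b): with no reason to think the list is fairly
    # exhaustive (the dirt-mound case), elimination warrants nothing.
    if not list_is_exhaustive:
        return "no inference warranted"

    remaining = [c for c in known_culprits if c not in ruled_out]

    # Pattern (1) / lesson (a): elimination down to one known culprit
    # (the arson case).
    if len(remaining) == 1:
        return remaining[0]

    # Lesson (c): eliminating every known culprit licenses only the
    # conclusion "some unknown culprit", not a new kind of agent.
    if not remaining:
        return "unknown culprit"

    return "underdetermined"
```

So the arson case, `infer_cause(["lightning", "campfire", "cigarette", "arson"], True, ["lightning", "campfire", "cigarette"])`, yields `"arson"`, while the dirt-mound case, `infer_cause(["dirt-geyser"], False, ["dirt-geyser"])`, yields `"no inference warranted"` rather than a default verdict of intelligent agency.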
Anyway, that’s a first run at articulating my thinking about this. Thoughts?