In his wonderful paper "Indicative Conditionals" (which I have recently reread because of a great paper on it by Fabrizio Cariani), Stalnaker presents a theory that allows him to maintain a unified semantics for the ordinary-language conditional, that is, without having one semantics (broadly construed) for so-called indicative conditionals like 'if it's raining, the streets are wet' and another for so-called subjunctive conditionals (aka 'counterfactuals') like 'if it were raining, the streets would be wet'. I want to challenge his maneuver here, and to suggest that no similar maneuver will work, either. In an important sense, supposition is too powerful an attitude: there are too few restrictions on what we may suppose, and thus on what conditionals we may assert.
Among other things, Stalnaker wants to explain why (to take Ernest Adams's example) 'if Oswald didn't kill Kennedy, then someone else did' and 'if Oswald hadn't killed Kennedy, someone else would have' have intuitively different truth, acceptability, or assertability conditions without having a very disjunctive semantics. What would count as disjunctive? Not mere difference, since otherwise Stalnaker's own semantics would count as disjunctive. More disjunctive would be treating indicative conditionals as material conditionals and counterfactuals as variably strict conditionals, as Lewis does. In what follows, I will interpret Stalnaker's idea as a kind of contextualism, a partial story about how context determines the relevant "selection functions" for evaluating conditionals (on which more in a moment); however we understand Stalnaker on this point, i.e., as giving different truth conditions or just different acceptability or assertability conditions, my arguments will go through the same way.
(More immediately, his goal is to introduce his notion of "reasonable inference" and to show how it allows the ordinary conditional not to collapse into the material conditional. I won't focus on this aspect, but what I have to say does have implications for his defense.)
In "A Theory of Conditionals", Stalnaker presents this semantics: 'if p, then q' is true at w just in case the contextually specified selection function f is such that f(p, w) is a world in which q is true. Intuitively, f(p, w) picks out the most similar p-world to w, where similarity is kept cipher-like. All conditionals work this way, according to Stalnaker. The difference between indicatives and subjunctives will then arise from which sorts of selection function are relevant for evaluating indicatives, versus which for subjunctives. To understand how these are determined, I'll have to dig a little more into Stalnaker's overall picture.
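The truth condition just stated can be put in miniature code. Everything here (the world names, the propositions, and the stipulated selection function) is invented purely for illustration; similarity is stipulated rather than analyzed, just as it is kept cipher-like in the theory:

```python
# A minimal sketch of Stalnaker's truth condition, with propositions
# modeled as sets of worlds. The selection function f is a black box,
# mirroring the cipher-like similarity ordering.

def conditional_true(p, q, w, f):
    """'if p, then q' is true at w iff f(p, w), the selected
    most-similar p-world, is a world in which q is true."""
    return f(p, w) in q

# Invented toy model.
raining = {"w1", "w2"}        # worlds where it's raining
streets_wet = {"w1"}          # worlds where the streets are wet

def f(p, w):
    # Stipulation: from w0, the most similar raining-world is w1.
    return "w1" if (w == "w0" and p == raining) else w

print(conditional_true(raining, streets_wet, "w0", f))  # True
```

Nothing in this sketch decides between indicative and subjunctive readings; on the view at issue, that difference enters only through which functions f are admissible in a context.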
I'll be brief in explaining the Stalnakerian framework; a highly idealized version will do. Suppose we have conversational participants S1, ..., Sn. The common ground is the set of propositions they mutually believe---they all believe (or accept), they all know they all believe, etc. More briefly: it's the stuff all the participants take for granted, know they all take for granted, ... . If we interpret propositions as sets of worlds, then the context set is the intersection of all the propositions in the common ground. (If we don't, then the context set will be all the worlds compatible with the conjunction of the propositions in the common ground.)
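On the sets-of-worlds interpretation, the context set can be computed directly. The worlds and propositions below are invented for illustration:

```python
# The context set as the intersection of the common ground,
# with propositions modeled as sets of worlds (toy model).
from functools import reduce

worlds = {"w1", "w2", "w3", "w4"}

kennedy_killed = {"w1", "w2", "w3"}   # a mutually accepted proposition
in_dallas      = {"w1", "w2", "w4"}   # another

common_ground = [kennedy_killed, in_dallas]

# Intersect everything the participants take for granted.
context_set = reduce(set.intersection, common_ground, set(worlds))
print(sorted(context_set))  # ['w1', 'w2']
```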
When it comes to indicatives, then, the relevant selection functions f are such that f(p, w) is always in the context set. Stalnaker says: "It is appropriate to make an indicative conditional statement or supposition only in a context which is compatible with the antecedent." Call this the presuppositional constraint. To evaluate 'if Oswald didn't kill Kennedy, someone else did', we remain in worlds in which Kennedy was killed, thus making the conditional rather trivially true; and to evaluate 'if Oswald hadn't killed Kennedy, someone else would have', we need not look only at worlds in which Kennedy was killed. So far, so good. A good many philosophers and linguists have found this idea reasonable, too.
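Here is a sketch of the presuppositional constraint at work on the Oswald example. The worlds are invented, and the particular selection function is just one admissible choice among many:

```python
# For indicatives, f(p, w) must land inside the context set. With
# 'Kennedy was killed' presupposed, every selectable not-Oswald world
# is a someone-else world, so the indicative comes out true.

worlds = {"w_oswald", "w_other", "w_nobody"}

kennedy_killed = {"w_oswald", "w_other"}      # presupposed
oswald_did_it  = {"w_oswald"}
not_oswald     = worlds - oswald_did_it       # the antecedent
someone_else   = {"w_other"}                  # the consequent

context_set = kennedy_killed  # intersection of the common ground

def f_indicative(p, w):
    # The constraint: select a p-world from within the context set.
    candidates = p & context_set
    return min(candidates) if candidates else None

selected = f_indicative(not_oswald, "w_oswald")
print(selected in someone_else)  # True
```

A subjunctive selection function is under no such restriction: it may select w_nobody, a world outside the context set in which no one killed Kennedy, which is why the subjunctive can come out false.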
But what about examples like this:
(1) If we're presupposing something that's false, we're going to encounter some trouble.
(2) If we were presupposing something false, we would encounter some trouble.
If Stalnaker were right, only (2) would be felicitous; (1) would be unacceptable. This follows from the presuppositional constraint: the context set contains only worlds in which everything presupposed is true, so the antecedent of (1) is incompatible with it. But (1) is acceptable, and in many contexts it is straightforwardly true. So it seems that indicative conditionals don't require that their antecedents be compatible with the context set.
'Presupposition' and its cognates, as Stalnaker uses them, are technical terms. So you might object that I am implicitly equivocating: (1) requires an ordinary-language understanding of 'presupposing', but Stalnaker doesn't need to use that notion. Yet the following are all good:
(1') If we're taking for granted something that's false, we're going to encounter some trouble.
(1'') If we're assuming something that's false, we're going to encounter some trouble.
The point here is just that however Stalnaker explicates his technical term, we can produce felicitous conditionals like (1) using the corresponding ordinary-language vocabulary. Seeing this, you might think something is fishy. After all, doesn't the following sound somehow bad?
(3) Tommy's barking. If Tommy's not barking, he might be sick.
I more or less agree that it does. Suggestion:
Identifiability Constraint. If the indicative conditional 'if p, q' is true, felicitous, acceptable, etc., then there is no identifiable presupposition r in the common ground that entails ~p.
First point: this means that we cannot make do with the flat structure that the context set itself provides; we need the finer-grained structure of the common ground. Second, 'identifiable' requires clarification before it can figure in any satisfying theory. But third, I suspect even this condition does not work. Consider:
(4) If everything we're presupposing is wrong, we've made some truly enormous mistakes.
(4) seems acceptable to me. But any identifiable presupposition r entails that something we're presupposing is right, i.e., entails the negation of (4)'s antecedent. So as long as we can identify at least one element of our common ground (which of course we should be able to do, publicity being a constraint on the common ground), the Identifiability Constraint wrongly rules (4) out.
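The difficulty with (4) can be seen mechanically in a toy model (worlds and propositions invented): any single presupposition r is disjoint from the worlds where everything presupposed is false, so the constraint bars (4) just as it bars (3)-style cases:

```python
# Toy check of the Identifiability Constraint, with propositions
# modeled as sets of worlds. The constraint bars 'if p, q' when some
# single presupposition r in the common ground entails not-p,
# i.e. when r and p share no world.

def violates_identifiability(p, common_ground):
    return any(not (r & p) for r in common_ground)

worlds = {"w1", "w2", "w3", "w4"}
r1 = {"w1", "w2"}            # one presupposition
r2 = {"w1", "w3"}            # another
common_ground = [r1, r2]

# (3)-style case: the antecedent directly contradicts r1.
p3 = worlds - r1
print(violates_identifiability(p3, common_ground))  # True: correctly barred

# (4)-style case: 'everything we're presupposing is wrong' is true
# only in worlds where every presupposition fails.
p4 = worlds - (r1 | r2)      # here, {"w4"}
print(violates_identifiability(p4, common_ground))  # True: (4) barred too,
                                                    # though it is acceptable
```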
I don't know exactly how to fix Stalnaker's constraint on selection functions. Because the approach handles some of these examples so well, including the Oswald example and (3), I think it likely is fixable. But I can't easily think of a fix. The basic problem is that indicative suppositions are just really easy to make. Supposition is a "powerful", by which I mean relatively unconstrained, attitude. That's why I am not completely convinced that there is a fix.
I suspect this problem has been addressed before, but I don't know where. Any responses, or any pointers toward literature where it has been, would be of serious use!