Thoughts Arguments and Rants


As of July 8 I will be posting on Crooked Timber. This blog will keep running, but it will be more focussed on metaphysics, epistemology and philosophy of language than it used to be, with other posts going to Crooked Timber.


I’m off for a few days while I head over to Amsterdam for Christmas. I will be back early in the New Year with hopefully some slightly more polished things to say about modus ponens failures, Dr. Evil and countable additivity, privileged access and narrow content, and, it being unavoidable in January and February, philosophical gossip. (Though I can't promise that the gossip will be polished - I half think I should promise that it will not be.) Happy holidays all, and good luck to everyone going on the job market!

posted by Brian Weatherson 12/21/2002 09:23:00 PM

Jonathan Schaffer’s webpage is mysteriously lacking in online papers, but I did like the description of his current work:

Contrastive Theories of Everything (or at least of knowledge and causation), Anything by David Lewis.

posted by Brian Weatherson 12/21/2002 01:44:00 PM

I was updating a few links when I discovered something I didn’t really expect to find out during a regular webcrawl. If you crawl on over to John Hawthorne’s CV, and scroll not too far down, you’ll see that he has a paper forthcoming in The Monist, co-authored with me. This is very exciting news, especially to me!

Despite the somewhat incredulous tone of the last paragraph, I was more or less aware that the paper was more or less likely to appear, so it wasn’t like I had a paper accepted at a journal with no knowledge of it (and in fact I even have a draft of the paper available) but it still wasn’t exactly how I expected to get confirmation of another publication. I wonder if news of its acceptance is official enough to put on this year’s annual report?

In other co-authoring news, it looks likely that the paper Andy Egan and I wrote on pranks will be presented at the Symposium on Theoretical and Applied Ethics at LSU next February. I think it’s my duty to play this up for all it’s worth - it’s only fair that adding a few jokes to a good idea for a paper that someone else (i.e. Andy) had gets me to count as an ethicist. I’m not sure why it’s fair, but now that I’m an Ethicist, I can just say that it’s fair and that’s already got some evidential weight.

posted by Brian Weatherson 12/21/2002 06:43:00 AM


Well, here’s something you don’t see every day. The review in Notre Dame Philosophical Reviews of Beyond Rigidity takes Soames to task for not being Millian enough. That’s the kind of thing that happens in any field when you stake out an extreme position early on: any subsequent movement back towards the middle ground will be interpreted as betrayal by someone ;)

posted by Brian Weatherson 12/20/2002 01:55:00 AM


There are about a million other things I should be doing right now, so it’s probably time to say something more about Dr. Evil. I knew that deep down one of the reasons I disliked approaches to probability based on principles of indifference was that they threatened to collapse the important distinction between risk and uncertainty. What I hadn’t realised, until very recently, was that Adam’s argument for his indifference principle involves just such a collapse at one point.

First some background. To my mind, what should have been a very important discovery in early 20th century work on probability was that there is a distinction between risk and uncertainty. Here’s how Keynes introduces the concept of uncertainty in an article from 1937 (“The General Theory of Employment” Quarterly Journal of Economics).

By ‘uncertain’ knowledge, let me explain, I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty; nor is the prospect of a Victory bond being drawn. Or, again, the expectation of life is only slightly uncertain. Even the weather is only moderately uncertain. The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention, or the position of private wealth owners in the social system in 1970. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know. Nevertheless, the necessity for action and decision compels us as practical men to do our best to overlook this awkward fact and to behave exactly as we should if we had behind us a good Benthamite calculation of a series of prospective advantages and disadvantages, each multiplied by its appropriate probability, waiting to be summed.

I think this is all incredibly important, and any theory that ignores the distinction between what is probable and what is genuinely uncertain is mistaken. Decisions based on what is probable or improbable are grounded at least in well understood principles about risk; decisions grounded in what is genuinely uncertain are not. And I’m inclined to think that any theory that says that an agent’s attitude to some uncertain propositions can be expressed by a single probability function does ignore the distinction. This is especially true for theories that say this about ideal agents.

This is hardly an original thought. It was the basis of Keynes’s theory of probability outlined in his dissertation of 1909, which eventually became the Treatise on Probability of 1921. Keynes had the probability, which for him was just rational credence, of an uncertain proposition be a non-numerical value. Ramsey criticised this on the grounds that probability values are meant to enter into computations, according to the theory we can add and multiply them, for example, and we don’t know how to add and multiply non-numerical values. In my dissertation, I proposed that the theory that holds that the credal states of a rational agent can be represented by a set of probability functions, rather than just a single probability function, could capture all of Keynes’s insights without being vulnerable to Ramsey’s objection. This is not a new theory; it has been discussed by Isaac Levi (“Ignorance, Probability and Rational Choice” 1982), Richard Jeffrey (“Bayesianism with a Human Face” 1983), Bas van Fraassen (“Figures in a Probability Landscape” 1990) and extensively by Peter Walley (Statistical Reasoning With Imprecise Probabilities 1991), and in Walley’s case there’s some connection drawn to Keynes’s work, so I still don’t want to make any dramatic claims to originality.

We draw a connection between Keynes’s theory and these new theories by identifying the probability of a proposition p as a function from members of S, the set of probability functions that represents the credal states of an ideal agent, to [0, 1], where the value of the function is the value of P(p) according to each P in S. For most purposes we can simplify this by saying the probability of p is the range of that function. Then p has a numerical probability in Keynes’s sense iff its probability is a singleton, it is uncertain otherwise. Arguably the range of the function should always be an interval (well, I argue for this at any rate) and if so we can say p is more uncertain the larger that interval is. This gives us a concept of comparative uncertainty, and with that we can say that everything Keynes says in the above quote is true.
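For concreteness, here’s a little Python sketch of the set-of-functions picture. It’s purely my own illustration (the function names and the toy credal state are made up, not drawn from any of the works cited): a credal state is a set S of probability functions, and the “probability” of p is summarised by the range of values P(p) takes across S.

```python
def credal_range(S, p):
    """Given a credal state S (a list of probability functions, each a dict
    from propositions to numbers), return the interval of values for p."""
    values = [P[p] for P in S]
    return (min(values), max(values))

# Toy credal state: three admissible probability functions that agree
# about a fair coin but disagree about an uncertain proposition q.
S = [
    {"heads": 0.5, "q": 0.2},
    {"heads": 0.5, "q": 0.5},
    {"heads": 0.5, "q": 0.8},
]

print(credal_range(S, "heads"))  # singleton interval: numerically probable
print(credal_range(S, "q"))      # wide interval: uncertain in Keynes's sense
```

On this picture “heads” gets a numerical probability in Keynes’s sense (a singleton interval), while q is uncertain, and the wider the interval the more uncertain the proposition.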

Now one of the surprising things about interpreting Keynes’s term ‘uncertainty’ this way is that a proposition can become more uncertain as we acquire more evidence about it. Keynes seemed to think this was impossible, but here I think he was just mistaken about the behaviour of some of his own concepts. (We all make mistakes.) Here’s a case where just that happens. (As it turns out, it’s a case I’ve written about. See my “Keynes, Uncertainty and Interest Rates”, Cambridge Journal of Economics 2000.)

I’m watching a roulette game going on, and in particular paying close attention to one player, called Kim. It’s a crowded room, so I can’t see the roulette wheel, or the board where bets are placed, but I can see the croupier, and I can see Kim. I see Kim place a bet on either red or black (I can see that from where she’s leaning over the table) but I can’t tell which. And I have no evidence that tells me one way or the other. I know from prior observation that this is a fair roulette wheel. And I can see that the croupier is about to spin the wheel. Now consider the following propositions. (For simplicity we’ll assume it’s a roulette wheel with no green slots - this makes the example rather unrealistic, but simplifies the computations no end without having any major philosophical costs.)

kr = Kim bet on red
kb = Kim bet on black
br = The ball lands on red
bb = The ball lands on black
 h = Kim is happy in a few seconds

At this stage, I think I can assign numerical probabilities in the following cases:

1. P(h | kr ∧ br) = 1
2. P(h | kr ∧ bb) = 0
3. P(h | kb ∧ bb) = 1
4. P(h | kb ∧ br) = 0
5. P(br | kr) = ½
6. P(bb | kr) = ½
7. P(br | kb) = ½
8. P(bb | kb) = ½

Also note {kr, kb} and {br, bb} are partitions, and my credences reflect that (e.g. P(kr ∨ kb) = 1).

What I can’t do is assign a numerical probability to kr or to kb, they are just uncertain. Perhaps they’re not so uncertain that their probability is [0, 1] - that’s what happens when a proposition is completely uncertain, but they are uncertain to a degree.

Now I wait a few seconds, and see that when the wheel stops, Kim is happy. So I update my credences accordingly. What should my new credences be? Some may suggest that my credences in br and bb should be unchanged, because I have no new evidence that is relevant to their assessment. But this must be false. For if it were true, I could do the following computations (11 and 12 are background, the new assumptions come in at 13 and 14).

11. P(br) = ½ from 5 and 7
12. P(bb) = ½ from 6 and 8
13. P(br | h) = ½ by assumption
14. P(bb | h) = ½ by assumption
15. P(kr | h) = P(kr ∧ br | h) by 2
16. P(kr ∧ br | h) = P(br | h) by 4
17. P(kr | h) = ½ by 13, 15 and 16
18. P(kb | h) = ½ by identical reasoning to the last three lines
19. P(br | ¬h) = ½ (since by 11 and 13 br and h are independent)
20. P(bb | ¬h) = ½ (since by 12 and 14 bb and h are independent)
21. P(kr | ¬h) = ½ (by equivalent reasoning to 15-17, with just the relevant appeals changed)
22. P(kr) = ½ by 17 and 21

And 22 is just what we said we couldn’t conclude, because we weren’t in a position to assign numerical probabilities to kr and kb. So the simple assumption that we shouldn’t change our credences in br and bb when we learn h must have been mistaken. What should happen is that after learning h, br and bb should go from being not at all uncertain to being rather uncertain, in fact exactly as uncertain as kr and kb were (and I guess still are).
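The arithmetic behind this can be checked directly. Here’s a small Python sketch (my own illustration, with invented names): treat each admissible prior as assigning P(kr) = x for some x in an interval, with the wheel fair and independent of Kim’s bet, and h true just in case the ball matches the bet. Conditioning each prior on h shows the uncertainty in kr transferring wholesale to br.

```python
def posterior_br_given_h(x):
    """P(br | h) under a prior with P(kr) = x, P(br) = 1/2 (independent of
    the bet), and h holding exactly when the ball matches Kim's bet."""
    p_h = x * 0.5 + (1 - x) * 0.5   # = 1/2 whatever x is
    p_br_and_h = x * 0.5            # h together with br requires kr and br
    return p_br_and_h / p_h         # = x

# Before learning h, every admissible prior gives P(br) = 1/2: br is risky,
# not uncertain. After learning h, P(br | h) = x, so the posterior interval
# for br is exactly the prior interval for kr.
priors = [0.2, 0.4, 0.6, 0.8]       # admissible values of P(kr)
posteriors = [posterior_br_given_h(x) for x in priors]
print(posteriors)
```

So whatever interval represents my uncertainty about kr before the update, that same interval represents my uncertainty about br afterwards, which is just the conclusion in the text.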

This is contentious, but I think that the same thing is going on in Adam’s main argument. (I.e. it’s contentious that it’s the same thing.) Here are the main examples again.

TOSS&DUPLICATION After Al goes to sleep, researchers toss a coin that has a 10% chance of landing heads. Then (regardless of the toss outcome) they duplicate Al. The next morning, Al and the duplicate awaken in subjectively indistinguishable states.

Adam wants to argue that in this case when Al wakes up his credence in HEADS should be 1/10. A crucial premise in the argument for this is that P(HEADS | HeadsAl or TailsDup) (TailsDup is the proposition that he’s the duplicate and the coin landed tails - you can figure out the rest of the code from that) is also 1/10. And he argues for that as follows.

COMA As in TOSS&DUPLICATE, the experimenters toss a coin and duplicate Al. But the following morning, the experimenters ensure that only one person wakes up: If the coin lands heads, they allow Al to wake up (and put the duplicate into a coma); if the coin lands tails, they allow the duplicate to wake up (and put Al into a coma)

Suppose that in the COMA case, Al gets lucky: the coin lands heads, and so the experimenters allow him to awaken. Upon awakening, Al is immediately in a position to assert “Either I am Al and the coin landed heads, or else I am the duplicate and the coin landed tails”. So when Al wakes up in the COMA case, he has just the same evidence about the coin toss as he would have if he had been awakened in TOSS&DUPLICATE and then been told [HeadsAl or TailsDup]. So to defend (3)—to show that in the latter case Al’s credence in HEADS ought to be 10%—it is enough to show that when Al wakes up in the COMA case, his credence in HEADS ought to be 10%. Let me argue for that claim now.

Before Al was put to sleep, he was sure that the chance of the coin landing heads was 10%, and his credence in HEADS should have accorded with this chance: it too should have been 10%. When he wakes up, his epistemic situation with respect to the coin is just the same as it was before he went to sleep. He has neither gained nor lost information relevant to the toss outcome. So his degree of belief in HEADS should continue to accord with the chance of HEADS at the time of the toss. In other words, his degree of belief in HEADS should continue to be 10%.

Adam considers an objection that Al’s memories should give him evidence that he’s Al, and hence given HeadsAl or TailsDup, he should have a very high credence in HEADS. He responds as follows:

That’s all wrong. TRUST YOUR MEMORIES, AL makes the same mistake that TRUST YOUR MEMORIES, O’LEARY does. While it is true that in the absence of defeating auxiliary beliefs, one ought to trust one’s memories, when Al wakes up he does have defeating auxiliary beliefs. He is sure that—whatever the outcome of the coin toss—someone was to wake up in just the subjective state he is currently in. As far as the outcome of the coin toss goes, the total evidence Al has when he wakes up warrants exactly the same opinions as the total evidence he had when he went to sleep.

This is what I think is wrong. Adam is concerned to reject the line of reasoning that memories provide evidence, because he thinks that they’re really only q-memories and they don’t count for very much. But this ignores a crucial point I think. Al doesn’t know whether his memories are real memories or mere q-memories. But Adam thinks that he can assign a very precise credence to their being real: in this case exactly 1/10. I don’t think this is true, and I think the only way you’d come to infer it is by more or less presupposing an indifference principle.

I’d put the dialectic as follows. Al has some memories. These are actually conclusive evidence that HEADS, though of course Al doesn’t know this. In fact he has no idea whatsoever what the evidential force of those memories is. But that doesn’t mean he should act as if they have no evidential value at all - if he does he’s drawing a substantive conclusion (that q-memories have no evidential value) from premises that are essentially worthless, namely that he has no idea how much evidential worth they have. (Substantive and, we might as well note, false.) He should act like he has no idea how valuable the evidence is, just like in the casino case I should act like I have no idea what the evidential force of h is. In that case I go from regarding br as risky to regarding it as uncertain. I think Al’s attitude towards HEADS should be the same in COMA. And if it is, the argument for the indifference principle in the Dr. Evil paper fails.

posted by Brian Weatherson 12/19/2002 04:18:00 PM


I’ve had a few complaints about the way pictures work on the blog, so I’ve deleted the posts involving graphics. So from now on it’ll just have to be me talking. That might not be a good thing, but we’ll see.

posted by Brian Weatherson 12/18/2002 01:12:00 PM


In both my recent notes on indifference principles, the comments on Nick Bostrom’s computer simulation paper and Adam Elga’s Dr Evil paper, I’ve mentioned that the proponents of these theories assume a theory of evidence that is intuitively quite plausible, and may have been the mainstream view not long ago, and may even be ultimately true, but which is not very popular among philosophers of perception these days. I didn’t think much followed from this, save perhaps that those presupposing a theory that is widely viewed as being hopelessly befuddled owe us an explanation as to why they are sticking with it. And in this little endeavour I have been utterly unsuccessful. This could be because my heart hasn’t really been in it due to underlying internalist sympathies, or because I’m wrong that the indifferentists need to address this, or because I’m no good at convincing people of things, or because of any number of other reasons. Suffice to say that in some circles, the idea that when we look at a hand we have evidence of an epistemically different kind to a brain-in-a-vat that is stimulated in the way our brains are when we look at a hand is not viewed as being particularly plausible.

When in trouble in a case like this, call in the heavy hitters. Alex Byrne has a paper forthcoming in Noûs in which he argues that the sceptical paradoxes are not really deep paradoxes. By this he means, in part, that there isn’t anything like a compelling argument for scepticism. And this is because he thinks that the canonical arguments for scepticism turn out to rest on very implausible premises on close inspection. One of those premises is that perceptual evidence underdetermines what the external world is like: we could have just this evidence and be dreaming (or a brain-in-a-vat, etc.). This, Byrne thinks, can be shown to be false simply by carefully reflecting on the nature of evidence. The whole paper is worth reading, but let me just extract a few choice quotes.

          The known (evidence) proposition e has yet to be identified. [Byrne has just argued that evidence should be propositional. The challenge is to determine whether there is any candidate to be e that is compatible with thorough-going external world scepticism.] The candidates may be divided into two classes. The first—class I—consists of propositions about S’s sense-data, ideas, impressions, phantasms or other queer entities allegedly “given” in experience. The second—class II—consists of propositions about how things look or (visually) appear to S (cf. the first paragraph of this section [not excerpted here.]).

          It is quite doubtful that (trivial exceptions aside) any propositions in class I are true, a fortiori known; they may accordingly be dismissed. This would have sounded dogmatic as recently as the first half of the twentieth century: it is only in the last fifty years or so that the deep flaws in what used to be called the “representative theory of perception” have become gradually visible. Admittedly, not everyone agrees that the theory rests on a soggy bog of error: in one form or another, it still has its defenders. However, it is unnecessary here to rehash the argument: because we are playing the first sceptical game, the sceptic must steer clear of philosophical controversy.

          That leaves the members of class II: propositions about how things look or appear to S—in other words, certain propositions about S’s mental states. But because the representative theory of perception is off-limits, there is very little motivation for thinking that one’s knowledge of the external world rests on a foundation of knowledge about one’s own psychology…

          Propositions about how things look or appear to S can be divided into two types. The first—type IIE—comprises external world propositions, because they entail the existence of o: that o looks square to S, that it appears to S that o is square, etc. Hence, propositions of type IIE, despite not entailing p, and perhaps being known by S, are quite unsuitable candidates to be e. For e is not supposed to be an external world proposition.

          The second—type III—comprises those propositions about how things look or appear to S that are not external world propositions (or so we may suppose): that it appears to S that (some x) x is square, that it appears to S that the F is square (for various fillings for ‘F’, e.g. ‘tile’, ‘pink thing’), etc. If e is to be found in class II, it must be of type III.

          [I]t is not plausible that e is a type III proposition. First, these propositions have to be true; clearly we need not suppose that it appears to S that the tile, or the pink thing, is square. But is it even clear that it must appear to S that (some x) x is square? If not, then since there are no better candidates, e is not a type III proposition. Second, S believes e, and it is quite unobvious why S, if he is to know p via his senses, must have any beliefs about how things appear, let alone believe one of the specific propositions under consideration. Suppose S is a conceptually challenged animal who cannot entertain these comparatively sophisticated thoughts about appearances; does this fact alone imply that S cannot use his eyes to come to know that o is square?

posted by Brian Weatherson 12/16/2002 11:40:00 PM

Kieran Healy writes on the (slow-)growing controversy over the role of intuitions in philosophy. For background, see the papers by Jonathan Weinberg et al here, here and here. (If you haven't seen the survey results about intuitions on Gettier cases across cultural and social groups in these papers yet, you should. And prepare to be a little surprised.) Kieran has a rather funny caricature of the way philosophers (or at least metaphysicians) generally argue, but then goes off on a riff about why we should care more about where intuitions come from.

In the meantime, you might be interested in looking at other writers, who have explored the idea that our intuitions might have institutional roots; that culture might mold conceptions of rationality and thus deeply affect how you think; that classification is a social process which might have its origins in material life; and that although individual and social cognition interact in complex ways, getting socialized into a culture often implies subscribing to its point of view.

I’m not sure how any of this undercuts the use philosophers make of intuitions. It seems to me that even if we acknowledge all of this, there are still epistemological and metaphysical reasons to use intuitions in philosophy. (You mean you’ll be defending philosophy by using more philosophy? Yeah, well what did you expect me to use, chemistry or something?)

The epistemological reason is that for each of these facts about intuition, we could (I think) find an equally disturbing fact about perception. How we see the world around us is affected by the kind of culture we’re in, what we expect to find and so forth. But none of that implies that we should stop trusting perceptions as a source of evidence, provided we’re suitably careful about how we employ them. Of course, practically nothing should stop us trusting perception as a source of evidence; that way lies madness, if not philosophical immortality.

The metaphysical reason is that intuitions are sometimes constitutive of the concepts we’re aiming to analyse. Want to know what’s a house? Well, presumably houses are things that satisfy the predicate “house”, or fall under the concept HOUSE. And presumably the facts about what makes an object satisfy the predicate “house” include facts about how the term “house” gets the meaning it gets in the language we speak. And presumably those facts include facts about the intuitions people have about houses. A similar story is probably true for the concept HOUSE, though here there are some more prominent dissenters. Now it’s rather controversial whether a similar story could be true if we replaced “house” with “item of knowledge”, or “rational belief”, or “mind”, or “person”, or “just act”, or (I guess most controversially) “object”, but at least for terms towards the left of that list, it seems plausible enough.

posted by Brian Weatherson 12/16/2002 09:18:00 AM

Brad DeLong writes that he only just realised that there could be non-spectral colours.

Until yesterday, it had never occurred to me that I could see colors that weren't in the spectrum--I had thought that all colors were somewhere in the rainbow (or could be made from rainbow colors by darkening or lightening them).

But that is clearly false. Consider magenta. A magenta light plus a green light equals a white light--all colors. But green is in the middle of the spectrum. So where in the spectrum is magenta? Magenta is red and blue--the complement of green. And nowhere in the spectrum is there a wavelength of light that excites both the red-cones and the blue-cones but does not excite the green-cones.

I was going to write a comment saying just how magenta was possible, then I realised I wasn’t exactly sure. Then I was going to link to a website that explained it all clearly, until I realised I couldn’t find one. So if anyone could enlighten me, or Brad, please write in!

Here’s what I think happens, though I’m not entirely sure. The spectral colours are colours produced by light of constant wavelength. But we know there’s lots of waves that do not have constant wavelengths. This is obvious for sound: you never hear the sound of a trumpet, even a trumpet playing a ‘constant’ note, when you just listen to waves of constant wavelength. Magenta, I think, is one of the things that happens when the light in question is not a wave of constant frequency.

But, that doesn’t really say enough about what happens. I don’t know how the waves ‘mix’. Is it that magenta light contains only photons of a constant frequency, but some of them are around the typical frequency of red light and some of them around the typical frequency of blue light? Or is it that individual photons ‘vibrate’ in some non-sinusoidal pattern, as the air does when two or more notes are played? Or does this distinction not really make sense when we’re dealing with light?

And I’m not even sure this is the right story about magenta. I think it is, but for all I’m certain of, magenta could be a contrast colour, like brown, that is only apparent when there are other visible colours with which it contrasts.
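The one bit of Brad’s reasoning that can be checked with arithmetic is the additive-mixing claim itself. Here’s a toy Python sketch (my own illustration, with made-up names, treating lights crudely as (R, G, B) intensity triples): magenta is red plus blue, i.e. green’s complement, so magenta light plus green light comes out white.

```python
def add_lights(c1, c2):
    """Additive mixing of two lights given as (R, G, B) intensities,
    clipped to a displayable maximum of 255 per channel."""
    return tuple(min(a + b, 255) for a, b in zip(c1, c2))

magenta = (255, 0, 255)   # red + blue, no green component
green = (0, 255, 0)

print(add_lights(magenta, green))  # (255, 255, 255): white
```

Of course this says nothing about the physics of the light itself, which is exactly the part I’m unsure about above; it only confirms that at the level of cone-stimulating channels, magenta behaves as green’s complement.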

Some might think that it’s embarrassing how little I know about colours, but (a) if I was going to be embarrassed by my ignorance there are many other things I’d be embarrassed about first, and (b) since my department already has an expert on colour, the marginal value of my learning more is not very high.

posted by Brian Weatherson 12/16/2002 07:33:00 AM


The most fun seminar I’ve been attending this semester has been Jeff King’s seminar at Harvard on the semantics/pragmatics distinction. (Hang on, isn’t that the only seminar you’ve been attending? - ed. Not at all, I’ve also been attending my own seminar, and normally I’d think that would be the most fun seminar, because I get to talk all the time.) The main theme of the seminar has been a sustained attack on theories that provide too small a role for semantics in a theory of communication. (Some of the attack is presented in this paper co-written with Jason Stanley.) These theories usually say, in one way or another, that the explanation for the success of certain kinds of communication is pragmatic not semantic. (They often go on to say other things too, but that’s the part that I’m most interested in.) So, to provide a representative sample, consider two stories about how (1) gets the intuitive truth conditions that it has.

(1)      If Charlie drank ten beers and drove home, she broke the law.

Intuitively, (1) is true, because (1) is true iff it is the case that if Charlie drank ten beers and drove home shortly afterwards, she broke the law, and that’s clearly true. How could (1) have those truth conditions? Some theorists (including some time-slices of me) say that the semantic content of (1) is just that if the conjunction (Charlie drank ten beers ∧ Charlie drove home) is true then it is true that Charlie broke the law. The intuition is explained by the truth of some more or less complicated pragmatic theory, that somehow predicts that if “Charlie drank ten beers and drove home” is normally only said if the events happened in that order, then (1) is normally only said if Charlie’s drinking and driving in that order implies that she broke the law. And of course there’s a story in Grice about why “Charlie drank ten beers and drove home” is normally only uttered if the events occurred in that order, even if the ordering is not part of the truth conditions.

Jeff doesn’t want to accept any of that. He argues that the most plausible story about the semantics of (1) has the intuitive truth conditions fall out as being the truth conditions. The first point to note is that every sentence in English (and every other natural language) is tensed, and the tenses are presumably part of the semantic content. So “Charlie drank ten beers” has as its semantic content ∃t (t is in the past)(Charlie drinks ten beers at t). Importantly, the quantifier here is restricted. Whether Charlie drank ten beers at Bill Clinton’s second inaugural doesn’t really matter to the truth of an ordinary utterance of “Charlie drank ten beers” unless for some reason we are talking about Clinton’s second inaugural.

Arguably (and better philosophers than I have persuasively argued for this at length) every sentence that isn’t in the present tense literally expresses a proposition that contains a quantifier over time. And this quantifier isn’t present because of some mysterious pragmatic process, it’s encoded in the verbs of the sentence, just like most semantic content is encoded somewhere in surface structure. And what goes for whole sentences goes for constituent sentences too, so to a first approximation, the semantic content of (1) is (2).

(2)      If ∃t1 (Past t1)(Charlie drinks ten beers at t1) and ∃t2 (Past t2)(Charlie drives home at t2) then ∃t3 (Past t3)(Charlie breaks the law at t3).

This isn’t much help yet, but if we also hold (a) all three quantifiers here are restricted, and (b) the restrictions are somehow co-ordinated, then we can have the semantic content of (1) really be something like (3).

(3)      If ∃t1 (Past t1)(Salient t1)(Charlie drinks ten beers at t1) and ∃t2 (Past t2)(t2 is shortly after t1)(Charlie drives home at t2) then ∃t3 (Past t3)(t3 = t2)(Charlie breaks the law at t3).

This is obviously very rough, because as it stands we’ve got variables appearing outside the scope of the quantifiers that bind them, but at least this is a workable suggestion for how (1)’s truth conditions might match its intuitive truth conditions. And to the extent that the argument for radical pragmatic theories was premised on the assumption that there isn’t even a workable suggestion for how (1)’s truth conditions might match its intuitive truth conditions, well those arguments are looking fairly weak. (That would include some arguments I’d previously adopted. Oh well - you can’t be right all the time.)
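One rough way to see how the coordinated, restricted quantifiers in (3) could deliver the intuitive truth conditions is to evaluate them against a toy model. The Python sketch below is entirely my own illustration (every name is invented, and the ‘salient’ and ‘shortly after’ restrictions are crudely stubbed as numeric time points):

```python
def holds_3(drink_times, drive_times, lawbreak_times, shortly=1):
    """A crude rendering of (3): for any (salient) past time t1 at which
    Charlie drinks ten beers, if she drives home at some t2 shortly after
    t1, then she breaks the law at t3 = t2. The t2 and t3 quantifiers are
    restricted by, and coordinated with, t1."""
    for t1 in drink_times:                   # candidate drinking times
        for t2 in drive_times:
            if 0 < t2 - t1 <= shortly:       # restriction: t2 shortly after t1
                if t2 not in lawbreak_times: # coordination: t3 = t2
                    return False
    return True

# Drinking at t=1, driving at t=2 (shortly after), law broken at t=2: true.
print(holds_3({1}, {2}, {2}))
# Same drinking and driving, but no law-breaking at t=2: false.
print(holds_3({1}, {2}, set()))
# Driving long before the drinking: restricted antecedent unsatisfied, true.
print(holds_3({5}, {2}, set()))
```

The third case is the important one: because the quantifier over driving times is restricted to times shortly after the drinking, driving that precedes the drinking doesn’t make the antecedent true, which is just the intuitive reading of (1).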

But not all the examples of alleged separation between truth conditions and intuitive truth conditions are handled with quite such ease.

(4)      If Hannah insulted Joe and Joe resigned, then Hannah is in trouble.

As Jeff and Jason note, (4) “seems to express the proposition that if Hannah insulted Joe and Joe resigned as a result of Hannah's insult, then Hannah is in trouble.” The suggestions above about using restricted quantifiers over times won’t help here, because they won’t get the causal link between Hannah’s (possible) insult and Joe’s (possible) resignation into the proposition. So what can our heroes do? They start by taking a rather sensible approach: when in trouble, ask What Would Bob Stalnaker Do?

As Robert Stalnaker has argued, indicative conditionals normally exploit a similarity relation that counts only worlds compatible with the mutually accepted background assumptions as the most similar worlds for purposes of semantic evaluation. … An indicative conditional is true if and only if the consequent is true in every one of the most relevantly similar worlds in which the antecedent is true. (King and Stanley, 48)

Well, I’m not sure that’s exactly what Stalnaker said, for reasons that shall become apparent presently. Anyway, applying this theory to (4) we get the following conclusions.

Fortunately, however, there is no reason to give a non-semantic account of the intuitive readings of (4). The relevant reading of (4) is simply predicted by the semantics for indicative conditionals that we have endorsed. In a context in which the speaker has in mind a causal relationship between Hannah's insulting of Joe and Joe's resignation, all relevantly similar worlds in the speaker's context set in which Hannah insulted Joe and Joe resigned, will ipso facto be ones in which Joe's resignation is due to Hannah's insult. The speaker's context set is what is epistemically open to her. This may include worlds in which the conjunction holds, and there is no causal relationship between the conjuncts. But given that she has a causal relationship saliently in mind, such worlds will not be the most relevantly similar worlds in the context set. So, if she has a causal relation in mind between the two events, that is just to say that the similarity relation for indicative conditionals will select those worlds in which there is a causal relationship between the conjuncts of the antecedent as the most similar worlds to the world of utterance in which the antecedent is true. So, the causal reading of (4) is predicted by the simple semantics for the indicative conditional that we have adopted above. (King and Stanley, 53, numbering adjusted.)

Imagine that all the following circumstances obtain:

(5)      Jeff and Jason are right about the semantics of indicative conditionals;
(6)      Hannah recently insulted Joe;
(7)      Shortly after that, Joe resigned;
(8)      Joe’s resignation was not due to Hannah’s insult
          (in fact it was because he just realised he always wanted to be a lumberjack);
(9)      Hannah is not in trouble;
(10)    Someone uttered (4) knowing (6) and (7), but not (8).

In those circumstances, I think the utterance of (4) may well be true. All the epistemically open scenarios in which (6) and (7) are true are ones in which Hannah is in trouble. And according to Jeff and Jason, the antecedent of (4) is true iff (6) and (7) are true. So all (epistemically) nearby worlds in which the antecedent is true are worlds in which the consequent is true, and hence the utterance of (4) is true.

But, per hypothesis, the actual world is also a world in which (6) and (7) are true, and hence the antecedent of (4) is true. And the actual world is a world where the consequent of (4) is false. So the actual world is a world where the premises of the following argument are true and the conclusion false.

If Hannah insulted Joe and Joe resigned, then Hannah is in trouble.
Hannah insulted Joe and Joe resigned.
So, Hannah is in trouble.

So modus ponens is not a valid argument form. Something may have gone awry. There are two problems here, both of them potentially serious. First, on the formal semantics Stalnaker adopts for the indicative conditional, modus ponens is valid, yet Jeff and Jason claim just to be implementing Stalnaker, and they’ve ended up rejecting modus ponens. Either Stalnaker has got his own theory wrong, or Jeff and Jason have got him wrong.
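The counterexample can be checked mechanically. Here is a toy model (the names and the two-world context set are my own stipulations, not anything in King and Stanley) that evaluates (4) by their Stalnaker-style rule and then checks the result against the actual world as described in (5)-(10):

```python
# Toy model of the King-Stanley/Stalnaker semantics for (4).
# Worlds record: did Hannah insult Joe? did Joe resign? was the
# resignation caused by the insult? is Hannah in trouble?
from collections import namedtuple

World = namedtuple("World", "insult resign causal trouble")

# The actual world, per stipulations (6)-(9).
actual = World(insult=True, resign=True, causal=False, trouble=False)

# The speaker's context set: what is epistemically open given
# knowledge of (6) and (7) but not (8).
context_set = [
    World(True, True, True, True),    # causal link, so trouble
    World(True, True, False, False),  # no causal link, no trouble (= actual)
]

def antecedent(w):
    return w.insult and w.resign

def consequent(w):
    return w.trouble

def most_similar(worlds):
    # The speaker has a causal link saliently in mind, so the similarity
    # relation selects only the causal worlds as most relevantly similar.
    return [w for w in worlds if w.causal]

def indicative(worlds):
    # True iff the consequent holds at every most-similar antecedent-world.
    selected = [w for w in most_similar(worlds) if antecedent(w)]
    return all(consequent(w) for w in selected)

premise1 = indicative(context_set)  # (4) comes out true on this semantics
premise2 = antecedent(actual)       # true ex hypothesi
conclusion = consequent(actual)     # false ex hypothesi

print(premise1, premise2, conclusion)  # True True False: modus ponens fails
```

Both premises come out true and the conclusion false, which is just the failure of modus ponens complained about above.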

Secondly, THEY’RE REJECTING MODUS PONENS. Isn’t this something that should be a serious issue? I mean, it’s at least somewhat surprising. Not as surprising as, say, the fact that Rocky VI is going to get made. But surprising. Reading through Jeff and Jason’s papers, and certainly listening to Jeff, one gets the impression that the views they’ve lined up against are seriously flawed in some way or other. I do hope that rejecting modus ponens is not the only alternative to these positions.

posted by Brian Weatherson 12/15/2002 01:45:00 AM

Sometimes I think it would be fun to run a critical thinking course focussing on how to spot fallacious reasoning that only ever used examples drawn from the contemporary media. Depending on how sensitive Brown students are, I could end up getting accused of every sort of bias imaginable. (And the evidence is that some of them are much too sensitive.) But I don’t have such a course yet, so I’ll have to stick to the blog. This is from the Washington Post.

"This Lott story has continued primarily because of criticism from conservatives," said Whit Ayres, a Republican pollster based in Atlanta. “If the only people raising doubts were Jesse Jackson and Al Sharpton, this story would have died of its own weight several days ago. It's the anguish from conservatives that has kept the story going.”

Um, yeah. The hidden premise here is that the only people who ‘raised doubts’ were Jesse Jackson, Al Sharpton and conservatives. Given that extra premise, the conclusion that “it's the anguish from conservatives that has kept the story going” would, I guess, follow. And you know, if you’re prepared to count Josh Marshall, Paul Krugman and Al Gore as conservatives, well, the hidden premise still wouldn’t be true, but at least there wouldn’t be a refutation I could find within five seconds of scanning the NY Times.

posted by Brian Weatherson 12/15/2002 12:13:00 AM


The following strikes me as a pretty persuasive argument against a thorough-going process reliabilism. Since I’m no expert on the field, I don’t know how similar it is to existing arguments against process reliabilism, which is to say that if this turns out to be a boring repetition of familiar points, well at least it wasn’t intentional plagiarism.

Process reliabilism says that the justification of a belief is proportional to the reliability of the process that generated the belief. This raises the generality problem, as stressed in Conee and Feldman’s 1998 paper - what is the process by which the belief is generated? Or, to put the point more obscurely, what are the individuation conditions for the process types being used in this formulation? At one level the generality problem is the problem of making the basic claim of process reliabilism contentful - if we are prepared to count gruesome enough types, then every belief is the product of some very reliable processes and some very unreliable processes. But let’s assume that problem has been handled.

At another level, the generality problem raises a tension that I think can’t be resolved for a full-blown process reliabilist. On the one hand, we want processes to be instantiated more than once, or else we’ll be led to the crazy view that a belief is justified iff it is true. So we don’t want the individuation to be too fine-grained. On the other hand, the definition of justification entails rather immediately (so immediately that it might surprise you to learn how long it took me to realise this) that every belief generated by the same process is equally justified. To the extent that justificatory status can be very sensitive to the particular way a belief is formed, this implies we want processes to be individuated quite finely. I think, and I think I have an example that supports this, that these two constraints can’t be satisfied at once. Onto the example…


DIAGNOSIS. Morgan is displaying symptoms S. Dr. Watson knows that symptoms S normally imply that the patient has a liver disease. But he also knows that in some cases, happily enough in all and only cases where the patient has genetic condition C, a patient with symptoms S doesn’t have a liver disease, but in fact has a kidney disease. Dr. Watson also knows that genetic condition C is rare: only 1% of males and 7% of females have it. And he knows that there’s no easy way to test for whether a patient has condition C, for usually it has no readily observable effects. And he knows he has no other relevant information about whether Morgan has condition C. So Watson concludes that Morgan has a liver disease.

How justified is Dr. Watson’s belief?

I think you don’t know enough to say yet, because you don’t know whether Morgan is male or female. If Morgan is male, then Watson’s belief is very well justified. If Morgan is female, then Watson’s belief isn’t particularly well justified, for he should be taking more seriously the possibility that Morgan has condition C.  Even in that case, it isn’t a disastrous belief, but not as well justified as in the case where Morgan is male. Since the two possible beliefs are not equally well justified, we need to say that they are the results of different processes.

That alone might not be a problem. Perhaps we can find a different way of categorising beliefs such that the belief that a male patient displaying S has a liver disease falls into a different category than the belief that a female patient displaying S has a liver disease, though I’m not entirely convinced that existing (pure) reliabilist theories have the resources to do this.

The problem is that the example generalises. If x and y are both relatively small numbers, and Watson knows that x% of males have condition C and y% of females do, then his conclusion that Morgan has a liver disease is more justified if Morgan is male rather than female for any such x and y, even if they are very close, say x = 4.5 and y = 5, or even, I’d guess, if x = 4.5 and y = 4.51.
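To check the arithmetic, here is a quick sketch (function name mine; it assumes, as the example stipulates, that a patient with symptoms S has the kidney disease exactly when they have condition C):

```python
# Reliability of the process "patient shows S, so conclude liver disease"
# within a group is just 1 minus that group's rate of condition C.

def reliability(rate_of_C):
    """Chance the liver-disease diagnosis is right for an S-patient."""
    return 1 - rate_of_C

print(reliability(0.01))    # males: ~0.99
print(reliability(0.07))    # females: ~0.93

# The point generalises: any gap in the rates, however small, gives the
# two groups different reliabilities, so the process reliabilist must
# count the male and female cases as different processes.
print(reliability(0.045))   # x = 4.5%: ~0.955
print(reliability(0.0451))  # y = 4.51%: ~0.9549
```

However close x and y are, the two reliabilities differ, which is what drives the individuation problem in the next paragraph.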

That means that we’re going to have to posit infinitely many different categories of belief-forming processes, just to account for all the different possible processes via which Watson could form the belief that Morgan has a liver disease. The problem is that when categories of belief-forming processes get so fine-grained, we will start to get some lucky guesses counting as justified beliefs, because they are the only beliefs ever formed by their process, and some unlucky reasoned judgments counting as unjustified beliefs, again because of the small sample size. This I take it should be intolerable.

One response to related problems raised in the 1980s was to modalise the notion of reliability. Maybe I’ll come back to that in later posts, but I think it should be pretty clear that won’t help. The problem is that there are too many darn worlds to count up the successes and failures of a process, and no other approach to summarising the data from nearby possible worlds seems to be much use.

This is not a problem for theories of justification that incorporate some aspects of process reliabilism but also build in some more traditional internalist evaluations of modes of reasoning. Ernie Sosa’s virtue reliabilism is like this, and my theory, which is reliabilist about observational beliefs and (sorta kinda) foundationalist about non-observational beliefs, isn’t either. But a theory that is all process reliabilism all the time really looks like it has problems with DIAGNOSIS.

posted by Brian Weatherson 12/14/2002 02:20:00 AM


There’s been a rush on the vagueness experiment in the last few hours, from where I have no idea. Anyway, as best I can tell from looking through the counters (and taking into account comments like Ehud’s that they hit some of the counter pages because they were just looking around) the score is now Consistency 53 - Contextualism 12. I’m going to be away from the computer for a few hours - at this rate the over/under for the combined score when I get back is about 100.

UPDATE: It turns out that the flood of responses to the vagueness experiment is because of this rather kind link by Matthew Yglesias, who runs one of the best combined academic/political blogs around. Go read it, and if you agree you can even vote for him in Dwight Meredith’s Koufax Awards. The Koufax Awards are for the best lefty blogs around, and are allegedly named after the best lefty pitcher ever. Though in that case why they aren’t named the Grove awards is a bit of a mystery. Perhaps it’s because if the award were really for the best lefty pitcher they’d have to change their name to the Johnson awards sometime between when Randy starts next year’s All-Star game and when he wins next year’s Cy Young award. Oh, in the experiment the score is now 71-18, so everyone who took the under on the bet I mentioned above wins.

posted by Brian Weatherson 12/13/2002 03:06:00 PM


Via Martijn Blaauw I got a notice of this graduate student conference in epistemology to be held in Amsterdam next May. It looks like fun, and not just because it’s in Amsterdam. Anyone who can get funding for going to Amsterdam and participating in a fun philosophy conference should pause and reflect on just how much good fortune they possess. Sometimes grad students have all the luck!

UPDATE: I didn't read the fine print very closely. It seems the deadline for submitting papers to this conference has passed. I don't know how strict they will be about enforcing things like deadline rules. (It's at the Free University of Amsterdam, you'd think there wouldn't be things like rules anyway.) But if they are strict this isn't as appealing as it first looked. Thanks to Alyssa Ney for picking up this little detail that I missed.

posted by Brian Weatherson 12/12/2002 10:11:00 PM

I’ve been emailing with Adam Elga about his Dr Evil paper (and my objections to it) and on at least one point I’ve been totally trounced. I said that predicaments were only situations that were in some way unpleasant, but Adam used the term to cover all sorts of situations, even ones involving the twins from the Coors Light commercials. Adam replied that Michael Jordan uses ‘predicament’ the way he does.

“We've got 26 wins and we still have 35 games left,” Jordan said. “We've got a good chance of putting ourselves in a good predicament, which all along I felt like we could. In some ways you want to think greedy, but nut-cutting time is starting to come.”

You know I don’t think I’ve ever lost an argument so convincingly since the last time I tried disagreeing with Tim Williamson.

posted by Brian Weatherson 12/12/2002 10:06:00 PM
