Monday, January 01, 2007

 

Elga on Disagreement

Happy new year to everyone!

The Eastern APA finished on Saturday, and I'd like to post about my impressions at some point soon. Joe Salerno, who I finally got to meet at the APA, already has a long post up on Knowability about some of the talks about modality that went on. What I want to talk about here is Adam Elga's interesting talk 'Reflection and Disagreement' that took place on Saturday morning - I had a question I wanted to ask, but unfortunately we ran out of time before I could ask it.

I should say first off that I haven't read the full paper, so my apologies if this issue is discussed there.

Elga's talk had two main purposes. Firstly, he wanted to persuade us of the equal weight view of peer disagreement, and secondly to argue that the equal weight view doesn't entail 'spinelessness', that is, that rationality demands sitting on the fence about almost every issue.

Peer disagreement occurs when you and someone you regard as your epistemic peer come to opposing verdicts about some factual claim despite having just as good access to the evidence relevant for assessing the claim in question. You count someone as your epistemic peer if 'conditional on the two of you disagreeing, you think it is just as likely that you will be mistaken as that your friend will be'. The equal weight view simply says that in peer disagreement you should give both verdicts (yours and your peer's) equal weight.
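Elga doesn't put the view in these terms in the talk as summarized here, but one common way to make the equal weight view concrete is as straight averaging of credences between you and your peer. A minimal sketch (the function name and the numbers are mine, purely illustrative):

```python
def equal_weight(my_credence, peer_credence):
    """Pool two credences in a claim by straight averaging --
    one common formalization of the equal weight view."""
    return (my_credence + peer_credence) / 2

# I'm fairly confident the claim is true (0.8); my peer leans
# towards its being false (0.3). Equal weight says split the
# difference, landing near 0.55.
pooled = equal_weight(0.8, 0.3)
```

On this reading, 'spinelessness' is just the observation that when peers cluster on opposite sides of a claim, the pooled credence gets pushed towards 0.5, i.e. towards suspension of judgment.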

The argument for the equal weight view is also gratifyingly simple - as Elga stresses, it's a version of the familiar bootstrapping arguments commonly wielded against reliabilism and, more recently, dogmatism. Prior to the disagreement, you regard it as just as likely that you would be mistaken, should such disagreement arise, as that your peer would be. Suppose you then in fact reach different verdicts about some factual claim (but that you haven't yet been able to check with some third party which of you is right). If it were reasonable for you to invest greater confidence in your own verdict than in your peer's, then 'you would have gotten some evidence that you are a better judge than your friend, since you would have gotten some evidence that you judged this race correctly, while she misjudged it. But that is absurd.' So you should think it's just as likely that your peer is correct regarding this claim as it is that you are.

The worry is that we're likely to disagree with people we regard as peers on most interesting issues: philosophical and non-philosophical. If these people all take opposing positions, as happens all the time in philosophical debates, it looks like the equal weight view entails a spineless suspension of judgment on all such issues, since it requires that you regard each participant in the debate who you regard as a peer as just as likely to be correct about the claim in question as you are.

Elga's suggestion is that in these 'messy real-world cases' it's in fact very unlikely that we'll regard all these people as genuine epistemic peers, even if we recognize and admire their intelligence, competence to judge the claim in question, relevant education, and so on. Elga imagines two people with views lying at opposing ends of the political spectrum arguing about whether abortion is morally permissible. Although Ann admires Beth's intelligence, education, knowledge of the facts and literature concerning that issue and so on, is it likely that Ann regards Beth as just as likely as herself to have judged this issue correctly? That seems pretty far-fetched. So while the equal weight view gives just the right verdict when we restrict our attention to disagreements with someone one genuinely regards as an epistemic peer, other people's verdicts about most interesting claims are so tied up with their verdicts about related matters that you are unlikely to regard them as just as likely as you to have gotten things right concerning this particular claim, given the frequently massive amounts of background disagreement between you and them. That's not wonderfully clear, so let me quote Elga himself (21-22 of the online version of the paper):

'...consider Ann and Beth, two friends who stand at opposite ends of the political spectrum. Consider the claim that abortion is morally permissible. Does Ann consider Beth a peer with respect to this claim? That is: setting aside her own reasoning about the abortion claim (and Beth's contrary views about it), does Ann think Beth would be just as likely as her to get things right?

The answer is "no". For (let us suppose) Ann and Beth have discussed claims closely linked to the abortion claim. They have discussed, for example, whether human beings have souls, whether it is permissible to withhold treatment from certain terminally ill infants, and whether rights figure prominently in a correct ethical theory. By Ann's lights, Beth has reached wrong conclusions about most of these closely related questions. As a result, even setting aside her own reasoning about the abortion claim, Ann thinks it unlikely that Beth would be right in case the two of them disagree about abortion.'

So the basic idea is that in disagreements concerning most interesting claims, the attitude one adopts to those one is disagreeing with is shaped by more widespread disagreement, making it unlikely that one will regard lots of those who have reached verdicts that conflict with one's own as epistemic peers. Thus the equal weight view doesn't entail spinelessness.

I'm worried we get spinelessness back as soon as we consider disagreement between one's epistemic superiors. Let's take an example. I've thought pretty hard about vagueness over the last while, and I'm reasonably up on the relevant literature. But I'm inferior to both Crispin Wright and Tim Williamson in terms of intelligence, relevant education and training, familiarity with the literature, etc. According to Elga's proposed solution to the problem of spinelessness, each of those guys is rational in regarding the other as mistaken and themselves as correct, given that they won't regard each other as peers in the relevant sense (perhaps because of background disagreement about logical revisionism, realism/anti-realism, etc). But prior to there being some revelation about the correct theory of vagueness (I know, I know), what attitude should I take towards such claims? The worry is that faced with my epistemic superiors taking a variety of conflicting stances concerning some claim, I should just suspend judgment (since the kinds of reasons that Elga suggests legitimize Wright and Williamson each believing the other to be mistaken wouldn't legitimize my regarding either as more likely to be right than the other). If there's anything in this line of thought, we get spinelessness back on a pretty wide scale despite Elga's attempt to avoid it.


Comments:
Hi Aidan,
On the way you've presented it, it sounds like spinelessness does come back in when you take into account the claims of your epistemic superiors, although I'm not sure exactly how the rationale goes for the claim that my superiors not regarding each other as peers provides no reason for me to regard one or the other as right. (I haven't read the paper either.)

One thing I was wondering is what the problem is with spinelessness? If there is genuine controversy and disagreement, then it seems like caution is the better path to take for epistemic reasons. Non-epistemic reasons of various sorts could favor one or the other view in particular circumstances, but if there is controversy I'm not sure what is so bad about sitting on the fence. I can see a problem arising if you are worried about how to get out of a spineless situation, e.g., if your epistemic superiors have an extended period of disagreement. In that situation it seems like most people would take one side over the other, but it isn't clear (I suppose the worry goes) what puts the epistemic spine back.

As for the particular situation you pose, it seems like one key thing that's missing is why they evaluated their evidence the way that they did, in addition to the fact that they came up with the conclusions that they did. If one or the other's theory required moves that looked wildly ad hoc, or an argument contained a non sequitur (happens even to the greats), then it seems you'd be justified in feeling one or the other is right in this case. With claims involving detailed theories that are interdependent, the claims will be harder to assess, but it seems like the same basic idea applies. To change examples a bit, this is the problem with "he said, she said" journalism, right? Journalists say one expert says p, another expert says not-p, and Joe Q. Folk is forced to straddle the fence. Whereas when the reasons for the claims are given, it becomes possible to avoid straddling the fence, although the mere fact of disagreement between knowledgeable parties might motivate some extra epistemic caution on the agent's part.
 
This is in response to Shawn. One problem with spinelessness might be that it's a plausible norm of practical reason that you ought not to do A for reason P unless you believe P. (This is a lot weaker than the real norm, but it's implied by it.) So if it's rationally required for you to suspend judgment about P, then it would be irrational for you to do A for reason P.

We often do things for aesthetic, ethical, political, etc. reasons about which there's controversy; if that controversy is among our epistemic peers then on the assumption that this requires suspension of belief we couldn't rationally act for such reasons. So, e.g.: my dad's my epistemic peer; he thinks I'm wrong that the Democrats are better for the country than the Republicans; hence (on the equal weight view) I couldn't rationally vote for a Democrat on the grounds that Democrats are better for the country than Republicans.
 
Maybe the problem I'm running up against is in the equal weight view itself. I'll try to illustrate what that is. It seems to imply what I will call sheepishness, that is, believing or suspending belief on the basis of the beliefs of your peers or superiors (maybe if the equal weight view recommends only suspension, or other negative doxastic moves, and never positive doxastic moves, it could be negative sheepishness). The disagreement seems like it should form (at most) a part of the evidence that you take into account, not override all other evidence. Maybe fully taking into account the reasoning used in reaching the conflicting views could convince one to suspend judgment. It seems a little bit off to just suspend belief on the basis of a disagreement without figuring out why you're disagreeing.

How about this situation to illustrate a worry. Immanuel and Georg are epistemic peers and disagree about whether p. Immanuel, attempting to follow the equal weight rule, suspends judgment about p. Georg, being a little slower, sees that Immanuel has suspended judgment and keeps his belief that not-p, since there is no longer a peer believing p. They're both trying to follow the principle, but one is a little slower about it.

Geoff's example clarifies why spinelessness is important. The principle of practical reason he mentions seems pretty reasonable. But I think the example supplies reason to reevaluate the principle that would require suspending belief on the basis of disagreement alone. One could have several reasons for thinking Democrats would be better leaders, and it seems like peer disagreement would be one reason to reconsider, but still one reason among others. Regarding someone as an epistemic peer doesn't guarantee that they've thought things through as much as you have, or given sufficient weight to certain relevant considerations you see as important (or been as perceptive, etc.). I guess that's a long-winded statement of what seems off about it.

Maybe this means I should just read Elga's paper... that would probably be the epistemically responsible thing to do...
 
Shawn,

I wasn't claiming that my superiors not regarding each other as peers provides no reason for me to regard one or the other as right. The point was rather this. What legitimizes Crispin taking a stance on some relevant claim is that he isn't forced to regard Williamson as a peer wrt that claim. Likewise for Williamson. That, according to the proposal under discussion, is what makes it rational for them to take up opposed positions about what they regard as a factual claim in the face of disagreement of this sort.

What would vindicate my rationality in taking one or other of these stances (assuming for the moment that I am indeed rational in doing so)? Let's say I disagree with Williamson over some claim x. By the bootstrapping argument, even if I regarded him as merely a peer, I couldn't dismiss his verdict about x just on the grounds that he was wrong about x (since he's reached a different verdict to mine, and I regard my own verdict as correct). But it would also clearly be absurdly presumptuous of me to dismiss his verdict because I refuse to regard him as a peer wrt x due to the background of deep disagreement between us; I'm just not in a position to do that, given his status as my superior. If there are a number of such figures taking incompatible stances about some disputed claim, Elga doesn't seem to have offered me any grounds for not giving them all credence, and so it looks like suspension of judgment is all I'm left with. That was the thought anyway.

In his response to Elga, David Christensen suggested we give up on Elga's label 'spinelessness', and replace it with 'open-mindedness'. So he wanted to defend something like Shawn's first reaction. I also really like Geoff's suggestion that part of what's wrong with spinelessness comes out when we examine the links to practical reasoning.

I'm intrigued by Shawn's case where one party suspends judgment before the other. I think I'll need to read the relevant material carefully before I feel in a position to weigh whether this is in fact a problem for the equal-weight view, but it's a neat suggestion.
 
Hey Aidan.
Might we be conflating a purely probabilistic with a more descriptive notion of epistemic superiority?

If we define epistemic superiority in terms of, say, learnedness, then Williamson is no doubt your superior wrt vagueness. But then I wonder why epistemic superiority should have particularly powerful consequences regarding what you should believe. Cheney is more learned than I am regarding foreign policy, but I trust my own views more, and I'm pretty sure I'm rational to do so. (The example kind of cooks the books, but I think the point shouldn't be too controversial.)

If we define epistemic superiority probabilistically (as it sounded like Elga was doing), then Williamson is your epistemic superior wrt vagueness iff he has a better chance than you of being right about vagueness. But it's not so clear to me that, on this definition, he is your superior. He knows a lot more about it. So prior to hearing his position, you should, _all things equal_, think he has a better chance of getting it right. But to really _know_ whether he's your superior (ie, whether he's more likely right about it), I think you'd have to evaluate his position based on your own study of vagueness (and also take other expert opinions into account).

In practice, things aren't always equal--in the same way as they weren't for the women disagreeing about abortion. You might find yourself disagreeing with him on philosophical stuff that's related to vagueness, but that isn't vagueness per se. And this might lead you to think he's less likely to be right about vagueness. And another way of saying that is that you regard him as an epistemic inferior, and you're not rationally required to believe him. The sentence is a bit jarring (Williamson my inferior?) but it's just a weird choice of terminology. So basically, the superior-inferior and equal-equal cases seem the same to me.
 
Actually, I think my last comment is kind of flawed.

Background disagreement explains why, prior to knowing Cheney's views on foreign policy, I already regard him as less likely than myself to be right (even though he's more learned, intelligent, etc). And so once we come to disagree, there's no problem about suddenly coming to think I'm a better judge than he is. (If that's the only problem, it seems to me the issue is whether it's really empirically true that there's significant background disagreement in these cases, regardless of whether there's a superior-inferior or peer-peer relation.)

It might be less likely that background disagreement will be able to play the same role in the Williamson-vagueness case.

Let's run the argument again with superiors instead of peers:

"Prior to the disagreement, you regard it as [more] likely that you would be mistaken if such disagreement were to arise as it is that your [superior] is the mistaken one. Suppose you then in fact reach different verdicts about some factual claim (but that you haven't yet been able to check with some third party who is right). If it were reasonable for you to invest greater confidence in your own verdict than in your [superior]'s, then 'you would have gotten some evidence that you are a better judge than your friend, since you would have gotten some evidence that you judged this race correctly, while she misjudged it. But that is absurd.' So you should think it's [more] likely that your [superior] is correct regarding this claim as it is that you are."

It seems to me that Shawn is right: more than the views of our superiors matter. And maybe the "third party" clause is supposed to cover this (if we take third parties to include things other than epistemic agents). Suppose I have some evidence conditional upon which I assign probability .7 to proposition p. Now suppose that my superior disagrees with p, and that I think my superior's chance of being right is 1.2 times mine. The question is how to reconcile the two assignments. Do I revise my conditional assignment of .7 to p in light of my superior's opinion? Or do I degrade my superior down to inferior or equal based on said evidence? It seems to me there is no general law, at least at this level of abstraction, regarding what would be rational.
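For the first of the two options the commenter mentions (revising in light of the superior's opinion), one natural reading of the numbers is a linear pool with weights proportional to each party's chance of being right. This is my own gloss, not anything from Elga's paper, and the superior's credence of 0.2 in p is invented for illustration:

```python
def weighted_pool(my_credence, superior_credence, reliability_ratio):
    """Pool two credences with weights proportional to each party's
    chance of being right. reliability_ratio is the superior's chance
    of being right divided by mine (1.2 in the commenter's example).
    One possible reconciliation scheme among several."""
    w_superior = reliability_ratio / (1 + reliability_ratio)
    w_me = 1 / (1 + reliability_ratio)
    return w_me * my_credence + w_superior * superior_credence

# My evidence supports p at 0.7; my superior denies p (credence 0.2,
# an invented figure) and is 1.2 times as likely as me to be right.
# The pooled credence lands a bit below 0.5.
pooled = weighted_pool(0.7, 0.2, 1.2)
```

With a ratio of 1.0 this collapses back to straight averaging, which is one way of seeing the peer case as a limiting instance of the superior case. The commenter's point survives the formalization: nothing in the numbers alone tells you whether to pool like this or to revise the reliability ratio instead.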
 
I don't know if people noticed, but Shawn's point about people who are slow to revise their credence was raised 3 days later here.

I'll be interested to see if anyone picks it up and discusses it over at TAR.
 