Wednesday, May 14, 2008
Knowledge How and Understanding
They claim (fn31) that the following case is a counterexample to any account of knowledge how that fails to make understanding necessary (as they sometimes put the condition, knowing how to X requires a minimal understanding of X-ing):
'Irina knows a way of doing a salchow, namely, by taking off from the back inside edge of her skate, jumping in the air, spinning, and landing on the back outside edge of her skate. Moreover, she knows that this is a way of doing a salchow (her coach told her). Suppose, however, that Irina is deeply confused about the concepts back outside edge and back inside edge. In particular, suppose that she takes her back outside edge to be her front inside edge and her back inside edge to be her front outside edge. (As per Burge, we take it that this degree of misunderstanding is consistent with attributing to Irina possession of the concepts back outside edge and back inside edge and associated propositional attitudes.) However, as in the case described above, Irina has a severe neurological abnormality that makes her act in ways that differ dramatically from how she actually takes herself to be acting. Whenever she actually attempts to do a salchow (in accordance with her misconception of a correct way of doing one) this abnormality causes her to reliably perform the correct sequence of moves. Despite the fact that what she is doing and what she takes herself to be doing come apart, she fails to notice the mismatch. (48, fn suppressed)
It's supposedly a counterexample since any account that fails to include the understanding condition (including Stanley and Williamson's better-known version of intellectualism) will count this as a case in which Irina knows how.
But I'm sceptical. Here's a skeletal account of knowledge how, inspired by Nozick's tracking account of knowledge that (we may call it the guiding account):
S knows how to X iff:
1. S is able to X (there's a lot of complexity being suppressed here, of course. See the preceding post for discussion)
2. S has a true belief about how to X
3. ~2 []-> ~1 (if S did not have a true belief about how to X, S would not be able to X)
4. 2 []-> 1 (if S had a true belief about how to X, S would be able to X)
If the tracking account gives some content to the idea that if one knows that p, one has the belief that p in virtue of p's being true, then the guiding account gives some (though admittedly not much) content to the idea that one successfully X's (or would successfully X under the right kind of conditions) in virtue of one's having a true belief about how to X.
These 4 conditions obviously raise as many questions and issues as they speak to, but let's not get into that here. The point for now is just that any account that endorses 4 seems to be in a position to deliver the result that Irina does not know how in John and Marc's case. For although in that case Irina is able to perform the jump, and she has a true belief about how to perform it (as a result of the testimony she's received from her coach), there is a relevant class of worlds in which she has the belief and yet she isn't able to successfully perform the jump: the class of worlds in which she lacks the 'severe neurological abnormality' which so fortunately cancels out her misunderstanding.
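For concreteness, the modal reasoning here can be put in a toy possible-worlds model. This is purely illustrative and no part of the original case: the representation of worlds, the choice of close worlds, and all names are my own assumptions.

```python
# Toy possible-worlds check of condition 4 of the guiding account
# (2 []-> 1: in the close worlds where S truly believes how to X, S is able to X),
# applied to Irina's case. Worlds are represented as simple dicts.

def able_to_jump(world):
    # Irina performs the salchow correctly only if her conceptual confusion
    # is cancelled out by the neurological abnormality.
    return world["has_abnormality"] or not world["confused_about_edges"]

def condition_4_holds(close_worlds):
    # Condition 4: in every close world where she has the true belief
    # about how to jump, she is able to jump.
    return all(able_to_jump(w) for w in close_worlds if w["true_belief"])

close_worlds = [
    # The actual world: confused, but the abnormality saves her.
    {"true_belief": True, "confused_about_edges": True, "has_abnormality": True},
    # A close world in which the abnormality is absent.
    {"true_belief": True, "confused_about_edges": True, "has_abnormality": False},
]

print(condition_4_holds(close_worlds))  # False: condition 4 fails, so no knowledge-how
```

The second world is what does the work: there Irina retains the true belief but fails, so the counterfactual in condition 4 is false and the guiding account withholds knowledge-how.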
The guiding account is obviously just a toy account, but the verdict it offers in the case at hand doesn't seem at all ill-motivated. Just as Gettier-man's belief that either Jones owns a Ford or is in Barcelona fails to count as knowledge because it was merely luck that he formed a true belief, Irina fails to know how to perform the jump because her success at performing it is too lucky; there are close worlds in which Gettier-man forms the belief by the same method and yet gets things wrong, and there are close worlds in which Irina tries to X, and has a true belief about how to X, but fails because the factor that cancels out her misunderstanding is absent. On this analysis of why Irina fails to know how in such a case, it's not significant that Irina misunderstands X-ing. The relevant features of the case are just these: there's some factor which would interfere with successful performance and, crucially, it's a matter of luck, in a pertinent sense, that there's a countervailing factor which stops it from doing so. Toy account or not, I think the guiding account provides the basis of a challenge to the claim that the case is a counterexample to any account that lacks the understanding condition.
Labels: Epistemology, Jason Stanley, Knowledge How, Tim Williamson
Wednesday, February 06, 2008
Does Knowing Figure Ineliminably in Causal Explanations?
'Given a potential substitute for 'knows', suppose that it does not provide a necessary and sufficient condition for knowing. One then constructs possible cases in which the failure of necessity or sufficiency makes a causal difference, making the proposed substitute not even causally equivalent to knowing. The potential substitute avoids this problem only if it does provide a necessary and sufficient condition for knowing.' (63)
So suppose one substitutes 'believes truly' for 'knows' in some causal explanation. To take Williamson's example, we can ask why a burglar spent the entire night ransacking a house, given that he increased his risk of being caught the longer he stayed. Start with the explanation that he knew that there was a diamond in the house. If we merely say he truly believed that there was a diamond in the house, we give a worse explanation of why he stayed so long, risking detection. For suppose he truly believed that there was a diamond in the house because he had been told that there was a diamond under the bed, when in fact the only diamond was in a drawer in the study. In that case, it seems he would be likely to give up his search after looking underneath the bed, having given up his true belief that there's a diamond in the house. Williamson now argues:
'Given suitable background conditions, the probability of his ransacking the house all night, conditional on his having entered it believing truly but not knowing that there was a diamond in it, will be lower than the probability of his ransacking it all night, conditional on his having entered it knowing that there was a diamond in it. In this case, the substitution of 'believe truly' for 'know' weakens the explanation, by lowering the probability of the explanandum conditional on the explanans.' (62)
If we try to get around this point by substituting 'believes truly without reliance on false lemmas', we can construct a scenario where the burglar's true belief does not rely on false lemmas, but he receives misleading evidence in the course of his search, which makes it less probable that he would risk staying the entire night than if he knew that there was a diamond there (63). We could switch to 'believes truly without reliance on false lemmas and with stubbornness in one's belief in the face of counterevidence', but then, since such stubbornness is not necessary for knowledge, we can find an alternative case in which the failure of necessity for knowledge makes a causal difference. And so on; for each proposed substitute for 'knows', there will be an argument that that substitute doesn't cut it.
Frank Jackson has a paper offering a response to this argument (which I think is appearing in Duncan Pritchard and Patrick Greenough's Williamson on Knowledge volume with OUP), but I have a different worry. I'm just not sure with what right Williamson can assume that for any substitute which does not provide necessary and sufficient conditions for knowledge, there will be a possible case in which 'the failure of necessity or sufficiency makes a causal difference, making the proposed substitute not even causally equivalent to knowing'. That seems to beg the question against someone who wants to maintain that reference to states of knowing is not essential in a given causal explanation; it's just an assertion that any non-equivalent substitute can't be causally equivalent.
So far I've been arguing that a crucial premise in Williamson's argument for the essentiality of reference to states of knowing in some causal explanations is question-begging; it's just an unwarranted assumption that any substitute for 'knows' which does not give necessary and sufficient conditions for knowing cannot be causally equivalent, and I don't see why Williamson's opponent should grant that. Without that premise in place, I don't see how consideration of a few cases suffices to get us to Williamson's conclusion. But clearly my point here would be much stronger if one could show that premise to be not just under-motivated, but actually mistaken. I think Williamson's own account of knowledge provides materials for one attempt to show this.
Williamson holds that knowledge requires safe belief; that is, the following subjunctive conditional must hold:
Safety: Bp []-> p
(In words: in the closest worlds in which you believe that p, p is true; your belief could not easily have been false.)
Now suppose our burglar enters the house with a confident, true belief that's not based on any false lemma that there's a diamond in the house, but that belief is unsafe. Now, how do we continue the case so that the failure of his belief to be safe makes it more likely that he'd leave the house before morning than if he knew? The relevant difference between him and his counterpart who knows is just that the modal space around them is different. I don't see how to continue the case so that this modal fact makes the kind of causal difference Williamson needs it to.
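To make vivid why the failure of safety is hard to convert into a causal difference, here is a minimal sketch of safety as truth of p across close belief-worlds. Everything here (the dict representation, the choice of neighbourhoods, the names) is my own illustrative assumption, not Williamson's formalism.

```python
# Safety, sketched: in all close worlds where the burglar believes there is a
# diamond in the house, there is one. An unsafe belief differs from a safe one
# only in its modal surroundings, not in any actual-world feature.

def is_safe(close_worlds):
    return all(w["diamond_present"] for w in close_worlds if w["believes_diamond"])

# Two burglars exactly alike in every actual-world respect:
actual = {"believes_diamond": True, "diamond_present": True}

safe_neighbourhood = [actual, {"believes_diamond": True, "diamond_present": True}]
unsafe_neighbourhood = [actual, {"believes_diamond": True, "diamond_present": False}]

print(is_safe(safe_neighbourhood))    # True
print(is_safe(unsafe_neighbourhood))  # False
```

The two burglars share the `actual` world; they differ only over a non-actual world. That is the point of the worry: it is hard to see how a difference located entirely outside the actual world could lower the probability of his ransacking the house all night.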
I don't mean to suggest that Williamson holds that knowledge is just confident, safe, true belief. Of course he'd reject this analysis of knowledge, just as he'd reject any other. Williamson writes, 'the search for a substitute for knowing in causally explanatory contexts is forced to recapitulate the history of attempts to analyse knowing in terms of believing, truth, and so on, a history which shows no sign of ending in success' (63). Though I'm not as pessimistic as Williamson on this score, I take the point. It is going to be very hard to specify the right substitute for 'knows' without just describing it as 'knowledge minus safety'.
But the general point I've tried to make should be clear despite these - admittedly difficult - complications; once we add modal constraints such as safety onto knowledge, it becomes hard to see on what grounds we should hold that any substitute which does not give sufficient conditions for knowledge must fail to be causally equivalent. And that's a crucial premise in Williamson's argument that reference to states of knowing is essential in some causal explanations.
Labels: Epistemology, Tim Williamson
Thursday, October 04, 2007
Anti-Luminosity: The Forgotten Premise
A condition C is luminous iff the following holds:
For every case A, if C obtains in A, then one is in a position to know that C obtains in A.
Tim Williamson's anti-luminosity argument is meant to offer a proof that no non-trivial conditions are luminous in this sense (a condition is trivial iff it obtains in every case, or else fails to obtain in every case).
Williamson proceeds, he claims without loss of generality, by way of a reductio of the claim that the condition that one feels cold is luminous. So from the definition of luminosity, we have as our claim for reductio:
(LUM) For every case Ax, if one feels cold in Ax, then one is in a position to know that one feels cold in Ax.
Now imagine a series of times (T0, T1, T2, ..., Tn) between dawn and noon, each a millisecond apart. Fixing the subject and the world, we obtain a series of cases (A0, A1, A2, ..., An) individuated by those times. One warms up very slowly, virtually imperceptibly, throughout this period. The following are also stipulated features of the case:
(COLD) In A0 one feels cold.
(WARM) In An one does not feel cold.
Williamson's two crucial premises are the following:
(REL) If in a case Ax, one knows that one feels cold, then in case Ax+1 one does feel cold.
(CON) If in a case Ax, one is in a position to know that one feels cold, then if one actively considers the matter, one knows in Ax that one feels cold.
On the assumption that one actively considers the matter of whether one feels cold in each case (A0, A1, A2, ..., An), (LUM), (CON) and (REL) together entail the following tolerance principle for one feeling cold:
(TOL) For every case Ax, if one feels cold in Ax, then one feels cold in Ax+1.
And now Sorites reasoning will easily demonstrate the inconsistency of (TOL) with (COLD) and (WARM). So, since Williamson thinks he can independently motivate all the relevant analogues of (REL), and takes (CON) to be plausible, it follows that we should reject (LUM) for any non-trivial condition.
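The inconsistency can be checked mechanically. Here is a sketch under my own illustrative assumptions - a particular n, a linear warming profile, and a stipulated threshold for feeling cold; none of these numbers come from Williamson:

```python
# Sorites check: given (COLD) at A0 and (WARM) at An, the tolerance
# principle (TOL) must fail somewhere in between.

n = 1000  # number of millisecond-steps between dawn and noon (illustrative)
temps = [5 + 15 * i / n for i in range(n + 1)]  # slow, even warming
threshold = 12.0  # one feels cold below this temperature (stipulative)

feels_cold = [t < threshold for t in temps]

assert feels_cold[0]       # (COLD): in A0 one feels cold
assert not feels_cold[-1]  # (WARM): in An one does not feel cold

# (TOL) claims: if one feels cold in Ax, one feels cold in Ax+1, for every x.
violations = [i for i in range(n) if feels_cold[i] and not feels_cold[i + 1]]
print(len(violations))  # 1: in this model (TOL) fails at exactly one step
```

Since any sharp or gradual profile that starts cold and ends not-cold must contain at least one cold-to-not-cold transition, (TOL) cannot hold at every step; conversely, if (TOL) held at every step, (COLD) would propagate by induction all the way to An, contradicting (WARM).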
Virtually every discussion of Williamson's argument, online or in print, has focused on the reliability premise (REL). I agree that this is a weak point in the reasoning, and it should be explored, but I believe its monopoly on attention has been deeply unfortunate - the issues surrounding (CON) have been completely neglected. Many presentations of the argument in the literature don't even acknowledge (CON) as a premise (Williamson's own presentations actually do acknowledge it). It's been the same online: in a recent discussion over at the Excluded Middle Errol Lord defines luminosity as I have above, and then when presenting Williamson's argument he writes:
'Now suppose feeling cold is luminous, which entails (2i):
(2i) If in Ax one feels cold, then in Ax one knows that one feels cold.'
(I've altered the reference to cases to be in line with that of the rest of my post here.)
I certainly don't mean to pick on Errol at all - the same move is made in several papers in print, and indeed, I only barely made things more explicit in my earlier post on this stuff. Moreover, Brueckner and Fiocco relegate the following remark to an endnote in their 2002 paper on the argument:
'Following Williamson, we will simplify things by speaking of knowing rather than being in a position to know.'
Over the next while I'll endeavor to convince you that this isn't a harmless simplification at all - we need to explore the issues surrounding (CON) just as much as those arising from (REL). (CON) is an instance of a more general principle, on which subsequent posts will focus:
Determination:
In every case Ax, if one is in a position to know that condition C obtains, then if one has done everything one is in a position to do to determine whether C obtains, then one knows in Ax that C obtains.
Williamson is quite explicit that as he intends to understand the notion of being in a position to know, Determination is a necessary condition on one's being in a position to know:
'To be in a position to know p, it is neither necessary to know p, nor sufficient to be physically and psychologically capable of knowing p. No obstacle must block one's path to knowing p. If one is in a position to know p, and one has done what one is in a position to do to decide whether p is true, then one does know p.' (Knowledge and its Limits: 95)
In posts to follow, I'll discuss the role of Determination in the debates on the viability of semantic anti-realism and on the constitutive norm of assertion. Stay tuned.
Labels: Anti-Luminosity, Epistemology, Tim Williamson
Friday, August 03, 2007
Is Knowledge-How Gettier-Susceptible?
Pettit's target is epistemic accounts of linguistic understanding, whereby understanding an expression just is, or at least requires, knowledge of some proposition stating that expression's meaning. Pettit's first attack on the epistemic view proceeds by offering a case in which a subject's belief in the proposition that 'Krankenschwester' means 'Nurse' is Gettierized, and yet we are strongly drawn to judge that the subject nonetheless understands 'Krankenschwester'. The natural reading of Pettit takes his argument to proceed as follows. The moral to draw from the case described is that understanding language, in stark contrast to propositional knowledge as it is usually understood by epistemologists, is unGettierizable. Hence the identification of linguistic understanding with propositional knowledge is untenable, and Gettier cases like the one Pettit offers will be examples of understanding without the relevant piece of propositional knowledge.
But it seems clear that Pettit does not need to make the really strong claim that understanding is unGettierizable. All he needs, and all his case strictly speaking shows, is that there are Gettier cases in which we are prone to judge that a subject understands some expression, and yet that subject's belief in the relevant proposition - the one knowledge of which, according to the epistemic view, constitutes or necessarily accompanies understanding of that expression - is Gettierized. That's enough to defeat the epistemic view.
(I should stress, I don't buy Pettit's conclusion. I've been convinced by Stanley in his reply to Hornsby that there's a lot still to be said in favor of the epistemic view. Here I'm simply pointing out that the argument need not rely on drawing the moral that understanding is immune to Gettierization.)
Pettit and others have suggested that his argument against the epistemic view can also be wielded against Stanley and Williamson's proposal that it is true that one knows how to x (in a given context) if and only if for some contextually relevant way w, one stands in the knowledge-that relation to the (Russellian) proposition that w is a way for one to x (and one entertains this proposition under a practical mode of presentation). The basic suggestion is that while propositional knowledge is usually taken to be vulnerable in Gettier cases, knowledge-how - like linguistic understanding - is unGettierizable.
Stanley and Williamson offer two replies to this (435). Firstly, they doubt that propositional knowledge in general is Gettier-susceptible. Secondly, they describe what they take to be a Gettier case for knowledge-how, undermining the claim that knowledge-how is unGettierizable.
But, as with the epistemic view of understanding, it would suffice to object to their proposal if it could be shown that there are Gettier cases in which Stanley and Williamson's biconditional fails: so cases in which we would on reflection attribute to the subject knowledge-how to x, and yet we'd hold that the subject fails to stand in the knowledge-that relation to the proposition that w (for some appropriate w) is a way for her to x because her belief has been Gettierized. On a first pass, Stanley and Williamson's responses don't seem to address this point; once we refrain from making the strong claim that knowledge-how is unGettierizable, their purported Gettier case for know-how is beside the point, and unless we have good reason to think that beliefs in propositions of the form 'w is a way for me to x' are amongst the ones that it might be plausible to regard as Gettier-immune, it's hard to see how it helps to point out that some knowledge-that might enjoy such immunity.
The upshot is that there still seems to be room to explore a version of the Pettit objection against the Stanley-Williamson proposal. Mark Sainsbury has suggested to me that if one accepts their account, it will be quite plausible that there could not be Gettier cases in which their biconditional fails (so their first reply, that not all knowledge-that is Gettier-susceptible, is very much to the point after all). As I've blogged before, Stanley and Williamson see themselves in part as trying to challenge the Rylean account of the nature of the knowledge-that relation, and Mark's suggestion is that once we take that challenge seriously, it's very difficult indeed to come up with Gettier cases in which it's intuitively the case that the subject knows how to x, and yet fails to possess the knowledge-that which the right-hand side of the Stanley-Williamson biconditional would have us attribute. So the suggestion is that once we understand the kind of view of the nature of knowledge-that which Stanley and Williamson favor, their proposal is not vulnerable to a Pettit-style objection, even as I've reconstrued it here. I think Mark's probably right about this, and there's no counterexample to Stanley and Williamson in the offing here, but it seems worth thinking more about, and I don't know of any discussion in the literature.
Labels: Epistemology, Jason Stanley, Knowledge How, Philosophy of Language, Tim Williamson
Thursday, July 19, 2007
Knowledge and Action
'It is clear enough that if we want multi-premise closure, we had better operate with a notion of probability according to which knowledge delivers probability 1.'
But I take it it's left open whether Hawthorne and Stanley really do retain MPC. Adopting a view on which knowledge delivers probability 1 may allow them to avoid the specific problems that the Fantl and McGrath proposal faced, but Hawthorne's own Knowledge and Lotteries pointed out that adopting knowledge-action principles, given the kind of sensitive-invariantist epistemology both Hawthorne and Stanley have expressed a preference for, may lead to trouble preserving MPC. (I argued last year that Hawthorne's proposal, while it may preserve the letter of single premise closure, seems unable to retain its spirit). Now, perhaps their new proposal linking knowledge to reasons for action might allow them to steer clear of those problems. But it would have been nice to see the details; in their absence, it seemed entirely unclear whether the new proposal really marked that much of an improvement over Fantl and McGrath with respect to MPC, regardless of whether knowledge delivers probability 1. As Hawthorne himself put it in his earlier discussion, ensuring that knowledge delivers probability 1 is just a 'first step' (182) towards a vindication of MPC.
But the main gripe I had, as with Hawthorne and Stanley's recent books, is the uncritical Williamsonianism which runs through the paper. That Williamson's anti-luminosity argument establishes its conclusion just seems to be taken for granted. Now, of course one doesn't want to have to reinvent the wheel every time one writes a paper. But on the other hand, it seems strange to treat Williamson's arguments almost like mathematical lemmas that one can simply help oneself to when needed, given their deeply controversial status. More importantly, I would really have liked to have seen a defense of the Williamson/DeRose proposal for handling putative counterexamples to the knowledge account of assertion. Virtually every time something that looks like a counterexample is produced, the reply is that it is a violation of the norm, but that's ok so long as we can find some excuse for the violator (he thought he was conforming to the norm, for example). Hawthorne and Stanley just seem to again take for granted that this kind of response is fine, but it's hard not to feel somewhat uneasy about it. Jennifer Lackey has given voice to these kinds of concerns in section 4 of her 'Norms of Assertion'.
Lastly, it seems relatively straightforward to convert Matt Weiner's counterexamples to the knowledge account of assertion in his excellent Phil Review paper into putative counterexamples to Hawthorne and Stanley's knowledge-action principle. (See p8 of this draft of 'Must We Know What We Say' for the counterexamples). Such cases are specifically designed to be cases in which the asserter doesn't even think their assertion conforms to the knowledge account, and yet we're meant to be pulled to judge that it was perfectly proper. Of course, there may be all kinds of ways to dodge the bullet here. But we're not given any clues about how to do that in the paper.
So, to sum up, the positive proposal of the paper seems really worth exploring, and the comparison with Fantl and McGrath's earlier attempt to articulate the relationship between knowledge and action is very welcome. I think Hawthorne and Stanley are right that this is an issue which has been unfairly neglected. But again I get the sense that chunks of Knowledge and its Limits are taken to be simply not up for discussion. That's very unfortunate. I'm sure our understanding of these aspects of Williamson's epistemology could well benefit from serious attention from Hawthorne and Stanley, but once again they don't receive it.
Labels: Epistemology, Jason Stanley, John Hawthorne, Tim Williamson
Friday, July 06, 2007
Mere Probability and Warranted Assertion
Here are just a couple of quotes I liked. First of all, there's Oliver Wendell Holmes Jr. on the KK principle (p62):
"I detest a man who knows what he knows."
"The abolitionists had a stock phrase that a man was either a knave or a fool who did not act as they knew to be right. So Calvin thought of the Catholics and the Catholics of Calvin. So I don't doubt do the more convinced prohibitionists think of their opponents today. When you know that you know persecution comes easy. It is as well that some of us don't know that we know anything."
More seriously, I thought a quote from Benjamin Peirce raised a nice issue. As part of a court case, he and Charles Sanders Peirce were to figure out the likelihood that a woman had signed her name perfectly twice, once on her will, and once on a purported secret 'second page' to that will nullifying any further will that might be drawn up on her behalf. The person who had produced both documents stood to gain all of the money if they were accepted as genuine, so obviously the authenticity of the documents became of great importance. The Peirces calculated that the chance that the lady had produced a second signature quite so like the first, with precisely the same distribution of distinctive strokes, was 1 in 2,666,000,000,000,000,000,000, and so they urged that the signature on the 'second page' must have been traced from the first page. Peirce testified that this number:
"transcends human experience. So vast an improbability is practically an impossibility. Such evanescent shadows of probability cannot belong to actual life. They are unimaginably less than those least things which the law cares not for.
The coincidence which is presented in this case cannot therefore be reasonably regarded as having occurred in the ordinary course of signing a name. Under a solemn sense of the responsibility involved in the assertion, I declare that the coincidence which has here occurred must have had its origin in an intention to produce it...[I]t is utterly repugnant to sound reason to attribute this coincidence to any cause but design." (p172-3)
I thought it was interesting that Peirce makes no bones about the merely probabilistic grounds for making this assertion, but he also acknowledges that he is in a situation which makes the standards for warranted assertion particularly high. There's a well-known argument from Williamson in favor of the knowledge account of assertion which proceeds by pushing the intuition that you cannot flat-out assert that a lottery ticket has lost (before the outcome of the draw is known), even if the probability that it hasn't lost is utterly minuscule. The diagnosis given is that mere probabilistic grounds are not enough for assertion; one needs to know. But who would resent Peirce's assertion on these grounds were he to make it?
Labels: Epistemology, Tim Williamson
Sunday, May 06, 2007
Anti-Realism: Decidability and Detectability
'It is a common misunderstanding of the thrust of the anti-realist's criticisms of the role assigned to truth in classical semantics that he believes that the central notion in the theory of meaning should be an effectively decidable one.'
Amen. My entire life-span later, and this is still a point which simply has not been absorbed by opponents of anti-realism (a rare exception is Darragh Byrne, but it's hard to think of any others). I'm writing a paper at the moment, tentatively titled 'Anti-Realist DOs and DON'Ts', in which I'll substantiate Crispin's point, and show how doing so enables the anti-realist to avoid most of the objections recently leveled at him. More specifically, it'll be a defense of compatibilist anti-realism; that is, anti-realism which wants to hold on to truth-conditional semantics (the term is Byrne's). It's been claimed that such a view either inflates into realism (Alex Miller) or must deny that truth is undecidable (Williamson), but these criticisms simply make the assumption Wright warned against all those years ago. More on this in the finished paper.
Here, I want to offer a taster. I'll present two very recent arguments against anti-realism which are not major targets of my paper, but which casually overlook the possibility of a compatibilist anti-realism which holds that truth is necessarily detectable, but is not decidable. First up is Michael Loux's piece on the debate in The Oxford Handbook of Metaphysics. On page 656 we get the following passage:
'...Dummett's more particular claim that the assertability theorist should trade in verification conditions (that is, conditions that conclusively justify assertion) must face the objection that the resulting theory of meaning has precisely the difficulties Dummett takes to be the undoing of the truth-conditional theory. The claim is that to know the meaning of a statement is to know what would conclusively warrant its assertion; but statements that are in principle undecidable are such that there neither is nor can be anything that would conclusively justify their assertion or their denial. [My emphasis]. But, then, where is the strategic advantage of the assertibility-conditional theory? As we have seen, Dummett tries to forestall this objection by identifying the apparently epistemic state with a practical, discriminatory ability - the ability to recognize, if presented with it, a condition which would justify assertion. But, of course [sic], that is an ability that is in principle incapable of being exercised. [My emphasis]. And how is the attribution of that sort of ability any improvement on the truth-conditional theorist's attribution of epistemic states that can never be manifested?'
Look carefully at the crucial lines I put in italics, and ask yourself: how does it follow from a statement's undecidability - that is, the lack of a procedure which could be followed in a finite amount of time, and which if correctly implemented would be bound to conclusively verify or falsify that statement - that there could not be anything that would conclusively justify its assertion? Only, I submit, if we run together undecidability and undetectability; we assume that just because there is no such procedure, there can be no evidence which would decide the matter one way or the other (and hence our discriminatory capacities for such statements must be 'in principle incapable of being exercised'). Dummett himself is perfectly explicit that this is a mistake in his most recent book:
'Although we may have no means, even in principle, of putting ourselves into a position in which we can effectively decide whether the proposition expressed by the utterance of a given sentence is or is not true, it does not follow that we may not come to recognize that proposition as true or false; we may sometimes, and indeed often do, decide the truth or falsity of utterances of undecidable sentences, in the sense I gave to this expression.' (58)
So I submit, Loux's objection simply rests on the conflation of decidability and detectability. Now, the problem may be that Loux appears to interpret undecidable statements to be those which are 'verification- or falsification-transcendent' (635). But there's no real justification for such an interpretation, even if a few passages in Dummett may unfortunately encourage it.
A much more striking example of a failure to appreciate this point, and to miss the possibility of a compatibilist anti-realism, comes in Charles McCarthy's 'The Coherence of Anti-Realism'. In the opening line of the abstract, we are told:
'The project of anti-realism is to construct an assertibility semantics on which (1) the truth of statements obeys a recognition condition so that (2) counterexamples are forthcoming to the law of excluded middle and (3) intuitionistic formal predicate logic is provably sound and complete with respect to the associated notion of validity.'
On page 948, we are further told that in order to 'ensure that the recognition condition obtains', the anti-realist assumes that the central notion in the semantics is decidable in the sense that a decision procedure 'is available in principle'. It turns out, McCarthy argues, that this package is incoherent. All the worse for anti-realism so conceived, I say. But we're given no reason to think that the anti-realist is committed to this conception in the first place.
This gives you a taste of just how casually critics of anti-realism have been willing to run together the crucial notions of decidability and detectability, despite (and without so much as an acknowledgment of) Crispin's explicit disavowal in print over twenty years ago. My paper, when it's done, will leave many important aspects of the debate completely untouched, but I think I'll have achieved something if I manage to start to change people's attitudes on this central issue. Seems unlikely on the face of it, but we'll see...
Labels: Anti-Realism, Crispin Wright, Dummett, Tim Williamson
Monday, April 09, 2007
Luminosity, Coziness, and Borderline Cases
A condition C is luminous iff the following holds:
(LUM) For every case A, if C obtains in A, then one is in a position to know that C obtains in A.
Tim Williamson's anti-luminosity argument is meant to offer a proof that no non-trivial conditions are luminous in this sense (a condition is trivial iff it obtains in every case, or else fails to obtain in every case).
Williamson proceeds, he claims without loss of generality, by way of a reductio of the claim that the condition that one feels cold is luminous. Imagine a series of times (T0, T1, T2, ..., Tn) between dawn and noon, each a millisecond apart. Fixing the subject and the world, we obtain a series of cases (A0, A1, A2, ..., An) individuated by those times. One warms up very slowly, virtually imperceptibly, throughout this period. So the following are stipulated features of the case:
(COLD) In A0 one feels cold.
(WARM) In An one does not feel cold.
Moreover, one is constantly attentive to whether or not one feels cold, so that if one were in a position to know one feels cold, then one would know that one feels cold. So we endorse the following strengthened instance of (LUM):
(LUM+) For every case Ax, if one feels cold in Ax, then one knows that one feels cold in Ax.
Essentially, the proof demonstrates the inconsistency, given a series of cases as described, of (LUM+) with the following principle:
(REL) If in a case Ax, one knows that one feels cold, then in case Ax+1 one does feel cold.
The reasoning is very familiar. (LUM+) and (REL) together entail the following tolerance principle for one feeling cold:
(TOL) For every case Ax, if one feels cold in Ax, then one feels cold in Ax+1.
Sorites reasoning will easily demonstrate the inconsistency of (TOL) with (COLD) and (WARM). So, since Williamson thinks he can independently motivate all the relevant analogues of (REL), we should reject (LUM) for any non-trivial condition.
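The derivation can be set out explicitly (a sketch in my own notation, not Williamson's: I write Cold(Ax) for 'one feels cold in Ax' and K for 'one knows that'):

```latex
\begin{align*}
1.\quad & \mathrm{Cold}(A_x) && \text{assumption}\\
2.\quad & K\,\mathrm{Cold}(A_x) && \text{from 1, by (LUM+)}\\
3.\quad & \mathrm{Cold}(A_{x+1}) && \text{from 2, by (REL)}\\
\therefore\quad & \forall x\,[\mathrm{Cold}(A_x)\rightarrow\mathrm{Cold}(A_{x+1})] && \text{(TOL), discharging the assumption}
\end{align*}
```

From (COLD) we have Cold(A0); n applications of (TOL) then give Cold(An), contradicting (WARM).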
Most of the existing literature on the anti-luminosity argument has tried to undercut the motivation for (REL) and its relatives. However, several philosophers--most explicitly John Hawthorne--have taken a different line of response. They've suggested that nothing in Williamson's argument rules out that non-trivial conditions might be cozy rather than luminous. A condition C is cozy iff:
In every case A in which C determinately obtains, one is in a position to know that C obtains.
The suggestion, of course, is that by restricting our luminosity claims to the determinate (i.e. non-borderline) cases of C, we can side-step the anti-luminosity argument. Implicit in this suggestion is recognition of the following feature of the argument as Williamson presents it: Williamson's argument targets the luminosity of a condition by targeting the luminosity of its borderline cases. Brian Weatherson has recently stressed this point:
'The counterexamples to Luminosity we get from following this proof through are always borderline cases of C obtaining. In these cases Luminosity fails because any belief that C did obtain would be unsafe, and hence not knowledge.' ('Luminous Margins': 374)
And in a recent paper Williamson himself writes:
'The strategy is to construct a sorites series between a case in which the condition clearly obtains and one in which it clearly fails to obtain. Luminosity must fail close to the boundary between cases where the condition obtains and cases where it does not, just on the obtaining side.' ('Contextualism, Subject-Sensitive Invariantism and Knowledge of Knowledge': 230)
But notice that this result only holds on the assumption that borderline cases of a condition obtaining are to be thought of in more or less the same manner as the Epistemicist would have us think of borderline cases of a vague predicate. On a familiar rival picture of borderline cases of vague expressions, they give rise to truth-value gaps (for example, the account of vagueness offered by Supervaluationists). Extending that picture to the current context, we obtain the following characterization of what it is for a case B to be a borderline case of a condition C obtaining:
C does not obtain in B and ~C does not obtain in B.
(We'll need to make sense of a distinction between C failing to obtain and ~C obtaining on this picture. I won't worry about how to do that here).
Now, Williamson claims that his anti-luminosity argument is neutral with respect to the correct account of vagueness and hence, one would hope, with respect to what the correct account of the nature of borderline cases is. But notice that on the envisaged account of what it is for a case B to be a borderline case of a condition C obtaining, no borderline case of C can be a counterexample to the luminosity of C. Such a counterexample would be a case in which C obtained, only unknowably. But clearly on this conception, no borderline case of C obtaining is a case of C obtaining unknowably, since no borderline case of C obtaining is a case of C obtaining. Hence, as advertised, no borderline case of C obtaining can be a counterexample to the luminosity of C.
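The point can be put schematically (again, my notation rather than anything in Williamson):

```latex
\begin{align*}
\text{Counterexample to luminosity of } C \text{ at } B:\quad
  & C \text{ obtains in } B \;\wedge\; \neg K(C \text{ obtains in } B)\\
\text{Gappy borderline case } B \text{ of } C:\quad
  & C \text{ does not obtain in } B \;\wedge\; \neg C \text{ does not obtain in } B
\end{align*}
```

The first conjunct of the counterexample condition directly contradicts the first conjunct of the gappy characterization, so on the gappy picture no borderline case of C can be a counterexample to the luminosity of C.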
However, Williamson's proof does, as he claims, go through as before. If one thinks of borderline cases in the 'gappy' way just sketched, Williamson's premises entail that the last case of C determinately obtaining is a counterexample to the luminosity of C. But now note the following: this case will also provide a counterexample to the coziness of C.
What's the upshot of all of this? Well, whether the coziness response to the anti-luminosity result has any chance of success depends on what our view of borderline cases is. Hawthorne himself has tended to favor an Epistemicist-like account of borderline cases over a gappy one, so presumably this result wouldn't bother him too much. However, a gappy view is still the dominant one amongst philosophers of vagueness. It seems, then, that not many philosophers are in good shape to offer the coziness response to Williamson.
(The preceding ignores the effects of analogues of higher-order vagueness. I leave it to those much smarter than me to figure out how taking HOV seriously here would change the conclusions that could be drawn.)
Labels: Epistemology, John Hawthorne, Tim Williamson, Vagueness
Friday, December 15, 2006
The Philosophy of Philosophy
Proper posts soon, I promise.
Labels: Links, Tim Williamson
Thursday, September 28, 2006
Vagueness in the Nineties

Something I'd like to know a lot more about is the initial reaction to Williamson's defense of epistemicism. It's hard not to get the feeling that the idea that bivalence could be seriously defended for vague discourse was regarded as pretty laughable even in the years leading up to the publication of Vagueness, and this despite well-known attempts to get the view on the table by James Cargile and Roy Sorensen as early as 1969 and 1988 respectively. (Not to mention Williamson's flirtation with the view in his first book).
Here's a quote that highlights the kind of attitude that seems to have been around at the time, taken from the introduction to Wright's Realism, Meaning & Truth, first published in 1987:
'To suggest that Bivalence is, or should be, the hallmark of realism everywhere is accordingly to be committed to claiming either that there is no such thing as realism about vague discourse, or that the vagueness of a statement, whatever it is held to consist in, is a feature consistent with its possession of a determinate truth-value. Neither suggestion is remotely plausible.' (p4)
I don't think Wright would find the view much more plausible now - but he'd certainly recognise that more work would have to be done to dismiss it.
We find Sainsbury writing in 1990's 'Concepts Without Boundaries':
'Sets have sharp boundaries, or, if you prefer, are sharp objects: for any set, and any object, either the object quite definitely belongs to the set or else it quite definitely does not. Suppose there were a set of things of which "red" is true: it would be the set of red things. However, "red" is vague: there are objects of which it is neither the case that "red" is (definitely) true nor the case that "red" is (definitely) not true. Such an object would neither definitely belong to the set of red things nor definitely fail to belong to this set. But this is impossible, by the very nature of sets. Hence there is no set of red things.
This seems to me as certain as anything in philosophy...'
(p252 in the Keefe and Smith reader. My italics)
The first edition of his book Paradoxes, published in 1987, also defined the phenomenon of vagueness in a way that left no room for epistemicism. By 1995's second edition, things have shifted:
'I found my earlier discussion of vagueness very unsatisfactory, in the main because it defined vagueness in such a way as to exclude the epistemic theory. I do not accept this theory, but Timothy Williamson has shown me that I am not able to refute a skilful and determined opponent.' (ix)
What happened in the five years between the first quote and the second? First of all, Williamson published his Joint Session paper 'Vagueness and Ignorance' in 1992, arguing that the most common objections to epistemicism aren't in fact nearly as powerful as people have thought, and that was followed by Vagueness in 1994.
How did people feel when they first realised they were going to have to take this view seriously? It really seems like it must have come as a complete shock to a lot of philosophers working on vagueness at the time.
Labels: Tim Williamson, Vagueness