### Friday, April 28, 2006

## The Ways of Paradox

It's the end of the term, and so blogging really has to take a back-seat for the moment. But I've been meaning to post on this topic for a while, and I'm not quite yet ready to face the syntax take-home exam I'm meant to be working on. I apologise in advance for the somewhat rambling nature of this post; I haven't really had time to make it clearer, and it draws together a bunch of otherwise fairly disparate things I've been thinking about recently.

Carrie Jenkins has written a number of interesting papers recently about the Fitch-Church knowability paradox, dealing with issues about how a global anti-realist should state her central thesis, and more relevantly here, issues to do with 'modal collapse': an 'able' goes missing between the premise and the conclusion of the Fitch-Church proof. (Just in case: the proof shows, on pretty uncontroversial assumptions about the K operator, that the thesis - often referred to as epistemic constraint or weak verificationism - that every truth is knowable quickly collapses into the crazy "strong verificationist" thesis that every truth is known.)
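For readers who haven't seen it, the proof can be sketched as follows. (This is my reconstruction of the standard presentation, writing K for 'it is known by someone at some time that' and $\Diamond$ for possibility; the only assumptions about K are that it is factive and distributes over conjunction.)

```latex
\begin{align*}
&\text{(EC)} && \forall p\,(p \rightarrow \Diamond Kp) && \text{epistemic constraint: every truth is knowable}\\
&\text{1.}  && p \wedge \neg Kp && \text{suppose, for reductio, some truth is unknown}\\
&\text{2.}  && \Diamond K(p \wedge \neg Kp) && \text{from (EC), instantiated on } p \wedge \neg Kp\\
&\text{3.}  && K(p \wedge \neg Kp) \rightarrow (Kp \wedge K\neg Kp) && \text{K distributes over conjunction}\\
&\text{4.}  && K\neg Kp \rightarrow \neg Kp && \text{K is factive}\\
&\text{5.}  && \neg K(p \wedge \neg Kp) && \text{3, 4: knowing it would yield } Kp \wedge \neg Kp\\
&\text{6.}  && \neg\Diamond K(p \wedge \neg Kp) && \text{5 holds of necessity, contradicting 2}
\end{align*}
```

Discharging the supposition gives $\neg(p \wedge \neg Kp)$, i.e. $p \rightarrow Kp$ for arbitrary $p$: every truth is known, and the 'able' in (EC) has vanished.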

Now, one of these latter issues is the following: why are we so surprised by this result? Even if one is not at all attracted by a global principle of epistemic constraint, one might well be shocked to see it collapse into strong verificationism in six lines of proof. I don't want to go into the details of Carrie's answer here, but essentially she thinks that a lesson of the paradox is that a principle of epistemic constraint does not capture the central global anti-realist thesis, and so she explains why we're surprised by the Fitch-Church result by suggesting we've mistaken that principle for a codification of the claim the anti-realist really wants to (or at least, should want to) make. So basically we've mistaken a wolf-in-sheep's-clothing principle for a codification of a reasonably plausible (or at least, as even Williamson concedes, sane-sounding) thought, and that offers an easy explanation of why we're surprised to find it's eaten all our sheep. (No, I haven't stretched that metaphor too far.)

Such wolf-in-sheep's-clothing explanations are ubiquitous in attempts to deal with paradoxes, but I'm in general pretty suspicious of them. It's not that I think that they are all misguided - I think, for example, that something like that is surely right in the case of vagueness; obedience to a universally quantified Sorites or 'tolerance' principle cannot be the right way to characterise the vagueness of an expression, and I'm sympathetic (to varying degrees) to Stewart Shapiro, Diana Raffman, Richard Heck, Scott Soames, Patrick Greenough, Delia Graff, etc., all of whom offer consistent principles - weak tolerance, pairwise tolerance, quasi-tolerance, metalinguistic tolerance, epistemic tolerance and 'apparent boundarylessness' (which I take just to be pairwise tolerance again) respectively - that are meant to better express the vagueness of an expression and which explain our wrongheaded but understandable inclination to accept Sorites premises.

So I'm not entirely closed to wolf-in-sheep's-clothing moves, even though I worry that they can be churned out indiscriminately in response to pretty much any paradox. But I've found I'm not at all persuaded by such a story in the case of set theory. The idea there would be, roughly, that the two axioms of naive set theory were mistaken for an acceptable (and consistent!) codification of an intuitive grasp of sethood which finds its real expression in the 9 (depending on how you count) axioms of standard iterative set theory, ZFC. But I've always thought in contrast that the hierarchical conception of the set theoretic universe that underlies standard set theory is a quite radically different pre-axiomatic conception of set from the naive conception, and that the ill-fated naive conception does get precisely expressed in the axioms of naive set theory. The point, stripped of the rambling just offered, is this: the problem with naive set theory lay at the level of the pre-axiomatic conception of set in play, not in the choice of axioms used to give that conception formal expression. (In her paper mentioned above on why we find the Fitch proof surprising, Carrie used the set theory case as an analogy. I questioned the analogy on the grounds given in this paragraph; see also footnote 5 of the final version of Carrie's paper.)
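For concreteness, the two axioms in question are extensionality and the unrestricted comprehension scheme; it is the latter that expresses the naive conception, and it delivers Russell's paradox in two steps:

```latex
\begin{align*}
&\text{(Comprehension)} && \exists y\,\forall x\,(x \in y \leftrightarrow \varphi(x)) && \text{for any condition } \varphi\\
&\text{Take } \varphi(x) := x \notin x\text{:} && \exists R\,\forall x\,(x \in R \leftrightarrow x \notin x)\\
&\text{Instantiate } x := R\text{:} && R \in R \leftrightarrow R \notin R && \text{contradiction}
\end{align*}
```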

In the past couple of weeks, I've found reason to doubt the point just made. I've been reading Michael Hallett's wonderful 'Cantorian Set Theory and Limitation of Size', and he makes what is apparently a common suggestion that the reason naive set theory took the form it did was due to a desire for logical simplicity, rather than because logicians and mathematicians shared a bad pre-axiomatic conception of what sets are (p. 242). Hallett quotes (same page) the following passage from Dana Scott, which clearly states this point:

'It must be understood from the start that Russell's paradox is not to be regarded as a disaster. It and the related paradoxes show that the naive notion of all-inclusive collections is untenable. That is an interesting result, no doubt about it. But note that our original intuition of set is based on the idea of having collections of already fixed objects. The suggestion of considering all-inclusive collections only came later by way of formal simplification of language. This suggestion proved to be unfortunate, and so we must return to the primary intuitions.'

What I'm trying to figure out is whether this view of the early history of set theory undermines my scepticism about naive set theory being a misguided attempt to axiomatise a sound underlying intuitive conception of set: perhaps the view I expressed in my response to Carrie was based on an insufficiently attuned understanding of the sort of factors shaping naive set theory. I think the question will bear further investigation, both historical and philosophical, though I just wanted to table the issue here.

(In her response to a paper I gave on set theory in Western Ontario last weekend, Chiara Tabet pointed out that the whole issue of the relationship between our pre-axiomatic conception of a mathematical theory and the associated axiomatic theory is much harder than I tend to acknowledge. This is just as true with regards to this post as it was with my paper, but I still don't have anything remotely insightful to offer on the matter.)

Labels: Epistemology, Philosophy of Mathematics, Vagueness

### Wednesday, April 19, 2006

## Bombscare

Somehow Kenny already has a substantial post about the conference up over at Antimeta, focusing on an interesting set of questions that came up a number of times over the weekend about how we are to determine our ontological commitments. This is something I have (deeply unpopular) views about, but I don't really have anything to write on that topic just now. There were some issues that got discussed in our session on Kenny's paper about the role of linguistic evidence in epistemology which I'd like to post something on sometime soon, but I'll need to give that matter more thought. I took pictures throughout the second day of the conference, which I'll post a link to as soon as I get them up on a website.

With everything that was going on this weekend, I forgot to ask Jason Stanley what he thought of the following case, and a proper post is overdue. Recall that Stanley holds the following views: interest-relative invariantism (IRI) about knowledge (and most other epistemic notions), plus the thesis that know-how is a species of know-that. According to the first thesis, whether or not a subject's true belief counts as knowledge is partly determined by the direness of being wrong, given the subject's practical interests ('The more you care, the less you know', as Stanley is reported to have put the view). According to the latter, having know-how isn't to be analysed as having a capacity or an ability of some sort, but rather as possession of a type of propositional knowledge (albeit under a 'practical mode of presentation').

In a very recent paper, Jonathan Schaffer has argued that IRI is incompatible with a certain natural and compelling picture of the 'social role of the expert'. The basic idea is that experts 'serve as a reservoir of knowledge' of a particular field, but for experts to have this status requires a certain stability; it cannot be the case that their possession of knowledge of the relevant body of information can 'fluctuate as the stakes rise and fall'.

Now for the case I want to raise (I hope the debt to Schaffer's discussion is by now clear). A bomb-disposal expert is presumably someone who knows how to defuse and dispose of a bomb. But bomb-disposal seems like a risky business; the consequences of getting something wrong are surely about as dire as we can imagine, supposing (as seems likely) that such an expert doesn't want to be blown to pieces. So according to IRI, a bomb-disposal expert defusing a bomb doesn't know how to defuse a bomb. (One might dispute whether the stakes really are high enough to force this conclusion. But throughout the literature on these issues, we are invited to conclude that losing a bet, being late for an important meeting, or failing to have enough money in your bank account to cover a critical bill are consequences dire enough to defeat relevant knowledge attributions; getting blown up by a bomb seems sufficiently unpleasant too.)

So like Schaffer, I feel there's a case to be made that IRI about knowledge does some violence to our intuitive picture of expertise. And given Stanley's other commitments, it's not even an option to say that although the bomb-disposal expert loses various items of propositional knowledge, he retains some relevant know-how. (It is still an option to say that he retains some relevant set of abilities or capacities, but I expect that others will share my intuition that this does not sufficiently mitigate the conclusion that a bomb-disposal expert defusing a bomb doesn't know how to defuse a bomb.)

(Cross-posted at Arche. Thanks to John Bengson for discussion.)

Labels: Epistemology, Jason Stanley

### Monday, April 03, 2006

## UT conference lineup

I gather we're still finalising the schedule for the grad conference, but it seems time to post a list of student papers and respondents for those who are interested:

Elia Zardini (St. Andrews) - 'Truth and What is Said', comments by Julie Hunter (UT Austin)

Josh Rasmussen (Notre Dame) - 'Mind-Body Supervenience's Cardinal Sin', comments by Tristan Johnson (UT Austin)

Stephen Kearns (Oxford) - 'Against Universalism', comments by Dan Korman (UT Austin)

Kenny Easwaran (Berkeley) - 'The Uniformity of Knowledge Attributions', comments by me

Russell Marcus (CUNY) - 'Three Grades of Instrumentalism', comments by Bryan Pickel (UT Austin)

Michael Allers (Michigan Ann Arbor) - 'Bruce: A Predicate', comments by Malte Willer (UT Austin)

Luke Potter (Notre Dame) - 'Sameness without Identity', comments by Briggs Wright (UT Austin)

While I'm on the subject of grad conferences, a quick thank you to all involved in the Manhattan conference this weekend, particularly Alex Madvi at Columbia for putting me up and putting up with me. It's been a blast.

Labels: Conferences