Tuesday, December 27, 2005

 

6 times better than all my previous posts!!!!

I've often thought that a lot of advertising works by presenting claims about a product that are pretty much just weak counterexamples to bivalence. Now, weak counterexamples are not sentences which are neither true nor false, but rather sentences for which, intuitionists have argued over the years, we're unjustified in asserting that bivalence holds: traditionally certain mathematical sentences (most notoriously these days Goldbach's conjecture), perhaps sentences asserting that a borderline case of a vague predicate is in its extension or its anti-extension, statements about the past, and some carefully constructed empirical statements like 'There will never be a city built on this spot' (said in a particular spot deep in the countryside, say).

The intuitionist isn't just worried that we don't in fact know the truth-values of such sentences, but that we don't even know in principle how to decide their truth-values; to assert bivalence of such sentences is to endorse a conception of truth which allows sentences to have truth-values that radically outrun our ability to settle them (even in principle, and usually even granting some hefty idealizations of human subjects). I won't get into why this is meant to be a bad thing (though see Bob Hale's excellent overview of the issues in his 'Realism and its Oppositions' in Hale and Wright's A Companion to the Philosophy of Language).
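Just to pin the target claim down (the formalisation here is mine, not any particular intuitionist's): writing T for truth and F for falsity, bivalence for a sentence \(\phi\) is the disjunction

\[ T(\phi) \lor F(\phi) \]

and a weak counterexample is a particular \(\phi\), Goldbach's conjecture say, for which we have no warrant, even in principle, to assert this disjunction. Note that this is compatible with our never being warranted in asserting its negation either; a weak counterexample withholds assent from bivalence rather than denying it, which is why such sentences aren't thereby 'neither true nor false'.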

I'm impressed with how much philosophical scene-setting has been required to make my banal little point, which is that advertisers just chuck such sentences at us all the time. A lot of these claims sound really impressive on first hearing, but a few seconds' reflection (always dangerous) will convince one that we don't even know in principle how to decide whether they're true or not. Lest you think I'm going to leave you without an example, I just ate a new Kit Kat Extra Crispy, which apparently enjoys 'twice the crisp' of a regular Kit Kat. What's the metric on the crispiness of Kit Kats, pray tell?

Should classical semantics and logic be abandoned as a result? Of course. What can I say, folks: advertising works.

(Of course, an easily determined truth-value can be bought at the cost of impressiveness; witness my local car-mechanic in Austin whose signs boldly declare 'If it's in stock, we've got it!'.)



Friday, December 23, 2005

 

Who's the little Caesar?

It seemed time for a non-vagueness-related post. Take Neo-Fregeanism in the philosophy of maths to be roughly the following three claims:

1. Hume's Principle suffices for the derivation of the Peano/Dedekind Axioms of arithmetic in full second-order logic. Hume's Principle has some nice epistemological status (analyticity perhaps), and second-order consequence preserves that status, so it is inherited by arithmetic. This is not, of course, how we learn arithmetic, but it does give a reconstruction of the arithmetical knowledge we have that vindicates its status as a priori knowledge of a body of necessary truths.

(Hume's Principle: the number of Fs = the number of Gs iff the Fs are equinumerous with the Gs; a formal rendering is given below, after the third claim.)

2. All it takes to establish the existence of a class of objects, the natural numbers say, is to show that the apparent singular terms purporting to refer to objects of that sort are genuine singular terms, and feature in sentences which we are warranted in taking to be true (I'm glossing over some details here).

3. Hume's Principle suffices to introduce 'natural number' as a sortal concept.
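For definiteness, here is the usual formal rendering of Hume's Principle (the content is standard in the literature, though the notation is my choice): \(\#\) is the number-of operator, and the right-hand side says in pure second-order vocabulary that some relation R correlates the Fs one-one with the Gs:

\[ \#F = \#G \;\leftrightarrow\; \exists R\, [\forall x\, (Fx \rightarrow \exists! y\, (Gy \land Rxy)) \land \forall y\, (Gy \rightarrow \exists! x\, (Fx \land Rxy))] \]

Frege's Theorem, the result thesis 1 leans on, is that adding this single biconditional to full second-order logic suffices for the derivation of the Peano/Dedekind axioms.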

The much-discussed Julius Caesar Problem, initially at least, calls into question thesis 3. Grasp of a sortal concept is associated with grasp of both a criterion of identity and a criterion of application; that is, criteria that allow a subject to decide when items of the given sort are identical or distinct, and to decide whether a given item is of that sort or not. Hume's Principle is custom-built to introduce a criterion of identity for the concept 'natural number'; Frege's worry with it (put into this terminology) was that it did not offer a criterion of application, since it fails to decide the truth-values of sentences of the form 'the number of Fs = Julius Caesar'.

There is now a whole family of Caesar problems facing the Neo-Fregean, not all of them attacking thesis 3, and it's fair to say that no one has played a bigger part recently in developing members of that family and assessing their potency and generality than Fraser MacBride. That said, I'm struggling to see the force behind his most recent version, the Julio Cesar Problem (introduced in a paper of that title, dialectica vol. 59, 2005). Take the original Caesar problem in its most general form to be that of answering a demand for reassurance that terms drawn from different theories or domains of discourse (number theory and history in Frege's example) refer to distinct kinds of things. The Julio Cesar problem is that of providing an assurance that terms drawn from different languages refer to the same kinds of thing.

Here's the problem in brief. It is essential to the Neo-Fregean's second thesis that we be able to identify the genuine singular terms independently of knowing which of the purported singular terms make reference to objects (otherwise the argument endorsed in 2 is circular). A suitable refinement of Dummett's suggestion that such terms can be identified by various tests distinguishing the inferential patterns singular terms generate from those of expressions of other categories would do the job here, but these tests seem language-specific. How is the radical translator, to take the extreme case, to identify the singular terms, given that he seems in no better position to pick out the quantifiers with which they interact?

MacBride argues that the problem is a real challenge to Neo-Fregean platonism, but that there is a solution as long as the Neo-Fregean is willing to accept that Cartesian certainty is beyond our grasp; we may not be certain that we have correctly picked out the singular terms of a language, but we can make hypotheses using the following principle (from p49 of Evans's The Varieties of Reference):

(P) If S is an atomic sentence in which the n-place predicate R is combined with n singular terms t1 . . . tn, then S is true iff <the referent of t1, . . . , the referent of tn> satisfies R.

Evans pointed out that (P) could be treated as a simultaneous implicit definition of reference and satisfaction in terms of truth, and so could be used to acquire defeasible evidence about how to pick out the various constituents of sentences even when we lack any independent characterisation of singular terms and predicates. MacBride's suggestion is that the Neo-Fregean invoke (P) as part of an answer to the Julio Cesar problem.
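The shape of the proposal, on my gloss at least, is this: take truth-values as the data, and treat reference and satisfaction as a pair of unknowns jointly constrained by the schema

\[ \mathrm{True}(S) \;\leftrightarrow\; \langle \mathrm{ref}(t_1), \ldots, \mathrm{ref}(t_n) \rangle \in \mathrm{sat}(R) \qquad \text{for atomic } S = R\, t_1 \ldots t_n \]

A hypothesis about which expressions of the alien language are its singular terms induces candidate assignments to ref and sat; if the truth-values those assignments predict square with the speakers' observed assent and dissent, the hypothesis is confirmed, though only ever defeasibly. That defeasibility is exactly the retreat from Cartesian certainty MacBride recommends.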


I'm just failing to see the Julio Cesar problem as any kind of deep challenge to Neo-Fregeanism. Suppose we use Dummett-style tests to confirm that the Arabic numerals are genuine singular terms in English, and we agree (though note that I haven't presented the argument for this) that thesis 1 above shows that we are warranted in taking various familiar sentences of number theory featuring such expressions to be true. By thesis 2, we have established that numbers exist. What exactly is the problem we're meant to have uncovered here? MacBride takes Crispin Wright to be offering an earlier discussion of the Julio Cesar problem under a different name in Frege's Conception of Numbers as Objects, and Wright expresses the following worry (p63):

'If the best which we can do is to explain the notion of singular term piecemeal, for different languages, this parochialism must surely also infect the notion of object which the Fregean proposes to see as correlative to it. How then can there be such a thing as ‘International Platonism’, so to speak? Must we not recognise that the claim that the natural numbers are objects permits of no general defence, that we must first specify whether they are being presented as English objects, or German objects, or Hindi objects...'

Whatever Wright's worry is, I think it's a much deeper problem than the Julio Cesar problem. That we cannot identify the singular terms in some utterly foreign tongue in no way seems to undermine the argument given above that we are warranted in taking numbers to be objects. Now, it does mean that our epistemological reconstruction of our arithmetical knowledge is language relative, and that is very likely cause for discomfort; discomfort that MacBride's solution to the Julio Cesar problem might help ease. However, the issues raised by the Julio Cesar problem fall way short of Wright's Quinean worry that we're in danger of introducing a relativity into the ontological commitments of number theory, and it's hard to see how MacBride's solution helps us avoid this worry. Wright is motivated not by an unreasonable and unrealisable demand for Cartesian certainty, but by a desire to steer clear of this form of Quinean ontological relativity. So says I.



Tuesday, December 13, 2005

 

UCLA/USC Graduate Conference

I'm pleased to say that my paper 'The price of bivalence: epistemicism and the forced-march sorites' has been accepted at the First Annual UCLA/USC Graduate Student Conference. Here's the abstract:

As I characterize it, epistemicism is the view that, our powerful intuitions to the contrary notwithstanding, vague expressions draw sharp boundaries - a single grain of sand marks the difference between the heaps and the non-heaps; a single hair divides the bald men from the not-bald men, etc. - but that independently plausible epistemic considerations explain why we are ignorant of where these boundaries lie. Such a position offers a strikingly simple solution to the Sorites paradox, and does so whilst preserving classical logic and semantics, but only at a price. In this paper I argue that Sorensen and Graff's epistemicist theories of vagueness cannot meet all of the explanatory burdens they incur; in particular, I argue that they cannot diagnose the allure of Sorites reasoning without leaving problems arising from the so-called forced-march Sorites looking intractable. In the final section, I offer a diagnosis of why their attempts to meet these two demands result in tension, and suggest that the problem will generalize to any epistemicist position that takes both of the explanatory demands seriously.

The keynote speaker is Professor John Perry from Stanford.



Monday, December 12, 2005

 

Boundaryless Concepts and Set Theory

It seems appropriate to open this blog with a post on boundarylessness and language. In 'Concepts Without Boundaries', Mark Sainsbury argues against the common (if not ubiquitous) assumption that vague concepts classify by making set-theoretic divisions. The epistemicist, for example, holds that there is a particular number of grains of sand that marks the boundary between the heaps and the non-heaps. Rival pictures of the semantics of vague concepts try to avoid this commitment by positing more divisions; so supervaluationists hold that vague concepts divide objects into an extension, an anti-extension, and those that have borderline status, while supporters of many-valued logics think that they partition the domain into even more sets.

Sainsbury thinks that this whole approach is radically misguided. There are borderline cases of 'red', i.e. things that are neither red nor not red. But sets are sharp, so there is no set of red things. Positing a set of objects that are borderline only helps if we ignore the phenomenon of higher-order vagueness; once we appreciate that there are borderline cases of 'borderline red', we can see that the problem just recurs. Positing yet more boundaries, and so partitioning the domain even more finely, is just to repeat the original mistake a bunch more times. Vague concepts are boundaryless.
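The opening move can be put as a quick reductio (my reconstruction, not Sainsbury's wording): suppose the set of red things existed. Since membership in a set obeys excluded middle,

\[ S = \{x : \mathrm{Red}(x)\} \text{ exists} \;\Rightarrow\; \forall x\, (x \in S \lor x \notin S) \;\Rightarrow\; \forall x\, (\mathrm{Red}(x) \lor \neg\mathrm{Red}(x)), \]

which is just what the existence of borderline cases, things neither red nor not red, is supposed to rule out. The sharpness of sets, on this way of putting things, is the excluded-middle behaviour of membership.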

It's worth noting that the paper was delivered and published several years before people really started taking epistemicism seriously, and the argument that there is no set of red things looks undermined to the extent that characterising vagueness as an epistemic phenomenon is a viable option. More generally, Roy Cook has argued at length that Sainsbury's attack on many-valued approaches rests on bad philosophy of logic ('Vagueness and Mathematical Precision', Mind 2002). To formalise a region of natural language is to build a mathematical model, and if our model introduces sharpness that isn't present in the concepts we were trying to capture, we can see that as just useful idealisation - idealisation of the sort that has facilitated the success of formal semantics. We don't have to, and shouldn't, see our mathematical treatment of natural language as mis-describing its workings in the way Sainsbury alleges.

I don't want to evaluate either of these responses to Sainsbury, but rather to draw attention to a point in his paper that I think hasn't been sufficiently discussed in this debate. He observes that:

'Boundaryless concepts tend to come in systems of contraries: opposed pairs like child/adult, hot/cold, weak/strong, true/false, and the more complex systems exemplified by our colour terms.' (p258 in the Keefe and Smith reader).

Further down the same page he writes:

'Not just any clear case of the non-applicability of a concept will serve to help a learner see what the concept excludes. Television sets, mountains and French horns are all absolutely definite cases of non-children; but only the contrast with adult will help the learner grasp what child excludes. So it is no accident that boundaryless concepts come in groups of contraries.'

This observation goes AWOL on the picture Sainsbury is attacking. If we think of 'red' as classifying by partitioning the domain into an extension and an anti-extension, we don't seem to be able to explain the relationship between contraries. The anti-extension of a vague concept would contain all the junk that Sainsbury observes plays no role in grasping what the concept excludes.

I think Sainsbury has pointed out an important feature of vague concepts, and one that may be difficult to capture on the set-theoretic approach. Of course, Cook's response can still be given; just as our formalisation can introduce useful artefacts, it can abstract away from certain features of the discourse we want to study. But it would be interesting to see how far Sainsbury's observation can be pushed; if the set theoretic framework really does just lack the resources to model the feature he draws attention to, that provides some support for Sainsbury's conclusion that we've a bad picture of how vague concepts classify.


