Let’s get something out of the way first. I’m not dealing here with the everyday distinction between intrinsic and extrinsic value. Dollar bills have little intrinsic value, but you can buy intrinsically valuable things with them, such as good food, and that gives them extrinsic value. Cool, and not too mysterious, at least as long as we don’t dive into the metaphysics. (The moment you start asking whether it’s really the food that is intrinsically valuable or rather the positive subjective experience of the food, that is the moment you take the plunge.)

So, where is value located? Perhaps it is located within the things that common sense tells us are valuable, things like flowers, playful lambs, Picasso paintings, stormy love affairs. One problem with this hypothesis is that it raises all the nasty questions we started out with. Where is this value located in these things? Can we take it out? Can we detect it? Is value an extra ingredient of the things, or is it some kind of structure of the thing? What ingredient or structure is shared by flowers, paintings and love affairs? And what if somebody were to doubt whether this ingredient or structure was really, well, valuable? It’s not just that the answers to these questions are elusive. It’s that the questions seem strange enough that we suspect the theory that leads to them must be wrong.

Well, there seems to be an obvious move to make here: go subjective. Value is a matter of *being valued*. It’s because *we* value flowers, Picasso paintings and so on that they have value. This would still be intrinsic value (we don’t value the Picasso as a means to some end); but it wouldn’t be *inherent* value. The value of anything is bestowed upon it by the act of valuing or the relation of being valued. If something isn’t valued, it’s not valuable; and vice versa.

This is quite close to the standard economic conception of value according to which things are worth what people want to pay for them. Close, but not identical: unlike the economic conception, our theory can deal with things that you value but can’t pay money for without destroying them, e.g., love, the right to vote.

One problem with the economic conception — from the perspective of our metaphysical project — is that it is in fact parasitic on a non-economic conception of value. If food, shelter, our time, and so on, were not genuinely valuable, then economic value wouldn’t be value in a real sense. It’s the fact that it can be exchanged for labour, food, and so on that makes dollars and euros different from Monopoly money. (If you forget that, you end up believing in the value of NFTs.) Some examples of things that become valuable because they are valued seem to set up the same type of problem. Jimmy’s *Minecraft* world is valuable because he values it, sure; but he values it because creating it was a labour of love. The loving labour is crucial. Without it, the valuing wouldn’t make sense. We might go as far as to say that it couldn’t be real valuing. (Surely valuing is not a random act of will; I can’t just sit on my chair and value Alpha Centauri.)

More fundamentally, our current theory is vulnerable to the objection that it’s possible to value things that are absolutely not valuable (a million neo-Nazis can’t make an anti-Semitic murder into something valuable, even if they all value it intensely). Conversely, we may not value things that are in fact valuable. Our conception of value seems to require a conceptual gap between the act of valuing and the value itself. So value cannot simply be located in the subject.

But now we seem to be caught in a bind. It doesn’t make much sense to locate the value in the object. But it apparently also doesn’t make much sense to locate the value in the subject. So where is value located? Nowhere? Are we driven into nihilism?

There’s a popular argument for the existence of God that starts here. If there’s value, but it’s not grounded in either the subject or the object, then it must have a transcendent ground: God. So we must choose between nihilism and theism.

I don’t think this argument works. If a thing is not valuable, then there is no way to give it value from the outside. Just as the neo-Nazis in my example above could not bestow value on an evil act by valuing it, so God cannot bestow value on valueless things by commanding them, or wanting them, or being favourably disposed towards them. (This is a version of the Euthyphro dilemma, though perhaps not quite Plato’s.) Either God is so strongly implicated in the world that the very hypothesis of valueless things makes no sense, or the hypothesis of valueless things makes sense, but then God cannot make a difference. The issue here is often confused by setting up a dichotomy between naturalism (or even scientism) on the one hand and a religious philosophy on the other hand. But if you accept the naturalist description of the world, then nothing outside the world is going to save value.

So where *is* value located? Well, let us first ask, what *is* inherently valuable? I’m going to say something that I hope is relatively non-contentious: the paradigmatically valuable thing is human flourishing. (Also non-human flourishing, but I’ll focus on the human case.)

Some people may say that this answer is too easy. It doesn’t define value in terms that don’t presuppose value. Of course not! You can never get to value from outside the sphere of value. You cannot understand value unless you’re already within value. This is why all naturalist theories are doomed to failure.

Supposing that human flourishing is paradigmatically valuable, what then should we say about the objects we mentioned earlier? This: human flourishing may involve roses, lambs, Picasso paintings, and what not. It would simply be a mistake to ask whether those objects are inherently valuable or have merely instrumental value. A Picasso painting floating around in empty space doesn’t have ‘value’. But neither is the painting merely an instrument for the purpose of human flourishing, as if the flourishing were only extrinsically related to the painting. The painting (the flower, the love affair) is *involved in* human flourishing. And although what flourishes may be the human being, the flourishing human being will be the human being she is because of and in relation to the things that are involved in her flourishing.

In other words: we were misled by the old subject/object-dichotomy. Value is not located in the objects and value is not located in the subjects, if we understand subjects and objects as having being independently of each other. Value *happens* wherever there’s flourishing, and flourishing is always a dynamic interplay between subjects (plural) and objects.

It seems to me that this is the correct approach to value. But it only makes sense if we are willing to commit to a metaphysics that many may think is objectionable. For suppose that we are *unwilling* to say that subjects and objects both ontologically depend on each other. In particular, suppose that we claim that objects have being quite independently of the subjects — after all, humans might have never evolved, and so on. Then we cannot escape the conclusion that value is quite inessential to the objects, since those very same objects could also exist without value. Value is thus something we subjectively *project onto* the objects. And that means that we are back in subjectivism, and nihilism becomes inescapable.

Clearly, we who are not nihilists (and if you think there’s a difference between good and bad arguments, or justified and unjustified conclusions, then you’re not a nihilist) must be willing to say that subjects and objects both ontologically depend on each other. What’s more, for our theory of value to make sense, value must already be at work in this very dependence. Perhaps the best thing to say is this: to be an object *is* to be involved in the potential flourishing of subjects.

That sounds more innovative than it is. If we restrict flourishing to epistemic flourishing, then the claim becomes: to be an object *is* to be involved in the potential attainment of knowledge. And that is just Kant’s transcendental idealism. But we can go beyond Kant to take other forms of flourishing on board. (We thus arrive at an ethics that is more authentically Kantian than Kant’s own ethics.) Where Kant says (though not in these words) that the form of the world is the form of knowledge, we will say that the form of the world is the form of value — all kinds of value. And that is the point at which we see that value cannot be *located* at some point *in* the world. The world, in virtue of being a world, is a world *of* value.

I’m going to set myself a bunch of more or less clearly defined goals. I don’t need to meet all of them. But it would be nice to meet some of them. And at the end of the year, I’d like to have a reading list I can be proud of.

I ended up reading 54 books, and it’s certainly a list I’m proud of. Full details below. But first, let’s look at the goals I set myself.

- **Read at least 20 books written or edited by women**. Reached! I read 20 books written by women, and one more edited by a woman. This was a rewarding goal to pursue, since it made me pick up some interesting books that I might not otherwise have read. (Particularly thinking about the Virginia Woolf and Iris Murdoch here.) There were also friends who intentionally gave me books by female authors for my birthday.
- **Read more German**.
- **Read a good number of books by or about Kant**. Not that many: O’Shea, Gardner, and the *Critique of Pure Reason* itself. On the other hand, I did teach my Kant course and publish those 60+ Kant videos, so I certainly reached some Kant-related goals.
- **Finally read John Crowley’s *Aegypt* quartet**. Reached! Brilliant, absolutely worth reading, and I’m glad that after *all those years* I finally got to the end.
- **Read some massive books**. I read *some* massive books. Perhaps the Crowley quartet counts. Certainly the *Critique*, the Bolland biography, and most of all Conant’s *The Logical Alien*. But this is something I want to pursue with perhaps even more vigour next year.
- **Read all the books my wife gives me for Christmas**. Reached! My wife gave me Ray Monk’s biography of Wittgenstein, Adrian Moore’s book on the infinite, Thomas Nagel’s *The View from Nowhere* and Edith Brugmans’ *Moreel scepticisme*. And I read all of them.
- **Read and/or return the books I’ve borrowed**. Let’s not talk about this goal, shall we? It’s too morally painful.
- **Read some books from the list made by David Bentley Hart**. I read Sei Shōnagon, Lady Murasaki and *As I Crossed a Bridge of Dreams*. Not a massive amount, but still: reached.

For next year, I’m going to be even more relaxed, in part because the last two years suggest to me that reading lots of books has become part of my normal routine again, and so I can afford to be more relaxed about it. The two things I want are these: (1) read more French books; (2) read more massive books. I’m probably *not* going to combine those goals by reading massive French books… that sounds like a bridge too far at the moment! And so now, finally, the list:

- John Crowley, *The Solitudes*
- William Shakespeare, *Macbeth*
- Edith Brugmans, *Moreel scepticisme*
- John Crowley, *Love & Sleep*
- Takasue’s daughter, *As I Crossed a Bridge of Dreams*
- John Crowley, *Dæmonomania*
- Murasaki Shikibu, *The Diary of Lady Murasaki*
- Jane Austen, *Northanger Abbey*
- Ray Monk, *Ludwig Wittgenstein: The Duty of Genius*
- John Crowley, *Endless Things*
- Ludwig Wittgenstein, *Tractatus Logico-Philosophicus*
- James R. O’Shea (ed.), *Kant’s Critique of Pure Reason: A Critical Guide*
- Anonymous, *Karel ende Elegast*
- Immanuel Kant, *Critique of Pure Reason*
- Sebastian Gardner, *Kant and the Critique of Pure Reason*
- M. Vasalis, *De oude kustlijn*
- Alfred Tennyson, *In Memoriam*
- Richard Rorty, *Contingency, Irony, and Solidarity*
- Vladimir Nabokov, *Despair*
- Willem Otterspeer, *Bolland: een biografie*
- Dorothy L. Sayers, *Whose Body?*
- Bas Heijne, *Leugen & waarheid*
- Ali Smith, *Autumn*
- David Lewis, *On the Plurality of Worlds*
- Dorinda Outram, *The Enlightenment*
- Ali Smith, *Winter*
- Bas Heijne, *Angst en schoonheid*
- Susanna Clarke, *Piranesi*
- Graham Harman, *Object-Oriented Ontology*
- Georges Simenon, *Het gebeier van Bicêtre*
- Georges Simenon, *La pipe de Maigret*
- Sei Shōnagon, *The Pillow Book*
- Georges Simenon, *De danseres van Le Gai-Moulin*
- Virginia Woolf, *Jacob’s Room*
- Bernard Shaw, *Androcles and the Lion*
- Gerrit van de Linde, *De gedichten van den schoolmeester*
- Lorraine Daston, *Tegen de natuur in*
- Seneca, *Medea & Phaedra & Trojaanse vrouwen*
- Herman de Dijn, *Hoe overleven we de vrijheid?*
- Jennifer Morton, *Moving Up without Losing Your Way*
- Arkady Martine, *A Memory Called Empire*
- Arkady Martine, *A Desolation Called Peace*
- Sempé-Goscinny, *Le petit Nicolas*
- Eugène Ionesco, *La cantatrice chauve & Exercises de conversation […]*
- Spinoza, *Korte verhandeling over God, de mens en zijn geluk*
- Ellis Peters, *Un bénédictin pas ordinaire*
- Thomas Nagel, *The View from Nowhere*
- Iris Murdoch, *The Sovereignty of Good*
- Iris Murdoch, *Sartre*
- Philippa Perry, *The Book You Wish Your Parents Had Read*
- Ludwig Wittgenstein, *Über Gewissheit*
- Sofia Miguens / James Conant, *The Logical Alien*
- A. W. Moore, *The Infinite (3rd edition)*
- Rudy Rucker, *Infinity and the Mind*

This first aspect of our finitude as knowers is closely related to a second aspect: our fallibility. We may believe we know *p* and yet fail to know it, perhaps because *p* is false, or perhaps for some other reason. It may seem, on the face of it, that this second type of finitude is independent of the first type; that there is a finitude in terms of scope and another, separate, finitude in terms of fallibility. For it may seem that the following two are at least intelligible scenarios:

- A knower whose knowledge is limited in scope, but who is infallible. The traditional example would be the stoic sage, who assents to only those thoughts that he can be absolutely certain about.
- A knower whose purported knowledge is unlimited in scope, such that no further enquiry is possible, but who is nevertheless fallible. This being is godlike in the sense of being able to grasp the entire richness of a world at once, but fails to be godlike in that they may be wrong about whether the world they *grasp* is the same as the world that is *real*.

There are, however, good reasons to doubt the intelligibility of both of these scenarios. Infallible knowledge of the type the stoic sage is said to have seems to be possible, if at all, only for isolated items of experience that have no implications beyond the mere having of the experience. We might hope to infallibly know *that we seem to see a mug*, but not *that we see a mug* or *that there actually is a mug*. But our grasp of such isolated items depends on our prior grasp of the full-blooded kind of experience that *can* be either veridical or misleading. If one divorces the supposedly infallible elements of our experience from the fallible ones, the infallible elements lose their content and are no longer candidates for assent or truth. (This theme is discussed in my earlier post, Kantian and Cartesian scepticism.) To make the same point in a different way, consider what it takes for something to be an item of knowledge. A necessary condition of this is surely integration with the rest of our knowledge; where integration involves relations of fitting or failing to fit that have epistemic importance. But the items that the sage might know infallibly are such that they cannot be undermined by other items that he might come to know. Hence, these items do not stand in relations of fitting and failing to fit to each other, and therefore they are not truly items of knowledge at all.

What about the second scenario? Let us first zoom in on the fact that knowledge is essentially self-conscious. To be a knower is to be aware of oneself as a knower; it is, among other things, to be responsible for one’s beliefs and to be (at least a little) confident about (at least some of) one’s beliefs. To self-consciously be a finite knower is to be aware of one’s finitude as a knower. Now one can only be self-conscious of one’s finitude in the second sense of fallibility if one is also self-conscious of one’s finitude in the first sense of always having more to learn; for to be self-conscious of one’s fallibility is to be aware that one might need to revise one’s beliefs; and this requires the awareness that new items of knowledge might emerge that may fail to fit one’s current beliefs. Conversely, to be aware that enquiry is not over, that new items of knowledge may appear, implies that one is aware of one’s fallibility. Here we see why the second scenario at the very least does not make sense *from the inside*. For we are asked to imagine a being for whom enquiry is at an end, but who is nevertheless fallible. This can never be anyone’s self-conscious conception of themselves. (Whether it might be a true third-person description of someone is something we will touch on below.)

The previous paragraph has also revealed to us a third aspect of our finitude as knowers, one which is again not truly different from the other two but forms an indissoluble whole with them: temporality. *To be a finite knower is to be engaged in a project of enquiry that unfolds in time and is teleologically oriented towards knowledge.*

The crucial Kantian insight… well. I suppose there is a tendency in all of us who have been impressed by Kant to talk about *the* crucial Kantian insight, as if there were only *one* of them, and to then present a rather wide variety of insights as the single core idea from which all of Kant’s critical philosophy emerges. And yet perhaps we are not entirely wrong to do so. For perhaps all these insights are at bottom the same insight. Anyway, *one* of those crucial Kantian insights, one that has as much right to be called *the* Kantian insight as any other, is that we must not understand the finite knower as an imperfect version of an infinite knower, but that we must — that we can only — understand the finite knower from the finite perspective of the finite knower themselves. *To be a finite knower is not to have a limited amount of that which God has an unlimited amount of.* Our *only* purchase on the concept of knowledge is that which we can glean from our self-understanding; and this means that any conception of knowledge — or of belief, justification, and so forth — we can have is essentially one from the finite perspective.

(And this is one of the reasons why we can’t make sense of the idea suggested in the second scenario, the idea of someone who has a finished belief system, who has no further opportunities for revision, but who is nevertheless wrong. To cast the claim that we can’t make sense of this as a dubious ‘coherentist’ claim that leads to ‘relativism’ and cannot be accepted by anyone who cares about ‘correspondence with reality’ is to fail to see the point.)

It is easy to miss the Kantian insight when we focus on the first aspect of finitude, which in fact seems to invite our thinking of finitude as a sort of subset of infinity: God has all the knowledge, we only have some of it. It is perhaps also easy to miss it when we focus on the second aspect; for we might perhaps talk about degrees of certainty and then claim that we finite knowers just have *less* certainty than God — less than God has, yes, but nevertheless *the same kind of stuff*. (This can’t be right, though. God cannot be *certain* in any sense of the word that is even putatively applicable to us.) But it’s much easier to keep the critical insight in focus if we consider the third aspect of finitude: temporality. For the idea that knowledge is the teleological end point of a temporal process of enquiry simply has no equivalent in the realm of the divine. And so it is useful to emphasise, again and again, that knowledge, belief, justification, evidence, and so on, always need to be understood against the background of an unfolding process of finite enquiry.

Conversely, we may expect any epistemology that does not take the Kantian insight on board to neglect the temporal aspects of knowledge and to conceive of the central epistemological notions apart from the process of enquiry. And this is precisely what happens in mainstream epistemology. In order to develop our insights, we will now look at the famous JTB analysis of knowledge. (At the end of our post, we will also make some remarks about Bayesian epistemology.)

*S* knows that *p* if and only if (1) *S* believes that *p*; (2) *S* is justified in believing that *p*; and (3) *p* is true. There are many facets of this analysis that are worth commenting on. One is the essentially third-person nature of the knowledge ascription. Another is the idea that belief is in some sense a *component* of knowledge; that *knowing* *p* and *merely believing p* share a common factor, which is the belief. I will return to both of these. But for now I want to focus on the fact that time makes no appearance in this analysis; that knowledge, belief and justification are presented as static states of affairs, apparently unconnected to a past and a future, or at least not *essentially* so connected.

Presumably a belief is justified if and only if one has good reasons for holding it — if one has those reasons right now, at the moment one holds the belief. But this misses out on a crucial dimension of our epistemic responsibility. Suppose that one believes *p,* say, that medicine X is a cure for disease Y. Suppose further that one bases this belief on all the available medical evidence; that the evidence is indeed overwhelmingly in favour of the truth of *p*; and that *p* is true. Is this enough to conclude that one is epistemically responsible in believing that *p*? Not at all. For one’s belief in *p* to be epistemically responsible, one also has to have certain commitments for the future. One must be committed to changing one’s belief if contrary evidence comes in; perhaps one must even have a willingness to seek out such evidence if there are plausible sources of it. We should never say of someone: “he is epistemically responsible in believing that *p* and whatever further evidence comes in, no matter how negative for *p*, he will keep believing that *p*.” Such a person is precisely not epistemically responsible.

But isn’t it the case that this person would be perfectly justified *now* and would simply *lose* that justification if and when this further evidence turns up? Or isn’t it perhaps the case that while the person is *justified* in their belief, they lack something else, perhaps *responsiveness*? Insofar as this is a merely terminological quibble, not much hinges on how we answer these questions. I take it that ‘justification’ has usually been used as the name for the ‘being epistemically responsible’ element of enquiry; but of course one can make a different terminological choice. What is not a quibble and not a matter of choice is the importance of keeping the temporal dimension of enquiry in view. To use justification as a term that can be evaluated at an isolated instant is to run a grave risk of failing to do precisely that.

The point becomes even clearer when we turn to the ‘belief’ component of the JTB analysis. Consider the following statement: “I believe that *p* but I will give up this belief ten seconds from now without any reason.” This speaker does not believe that *p*, not, at least, in the full-blooded sense of the term. Or consider: “I believe that *p* and whatever new evidence comes in, I will continue to believe that *p*.” This speaker also does not believe that *p* in the full-blooded sense of the term. Perhaps they are *fanatically attached to p*, but this locution precisely indicates that with regard to *p*, they have given up the quest for knowledge. So a belief is not to be thought of as a static state, but rather as a stage in the ongoing process of rational enquiry.

This is not to say that there are *no* contexts in which beliefs can come loose from the process of enquiry. We understand perfectly well what someone means who says: “Right now I believe that Rosemary truly loves me, but I know that the doubts will come back and that tomorrow I will no longer believe it.” But here we have a special use of the verb believe, what I think James Conant would call a *self-alienated use* of the verb. (See *The Logical Alien*, pp. 700-730 and thereabouts, where Conant discusses disjunctivism and Sellars’s and McDowell’s tie shops.) It is intelligible enough, but only as a special, derived, degenerated form of the normal kind of believing. It requires the speaker to take a step back from their beliefs, to regard themselves almost from a third-person perspective; it requires them to acknowledge that they do not believe in the *full* sense of the word.

One objection to what I said above about belief and justification is that the two stories were altogether too similar. If justification requires us to be responsible actors in the quest for knowledge; and if belief also requires us to be responsible actors in the quest for knowledge; then I might seem to be committed to the claim that there is no such thing as an unjustified belief. Well, yes! An unjustified belief is defective; not just defective in some external sense, but defective *as a belief*. As an analogy, consider a broken chair. Is a broken chair still a chair? There is a sense in which it is, but also a sense in which it is not. Since a chair is something that can be used for sitting on, and since a broken chair cannot be so used, it is no longer *quite* a chair. Of course we can still call it a chair, given our usual practices of making and using chairs-in-the-full-sense. It’s the same with beliefs. An unjustified belief is a belief in an attenuated sense, which can still be called a belief because it is similar in certain relevant respects to full-blooded beliefs. But while it makes sense to imagine people all of whose beliefs are justified, it makes no sense to imagine people all of whose beliefs are unjustified — such people would not be recognisable as epistemic agents. (They would therefore not be people either.)

This brings us to the second interesting aspect of the JTB analysis: its third-person nature. Of course the analysis is formulated in a third-person idiom; but that might be thought to be a mere artefact of representation, merely a way to indicate generality. However, the analysis is also third-person in a more fundamental way. It requires us to think about epistemic states in a way that *cannot* be done in the first-person. To state that knowledge is justified true belief is to contrast knowledge with other types of belief: unjustified beliefs and false beliefs. But one cannot say: “I believe that *p* and *p* is false.” And one also cannot say, if one is using the verb ‘to believe’ in the full-blooded, non-self-alienated sense: “I believe that *p* and I am not justified in believing that *p*”. To say one of these things is to become entrapped in Moorean paradoxes. This is certainly curious: why would the analysis of knowledge depend on assertions that are never intelligible from a first-person perspective? And there’s another puzzle. Given that we can never assert such Moorean sentences, is there any difference of meaning between the first-person assertions “I believe *p*” and “I know *p*”?

Take the sentence: “I believe *p* but I do not know *p*.” This sentence is not obviously paradoxical. We say such things all the time; e.g., when we say “I believe there’s still some alcohol-free beer in the fridge, but I don’t know for sure.” But if we just plug JTB into the above sentence, it seems that anyone who claims to believe but not know something, is someone who claims this: “I believe *p* and either *p* is false or I am not justified in believing that *p*”. This would be a Moorean paradox. It is also clearly not what someone who asserts that they believe but do not know *p* is asserting. So what *are* they asserting?

To understand this, let us look at the third interesting aspect of the JTB analysis. I can state it in three roughly equivalent ways: (1) it analyses knowledge in terms of belief, rather than the other way around; (2) it takes an item of knowledge to be a belief to which something optional has been added; (3) it claims that an episode of knowing that *p* and an episode of merely believing that *p* have a common factor, namely, the belief that *p*. Now we have already seen a reason to be suspicious of (2). A justified true belief is not like a small red car; a car that, in addition, has some properties that are in no way essential to cars. Something is only a belief in the full sense of the term if it is justified; if it is taken up into a process of enquiry for which arriving at truth is a criterion of success. So a justified true belief, knowledge, is not one among many kinds of beliefs; it is *the* central kind. All other kinds of belief are *defective* kinds. And so it is, contra (1), knowledge rather than belief that is the fundamental concept. And, contra (3), an episode of knowing and an episode of merely believing do not have as common factor a belief; rather, an episode of merely believing is a defective episode of knowing, it is something that would be an episode of knowing if only it didn’t have the defects that it has. (The theory propounded here is exactly analogous to disjunctivism about perception.)

Given this analysis, what is the point of saying “I believe *p*” rather than “I know *p*”? To say that one believes *p* is to keep some distance from one’s positive cognitive attitude towards *p*. I am not talking about merely acknowledging fallibility here. One can say that one knows *p* even if one believes that one is fallible about *p*. I know (I do not merely believe) that there is a mug of tea on my desk; but of course I *could* be mistaken about it. However, I seem to be in as good a position as I can be to assert that there is a mug of tea on my desk. There are no further plausible avenues for enquiry; there is no need for further enquiry; with respect to this little fact, the search for knowledge is effectively over. In other words, I have a positive cognitive attitude towards there being a mug of tea on my desk *and I fully identify with that attitude*.

It is crucial for the search for knowledge that one can also take up a positive cognitive attitude without fully identifying with it. This is what we express with the verb *believe*. I believe that cold nuclear fusion is impossible, but I’m willing to seriously consider experiments done by competent scientists. I believe that there is still milk in the fridge, but I do plan to check before I go to the shops. I believe that everything I write in this blog post is true, but I’m eager to hear counterarguments.

When I say that I believe *p*, I say that I have that positive cognitive attitude towards *p* that I would express by the phrase “I know *p*” *if* I were to fully identify with it. When I say (or imply) that I believe but do not know *p*, I am not committing a Moorean paradox; rather, I am stating that although I *do* have a positive cognitive attitude towards *p*, I *do not* fully identify with that positive cognitive attitude.

There is, then, a kind of self-alienation in the very nature of a finite knower. To be an enquirer requires us to be, first, the person who is taking up the positive attitudes (which makes it true to say that *I* have these beliefs); to be, second, the person who takes a step back from the current state of the enquiry to critically compare it with the ultimate goal of unified knowledge and to thereby not fully identify with the current state; and, finally, to be the person who unifies the other two persons into the one unified cognitive agent that we must be. (There is perhaps something not merely superficially Hegelian about this scheme.) This tripartite nature of the knower is what allows us to intelligibly say “I believe *p* but I do not know *p*.” The first I is the I that believes; the second I is the I that does not identify with the belief; and the third moment, the moment of unity, allows the references of the two I’s to nevertheless be the same. (One could also phrase this in temporal terms — we are at the same time the I of the present, the I of the future, and the unity of the two.)

Thus we see again that a belief that is not embraced as knowledge is defective *as a belief*. It is a belief that is not fully believed; it is believed, but only partially, only provisionally; not in the full-blooded sense of the verb.

We can return to the analysis we made earlier of the speaker of the following sentence: “Right now I believe that Rosemary truly loves me, but I know that the doubts will come back and that tomorrow I will no longer believe it.” When we first analysed it, we said that the speaker was self-alienated. We can now see that the speaker is actually doubly self-alienated. For the speaker is not only alienated from the first I, the I that has beliefs; but also from the second I, the I that distances itself from their current beliefs by looking forward to the results of further enquiry. This is a kind of self-alienation that is no longer proper to the project of enquiry, but that entails a failure in the carrying out of that project; it is alienation from one’s own nature as a finite knower.

This blog post is already far too long. Nevertheless, as a coda, I want to make a few remarks about Bayesian epistemology. Does the Bayesian not have an attractive alternative to my analysis of the difference between belief and knowledge? For the Bayesian, with her degrees of belief, can say that to know *p* is to have *P(p)=1*; while to believe *p* is to have *0.5 < P(p)*. Thus, to say that one believes *p* but does not know *p* is simply to state that one has *0.5 < P(p) < 1*. No complicated story about self-alienation is needed.

In effect, where I say that a mere belief is a *less-than-full identification with a fully positive cognitive attitude*, such a Bayesian says that a mere belief is *full identification with a less-than-fully positive cognitive attitude*. But this means that the Bayesian cannot get the crucial fact into view that there is something *defective* about mere belief; that the goal of belief is knowledge, and that belief that falls short of that goal calls out for further enquiry.

Consider the Bayesian attitude towards having the fully positive attitude of *P(p)=1*. Given Bayesian updating, the cognitive attitude expressed by such a formula is not responsive to further evidence; and for this reason, many Bayesians hold it to be irrational to ever take up such a fully positive cognitive attitude. Thus, knowledge is no longer the goal of rational enquiry, but something to be avoided! This is not a surprising insight about knowledge; it is the unfortunate result of confusing a claim to knowledge with an embrace of fanaticism.
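That *P(p)=1* is immune to revision falls straight out of Bayes’ rule: the posterior is the prior weighted by the likelihood, and a prior of 1 gives the alternative zero weight no matter what evidence arrives. A minimal sketch (the function and the likelihood numbers are illustrative, not drawn from any particular Bayesian text):

```python
def bayes_update(prior, likelihood_if_p, likelihood_if_not_p):
    """Posterior P(p | e) via Bayes' rule, given the prior P(p)
    and the likelihoods P(e | p) and P(e | not-p)."""
    numerator = likelihood_if_p * prior
    denominator = numerator + likelihood_if_not_p * (1 - prior)
    return numerator / denominator

# An open-minded prior moves in response to evidence that
# strongly favours not-p:
print(bayes_update(0.8, 0.1, 0.9))   # about 0.31

# A prior of exactly 1 never moves, whatever the likelihoods:
print(bayes_update(1.0, 0.1, 0.9))   # exactly 1.0
```

The second call makes the point in the text concrete: once the fully positive attitude is taken up, the `(1 - prior)` term vanishes and no stream of evidence, however adverse, can dislodge it. Hence the Bayesian temptation to call such an attitude irrational.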

One option for the Bayesian is to relax her criterion for self-ascription of knowledge; perhaps to, who knows, having a degree of belief higher than *0.9*. But apart from being arbitrary, such a proposal also fails to address the fundamental problem, which is that the Bayesian conception of rationality simply does not incorporate a teleological relation to knowledge. What is rational for the Bayesian is to update according to the updating rules and accept the degrees of belief that follow. There is no sense in which particular degrees of belief are better than others; there is, in particular, no rule of rationality that requires us to seek *high* degrees of belief. This may look like a benefit. The Bayesian looks only at the evidence! If the evidence fails to allow for high degrees of belief, so be it! Long live science!

But this means that the Bayesian will not be able to see anything wrong with the cognitive situation in which all our beliefs about things that matter to us either continually hover around the value of 0.5, maximal uncertainty, or fluctuate wildly through time. In fact, however, such a situation would entail a complete breakdown of rationality — a situation of utter inductive scepticism, in which enquiry would not be so much as possible. One problem with the Bayesian scheme is that it promises to provide us with rational cognitive attitudes *no matter what evidence will be incoming*. Whereas the truth of the matter is that in a sufficiently chaotic world, *taking up rational cognitive attitudes is impossible*.

We can make substantially the same point by looking at identification and self-alienation. According to my account, the defectiveness of having mere beliefs is evident from the first-person point of view by a lack of full identification; one has positive cognitive attitudes with which one cannot fully identify, and which are therefore not in the full sense the attitudes that they ought to be. There is a tension, a defect, and it calls out for resolution through further enquiry. According to the kind of Bayesian we are considering, mere beliefs are identifications with attitudes that are not entirely positive. There’s no tension in that. Having a mere belief is like being luke-warm about a book, or liking-but-not-loving a certain dish. There is nothing about mere belief that *calls for* further enquiry. And to represent belief in such a way is surely to misrepresent the nature of finite knowers. It is to fail to grasp the fact that finite knowers are essentially embarked on a project.

A central doctrinal and methodological principle of empiricism was the thesis that all of our ideas, or at least all of our simple ideas, are derived from corresponding (simple) impressions. This view implicitly denied the distinctness of the faculties that Kant thought it so important to keep distinct, for the empiricists held in effect that the inner states of sensibility were already ideas and hence that sensibility was already the faculty of understanding. That is, they held that all it takes for there to be an idea of an object, for there to be before my mind, an object to reflect upon, is for one’s sensibility to be in a particular state. (p. 154)

But in fact there is no contradiction between what I wrote and what Hamawaki writes. What I would like to do in this blog post is revisit the theme of the unity of the cognitive faculties, but this time through the lens of Hamawaki’s themes: Cartesian and Kantian scepticism, and the relation between Kant and the empiricists.

Let us start with Cartesian scepticism in its most well-known form, scepticism about the external world. The argument for such scepticism goes roughly like this:

- All that we are immediately perceptually aware of are our ideas.
- We cannot distinguish a state where our ideas more or less accurately reflect an external reality and a state where they do not reflect an external reality at all.
- Therefore, perception gives us no reason to believe in an external world.

This kind of scepticism is called Cartesian not because Descartes embraced it, but because he developed it in his *Meditations* and made it play a crucial philosophical role there. Descartes does not accept the argument, but he accepts the *force* of the argument; he presents it as a powerful argument which can only be defeated by proving that there is a God and that this God is not a deceiver.

To understand the Kantian way of looking at this argument, we should start by noticing that Cartesian external world scepticism requires two ingredients. First, it should be clear on reflection that all that we are immediately perceptually aware of are our ideas (not external objects). Second, our perceptual awareness should be such that it *seems* to be awareness of external objects. Without the first, there’s no reason to worry: we can claim that perception just puts us in touch with the world. Without the second, there is also no reason to worry, since in this case there would be no tendency in us to believe anything false. It is only because we *seem* to be in touch with a world, but *are not*, that a horrible scepticism looms over us. Perhaps we should go further and say that the two ingredients form an inseparable package; it is only when we embrace both of them that we can really *understand* the distinction between perceptual ideas ‘in the mind’ and external objects ‘outside’ in the way that Descartes understands that distinction.

Now the Kantian move here is to point out that these two ingredients, both of them essential to Cartesian scepticism, cannot in fact be coherently combined. How could it be true both that it is clear to us (after a little reflection) that we are only aware of ideas, *and* that it *seems* to us that we are aware of external objects? How could one and the same conscious episode present itself to consciousness as cut off from the world *and* as an awareness of the very world it presents itself as cut off from? The whole set-up seems impossible. If our ideas are *merely* ideas, then the notion of external objects simply drops out of our cognitive economy; but if the notion of external objects drops out, then there is no intelligible sense left in which our ideas are *merely* ideas, no sense in which they can be considered incomplete or falling short of the goal they themselves set. And thus Cartesian scepticism founders in incoherence.

But in so foundering, Cartesian scepticism opens the door to what we may call Kantian scepticism. (As with Descartes, Kant is not himself a Kantian sceptic.) Descartes was worried that our perceptual ideas might give an inaccurate representation of the world. But now it seems that they do not represent the world at all; not because there is no world to represent, but because our ideas are not vehicles of representation. They are just what they are; themselves; no more and no less. It is not that we might make a mistake in judging our perceptual ideas to be correct. Our perceptual ideas are not proper objects for judgement at all. So the new worry is that we seem to be incapable of world-directed thought.

Suppose Descartes, or some other rationalist, said: “Forget about perception, our grasp of the world is intellectual. I can have an idea of extended substance independent of perception.” Let’s grant this for the sake of argument. But what is now the content of the claim that these extended substances are *real*, that in grasping the idea of extended substance we are grasping *the world*? Making this claim requires the distinction between ideas and world that perception was supposed to give us; it requires something that makes sense of judging that an idea is true or false. This is why Kant states that objectively valid cognition requires objects to be given to us in intuition. It’s not that we need objects to be given to us in the sense that we need evidence before we can arrive at true judgements. It is that without a faculty of intuition, judgement cannot even be attempted, since there could be no thought that was object-directed. (Perhaps the only rationalist strategy for avoiding this Kantian conclusion is the — Spinozist? — claim that having a clear idea, judging the idea to be true, and the idea being true, are all at bottom identical. I will not here consider whether this might work.)

We can achieve a better grasp of the same point by shifting our attention to the empiricists and to Kant’s insistence on the distinction between the passivity of sensibility and the spontaneity of the understanding. For the empiricists, the fundamental case of having an idea is, as Hamawaki points out, having an impression. This is a passive affair. It is something that happens to us. We are not *responsible* for it. In this, an empiricist impression is, Kant would say, radically distinct from a judgement. For a judgement can be either justified or not justified; and in making (or not making) a judgement we are taking up a responsibility. It is in this sense that judgement is an *act*, that it is *spontaneous*. This should not be confused with the clearly incorrect claim that judgement is a choice; that it would also have been possible to make the opposite judgement. In judging that there is a table in front of me, I am entering the space of reasons, as Sellars would say; but this usually does not involve, and maybe never involves, a moment of choice. (A point well-known from criticisms of Pascal’s Wager.)

The radical empiricist view is that judgement or belief is of the same kind as an impression; for instance, in Hume’s claim that a belief is an especially vivid idea linked to a current impression. This is simply to give up on judgement and reasons and criticism; something that Hume is aware of, although one doubts that he thoroughly appreciates the force of his own dismissal of Reason in favour of Custom.

But perhaps a better model of cognition is available, a model that leaves the basic empiricist idea about impressions intact. It is a two-step model that proceeds as follows. First, we get impressions from sensibility, and perhaps also from the imagination when it recombines the original impressions into new ideas. Second, we judge those impressions and ideas; either assenting to them or not. (This is essentially the Stoic model.) It may seem that this is what Hamawaki was describing as the Kantian position. We need sensibility to give us impressions, and a separate faculty, the understanding, to judge those impressions. The empiricists were right as far as sensibility is concerned, but they left out an important second step.

This is completely wrong as a reading of Kant. (To be clear: it is not Hamawaki’s reading of Kant.) For to think of cognition as being a two-step affair, to have what Conant calls a *layer-cake conception of human mindedness*, is precisely to invite Kantian scepticism. For given that we have two entirely separate faculties, sensibility and understanding, how could it be the case that what sensibility presents to us is the right kind of item to be subjected to judgement? This is utterly mysterious; a wonderful pre-established harmony if it turns out to be the case, but hardly to be expected and at bottom impossible to understand. And of course our look at Cartesian scepticism suggests that the two separate faculties will *not* be able to harmoniously work together. Judgement requires that which is judged to have objective significance; to be *about* something, to be a representation of something that cannot be reduced to the representation itself. But impressions, as the empiricist conceives of them, are not representations. *They do not point beyond themselves.* They cannot be true or false, veridical or misleading, or anything else that might make them a proper subject of judgement. When judgement comes onto the stage *after* sensibility has been filled with content, *it is already too late*. Not even God-who-is-not-a-deceiver can now create the required relation between the contents of sensibility and the powers of judgement. So although Kant might hold, against Hume, that we must distinguish between sensibility and understanding, he certainly does not hold that sensibility and understanding are *separate* powers. Rather, they are two ‘structural moments’ within the single unified power of thought.

The Kantian solution is to claim that the understanding is already at work in sensibility. Our perceptions *do* represent objects; that is, they present things as falling under certain kinds of rules; and they do this because our power of rules, the understanding, is already at work in even our most simple perceptions. Here is a concrete example: when you see a mug, you are representing it to yourself as something that can be touched, handled, used in certain ways, that will remain in existence even when you are not looking, and that will respond in mug-like ways to events that befall it. And, crucially, this is the *central, basic* kind of seeing. It is not *based on* some simpler kind of seeing that is merely awareness of what later philosophers might call ‘raw sense data’. For the point we made above is that if we start from something as minimal as raw sense data, we will never be able to get to those richer kinds of experience that we actually have and without which human cognition would not be possible.

Here is a more abstract example. Kant’s point in the Second Analogy is that for our experience to be such that event *A* is represented as happening later than event *B*, is (in part) for it to be such that *B* and *A* are represented as connected by causal laws. Unless we were already conceiving of things as subject to the rules of causation, we could not have an experience of succession, but only a succession of experiences.

In conclusion. Kant tells us that it would be wrong to base human cognition on sensibility alone; wrong to base it on the understanding alone; but also wrong to base it on two separate faculties, sensibility and understanding, since if these faculties start out as separate, nothing could ever bring them together again. Rather, the core applications of sensibility require the understanding, and the core applications of the understanding require sensibility, and both of them make sense only as necessary moments within a unified faculty of cognition that strives for objective knowledge of the world.

(I say *core* applications because we can also investigate certain *limit* applications, *edge* cases, in which we attempt to get one of these moments in view while ignoring the other. If one attempts to see not a mug but merely one’s momentary visual field, the ‘bare sense data’, then insofar as one succeeds one is making a limit use of the sensibility. If one is engaged in what Kant calls general logic, e.g., the formal study of syllogisms irrespective of all content, then one is making a limit use of the understanding. But, as Conant argues at length in his *Reply to Boyle*, these are indeed limit cases for Kant, intelligible only against the background of the full-blooded use of those capacities together. And the reason they must be conceived of as limit cases is the story I’ve been telling above.)

It’s crazy to me how confident K[ant] is in his ability to discern discrete cognitive faculties just by reasoning them out. He keeps plowing ahead, constructing a mind-numbingly complex account out of more or less thin air.

I suspect this is a common reading experience. Kant seems to start out in a quite parsimonious way by distinguishing just two cognitive powers, sensibility and the understanding. But before long the understanding has fallen apart into the understanding, the power of judgement and reason; at least two kinds of imagination have been added; and who knows whether the ‘hidden art’ that gives us the schematised versions of the categories is yet some further cognitive power? And so we are now looking at at least six, and maybe more, cognitive powers, and the entire theoretical edifice is starting to seem both baroque and unmotivated.

How many cognitive powers are there, according to Kant? Two? Six? I want to suggest that there is an important sense in which this question is misleading; a sense in which asking this question is to miss what is most central and interesting about Kant’s conception of cognition. For what *is* most central and interesting is the idea that our cognitive powers form a *unity*, a *functional, goal-directed unity*; and that while it will be necessary, when doing philosophy, to distinguish different powers or faculties, we should never lose sight of the fact that those powers are intelligible *only* within the context of the essential unity of our cognition. And so the answer to the question ‘how many cognitive powers are there?’ must be in some sense ‘one’, and in some sense ‘as many as are useful in the attempt to understand our own understanding’. We do not ‘discover’ that the understanding actually consists of three powers, the understanding, the power of judgment, and reason. Rather, it turns out to be philosophically useful to distinguish between the grasp of concepts, the bringing together of concepts in judgments, and the bringing together of judgments in arguments. But none of these abilities would, for Kant, make sense independently of the others. It is not just that arguments require judgments and judgments require concepts, which most philosophers would agree with; it is also that concepts only make sense as the constituents of judgments, and that judging only makes sense within the greater project of linking judgments together into a single coherent body of knowledge. All of the faculties presuppose all of the others. Kant is not doing armchair cognitive psychology and discovering lots of hitherto unknown faculties. He is merely working out what it means to be a finite cognitive agent; what it means to be the kind of thinker who strives for objective knowledge by applying general concepts to objects that are given to them.

Perhaps it is best to illustrate the fundamental role of unity within Kant’s thought by looking at his famous remark that ties together sensibility (through which objects are given to us, something English Kant calls *intuition* and original Kant calls *Anschauung*) and the understanding (which is the faculty of applying concepts). Kant writes:

Thoughts without content are empty, intuitions without concepts are blind.

This remark clearly brings the two faculties together in a strong way: each is useless (‘empty’ or ‘blind’) without the other. But if that is all we say, we are still underestimating how close the tie is between sensibility and the understanding. For it suggests that we can at least *understand* the nature of sensibility and the understanding apart from their relation to each other. But recall that for Kant our cognitive powers form a functional unity; they are essentially defined by their aim, which is objectively valid cognition. Now without sensibility, the understanding cannot come into contact with any objects. Without the understanding, sensibility cannot give rise to cognition. So neither can reach the aim that is essential to them without the other. In fact, and it is easy to overlook this, while sensibility is defined as a power that brings us into contact with objects, *it cannot do even this much without the contribution of the understanding*; for an object is essentially something that falls under the categories of the understanding. And vice versa the understanding, which is supposed to apply concepts, cannot do that in any serious sense of the term if nothing is given to it in sensibility. So even these two cognitive faculties, the distinction between which is foundational for the entire story of the *Critique*, are not (as James Conant would phrase it) ‘self-standingly intelligible’. Our understanding of each presupposes the other.

Kant’s *Critique* can only be understood by keeping in mind that his subject is the finite knower, someone whose cognition has both an active and a passive pole; passive in that we are affected by objects, active in that we judge and reason. These poles cannot be understood apart from each other, and they also cannot be understood apart from their *telos*, their aim, namely, unified knowledge of the world. Everything Kant says about our cognitive faculties is an attempt to think through this basic situation. We first distinguish between sensibility and the understanding in order to talk about the passive and the active pole of cognition. We then realise that we must be able to actively work with what has been passively given to us, and we call the power to do this the imagination. And so on.

Behind all the distinctions that Kant makes, there always lies a fundamental unity. What is transcendental idealism? I think the following would be a good one sentence attempt to define it:

The unity of the subject, the unity of the world, and the unity that is the goal of inquiry, are one and the same unity; one and the same in the very strong sense that each of them is intelligible only when the other two are thought along with it.

This was and remains an absolutely radical idea. It undermines, among other things, the Cartesian claim that we can grasp the unity of the self apart from any grasp of the world; and the metaphysical realist’s claim that one can grasp the unity of the world independently of a grasp of our modes of inquiry. I think Kant is absolutely right. And it is because of that that I think the *Critique of Pure Reason* remains one of the most important philosophical texts to study; and also because of that that I am increasingly willing to call myself a transcendental idealist.

Why would anyone think that “a property is a class of things across possible worlds” is a meaningful and enlightening thing to say? This is not a rhetorical question.

And then I got some good replies and queries by Adam F. Patterson and Arturo Javier-Castellanos. But as I started mentally composing a reply to them, I realised it would make more sense to write it out as a blog post. And so here we are.

The claim that properties are sets of individuals, in this world and other worlds, seems to me somewhere on the border between the false and the unintelligible. Take the property of being bald. We can of course write true sentences starting with the words “being bald is…”; for instance “being bald is having no hair on the top of one’s head” or “being bald is a property shared by Peter Adamson and Michel Foucault”. But the claim “being bald is the set of all bald people” seems, on the face of it, as false (or maybe unintelligible) as “the number three is my girlfriend” and “honesty is purple”. The set of all bald people is a bunch of people. Being bald is clearly not a bunch of people.

There are other ways to make the same point. Properties can be perceived. I can see that someone is bald. But can I see that someone is a member of the set of bald people? The only sense in which I can see that is surely by seeing that the person is bald and then (if I happen to be thinking about set theory) concluding that this person is a member of the set of bald people. Quite in general, it seems that there is an explanatory asymmetry between having a property and belonging to the set of things that have that property. That *X is P* explains that *X* is a member of the set of things that are *P*. Not the other way around. You belong to the set of bald people because you are bald; you’re not bald because you belong to the set of bald people. Such an explanatory asymmetry suggests that Lewis’s identity claim can’t be true.

What’s more, suppose that properties are these sets stretched out across all the possible worlds. Then denying that there are other possible worlds would amount to denying that there are properties. But surely that can’t be right? There clearly are properties in this world, and they don’t seem to depend in any sense on the existence of other possible worlds. (Or on the existence of sets, for that matter. Surely the existence of properties is far more obvious, far less problematic, than the existence of sets. I myself firmly believe that there are properties, but I do not believe that there are any sets. Maybe they ‘exist in ZFC’ or something like that; but they’re not *real*.) And that is as it should be. Our grasp of the world is grounded in our very local experiences. It had better not be the case that those experiences themselves require a grasp of the world as a whole, let alone of the totality of all possible worlds. (And if you want to avoid that, maybe you had better avoid adopting Leibniz’s God’s-point-of-view metaphysics while removing God!)

Having said that, we arrive at the difficult point. All of the above seems obvious. So obvious that it can’t be the case that Lewis has failed to recognise it. Instead, it must be the case — or at least we should assume it to be the case — that Lewis would smile through all of the above and then explain that I’ve approached the entire question from the wrong direction; that there is something wrong in a fundamental way with my approach to the question of properties. But what? Here’s something that Arturo Javier-Castellanos wrote:

Classes are technical I guess, but unlike properties, we know how to individuate them, so if you can analyze properties in terms of things+classes, that looks like progress

and here is the Stanford Encyclopedia article on *Properties*:

Quine (1957 [1969: 23]) famously claimed that there should be no entity without identity. His paradigmatic case concerns sets: two of them are identical iff they have exactly the same members. Since then it has been customary in ontology to search for identity conditions for given categories of entities and to rule out categories for want of identity conditions (against this, see Lowe 1989). Quine started this trend precisely by arguing against properties and this has strictly intertwined the issues of which properties there are and of their identity conditions.

So perhaps here is what Lewis would say to me: those so-called obvious judgements of yours are all nice and dandy around the kitchen table, but when we start doing philosophy we should be ready to revise our understanding even of something as seemingly basic as properties. And the motivation to do so is that we want our entities to have well-behaved identity conditions; and properties, as Quine has shown, don’t seem to have those. But here I, Lewis, come to the rescue with my suggestion that properties are sets; and sets *do* have clear identity conditions. If you don’t want to accept that, you had better have an alternative story.

But why would we believe that anything that exists must have clear identity conditions? I’m certain that I exist, but the question whether I am the same person as Victor Gijsbers at four years old is, I’d say, rather hard to answer. Maybe there is no answer. So it seems I can be clear about something’s existence without being clear about something’s identity conditions and even without being clear about whether there *are* any identity conditions.

Let’s look at an example of a genuine question about the identity of properties. Is *being hot* the same property as *having high mean molecular energy*? Great question. How do we go about answering that? Presumably by a combination of linguistic analysis (or conceptual choice) and empirical research. We might want to distinguish between a phenomenological feeling of hotness and a power to cause certain thermometer readings. We might want to distinguish between that power (which could perhaps be generated in many very different ways) and the *typical* physical state (if any) that underlies that power. If we empirically research whether there is such a typical physical state, we find that, yes, it consists in having high mean molecular energy. And so there is a sense of ‘being hot’ in which it denotes the same property that is denoted by the phrase ‘having high mean molecular energy’.

At no point in the answering of this question did we have to talk about sets or other possible worlds. Indeed, Lewis’s identity criterion is of *literally no help at all* in answering the question. Is *being hot* the same property as *having high mean molecular energy*? Well, Lewis might say, are the hot individuals across all the possible worlds exactly the same as the individuals with high mean molecular energy? That doesn’t help. We can’t go there and check. *The only way we could answer Lewis’s reformulation of the question is by first answering the original question*; once we know that A and B are the same property, then, I guess, we can say that all individuals across all possible worlds that are A are also B and vice versa. But surely we can never do it the other way around.

But if Lewis’s identity criterion cannot be used to answer real questions about identity, then what is it for? And how could it motivate a rejection of the obvious truths from which I started? So I end up still being in a state of confusion; unable to see why a theory like his might seem to make sense.

(And of course my confusion runs beyond properties. People in this part of philosophy also claim things like “propositions are sets of possible worlds”. That is just as obviously false. A proposition can be true, it can be something you believe. But a set of worlds can’t be true or false, and you can’t believe a set any more than you can believe a table. I am sure there are many more examples.)

I choose that last phrase with care. Once there may have been a coherent conception of philosophy as the discipline whose task it is to *analyse concepts*, to take the pre-existing concepts of the sciences and clarify them, a discipline which therefore could be called ‘analytic philosophy’. But that is clearly not what today’s practitioners are engaged in. When a contemporary philosopher defends an A-theory or B-theory of time, she is not just analysing the scientific or everyday concept of time. Indeed, one of the most basic questions she has to grapple with is precisely the relation between science and the everyday, the question whether there is, for instance, an epistemic hierarchy between them. How *could* that be a question of conceptual analysis?

There once may have been a coherent conception of philosophy as the discipline which analyses concepts. But there are reasons for suspicion. This idea has all the characteristics of a foundational myth, suggesting a cohesion and clarity of purpose that may never have existed even in the past and serving now only as the focal point of an unproductive nostalgia. It is certain that already at the very beginning of analytic philosophy an instability was present; that already in the opening moves of the game, the very possibility of the game was being questioned. Schlick’s 1930 article *Die Wende der Philosophie* (‘The Turning Point of Philosophy’), surely as good a starting point for this tradition as any, already makes the (Tractarian) point that the insight of philosophy cannot be expressed and cannot be a form of knowledge. It ends on the note that when the new philosophy is successful, there will no longer be philosophical questions. Instead everyone will end up discussing all questions “philosophically, that means: meaningfully and clearly.”

It would be interesting to trace the suicidal tendencies of philosophy; to investigate how and why, again and again, philosophers can see the purpose of philosophy fulfilled only in the death of philosophy — a theme that we find in thinkers as different from each other as Wittgenstein, Schlick, Heidegger, Quine and Rorty. Is it that once we give up the idea of finding the Platonic Truth, there is then nothing else that philosophy can be? That there is only the therapeutic project of overcoming the urge of overcoming the urge of overcoming the urge of overcoming the urge…

Clearly one may doubt that Schlick’s *positive* vision makes much sense. How could there be a class of people, the philosophers, who have been trained to be good at *meaningful and clear* thinking, quite independent of context and subject matter, and who help the scientists achieve these same levels of meaningfulness and clarity? Schlick would no doubt answer: formal logic! But few among us still believe that the cure to the ills of the sciences lies in formalising scientific theories. There are, to be sure, still some desperate attempts to see philosophers as people who can help science apply Bayesian schemes of inference — but this is surely to walk away from philosophy and turn yourself into a particular kind of statistician.

Liam Bright writes:

For what I think is gone, and is not coming back, is any hope that from all this will emerge a well-validated and rational-consensus-generating theory of grand topics of interest. We can, and we will, keep generating puzzles for any particular answer given, we will never persuade our colleagues who disagree, we will never finally settle what to say about the simple cases in order to be able to move on to the grand problems of philosophy. My anecdotal impression is that junior philosophers are hyper aware of these bleak prospects for anything like creation of a shared scientific paradigm.

If my analysis above is at all right, then the prospects for such a shared paradigm have *always* been extremely bleak, even at the very inception of analytic philosophy. If philosophy does not have its own subject matter, its own truths, if it is in fact not a body of doctrine but a method for clarifying the sciences, then it is not even the kind of field in which there *could* be a paradigm. This is not to say that there was no *desire* to be like the sciences, to turn philosophy into a science and give it a paradigm. There surely was, and I think the most obvious example of this is Quine’s naturalism, the point of which is to allow philosophy to share in the paradigms (and hence the reliability, progress and prestige) of the sciences. But, again, this is merely a disguised suicide attempt. Philosophy can only achieve a shared scientific paradigm by turning itself into a science; and it can only do that by no longer being philosophy, by giving up, among many other things, the ability to ask about the nature and the status of science.

A paradigm — Kuhn is admirably clear about this — achieves its unifying social purpose only by imposing severe limitations; by setting aside a whole realm of questions as questions that cannot be asked. *This is the very antithesis of philosophy.* For what is philosophy? It is the attempt to “understand how things, in the broadest possible sense of the term, hang together, in the broadest possible sense of the term” (Sellars). It is the attempt to always take a step backwards; the wish to question every presupposition; the eternal inability to take for granted the ideas that have been bequeathed to us; the desire to think everything through for ourselves. While the individual philosopher may desire to achieve the perfect understanding that will be the inescapable paradigm for all later thinkers (this desire is, of course, the violence of metaphysics that people like Heidegger, Derrida and Vattimo warn against), nevertheless the *method* of philosophy is always and necessarily pre-paradigmatic; or rather, since we will never arrive at a paradigm, a-paradigmatic and even anti-paradigmatic.

So when I read Bright’s paragraph, I call out: good! This means that philosophy is still *alive*! And if that means the end of analytic philosophy, then analytic philosophy is something we are well rid of.

To be clear, I love analytic philosophy. I read much analytic philosophy. I sometimes write analytic philosophy. But while I do not object to analytic philosophy, I do object to analytic philosoph*ers*. I object to any philosopher identifying herself with a specific way of doing philosophy, with a specific subset of authors, and of course, also, with opposition to another way of doing philosophy and another set of authors. If philosophy is anti-paradigmatic, then it is a grave mistake to join a clique. It’s fine (and obviously necessary from a practical point of view) that some people have read more David Lewis and other people have read more Jacques Derrida, but it is not fine to turn the fact that we are always limited into a justification for self-limitation. There should be no analytic philosophers, just as there should be no Continental philosophers; and indeed no ethicists and political philosophers and philosophers of science either. Any philosophical problem is all philosophical problems. You will have known nothing if you have not known everything.

Again, I love analytic philosophy. But it has an original sin, and that original sin is the idea that philosophy could be like a science. The idea was never consistently adhered to (even in Schlick, Carnap or Neurath) and it has never stopped philosophers from doing very non-scientific things. But there is one crucial respect in which it has had an enormous and disastrous practical influence. It has made analytic philosophy, and thereby philosophy in most of the academic world, eager to embrace the institutional trappings of modern science. Perhaps we would have been dragged into a world of short journal articles, selective peer review, research projects, and increasing specialisation anyway; but the very least we could have done was kick and scream the whole time. Instead, we went with a smile. This is what we *wanted*.

For I do share one part of Bright’s pessimism. His most basic message is that philosophy requires a rebirth that isn’t happening; that there is something lifeless, something belated, something tired in academic philosophy. That for all the stuff that is happening, we nevertheless seem to be stuck in a rut. I agree. But I believe that this has nothing to do with the need for a new paradigm and everything to do with the way that philosophy is practised in modern academia.

What do you need to do to get into and remain in academia? You’ve got to show (again and again and again) that you are good at writing and publishing a very specific kind of text: the 8000 word article suitable for the peer-reviewed journal. This kind of text requires one to choose a very specific topic; to delve into the pre-existing literature on that topic; to formulate a new argument or new position concerning the topic; and then to have it deemed relevant and acceptable by other people who have been writing on the same topic.

One hardly needs to spell out the obvious. This kind of publication is very good for the specialist and very bad for the generalist; it is very good for making a small contribution to an existing debate and very bad for trying to formulate new questions and new debates; it is very good for philosophy that relies on argumentation, which always requires a pre-existing context, and very bad for philosophy that relies on the imagination; it is very good for philosophy that speaks in the way that people expect you to speak and that can thus be judged by ready-to-hand standards, while being very bad for philosophy that speaks in unexpected, weird ways that nobody knows how to judge. It is, in other words, a way of writing that puts every possible obstacle in the way of rejuvenation, rebirth, originality, idiosyncrasy, having a strong sense that any philosophical problem is all philosophical problems, and having the ability, necessary to any innovator, to ignore or misinterpret her predecessors. Even when our students come to us full of newness and unforeseen sparks, we make them into scholars and specialists. And we make *ourselves* into scholars and specialists. Perhaps with the idea that later on, once we have tenure, once we have published enough, once our reputation is secure, *then* we will write the great and original works we know we are here to produce. But will we ever recapture the grand ambitions we started out with? Will we even desire to?

Lest I be misunderstood, let me say that I love scholars and specialists. It’s an honest and useful job. Perhaps, in the current scene, it is the only honest way of being a philosopher. I also love, to some extent, the academic journal article. Certainly many great ones have been written. But it is a disaster for philosophy that we require the young philosopher to become productive immediately, and then productive in a very specific, scholarly/specialist way. Think of all the texts that cannot be written. *The Birth of Tragedy* would not survive a referee coming from classics. The *Tractatus* would have been dismissed even by the greatest specialist of the day, Frege. Even a much more academic work like *Being and Time* would, first of all, not be suitable for cutting up into article-length pieces; and, second, would immediately be dismissed as ignoring most of the existing literature on “all these topics”. The possibility space of philosophical publishing has been immensely impoverished.

If we want philosophy to be more vigorous, more interesting, in one word, *better*, then we need to throw open the gates. We should allow people to get away with all kinds of things they are not now getting away with: writing dialogues and poems and jokes and books in strange undefinable genres; writing about topics and authors that they do not have specialist knowledge about but do have some unique, idiosyncratic perspective on; writing texts that make large, vague, visionary claims that cannot be immediately cashed out in terms of theses; writing texts that do not contain argument but work in different rhetorical registers. We need to be quick with encouragement and helpful ideas, slow with criticism and dismissal. We need to get rid of the most hostile of reading practices, the anonymous referee with his ‘decline’ or ‘accept’. If that means we need to get rid of the ridiculous system of artificial scarcity that is the ‘top journal’, well, that is surely a sacrifice we can make without falling into despair. We need to accept that much may be written that is bad, but that the gems will make it worth it. We need to be careful to *also* reward good scholarship and specialisation, because philosophy most certainly needs that *as well*. Most of all, we must stop dreaming about some paradigm, some final set of methods or answers, some way of doing philosophy that we can all get behind; and accept that the most interesting and fruitful philosophy will result from a practice that is splintered, that allows for small groups doing their own things, as long as this is not isolation, as long as it is balanced by an eagerness to engage with other groups, hopefully in highly unexpected ways.

Marx wrote:

“For as soon as the distribution of labour comes into being, each man has a particular, exclusive sphere of activity, which is forced upon him and from which he cannot escape. He is a hunter, a fisherman, a herdsman, or a critical critic, and must remain so if he does not want to lose his means of livelihood; while in communist society, where nobody has one exclusive sphere of activity but each can become accomplished in any branch he wishes, society regulates the general production and thus makes it possible for me to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner, just as I have a mind, without ever becoming hunter, fisherman, herdsman or critic.”

May we read Kant in the morning, write about formal logic in the afternoon, compose a Platonic dialogue about love in the evening, and talk with our friends about the *Tale of Genji* after dinner. *That* is the life, my friends; the life, that is, of the mind; the life that is philosophy.

The answer is yes. It’s now the 21st of December, and I have read 63 books. (To be entirely honest, I need to read 35 more pages before I’ll have finished number 63, but I don’t think that will be a problem. And I’m going to count any books read after that as my head start for 2021!) That’s a massive increase over last year, and I’m very happy with it.

My reading was much more diverse too. In 2019, I read 26 books of philosophy and 1 book of fiction. (Jules Verne. When I was feeling ill.) This year, 31 books — half of the list — were books of philosophy, while the rest were books of fiction, poetry, plays, history, psychology, all that good stuff. I’m really glad to have rediscovered my passion for fiction; it’s one of my oldest loves!

I’m putting the entire list below. But first, what about 2021? Should I do a 50 book challenge again? Here’s one reason against it. While the challenge really worked to motivate me — seeing the number of books climb steadily towards the goal was very satisfying — there’s also an obvious negative side to it: you’re less likely to pick up a massive tome. I know it’s weird, but I really found myself choosing shorter books over longer ones. And while that was fine — and while I did end up reading, say, the pretty thick *Black Leopard, Red Wolf* — it’s a good reason to shake things up a bit for next year.

I’ve thought about doing a page number challenge: read 12,000 pages. Something like that. But that starts to sound like accountancy. So instead, I’m going to do something much more freeform. I’m going to set myself a bunch of more or less clearly defined goals. I don’t need to meet all of them. But it would be nice to meet some of them. And at the end of the year, I’d like to have a reading list I can be proud of.

- **Read at least 20 books written or edited by women.** Of the 63 books I read in 2020, only 7 were written or edited by women. Getting more gender balance sounds like a good goal, and a new way of looking at my book collection.
- **Read more German.** All the books I read in 2020 were in Dutch or in English, with the sole exception of Wittgenstein’s *Tractatus*. (Which I read in the original German.) Let’s read some more German books!
- **Read a good number of books by or about Kant.** I’ve been collecting books about Kant, but many of them remain unread. Time to change that. (Also, I’m teaching Kant next semester.)
- **Finally read John Crowley’s *Aegypt* quartet.** Everyone has these books they’ve been meaning to read for a long long time. And for a long long time I’ve been the guy who really enjoyed the first two books of this series but didn’t read the whole thing. Let’s change that.
- **Read some massive books.** I mean really big ones! Proust. Pessoa. Pynchon. Ariosto. *The Tale of Genji*. *The Divine Comedy*. Montaigne’s *Essays*. Friedman’s book on Kant’s metaphysics of natural science. The *Enneads*. Plutarch’s *Lives*. Moore’s *The Evolution of Modern Metaphysics*. These are just examples. But a couple of really big ones, that would be good.
- **Read all the books my wife gives me for Christmas.** I don’t know which ones they’ll be, but they’ll be from a list I’ve made myself and I think I should read them!
- **Read and/or return the books I’ve borrowed.** Okay. That’s pretty self-explanatory. I’m not a *terrible* offender in this regard, but there’s a handful of books on my special borrowed books shelf that… yeah, have been there *way* too long. Don’t tell Dirk-Jan van Vliet about this goal, or he may remember that I still have two of his books here.
- **Read some books from the list made by David Bentley Hart.** Which you can find here. Most of it is things I’ve never heard of, but Hart’s book on hell (*That All Shall Be Saved*) impressed me, and this list sounds like it could be full of wonderful discoveries. I don’t own *any* of the books, except for Sei Shōnagon’s *The Pillow Book*, which is mentioned at the end. So I suppose that is more or less mandatory. (Also, written by a woman, so it helps with the first goal!)

So much for the future. As for the past, here is the list of books I have read in 2020:

- Juffer et al., *18 x 18: pleegkinderen op de drempel*
- Kenneth Clatterbaugh, *The Causation Debate in Modern Philosophy 1637-1739*
- Henri Bergson, *Creative Evolution*
- Dave Morris & Jamie Thomson, *Can you Brexit?*
- Henri Lipmanowicz & Keith McCandless, *The Surprising Power of Liberating Structures*
- Matthew Walker, *Why We Sleep*
- Nelson Goodman, *Fact, Fiction and Forecast*
- Paul Horwich, *Asymmetries in Time*
- T. S. Eliot, *Four Quartets*
- Martin Heidegger, *The Essence of Human Freedom*
- Emanuel Rutten, *Contra Kant*
- Immanuel Kant, *Critique of Pure Reason*
- Henry Allison, *Kant’s Transcendental Idealism*
- Sebastian Gardner, *Kant and the Critique of Pure Reason*
- Immanuel Kant, *Prolegomena*
- Pieter Thyssen, *The Block Universe*
- Angela Coventry, *Hume: A Guide for the Perplexed*
- Immanuel Kant, *Metaphysical Foundations of Natural Science*
- Cees Nooteboom, *Philip en de anderen*
- Vonne van der Meer, *Winter in Gloster Huis*
- Patricia Duncker, *Hallucinating Foucault*
- David Bentley Hart, *That All Shall Be Saved*
- Harry Frankfurt, *On Bullshit*
- Immanuel Kant, *Theoretical Philosophy after 1781*
- Alexander McCall Smith, *The No. 1 Ladies’ Detective Agency*
- Thomas Harrison, *Great Empires of the Ancient World*
- Ted Sider, *Four-Dimensionalism*
- Charles Dickens, *Great Expectations*
- Peter Watts, *Blindsight*
- Aldous Huxley, *The Genius and the Goddess*
- August Strindberg, *Three Plays (The Father / Miss Julia / Easter)*
- Roger Zelazny, *Lord of Light*
- René ten Bos, *Extinctie*
- Vladimir Nabokov, *Invitation to a Beheading*
- Ibn Tufayl, *Hayy ibn Yaqzan*
- Susan Haack, *Philosophy of Logics*
- Aaron A. Reed, *Subcutanean (seed #30330)*
- E. J. Lowe, *Locke*
- Gene Wolfe, *Soldier of the Mist*
- Boëthius, *De vertroosting van de filosofie*
- Alexander McCall Smith, *Tears of the Giraffe*
- George Berkeley, *An Essay Towards a New Theory of Vision*
- Samuel Johnson, *The History of Rasselas, Prince of Abissinia*
- Michael Pye, *The Edge of the World*
- Marlon James, *Black Leopard, Red Wolf*
- Alexander McCall Smith, *Morality for Beautiful Girls*
- Robert van Gulik, *Halssnoer en Kalebas*
- Alexander McCall Smith, *The Kalahari Typing School for Men*
- Margaret Wilson, *Descartes*
- Brian McGuinness, *Wittgenstein: A Life*
- Ludwig Wittgenstein, *Tractatus Logico-Philosophicus*
- David Hume, *An Enquiry Concerning Human Understanding*
- Michael Morris, *Wittgenstein and the Tractatus*
- Elizabeth Anscombe, *Introduction to Wittgenstein’s Tractatus*
- Ludwig Wittgenstein, *Over kleur*
- Walther Heissig (ed.), *Mongoolse sprookjes*
- Michèl de Jong & Drs. P, *Kijkvoer en leesgenot*
- William Shakespeare, *Romeo and Juliet*
- Milan Kundera, *De romankunst*
- John Dewey, *Experience and Education*
- Cornelis Verhoeven, *Vergeet de zweep niet*
- Aaron A. Reed, *Subcutanean (seed #30323)*
- Donald Loose, *Over vriendschap*

This is sad. It is sad, first, because proofs are often more fun and more beautiful than calculations. And, second, because you can’t really understand what mathematics *is* if you don’t know about proofs. It’s like having had physics classes and being able to use Ohm’s Law and the Ideal Gas Law and so on to calculate things… and yet having no idea what an *experiment* is and how physicists find out about laws. For proofs stand to mathematics as experiments stand to the physical sciences: they are how we know that our statements are true.

The mathematics textbook gives us the Pythagorean theorem: “In every right triangle with legs *a* and *b* and hypotenuse *c*, it is true that *a² + b² = c²*.” Nice. But how do we *know* this is true? What we *could* do, is this: draw a variety of right triangles, measure their sides, perform the calculation, and see whether the outcomes are approximately correct. We *could* do that, but if we did, we would not be doing mathematics. The Pythagorean theorem is not something that holds approximately. It is also not something that we happen to have some good evidence for, but which could be proved false by new evidence (the discovery of a right triangle that doesn’t fit the formula). On the contrary. We can *prove* the theorem. That is, we can show that it’s true, for absolutely every right triangle, and in a way that can withstand all criticism and that leaves no room for further doubt or further ‘experimentation’.

In this way, it must be said, mathematical proof is very different from, say, Sherlock Holmes ‘proving’ that the butler did it. There could always be new evidence showing that the butler was innocent. Mathematical proof is not like that.

So how does mathematical proof work? There are many different types of proof and I want to look at some examples. But first, four bits of terminology about numbers. Mathematicians love careful use of terminology, because it allows us to see that our proofs are valid.

- The **natural numbers** are just the ‘whole’ numbers starting from zero: 0, 1, 2, 3, 4, 5, 6, … (So neither -2 nor 0.5 are natural numbers.)
- A natural number is a **square** just in case it is the square of some natural number, that is, equal to *n* × *n* for some natural number *n*. So 4 is a square, because 4 = 2 × 2.
- We say that one number, *a*, can be **divided by** another number, *b*, just in case *a/b* is a natural number. So 12 can be divided by 3, because 12/3 = 4. It can’t be divided by 5, because 12/5 = 2.4, which is not a natural number. Another way of thinking about this is that *a* can be divided by *b* just in case you could equally divide *a* cookies over *b* children without having to break any. (Of the cookies.)
- A **prime number** (or simply a **prime**) is a natural number unequal to 1 that can only be divided by 1 and by itself. So 5 is a prime, because you can only divide it by 1 and by 5.
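These definitions map neatly onto code. Here is a sketch in Python (the helper names are my own, purely for illustration):

```python
def divides(b, a):
    """b divides a just in case a/b is a natural number,
    i.e. dividing leaves remainder 0."""
    return a % b == 0

def is_square(n):
    """n is a square: n equals k * k for some natural number k."""
    return any(k * k == n for k in range(n + 1))

def is_prime(n):
    """A natural number unequal to 1 (and 0), divisible
    only by 1 and by itself."""
    return n > 1 and all(not divides(d, n) for d in range(2, n))

print(divides(3, 12), divides(5, 12))  # True False
print(is_prime(5), is_square(8))       # True False
```

Note that `is_prime` is written directly in terms of `divides`, mirroring the verbal definition above rather than aiming for efficiency.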

Given this bit of terminology, we can make the following claim:

All natural numbers smaller than 6 are either prime or square.

How do we prove this? That’s easy! We just look at all the natural numbers smaller than 6. Let’s go. 0 is the square of 0. 1 is the square of 1. 2 is prime. 3 is prime. 4 is the square of 2. And 5 is prime. So the mathematical claim we made was true. We’ve proven it beyond a doubt.

It was possible to do so because (1) we could easily check the truth for each of the numbers, and (2) there were only finitely many numbers to check. And of course we can be sure that we didn’t *forget* any numbers. We *know* what the natural numbers smaller than 6 are; it’s impossible that somebody makes the surprising discovery of a new natural number between 2 and 3! (*Why* this is impossible is a difficult question for the philosophy of mathematics. We will not go into it.)
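The exhaustive check is simple enough to run as a computation. A self-contained Python sketch (mine, not part of the proof itself), with the prime and square tests written inline:

```python
# One pass over the naturals below 6; each must be prime or square.
for n in range(6):
    prime = n > 1 and all(n % d != 0 for d in range(2, n))
    square = any(k * k == n for k in range(n + 1))
    assert prime or square, f"{n} is neither prime nor square"
print("all naturals below 6 are prime or square")
```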

To be honest, though, this proof is pretty disappointing. It’s an example of what mathematicians call a *proof by exhaustion*, or what we might also call a *brute-force proof*. We just tried every case. There was no real insight involved, and the resulting proof lacks what mathematicians like to call *elegance*. So let’s move on to something better!

(I *did* have a reason for talking about this particular mathematical claim, though you might not think it a *good* reason. See, I really *hate* the kind of test where you are given a list of numbers and have to give the next number. “0, 1, 2, 3, 4, 5”… what’s the next number? Of course you’re supposed to think that these are the natural numbers and the next number is 6. But it could just as well be a list of all the squares and primes, in which case the next number is 7. With sufficient ingenuity, you can make the case for *any* number being the next number. So these tests don’t test whether you can do maths, or whether you are smart. They test your ability to conform to expectations. Okay, that was my rant. Back to proofs.)

Let’s look at the following mathematical claim:

8 is not a square

This may look trivial, but it is actually a claim about *all* the natural numbers, telling us that for any natural number *n*, the number *n²* is not 8. How can we know that this is true? Surely we can’t exhaust all the natural numbers? No, we can’t. But let’s see what happens if we *try*.

0² = 0, 1² = 1, 2² = 4, 3² = 9, 4²=16…

At this point something dawns on us. If we go on, the numbers will just get bigger and bigger. And since we have already passed our target number of 8, we can be sure that every number we haven’t checked yet will yield a square that is bigger than 8. Hence, we have done enough; we can be sure that there is indeed no number that squares to 8.
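The stopping argument can be mirrored in a quick computation. A Python sketch (mine; the argument itself, not the program, is what does the proving):

```python
# Check squares until they pass 8; beyond that point,
# squares only get bigger, so we can stop.
n = 0
while n * n <= 8:
    assert n * n != 8, f"{n} squares to 8!"
    n += 1
# n is now 3: every further square exceeds 8
print("no natural number squares to 8")
```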

I’m willing to call this a proof. But it’s pretty informal. I *claim* that the numbers will just keep getting bigger, and it’s kind of obvious… but do I really *know* that the numbers will keep getting bigger? Surely, if we are to be mathematicians, we can do better than this. We can redo the proof in a slightly more formal, slightly more careful, and therefore more robust way. How do we do that? If we think about it, the idea of our proof is the following: we show that 0, 1 and 2 don’t square to 8; then we point out that 3 squared is already too big; and then we claim that this means that for all the bigger numbers, the square will also be too big. In formulas:

- STEP 1: we show that 8 is not the square of 0, 1 or 2.
- STEP 2: we show that *3² > 8*.
- STEP 3: we show that if *3² > 8*, then *n² > 8* for all *n > 3*.

The first step is a very small proof by exhaustion. Steps 2 and 3 together constitute what mathematicians call a *proof by induction*. A proof by induction works like this: you show that something is true for a particular number *a* [STEP 2 above], and then you show that if it’s true for some number *n*, then it is also true for *n+1* [STEP 3 above]. Together, this shows that it’s true for *a* and *every* number bigger than *a*.

How does that work in this example? Showing that *3² > 8* is easy, since it’s clear that *9 > 8*. But now we also want to show that *if n² > 8, then (n+1)² > 8*. How do we do that? Well, we first assume that *n² > 8*. Then we calculate that *(n+1)² = n² + 2n + 1*. But *2n + 1* is always bigger than 0. So *(n+1)² > n² > 8*. This completes our proof by induction.

This may have been a *bit* more interesting than our first proof, but we’re still not really doing anything amazing, are we? I mean, we kind of already *knew* that 8 wasn’t a square. It’s sort of obvious. What the proof does, though, is clarify *why* it’s obvious. And indeed it often turns out, when we try to prove something, that the obvious isn’t *that* obvious. Suppose somebody had said: “Clearly 8 isn’t a square. The square root of 8 is 2.8284…, which is not a natural number. So 8 is not the square of a natural number.” Sounds good. But there’s an unspoken assumption there, which is that *no two numbers can have the same square*. And this assumption has not been proven and may not be true. In fact, it’s false: 8 has another square root, which is -2.8284… Clearly, mathematicians have to be very careful about stating their assumptions, because things can go wrong if they don’t.

Still — we want to prove something a bit more interesting. Something that is not obvious. Maybe we would like to prove that there are infinitely many prime numbers? That’s not obvious! It seems like there could also be 50 of them, or 5000, or maybe 10⁵⁰. But no, there are in fact infinitely many, and you and I are going to prove it in this very blog post.

But before we get there, let’s mess around a bit and think about dividing one number by another. In fact, I’ll give you a problem to solve… quick, give me a *number that is bigger than 2, but that can’t be divided by 2*.

Maybe you said… 3? Good choice! 3 = 2 + 1, and that means that if you try to divide it by 2, you’re going to be left with that nasty + 1 that can’t be divided by 2. In school, you may have called that 1 the “remainder” of the division. Thinking about remainders is actually going to help us solve some other problems… quick, give me a number that is bigger than 346, but that can’t be divided by 346!

What about… 347? Yes, brilliant! Clearly, 347 can’t be divided by 346. Because it is 346 + 1, and if you try to divide it by 346 you’re going to end up with a remainder of 1. In other words, 347/346 = 1 + 1/346, and the nasty fraction at the end ensures that this is not a natural number.

In fact, it’s clear that for every number *n* (with *n > 1*), the number *n + 1* cannot be divided by *n*. There will always be that remainder of 1.

Now we’re going to make things a bit more interesting. I now want a number that is bigger than 2 and bigger than 3 and that cannot be divided by either 2 or 3. How do I construct such a number? Well, here’s an option: multiply 2 and 3, then add 1. So we do (2 * 3) + 1, and of course that means we end up with 7.

It is clear from how we wrote the number (2 * 3) + 1 that you cannot divide it by 2 and cannot divide it by 3. Dividing by 2 gives 3… and a remainder of 1. Dividing by 3 gives 2… and a remainder of 1. And this is true in general. If I have two numbers *a* and *b*, and I want to construct a bigger number that is not divisible by either *a* or *b*, then the number (*a * b) + 1* will do the job. Because divide it by *a* or divide it by *b*, and you’ll always have that remainder of 1.

In fact, we can generalise this insight even further. (Mathematicians love to generalise: take an insight about something specific, and show that it applies more generally.) Give me any collection of numbers bigger than 1, let’s call them *a, b, c, …, z*. Then we can construct a bigger number that cannot be divided by any of those numbers; it is the number *(a * b * c * … * z) + 1*. There will always be that pesky remainder of 1. But this is nice! Without really meaning to, we have given a proof of the following theorem:

Let *a, b, c, …, z* be any list of natural numbers bigger than 1. Then there exists at least one number bigger than any of them that cannot be divided by any of them.

The proof we have given of this theorem is what mathematicians call a *constructive proof*: we have actually given a recipe to *construct* (we could also say “find”) a number that has the sought-for property. And what’s also great is that our proof is completely general. It works for any list of natural numbers, no matter how many, no matter which numbers they are.
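
Because the proof is constructive, we can turn the recipe directly into code. Here is a minimal Python sketch (the function name is my own invention):

```python
from math import prod

def bigger_and_indivisible(numbers):
    """Given a list of numbers all bigger than 1, return a bigger number
    that is divisible by none of them: multiply them all and add 1."""
    candidate = prod(numbers) + 1
    # Sanity check: dividing by any of the numbers leaves a remainder of 1.
    assert all(candidate % n == 1 for n in numbers)
    return candidate

print(bigger_and_indivisible([2, 3]))        # 7
print(bigger_and_indivisible([2, 3, 5, 7]))  # 211
```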

We now have almost all the ingredients we need to give a famous proof of that very important mathematical claim:

There is no biggest prime number.

At first sight, this may seem really hard to prove. We can of course make a list of prime numbers: 2, 3, 5, 7, 11, 13, … So far so easy. We can even write a computer programme that will keep calculating more. But how can we prove that this list will never end? How can we show that there is always a bigger prime to be found, that our computer programme will never stop coming up with new primes?
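
Such a programme is easy to write; here is one possible version in Python (a simple trial-division sketch, not the only way to do it):

```python
def is_prime(n):
    """True if n is prime: bigger than 1 and divisible only by 1 and itself."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# In principle this could keep printing primes forever;
# here we stop at 30 for demonstration.
primes = [n for n in range(2, 30) if is_prime(n)]
print(primes)  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

But note that running the programme, however long, proves nothing about whether the list ever ends; for that we need the proof below.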

Here’s what we’re going to do. We’re going to give a *proof by contradiction.* How does that work? Well, we are going to assume that there IS a biggest prime number. And then we are going to show that this leads to a contradiction. If the assumption leads to a contradiction, it must be false; and so we can conclude that there is in fact no largest prime.

Sounds hard? With the work we’ve already done, it’s actually not that difficult. But we need to lay just a *little* more groundwork. To be precise, we need to prove this:

Any number *n > 1* is divisible by at least one prime.

To prove this, we start by considering two cases. Either *n* is prime, or *n* is not prime. (Clearly, these are the only two cases. We call them “jointly exhaustive”, because together they exhaust all possibilities.) And now we will prove the theorem first for the one case, and then for the other. This is often a good way of approaching a proof.

Suppose that *n* is prime. This is the easy case! Every number is divisible by itself; you can always divide *n* cookies over *n* children, simply by giving 1 cookie to each child. So if *n* is prime, it is automatically divisible by a prime number, namely, itself.

Second case. Suppose that *n* is not prime. If *n* is not prime, we can write it as *a * b*, with *a* and *b* natural numbers bigger than 1. (That is what it *is* to be not prime: it means you are divisible by a number different from 1 and yourself. And so you can be written as a multiplication of two different numbers. For instance, 20 = 4 * 5.)

Now again we will distinguish between two cases. First case: *a* or *b* is prime. Second case: neither *a* nor *b* is prime. Take the first case first, so *a* or *b* is prime. Well, *n* is divisible by *a* and by *b*, so it’s divisible by a prime, and our job is done.

Second case. Suppose that *a* and *b* are both not prime. Then they themselves can be written as a product of two smaller numbers; so, for instance, we can write *a* as *c * d* and we can write *b* as *e * f*. So we now have *n = a * b = c * d * e * f*. And you can see how we proceed! One of *c, d, e, f* may be prime, in which case we know that *n* is divisible by a prime. Or none of them is prime, in which case we can go on writing *n* as a product of even smaller numbers. We can keep doing this; at every stage we either reach a prime or we keep going. And since the numbers become smaller and smaller, this process *must* stop at some point; it can’t go on forever. And *that* is the point at which we must have reached primes. So, any non-prime number can be written as the product of primes; and therefore, any non-prime number is divisible by at least one prime.
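
The descent in this proof can be mirrored in code. Here is a Python sketch (my own illustration): if *n* is prime we return it; otherwise we split off a divisor and recurse on the smaller number.

```python
def prime_divisor(n):
    """Return a prime that divides n (for n > 1), mirroring the proof:
    if n is prime we are done; otherwise find a divisor a < n and recurse."""
    for a in range(2, n):
        if n % a == 0:
            # n is not prime: a divides n, and a is smaller, so recurse on a.
            return prime_divisor(a)
    # No divisor between 2 and n - 1: n itself is prime.
    return n

print(prime_divisor(72))  # 2
print(prime_divisor(7))   # 7
```

The recursion must terminate because the argument strictly decreases — exactly the “smaller and smaller” observation in the proof.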

That’s a good proof, but we can perhaps make things clearer by giving a concrete example. Take the number 72. It can be written as 72 = 6 * 12. Neither of those is prime. But 6 can be written as 2 * 3, and 12 can be written as 3 * 4. So 72 = 2 * 3 * 3 * 4. We could stop there, because some primes have appeared; but maybe we notice that 4 can be written as 2 * 2 and we take it one step further: 72 = 2 * 3 * 3 * 2 * 2. It can be written as a product of only primes. Clearly, this is a process we can perform for any non-prime number, and we always end up with a bunch of primes.
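
The same splitting-until-only-primes-remain process can be written as a short Python routine (a sketch; it splits off the smallest divisor each time rather than an arbitrary one):

```python
def prime_factors(n):
    """Write n (> 1) as a product of primes by repeatedly splitting off
    the smallest divisor, as in the 72 = 2 * 3 * 3 * 2 * 2 example."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever is left over is itself prime
    return factors

print(prime_factors(72))  # [2, 2, 2, 3, 3]
```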

*Now* we can give the famous proof that there is no biggest prime number. We’ll do it as a proof by contradiction: we start from the assumption that there *is* a biggest prime and we show that this assumption leads to a contradiction. So the assumption has to be false.

Suppose that there is a biggest prime number, which we’ll call *N*. Then we can make a list of all the prime numbers from small to large, and it will look like this: 2, 3, 5, 7, …, *N*. Now we construct the number *P* as follows:

*P = (2 * 3 * 5 * 7 * … * N) + 1*

In other words, *P* is one plus the product of all the primes. Now *P* must be divisible by at least one prime. (We proved above that all numbers bigger than 1 are divisible by at least one prime.) But we have also already seen that the number *P* cannot be divided by any of the numbers *2, 3, 5, 7, …, N*, since there will always be a remainder of 1. However, our assumption is precisely that those are *all* the primes, so it follows that *P* is not divisible by any prime. But it has to be divisible by a prime. Contradiction! Since the assumption that there is a biggest prime number leads to a contradiction, the assumption must be false. There is no biggest prime number. There are infinitely many prime numbers.

To be clear, this *proof by contradiction* does not tell us how to find a prime number bigger than *N*. It only tells us that there must be one. (In fact, that there must be infinitely many.) If we look at the number *P*, we see that either *it* must be a prime, or some number between *N* and *P* must be a prime. So we know a range in which to search, but we don’t know where the prime can be found. (In practice, there will usually be *many* primes between *N* and *P*.)
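
A concrete computation makes this vivid. Suppose, for the sake of illustration, that 13 were the biggest prime; then the construction gives 2 * 3 * 5 * 7 * 11 * 13 + 1 = 30031, which is not itself prime — it equals 59 * 509 — but both of its prime divisors are bigger than 13, just as the proof predicts:

```python
from math import prod

primes = [2, 3, 5, 7, 11, 13]  # pretend these were ALL the primes
P = prod(primes) + 1           # 30031

# P leaves remainder 1 when divided by each listed prime...
print([P % p for p in primes])  # [1, 1, 1, 1, 1, 1]
# ...and P = 59 * 509: not prime itself, but its prime divisors
# are both bigger than 13.
print(P, P == 59 * 509)  # 30031 True
```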

Is there something strange about this? That we can know that there *is* a prime bigger than *N* without knowing which number it is? Maybe; but on the other hand, we may also know that *someone* murdered Mr. Black without knowing who it was. So perhaps it’s not so strange after all.

This completes my overview of proofs. But you may wonder whether mathematicians can always find the proofs they want. And the answer is no. There are situations where we just don’t know whether and how a certain mathematical claim can be proved. For instance, there is Goldbach’s conjecture:

Every even number greater than 2 is the sum of two primes.

It’s easy to find positive instances of this claim. 4 = 2 + 2; 6 = 3 + 3; 8 = 3 + 5; 10 = 5 + 5; 12 = 5 + 7; and so on. In fact, Goldbach’s conjecture has been shown to be true for all the numbers up to 4 × 10^{18}. But we have no proof of it, though many mathematicians have tried; and so we don’t know whether it holds for all numbers or not. In fact, we can’t be sure that we’ll ever be able to know; it could be the case that there is no possible proof that Goldbach’s conjecture is true and also no possible proof that it is false. Maybe it can be neither proved nor disproved.
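
Checking instances is easy to automate. Here is a Python sketch (my own helper names) that finds, for a given even number, a pair of primes that sums to it:

```python
def is_prime(n):
    """True if n is prime (trial division)."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_pair(n):
    """Return primes (p, q) with p + q == n, or None if none exist."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

for n in range(4, 31, 2):
    print(n, "=", "%d + %d" % goldbach_pair(n))
```

No matter how far such a search runs without finding a counterexample, it can never amount to a proof — which is exactly the predicament described above.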

Indeed, strange as it may sound, there are mathematical claims that can be *proved to be unprovable*: we can prove that there is no possible proof that the claim is true and no possible proof that the claim is false. An example is the continuum hypothesis, which can be neither proved nor disproved from the standard axioms of set theory. But talking about that would take us too far afield.

That was it: a primer on mathematical proofs. I hope you enjoyed it!

What are the natural numbers? Of course, they’re 0, 1, 2, 3, and so forth. But what *is*, say, 0? And what do the words “and so forth” actually do? One customary way of thinking about this is that the natural numbers are defined by Peano’s axioms. 0 is a natural number; every natural number *n* has a successor *S(n)*; *m = n* if and only if *S(m) = S(n)*; 0 is the successor of no natural number; and a few more axioms having to do with equality and induction. Nice! But can these axioms really *define* the natural numbers?

Or, to ask what may be the same question in a slightly different way: can these axioms pick out the natural numbers among all other things? Are they true *only* of the natural numbers? Not at all. As Russell pointed out, they are just as true about the “natural numbers bigger than 99” (with the succession relation being the customary ‘+1’). Or about the even numbers (with the succession relation being the customary ‘+2’). Or about all numbers of the form 1/(2^n) (with our customary 1 being the zero element and the succession relation being the customary division by 2). So the Peano axioms do not, in fact, define the naturals.

There are several ways to try and get around this. First, one could claim that “natural numbers bigger than 99” doesn’t fit the axioms, since its first element is 100 and not, as the axioms require, 0. But this is to overlook the fact that it is an open question whether the ‘0’ of the axioms is the same as the ‘0’ of our everyday language. Or rather, if that is *not* an open question, then the axioms presuppose a way to identify 0 and hence cannot be said to define the natural numbers. Also, our second construction (the even numbers) actually *does* start with the customary 0.

Second, one could suggest that the Peano axioms by themselves may be powerless to define the natural numbers, but that they become capable of doing so once we add the axioms for addition. After all, the axiom *a + 0 = a* is true for the naturals, but not for the naturals bigger than 99: since 100 is the zero element in that construction, the axiom would require that 100 + 100 = 100, which is false.

But how do we know that 100 + 100 = 100 is false? Of course, this requires us to identify the ‘+’ sign with our usual addition on the natural numbers. But this just throws us back on the original problem. If we presuppose the ability to perform this identification, then we presuppose an ability to simply *state* that the axioms are to be understood ‘in the usual way’; and if it is necessary for us to do so, then the axioms clearly fail to define their intended domain. On the other hand, if we take the axioms as defining the ‘+’ operation, then we clearly have to say that *this* ‘+’ is actually our usual *m + n – 100*. Or, in the particularly nice case of the 1/(2^n) progression, that the ‘+’ sign is our usual multiplication.
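
The point that the addition axiom cannot tell the models apart can itself be made concrete. Here is a small Python sketch (entirely my own illustration): two “models”, each with its own reading of ‘0’ and ‘+’, and the axiom *a + 0 = a* holds in both.

```python
# Model 1: the usual naturals, with 0 as zero and ordinary addition.
usual_zero = 0
usual_add = lambda m, n: m + n

# Model 2: the naturals bigger than 99, with 100 as the zero element
# and "addition" reinterpreted as m + n - 100.
shifted_zero = 100
shifted_add = lambda m, n: m + n - 100

# The axiom a + 0 = a holds in BOTH models, each under its own reading
# of '0' and '+' -- so the axiom alone cannot single out the usual naturals.
for a in range(0, 100):
    assert usual_add(a, usual_zero) == a
for a in range(100, 200):
    assert shifted_add(a, shifted_zero) == a

print("a + 0 = a holds in both models")
```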

Third, one could suggest that the natural numbers just *are* what the Peano axioms define; and that the very question of whether they succeed at this definition must therefore be mistaken. According to this way of thinking, it makes no sense to ask whether the axioms are *really* about the natural numbers with our customary addition, or whether they are *really* about the sequence 1/(2^n) with our customary multiplication. For this precisely presupposes a grasp of these different domains that is *independent* of the axioms; and that is what any kind of ‘formalist’ conception of mathematics sets out to deny.

Well and good. But there is a price to pay for this formalist move. For we surely *do* have a pre-axiomatic grasp of the natural numbers: we count things. The formalist move forces us to say that whatever the mathematician is talking about, she is not talking about *that*. If there are three apples on the table, and we ask the mathematician how many apples there are on the table, then she can either speak in everyday language and say “three”; *or* she can speak in the language of mathematics, but then the answer has to be “could be anything, or possibly nothing, I first need to see the entire formal structure of your counting procedures.” But this answer is a disaster. For we can never show enough of our counting procedures simply by performing them: no amount of performances could ever exclude non-standard interpretations or even failures to obey the rules of arithmetic at all. (It may always turn out that, say, the successor of 167842646703 is 0!) So instead we will have to give the *general rules* of our counting procedures. But this will either be a formalist mathematical system — the Peano axioms again — which does not itself describe counting; or we will have to find some way of linking the rules to our practice, which brings us back to stage zero of this entire argument.

One *could* completely cut the link between mathematics and the practice of counting, I suppose. But this would make mathematics devoid of any use; and, perhaps even more tellingly, it would leave us in desperate need of a *science of counting* — forcing us to reinvent mathematics under a different name.

So — so much for formalism?
