People don't think philosophy is useful. Why is this? Macmillan Dictionary has two definitions of philosophy. The first, "the study of theories about the meanings of things such as life, knowledge, and beliefs", is what people think of when they hear "philosophy". They think of boring and antiquated texts on the nature of what is sacred. They think of white-bearded men debating the trivialities of pointless and vague quandaries. They think of glassy-eyed teenagers wondering aloud, "What if, like, our universe is just inside an atom in another universe?" These are not philosophy. Or, at least, these are not the heart of philosophy, but, rather, its side-effects.

The second definition offered by Macmillan is truer to what philosophy is at its core: "a system of beliefs that influences someone's decisions and behaviour." Much as the purpose of the brain is to control the body's movement, the purpose of philosophy is to act as a tool for making decisions in your life. In trying to do so, philosophy might delve into complex and counter-intuitive ethical systems, philosophers might write long, tedious and precisely-worded treatises, and strange hypothetical situations or worlds might be concocted and considered, but philosophy is not a collection of books, or a series of musings, or a set of questions, or even a group of people. Philosophy is a process, a methodology, and, ultimately, a mode of thinking about things and examining situations.

As I've previously bemoaned, not having philosophical foundations for decision-making, or at least a sense of what things you value and what goals you wish to achieve (both of which philosophy can help determine), can lead to terrible, contradictory choices that are without sense and without purpose.
In the above video, Jonathan Blow, a wildly popular figure in the gaming world, talks about game design and what makes a game worth playing. He finds several things to be valuable: not only "fun", as traditionally emphasized in gaming, but also the ability to make one think, or to evoke an emotional response. This is exactly what has made Jonathan Blow so popular, at least from my perspective: he's taken the time to philosophize about video games, and examined the gaming world through the philosophical lens he's constructed. Far beyond that, he's used his philosophical framework to inform the design and development of his indie game hit, Braid. He decided, from a philosophical and ethical perspective, that games shouldn't try to "trick" the player but, instead, should respect the player's intelligence and time. The decisions he made in creating Braid reflected this considered viewpoint and (at least in part) made it the excellent and critically lauded game that it is. Regardless of whether you agree or disagree with his views, Jonathan Blow has become an influential and well-respected developer because he has internalized and applied the spirit of philosophy. And this, furthermore, is what I encourage you to do. Next time you need to make a decision, spend some time considering what you think has inherent value, how your goals reflect that, and how (if at all) your options appeal to those values. You might not end up making the right decision, but you'll at least make a well-reasoned and justifiable decision.
A pervasive matter of debate in the field of philosophy is that of what constitutes prudential value; this is to say, what things in life are good in themselves (non-instrumentally good)? Let's look at something like eating your vegetables. Most people wouldn't say that eating vegetables is good in itself, but rather that it's good because it serves another purpose, or is instrumentally good. In this case, eating your vegetables is good because it increases your overall health. Is being healthy non-instrumentally good? Although you could argue that being healthy is in and of itself a good thing, it again would seem that a more reasonable explanation is that being healthy is good in that it causes some other effect. For example, being healthy could be said to increase your overall happiness.

We can then ask another question: is happiness non-instrumentally good? It would seem to be the viewpoint of many people that being happy is good in and of itself (and, conversely, that pain is bad in and of itself). After all, I can't imagine that most people want to become happy to further some other end. Moreover, if we were to compare two people, it would not be unreasonable to say that, all else being equal, the one who had experienced more happiness and/or less pain in life was better off than the other (indeed, this might seem intuitive). This viewpoint is one theory of prudential value, known as hedonism, which has gone on to be the foundation for a number of influential philosophical works, including (perhaps most famously) John Stuart Mill's Utilitarianism.

Although hedonism is a subject well worth covering, I won't be doing so today. Instead, I'll be presenting a theory of prudential value known as "Desire Fulfillment Theory", or "DFT". DFT's take on why health is good for you is that you have a desire to be healthy, which being healthy fulfills (rather self-explanatory, isn't it?). A more formalized definition of DFT might be as follows:
Something is good for someone if and only if (and because) it fulfills their desires.

Again, we can provide an example of how this might work: Al and Bob are identical brothers who have led identical lives. Both of them want a candy-cane. Al is given one, fulfilling his candy-cane desire, whereas Bob is not. DFT would then say that Al is (however slightly) better off than Bob. Make sense so far?
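To make the biconditional concrete, here is a toy sketch in Python. The function name, the use of sets, and the idea of a numeric "well-being tally" are my own illustrative assumptions, not part of the theory itself; DFT only claims that fulfilled desires are what make someone better off.

```python
# Toy model of basic Desire Fulfillment Theory (DFT):
# a person's well-being is tallied from their fulfilled desires.
# All names and the scoring scheme are illustrative assumptions.

def well_being(desires, fulfilled):
    """Count how many of this person's desires are fulfilled."""
    return sum(1 for d in desires if d in fulfilled)

# Al and Bob have led identical lives; both want a candy-cane.
al_desires = {"candy-cane"}
bob_desires = {"candy-cane"}

al_score = well_being(al_desires, fulfilled={"candy-cane"})  # Al gets one
bob_score = well_being(bob_desires, fulfilled=set())         # Bob does not

# DFT's verdict: Al is (however slightly) better off than Bob.
assert al_score > bob_score
```

The sketch also makes the "however slightly" explicit: the two brothers differ by exactly one fulfilled desire, and that is the entire difference in their well-being on this account.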
Unfortunately, plain DFT runs into trouble with cases like addiction: an addict certainly desires their substance, so DFT seems forced to say that feeding the addiction is good for them, which clashes with our intuitions. To avoid this, we can idealize the desires that count (call this IDFT):

Something is good for someone if and only if (and because) it fulfills the desires they would have in some idealized condition C.

So, if we again take the case of the addict, in an "idealized condition" they would not be addicted, would be well informed of all the pros and cons of abusing whatever substance, and would (presumably) not have a desire to take that substance (or even a desire to not take that substance). Hooray! Are we done?
Not quite, because idealization creates a new problem. Suppose Charlie passes an old man on the street and idly wishes him a good day. If the old man does go on to have a good day, though Charlie never finds out, IDFT implies that Charlie is thereby better off, which seems wrong. One fix is to restrict the theory to self-regarding desires (SRIDFT):

Something is good for someone if and only if (and because) it fulfills a self-regarding desire that they would have in some idealized condition C.

Obviously, this solves the problem of Charlie but, as you might have guessed, raises some questions. What counts as self-regarding? Let's say that Charlie hopes that his mother has a good day. Will the fulfillment of this desire be sufficiently self-regarding that Charlie could benefit from it? If so, how far out from Charlie can we go before his desire no longer counts? His distant uncle? A cousin of a cousin? On the other hand, should we say that the only desires that could benefit Charlie are those which directly benefit himself, it would seem that the fulfillment of Charlie's desire that his own son lead a good life would make Charlie no better off, which would seem to be a clear violation of our intuitions. This same question applies to the "idealized condition C" under which we've been operating since IDFT: what exactly is the condition? Is it tantamount to omniscience? If our desires are supposed to convey some level of abstraction from our actual lives (e.g. so that we don't make decisions based on addictions), how far removed are the desires supposed to be? John Rawls offers an amusing reductio ad absurdum in A Theory of Justice, quoted here as re-stated by Roger Crisp in the Stanford Encyclopedia of Philosophy:
"Imagine a brilliant Harvard mathematician, fully informed about the options available to her, who develops an overriding desire to count the blades of grass on the lawns of Harvard. […] This case is another example of philosophical 'bedrock'. Some will believe that, if she really is informed, and not suffering from some neurosis, then the life of grass-counting will be the best for her."

Overall, the problem seems to be that, with the addition of each modification of DFT, we are moving away from actual human desires and, in some sense, rejecting their validity. Instead, it would seem that SRIDFT's theory of prudential value is something along the lines of "It's good for you to get what you want, and what you want should be what's good for you." This statement not only fails to really provide a framework for what has prudential value and what does not, but it's almost tautological! So is this it, then? Should we abandon DFT entirely?
According to second-order desire fulfillment theories (2DFT), A is bad for me if and only if I desire that A not occur, and my desire that A not occur is either endorsed (the stronger variant) or not disendorsed (the weaker variant). A desire is "endorsed" if and only if I have a desire to have that desire. A desire is "disendorsed" if and only if I have a desire not to have that desire.

The inverse of this might be "the fulfillment of a desire is good for one if and only if the desire is either endorsed or not disendorsed." Ergo, if I want to go skydiving and either A) want to want to go skydiving or B) don't have a desire not to want to skydive, then skydiving would be a good thing for me. But say I was an addict: although I most certainly want whatever I'm addicted to, I don't want to want it. I want not to want it. In this way 2DFT brings out a more "idealized" version of my desires without having to resort to IDFT. We can also look at the case of Charlie yet again: he may have a passing desire for the old man he saw to have a good day, but does he truly want to want for him to have a good day? Is his desire strong enough to explicitly provoke this second-order desire, as the stronger variant of 2DFT would require? It would seem unlikely. Thus, we can also avoid the pitfalls of SRIDFT. Most importantly, we have stopped in their tracks any objections about vague hypotheticals and defining the good in terms of the good: second-order desires are real, human things that have been well explored in the field of psychology. (Readers who find second-order desires intriguing may want to investigate the works of Harry Frankfurt and Richard Holton, especially Holton's excellent work on akrasia, a.k.a. weakness of will.)
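The two variants above can be sketched as predicates, to make the endorsed/disendorsed machinery concrete. As before, this is a toy model under my own assumptions: desires are represented as simple labels in sets, and the function names are invented for illustration, not drawn from the philosophical literature.

```python
# Toy sketch of second-order desire fulfillment theory (2DFT).
# first_order: what the agent wants; wants_to_want / wants_not_to_want:
# the agent's second-order attitudes toward those first-order desires.
# All names and representations are illustrative assumptions.

def good_for_strong(desire, first_order, wants_to_want):
    """Stronger variant: fulfilling a desire is good for the agent
    only if the desire is endorsed (they want to want it)."""
    return desire in first_order and desire in wants_to_want

def good_for_weak(desire, first_order, wants_not_to_want):
    """Weaker variant: fulfilling a desire is good for the agent
    unless the desire is disendorsed (they want not to want it)."""
    return desire in first_order and desire not in wants_not_to_want

# The skydiver: wants to skydive and wants to want it,
# so both variants count skydiving as good for them.
assert good_for_strong("skydive", {"skydive"}, {"skydive"})
assert good_for_weak("skydive", {"skydive"}, set())

# The addict: wants the substance but wants not to want it,
# so neither variant counts the fix as good for them.
assert not good_for_strong("substance", {"substance"}, set())
assert not good_for_weak("substance", {"substance"}, {"substance"})
```

Note how the addict case falls out of the weak variant without any appeal to an idealized condition C: the disendorsement is an actual, present desire of the actual agent, which is exactly the advantage 2DFT claims over IDFT.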