1. Reconstruct Nozick’s main argument in premise-conclusion form. What role does the thought experiment play in his argument?
1. If all we value is (subjective, personal, sensuous) experience, then we would be willing to spend our lives in an experience machine.
2. We would not be willing to spend our lives in an experience machine.
3. Therefore, experience is not the only thing we value.
Nozick’s thought experiment, summarized in the first sentence of the fourth paragraph, works as a kind of reductio ad absurdum. He posits a situation in which what we think we want (if we’re advocates of ‘value = experience’) is entirely achieved, and then tries to show that we still wouldn’t be satisfied. Consequently, there must be something we want (something we value) apart from experience.
2. After you lay out his argument, choose one premise to criticize. Lay out your counterargument, that one of Nozick’s premises is false, in premise-conclusion form.
Nozick’s premise is that everyone, or at least most people, would be unwilling to spend their lives in an experience machine: “We learn that something matters to us in addition to experience by imagining an experience machine and then realizing we would not use it.” My counterargument to this premise:
1. If (most) everyone were unwilling to spend their lives in an experience machine, we would expect to find ample evidence of people fleeing things like the experience machine, and little evidence of people seeking them out.
2. However, what we actually find (in instances of proto-virtual reality, e.g. books, movies, the Web, video games, recreational drug use) is just the opposite: ample evidence of people seeking things like the experience machine, and little evidence of people fleeing them.
3. Therefore, it is not probable that (most) everyone is unwilling to spend their lives in an experience machine.
It seems to me that Nozick’s argument has the appearance of an ‘intuition pump’ (Daniel Dennett’s term for a thought experiment that makes explicit formerly implicit intuitions), but actually relies heavily on the reader’s (implicit) intuitions. When we read a description of someone entering the experience machine, we view that person from the outside. Nozick uses dystopian language to imply a disturbing image: we’re “in the tank” and “plugged in.” Of course, what people would actually experience in the experience machine would be quite a bit more fantastic: Nozick doesn’t mention, e.g., the experience of riding a dragon into battle, or of an orgasm as intense and as long as the user desires.
Moreover, Nozick seems to me to rely on a naive understanding of what counts as virtual reality, which prevents him from noticing that most people already spend their entire lives plugged into a kind of virtual reality. I mean, generally, that we live inside language and technology; and specifically, that we live with bank accounts and nations and ideology and international news and, nowadays, the Web and the forthcoming Google Glass.
In short: I think Nozick rigs his thought experiment via narrative technique, and he ignores the fact that we already live inside lesser versions of the experience machine.
3. Suppose you had the opportunity to have someone else make all of your decisions for you for the rest of your life. Suppose further that this person knows you so incredibly well that her decisions are guaranteed to make you happier in the long term than you would be if you made your own decisions. Would you accept such an arrangement? Why or why not? What does this case tell us about the plausibility of hedonism?
I would not accept this offer without certain added guarantees, e.g. a clause limiting the harm I might cause to other people while being happily controlled. In other words, I hold values which prevent me from accepting this offer of controlled happiness (i.e. unmitigated hedonism, sans freedom). These values function in my life as strategies for dealing with the fact that I am not controlled and my happiness is not maximized. I suggest that my actual values are a kind of emotional survival mechanism: I was not born with values like loyalty to friends, concern for justice, and compassion for other sentient beings; I learned them largely as strategies for pursuing my own happiness. Part of why these values can serve as strategies for happiness is that there appears to be (in actual reality) a reciprocal relationship between being a good person and being a happy person. I need values so that I can be good, so that I can be happy. [Footnote: This relationship between goodness and happiness is part of why I am suspicious of Hobbes’s and Glaucon’s egoistic arguments. I don’t suggest that people should not be selfish/hedonistic, but I do suggest that there is no fundamental conflict between selfishness and being good.]
As for the value of my own freedom per se, my view is that freedom is important not because it’s good in itself but because it’s simply a fact about human life that we’re stuck with it. My thoughts are somewhat confused on this issue: mostly, I want to resist naively presuming that freedom is always, intrinsically, good.
4. “Imagine a universe consisting of one sentient being only, who falsely believes that there are other sentient beings and that they are undergoing exquisite torment. So far from being distressed by the thought, he takes a great delight in these imagined sufferings. Is this better or worse than a universe containing no sentient being at all? … I suggest, as against Moore, that the universe containing the deluded sadist is the preferable one.” –J.J.C. Smart
Make an argument in response to the above question. That is, argue either for the conclusion that the world with the deluded sadist is better, or for the conclusion that the world with no sentient being is better. First, reconstruct your argument in premise-conclusion form. Next, write a paragraph in defense of your argument.
1. A happy person who is also unvirtuous is better than no happy person at all.
2. The deluded-sadist universe has a happy person.
3. Therefore, the deluded-sadist universe is better than a universe with no happy person at all.
There appears to be common consent among duty, consequentialist, and virtue ethicists that ethics is only meaningful with reference to sentient beings; despite their differences, none of these positions makes any sense without persons who do, feel, and know. From a utilitarian view, it seems obvious that this thought experiment involves only one sentient being, and his utility is greater in the universe where he happily exists than in the universe where he doesn’t. Regarding the Kantian view, it’s not clear to me how duty ethics could work in a universe without at least two people in it (except perhaps insofar as the sadist has duties to himself, in which case duty ethics seems to become indistinguishable from virtue ethics). And from a virtue-ethics view, it could be argued that the total virtue of the sadist is negative, so that it would be better for no one to exist; however, it seems to me that a stronger virtue-ethics position holds that the possibility of a virtuous person (into whom the sadist might grow) is better than no person at all.