Tuesday, December 28, 2010

Current Thinking on Value

How many distinct meanings does the word 'value' have? Right now, I think the answer is four, but I have the feeling my list is both incomplete and not as compact as it could be. I'd like to make sense of every use of 'value' in terms of as few basic meanings as possible. Here's what I have now:

Value as Attitude
Mary values her alone time.
Pavel values books printed in the year he was born.
These sentences focus on a particular attitude Mary or Pavel has toward something. If we shift the word order while still talking about this particular type of value, we get:
Mary's alone time is valued by Mary.
  or maybe
Mary's alone time has value to Mary.
  or
Mary's alone time is valuable to Mary.
  or even
Mary's alone time has value.
In the last example, "to Mary" is not explicitly mentioned but may be discernible from context. What's interesting here is that we can make statements which superficially sound as though things have the property of "being valuable" in themselves, when really these things "have value" purely because someone has a particular attitude toward them.

This is the sort of value people have in mind when they say "value requires a valuer" or that there can't be "value without a valuer."

Value, the Facilitating Relationship
Sobriety has value for avoiding automobile accidents.
Grass has value for slowing erosion.
No attitude necessary. This sort of value could hold true in a world without any sentient beings whatsoever.

I'm tempted to count value-as-attitude as a subclass of value-as-facilitating-relationship, with personal desire fulfillment (or something similar) as the thing being facilitated, but consider this asymmetry:
Carlos values the painting Starry Night by Van Gogh.
The painting Starry Night has value for exciting Carlos' visual taste.

The painting Ecstasy by Parrish has value for exciting Carlos' visual taste.
Carlos has never seen Ecstasy and so he does not value it.
We might say that Ecstasy has value for Carlos but not value to Carlos.
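To make the asymmetry concrete, here is a toy Python sketch (my own illustration, not part of the original analysis; all the data is stipulated). The idea is just that "value to" depends on someone's actual attitude, while "value for" depends only on the facilitating relationship:

    # Stipulated data: what each person actually values (attitude), and what
    # would facilitate a given outcome regardless of anyone's attitude.
    attitudes = {"Carlos": {"Starry Night"}}  # Carlos has never seen Ecstasy
    facilitates = {"exciting Carlos' visual taste": {"Starry Night", "Ecstasy"}}

    def has_value_to(thing, person):
        # Value-as-attitude: requires the person's actual attitude toward the thing.
        return thing in attitudes.get(person, set())

    def has_value_for(thing, outcome):
        # Value-as-facilitating-relationship: only requires that the thing facilitates the outcome.
        return thing in facilitates.get(outcome, set())

    print(has_value_to("Ecstasy", "Carlos"))                          # False
    print(has_value_for("Ecstasy", "exciting Carlos' visual taste"))  # True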

Non-relational Value

Does a painting — or anything — have value "in itself" without any explicit or implicit reference to a personal attitude, a facilitated outcome, etc.? This notion is often called "intrinsic value" (though that same term is also confusingly used to mean end-value as opposed to means-value).

This is the sort of value that comes up, for example, in environmentalist arguments that a lush forest which no conscious being ever knows about would still have value just as itself.

I strongly suspect non-relational value is not merely fictional but conceptually confused.

Quantitative Value

Math, basically. I don't know how to fit "X has greater value than Y when X is 5 and Y is negative" into any of the above categories. Maybe it came out of quantifying the strength of value-as-attitude in the form of monetary prices?

At any rate, I'm still ruminating on these categories and will likely revise my view in coming months.

Sunday, December 19, 2010

Contrast Classes

If we don't take anything for granted, it's possible to doubt just about everything. Descartes tried to discard all assumptions and found the only thing he couldn't doubt was that he, the doubter, must exist.1 This fact would hold even if all of his memories and sense experience were somehow being faked (as happens in some science fiction).

Normally, we don't question such basic assumptions about the world. If I ask you whether you drive a white car, I don't expect you to answer, "I'm not sure. I might drive a white car or I might be accessing an artificial memory of a white car." Fake memories were outside the scope of the question.

Walter Sinnott-Armstrong suggests that all beliefs are justified (or not) relative to some set of alternative beliefs.2 For example, the belief that you are wearing a cotton shirt may be justified out of the following contrast class:
{ cotton shirt, silk shirt, no shirt }
...but not this wider contrast class:
{ cotton shirt, silk shirt, perfectly faked sensory experience of wearing a cotton shirt }
...and maybe not this contrast class:
{ cotton shirt, imitation cotton shirt }
Similarly, you are easily justified in believing you hold a $5 bill in your hand as opposed to a $1 bill or a $20 bill, but you probably aren't justified in believing you hold a genuine $5 bill as opposed to a very expert counterfeit of a $5 bill. According to Sinnott-Armstrong:
"Someone, S, is justified in believing a proposition, P, out of a contrast class, C, when and only when S is able to rule out all other members of C but is not able to rule out P."3
See how that applies to the paper money example? You would be justified in believing you hold a $5 bill if — by simple inspection — you rule out the possibility that it's a $1 bill or a $20 bill, leaving only the possibility it is a $5 bill. However, you wouldn't be justified if counterfeits are also under consideration and you lack the skill to rule that possibility out.
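Here is a minimal Python sketch of that condition as I read it (my own illustration; the "rule out by inspection" judgments are stipulated stand-ins for what simple inspection can actually do):

    def justified(p, contrast_class, can_rule_out):
        # S is justified in believing p out of the contrast class when S can rule
        # out every other member of the class but cannot rule out p itself.
        others_ruled_out = all(can_rule_out(x) for x in contrast_class if x != p)
        return others_ruled_out and not can_rule_out(p)

    # Simple inspection rules out $1 and $20, but neither a genuine $5 bill
    # nor an expert counterfeit of a $5 bill.
    can_rule_out = lambda x: x in {"$1 bill", "$20 bill"}

    print(justified("$5 bill", {"$1 bill", "$5 bill", "$20 bill"}, can_rule_out))  # True
    print(justified("$5 bill", {"$1 bill", "$5 bill", "$20 bill", "counterfeit $5 bill"}, can_rule_out))  # False

Widening the contrast class to include the counterfeit flips the verdict, which is exactly the point.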

Note: Other theories of what makes one belief justified (or more justified than another) can still benefit from limiting the scope to a particular contrast class, even without adopting Sinnott-Armstrong's requirement of "ruling out" every alternative.

How This Helps

The notion of contrast classes captures what we intuitively mean by saying a person is 'justified' in a belief, even when we're not counting scenarios of extreme deception. We can acknowledge that Descartes' thought experiment has merit when we're considering things at that level, but explain why it isn't usually relevant.

Some disagreements about whether a belief is justified can be explained by differences in contrast classes. If Claude boards a train which has two stops left (A and B), his friend Edward may be justified in believing Claude will leave the train at Stop B after noticing Claude remained on board at Stop A. However, Claude's other friend, Jaquelin, may disagree with Edward because she knows Claude is prone to jumping off trains between stops. Edward had the class { Stop A, Stop B } in mind while Jaquelin had the class { Stop A, Stop B, between stops } in mind. It's important to note that Edward's belief was, in fact, justified relative to the first contrast class...just not the second.

Appropriately Weak

This understanding of the meaning of 'justified' is negative. Alternatives are ruled out, but the last remaining possibility — the proposition that is justified relative to the class — is not necessarily ruled in as correct. Something outside the class could be correct instead.

This shouldn't come as a surprise. Philosophers often talk of "justified, true beliefs" which would be redundant if all justified beliefs were true. I think what we want from 'justified' is not a guide to truth so much as an account of due diligence. Contrast classes are a way to specify which beliefs were discarded (or found to be less justified) on the way to a justified belief.

Justified Actions

Finally, contrast classes make sense of calling a particular action 'justified' when the other available options are worse, even if the same action would not be justified if a better alternative were available.

1. Meditations II.1-3
2. Sinnott-Armstrong, W. (2006). Moral Skepticisms. New York: Oxford University Press, p. 84.
3. Ibid. p. 86.

Wednesday, December 15, 2010

On "Oughts and Ends"

In his paper "Oughts and Ends,"1 Stephen Finlay explains how normative ought-statements can be broken down into non-normative components. In other words: how we can understand statements like "You ought to X" or "You ought not X" without necessarily basing them on a prior 'ought.'

Background

In the eighteenth century, David Hume pointed out that people making moral arguments often jump from is-statements to ought-statements without justifying the sudden change.2 A modern-day example might be an observation that urban expansion is likely to drive a particular species to extinction, followed by "therefore we ought to create a nature reserve." But there's a missing step! To make the logic work, it seems something like "We ought to avoid causing extinctions" is needed.

The is-ought problem comes from worrying that ought-statements might always require a prior ought-statement, or that some ought-statements are mysterious brute facts.

Demystifying 'Ought'

Finlay's approach is to defend a plausible interpretation of ought-statements which has the nice side effect of dissolving the is-ought problem (to my satisfaction, anyway).

Must, Should, Could, Shouldn't, Can't

Let's talk about these five other words first! These are all called modal auxiliary verbs because they change the way (or mode) in which the main verb is meant to be understood. For example, they can add information about probability:
"All mortals must die."
"Jones is a punctual guy, so he should arrive any time now."
"The new candidate could win this election."
"According to the weather report, it shouldn't rain tomorrow."
"Dogs can't recite epic poetry."
Or they might add a normative tone to the main verb:
"Drivers must stop at red lights."
"Mary should study for her math final."
"You could take the subway or hire a cab."
"Defendants shouldn't represent themselves in court."
"Congressional candidates can't make jokes like that in public!"
Notice how the first set of sentences passively reports on the probability of things, while the second set has that distinctively action-guiding feel of normativity. Isn't it a little curious that these modal verbs just happen to have both probabilistic and normative forms? Or, wait, maybe this isn't a coincidence at all!

Watch what happens when we explicitly mention goals (or ends) for the second set:
"[In order that they avoid violating the law], drivers must stop at red lights."
"[In order that she passes the test], Mary should study for her math final."
"[In order that you make it to your appointment downtown], you could take the subway or hire a cab."
"[In order that they are adequately represented], defendants shouldn't represent themselves in court."
"[In order that they be elected], Congressional candidates can't make jokes like that in public!"
Can you see how the modal verbs are tied to probability again? Drivers who don't stop at red lights certainly violate the law. Mary has the best chance of passing her math final if she studies for it. Taking the subway or hiring a cab are the ways you are most likely to make it to your appointment downtown. It's unlikely defendants will be adequately represented if they represent themselves. And there's no chance Congressional candidates will be elected if they make that kind of joke in public.

So maybe these five modal verbs always have to do with probability, and optionally relate to some goal. When they do relate to a goal (or end), they gain normative tone! This is Finlay's end-relational theory. It accomplishes, as he puts it:
"a straightforward analysis of instrumental normative language, unifying the language of ordinary modality and normativity, and providing a univocal semantics for two isomorphic sets of terms."
Back to 'Ought'

The previous section was a bit of a trick. It turns out we've already been discussing 'ought'! It works about the same as 'should.'
"Jones is a punctual guy, so he ought to arrive any time now."
"Mary ought to study for her math final."
"[In order that she passes the test], Mary ought to study for her math final."
The first sentence uses 'ought' in a probabilistic, non-normative way. The second uses a normative 'ought.' And the third sentence reveals how the normative 'ought' was formed by relating probability to an end.
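One rough way to gloss this (my simplification, not Finlay's exact semantics) is: "[in order that E], A ought to do X" says that, among A's alternatives, doing X makes E most probable. A toy sketch with made-up numbers:

    # Stipulated probabilities that Mary passes the test, given each option she could take.
    prob_of_passing = {"study": 0.85, "watch TV": 0.30, "cram without sleep": 0.55}

    def ought(option, prob_of_end):
        # End-relational gloss: the option that gives the end its highest probability.
        return prob_of_end[option] == max(prob_of_end.values())

    print(ought("study", prob_of_passing))     # True: relative to passing, Mary ought to study
    print(ought("watch TV", prob_of_passing))  # False

Strip away the end and all that is left is probability, which is why the probabilistic and normative readings look so similar.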

The Many Flavors of 'Ought'

If 'ought' is end-relative, then the full meaning of 'ought' varies from instance to instance, just as ends themselves vary. A critic may claim this is a terrible disadvantage compared to another theory which assigns 'ought' a single, full meaning all the time. "However," Finlay says,
"although the end-relational theory recognizes a multiplicity of ways 'ought' is relativized, it also gives a universal semantics for 'ought' itself. 'Ought,' on this view, is no more semantically ambiguous than attributives like 'real', comparatives like 'big', or indexicals like 'here', which despite their complex interaction with context are not difficult to interpret."
Some...Maybe All?

If you're ready to acknowledge that some 'ought's gain their normativity in the manner described by the end-relational theory, progress has been made in demystifying the normative 'ought'! Finlay takes things one step further by suggesting that the normative 'ought' always presupposes ends. He claims the end-relational theory can account for the empirical data, and he shows how several likely objections fall short.

A critic might, for example, try to categorize the end-relational theory as an instrumental theory of 'ought' in the sense that "if you want to do X, you ought to do Y." This would be an inadequate account because we very often use 'ought' in a way that goes against a particular agent's desires. But the end-relational formula of "in order that X, you ought to do Y" avoids this limitation by not binding the meaning of 'ought' to a particular agent's desires.

The end-relational formula may sound too weak to provide practical guidance. If you tell me that I ought to do Y, but I realize the end it would serve is X — and I don't care about X — then I can shrug and go about my business. On the other hand, if X happens to be especially important to me, you can guide my behavior through an explicitly relativized 'ought.' From the paper:
"'In order that I don‘t kill you, you must come with me' may be end-relational, but this 'must' is not lightly ignored."
Alternatively, you may influence my attitudes or behavior by exploiting the elliptical nature of ought-statements. If you forcefully tell me what I 'ought' to do without making any end apparent, I might just go along with you! In short, Finlay believes it is "plausible that categorical uses of 'ought' express demands and attitudes just as expressivists claim."

Hume Revisited

(Going beyond the scope of the paper here...)

As I understand Hume, he was pointing out that something is missing between (I) and (III) in cases like this:
I. Urban expansion is likely to drive a particular species to extinction.
II. ???
III. Therefore, we ought to create a nature reserve.
And as I understand the is-ought problem, the worry is that (II) must either launch an infinite regress of ought-statements or trace back far enough to hit a brute-fact ought-statement. But what about something like this instead:
I. Urban expansion is likely to drive a particular species to extinction.
III. Therefore, [in order that biodiversity is maintained] we ought to create a nature reserve.
There is no separate (II). Instead, we understand the 'ought' in (III) to presuppose an end which is most likely to be brought about by creating a nature reserve. It's a qualified 'ought' brought to life from probability and a goal. The force this goal has on us relies — as Hume would no doubt agree — not ultimately on reason but on desire.


1. http://www-rcf.usc.edu/%7Efinlay/OughtsandEnds.pdf (Preprint PDF)
2. A Treatise of Human Nature III.i.i (last paragraph)

Tuesday, December 7, 2010

What Is Morality, Anyway? (Pt. 4)

...Part 1
...Part 2
...Part 3

If I'm right that moral language refers to a conventional but incoherent set of goals, why bother with it? Instead of arguing about whether a war is wrong or right, we could talk about how the war promotes or opposes specific goals, e.g., "This war greatly increases human suffering" or "This war addresses deep inequality." Discarding moral language would clear up much confusion.

A Federation of Goals

Then again, clear communication might not be the main point of using moral language. Suppose I only care about reducing suffering, not inequality. Maybe I buy slaves and treat them well enough that they're happier as my slaves than they would be on their own. Meanwhile, you care about equality quite apart from any consideration of suffering. If you point out that keeping slaves is bad for human equality, my response would be: "So what?"

Now, suppose you start talking about both suffering and inequality in unified terms. You also use shared terms to praise people who relieve suffering or inequality. Through the power of association, you may start to influence my view of slavery. This is especially effective if lots of people use the unified terminology to associate the goal I didn't care about with the goal I did care about.

I'm suggesting moral language gets its punch from artificially combining the psychological importance of several goals under one way of speaking. Think of it as a union or federation of goals.

The Concern

There's a persistent worry that understanding the nature of ethics will undermine the practice of ethics. While the view I'm expressing here does challenge the existence of any uniquely moral facts, I think it's the goals behind morality we're really committed to in the first place.

"Genocide is wrong" isn't some brute fact; we call genocide wrong because it entails so much harm and injustice...and these facts don't change if we take away the word "wrong."

When people say "Good folks donate to the needy" what are they doing?

Usually either prescribing charity directly or promoting an implied goal and recommending charity as a way to advance that goal. The Expressivists and Prescriptivists are largely right about what's going on in the "doing" part of moral discourse. I just think truth claims play a role too. For example, since moral goals are fairly conventional, a disinterested person (e.g. an observant psychopath) could be stating the fact that charity advances conventional moral goals. 

How This Helps

I hope this view can shed light on what's really going on in moral disputes. If we can sort out which goals are at stake when one person says "good!" and the other says "bad!", it may turn out they're debating facts relative to the same goal (e.g. whether a social program really is better for a particular goal) or expressing a commitment to different goals (e.g. the social program is better for a short-term goal but not for a long-term goal). Either way, the debate can be put in new and hopefully more fruitful terms.

In future posts, I plan to analyze specific moral disputes from this perspective. I'll also be looking at how goal-relative moral naturalism might handle problems in the philosophical literature.

Wednesday, December 1, 2010

What Is Morality, Anyway? (Pt. 3)

...Part 1
...Part 2

The first post in this series began:
When people say "Genocide is wrong" or "Good folks donate to the needy," what are they claiming and what are they doing?
I asked this question to introduce metaethics, but even asking a question which focuses on language acts shows bias toward the answer I already had in mind. Instead, I could have asked a question focused on the nature of goodness, the force of obligation, the process of gaining moral knowledge, etc. These are all valid approaches to metaethics, and all of them quietly set up an advantage for certain kinds of answers. I should also mention that I didn't come up with any of the following ideas. In future posts, I do plan to discuss individual papers which struck me as on-target and shaped my thinking. Now, with disclaimers out of the way, it's high time I answer my own questions.

What is morality?

Morality is the social practice of using a particular set of linguistic expressions to make goal-relative truth claims and influence others to put their attitudes and behavior in line with those goals.

When people say "Genocide is wrong" what are they claiming?

They are claiming it is the case that genocide hinders moral goals.

Is this a true claim? Without some goal specified, it can't be evaluated as true or false. If I phone you to say I'm moving at 60 mph, then — strictly speaking — you can't know what I'm claiming without first knowing what my velocity is relative to, e.g. the highway I'm driving on (since velocity is always relative). Of course, we usually and correctly gather from context that 60 mph means speed relative to the road.

Same for moral claims. Strictly speaking, "Genocide is wrong" can't be evaluated without specifying the goal. This might sound impractical and stilted. What would you think if you asked someone, "Is it wrong to kill and eat babies?" and she replied, "Hold on! I can't answer that yet. Can you specify a moral goal first?" You'd probably feel less comfortable hiring her as a babysitter! But it would also sound impractical and stilted to constantly ask people "velocity relative to what?" when they mention their speed. Most of the time, we have a good enough idea from context and we can answer accordingly. We know a person who claims genocide is wrong probably means something along these lines:
  • Genocide hurts people without a justifying reason.
  • Genocide treats one group unfairly for the convenience of another.
  • Genocide involves disgusting actions.
  • Genocide is just plain wrong, or involves things which are just plain wrong.
  • Any combination of the above.
These all imply corresponding moral goals like harm avoidance, fairness, disgust avoidance, and doing moral good. Most people would probably say genocide conflicts with all of these goals, which makes its moral wrongness very clear.

When people say "Good folks donate to the needy" what are they claiming?

If we look behind the obvious social prodding, there's definitely a claim in there that donating to the needy promotes moral goals. Which goals? Ones like alleviating harm, reducing deep inequality, obeying God, or doing moral good. Again, many people would probably say donating to the needy promotes all of these goals, which makes its moral goodness very clear.

Incoherence

Moral dilemmas occur when an action will advance some moral goals at the expense of other moral goals. It's conceivable for a claim like "Genocide is wrong" to be true relative to the goal of social equality, but false relative to the goal of maximizing happiness. "Child sacrifice is wrong" might be true relative to the goal of preserving human life, but false relative to the goal of keeping oaths.1 "War is wrong" may be true for the goal of minimizing harm, but false for the goal of reducing deep inequality.
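To make the goal-relativity explicit, here is a toy Python sketch (my own illustration; the hinders/promotes judgments are stipulated, not argued for) in which the very same claim comes out true relative to one goal and false relative to another:

    # Stipulated judgments about whether an action hinders a given goal.
    hinders = {
        ("war", "minimizing harm"): True,
        ("war", "reducing deep inequality"): False,
    }

    def wrong_relative_to(action, goal):
        # "Action is wrong" relative to a goal just means the action hinders that goal.
        return hinders[(action, goal)]

    print(wrong_relative_to("war", "minimizing harm"))           # True
    print(wrong_relative_to("war", "reducing deep inequality"))  # False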

Terms like 'good,' 'bad,' 'right,' and 'wrong' are incoherent because they've been overloaded with implied goals which sometimes conflict with each other. This is why philosophers have had such a hard time coming up with a single, intuitively-satisfying procedure for evaluating moral claims.

Error

One effective method for drawing out the moral goals someone has in mind is to challenge their claim by saying something like, "Genocide is not wrong!" and see how they respond. (Just be careful with this tactic if you ever plan to run for office.)

But what if the moral goal they have in mind really is just: promoting moral good? This would be the "brute moral facts" approach of non-naturalism, as discussed in part two of this series. My answer is simple: there's no such thing as just plain moral good for anyone to promote. It's a mistaken idea with nothing behind it, like the once-widespread idea of absolute velocity. This is an element of error theory in my view.2

Nothing Personal

Let me emphasize: these are all impersonal goals. I could correctly state that fighting a war is wrong relative to harm avoidance, or right relative to equality, even if neither you nor I care about avoiding harm or improving equality in the world.

I'm not sure whether facts about an action promoting or hindering an impersonal goal count as a form of moral realism, but I am leaning toward "yes," since such facts would still hold in a world in which everyone's attitudes are completely different from our own. On the other hand, the moral language of such a world would most likely be hooked up to quite different impersonal goals. So my goal-relative view of morality does seem to be perched on the fence between realism and anti-realism.

...Part 4


1. I have in mind the story of Jephthah in Judges 11.
2. See part two of this series for an explanation of "error theory."