Saturday, November 27, 2010

What Is Morality, Anyway? (Pt. 2)

...Part 1

Last time I discussed whether moral judgments include truth claims and, if so, how these could be claims strictly about the attitudes of individuals or groups. Now I'd like to explore ways judgments could make claims about more than attitudes.

What makes a moral judgment true? Answer #2: Brute moral facts.

What makes it true that '1 + 1 = 2'? It's not my own or anyone else's attitude that '1 + 1 = 2'. According to Theists, what makes it true that God exists? Nothing! Both are examples of brute facts. They're true in a "they just are" way, and any further attempt to explain how they're true isn't going to find some underlying, other kind of fact that makes them true. Perhaps basic moral truths work the same way and we grasp them as clearly as we understand that '1 + 1 = 2'. The view that true moral judgments are not based on any other kinds of facts — including facts about the natural world accessible to science — is called non-naturalism. It is also a very strong version of moral realism because moral facts would still exist even if there were no minds in existence at all.

What makes a moral judgment true? Answer #3: Underlying "natural" facts.

You may have already guessed this view is called moral naturalism. The idea is that there are underlying, other kinds of facts which make moral judgments true.1 Does this mean we could do away with moral language entirely and stick to making truth claims about these underlying facts (whatever they are)? Maybe not. For one thing, moral language serves a social function beyond the truth claim component. But even if we limit the question strictly to the truth claim, it's worth noticing the debate about emergent properties in Physics; it may be that even when we can describe a thing in terms of lower level components, we lose something significant by doing so. The whole may be more than the sum of its parts...or at least it's useful to talk about the whole in daily conversation.

Naturalism can be another form of moral realism (depending on how moral realism is defined). The truth of moral judgments doesn't change just because attitudes change. Given a particular understanding of which non-moral facts underlie moral truth claims, it's often possible — at least in theory — for scientific investigation to improve our moral knowledge. However, the selection of which natural facts we humans have linked up to moral language may be a matter of convention, not something that was true about the world before we discovered it.

What makes a moral judgment true? Answer #4: No moral judgments are true.

This is not really a separate answer. An error theory is what results when a person accepts another answer about what makes moral judgments true, but also believes these truth conditions are never met. Examples of error theories:
  • Person A accepts divine subjectivism, but is an atheist.
  • Person B accepts cultural subjectivism, but considers culture to be an incoherent concept.
  • Person C accepts non-naturalism, but thinks a realm of moral facts independent of actual people's concerns is absurd.
The usual way out of error theory is to switch to a metaethical view in which moral judgments have obtainable truth conditions. However, some people don't consider such views to represent genuine morality. This can result in someone rejecting moral language while affirming a view that other folks do label "morality."

...Part 3
...Part 4


1. Technically, attitudes could fit in this category, but they are conventionally excluded. Other mental facts — such as the experience of pain — can count. Supernatural facts can also count. Who promised philosophical terms would make sense?

Thursday, November 25, 2010

What Is Morality, Anyway? (Pt. 1)

When people say "Genocide is wrong" or "Good folks donate to the needy," what are they claiming and what are they doing?

Metaethics is the branch of philosophy that steps back from the debate over the correct procedure for generating moral judgments1 and asks about the nature of moral judgments (and related concepts like goodness and obligation).

Note: I've read several summaries of metaethical positions and each arrangement differs significantly. This post represents my own attempt to get a handle on things. Criticism welcome.

Do moral judgments include truth claims?

In a previous post about propositions2, I explained how not everything we say is appropriately labeled 'True' or 'False.' We can command, recommend, express feelings, etc. without claiming that something is the case (or isn't the case). Expressivism and Prescriptivism are two views which deny the appropriateness of labeling moral judgments as 'True' or 'False.' Instead, they characterize moral judgments as expressions of emotion or, additionally, as personal demands that other people act a certain way (prescriptions).

An expressivist might interpret "Genocide is wrong" as "Genocide? Yuck!" Meanwhile, a prescriptivist might interpret "Good folks donate to the needy" as "Hey you, donate to the needy!" According to these non-cognitivist views of morality, moral judgments may look like truth claims but this is just for rhetorical effect.

Other views of morality still allow emotional expression and demands to play a part in moral language, but affirm that moral judgments also make claims which can be true or false. Philosophers who defend these cognitivist views of morality point out how some features of moral language are hard to explain if we deny any place for truth claims. I find these cognitivist arguments convincing. And according to the PhilPapers survey3 of philosophers, cognitivism (judgments include truth claims) is much more popular than non-cognitivism (truth claims not included).

What makes a moral judgment true? Answer #1: Attitudes.

It's possible to slightly tweak an expressivist interpretation of "Genocide is wrong" from "Genocide? Yuck!" to "It's the case that I react to genocide with a 'Yuck!'" The new formulation counts as a truth claim, but not a very interesting one. Another person could say "Genocide is not wrong" and not be disputing the first truth claim any more than when one person says "I like chocolate cake" and the other says "I don't like chocolate cake." So according to individual subjectivism, sincere moral judgments are practically all true. Moral disputes may look like truth disputes, but actually they're more like disputes over which bands are good; we talk like we're disputing facts but most of us realize we're really just expressing our personal taste in music.

Cultural subjectivism would interpret "Genocide is wrong" as something like "It's the case that our culture reacts to genocide with a 'Yuck!'" This allows for false moral judgments within a culture. If a 21st-century American or European man says, "Slavery is wrong," he would be making a truth claim which turns out to be true. But if the attitudes of so-called "western culture" were to change back to the way attitudes used to be, the claim "Slavery is wrong" from anyone in the changed culture would be false.

Divine subjectivism would interpret "Genocide is wrong" as something like "It's the case that God reacts to genocide with a 'Yuck!'" This is a lot like individual subjectivism, except only the attitudes of one individual count for making a judgment true. This view has something important going for it: the truth of moral judgments is no longer grounded in human attitudes; cultures can be incorrect about the morality of genocide and slavery. However, if morality is grounded in divine attitudes, there is no way to say one set of divine attitudes would be better than another. This is a problem if divine attitudes can change, or if we want to compare the morality of two imaginable Gods, or if we want to say there's something about genocide and slavery that makes them wrong besides divine attitudes.

I find all of these attitude-grounded interpretations of moral truth claims dissatisfying. When I say "Genocide is wrong," I don't just mean that I, my culture, or God has negative attitudes about genocide. I'm claiming that genocide would fail a moral evaluation even if I, my culture, or God had positive attitudes about it. Philosophers have proposed a number of ways moral judgments might include truth claims which don't (at least directly) depend on attitudes.

...Part 2
...Part 3
...Part 4


1. http://wordsideasandthings.blogspot.com/2010/11/intuitions-and-algorithms.html
2. http://wordsideasandthings.blogspot.com/2010/11/lingo-propositions.html
3. http://philpapers.org/surveys/results.pl

Tuesday, November 23, 2010

Varieties of Justification

Beliefs are justified when they are held for a good reason. Well, then...what constitutes a good reason for holding a belief? In his book Moral Skepticisms, Walter Sinnott-Armstrong makes the following point1:

When someone asks us whether a belief is justified, we often find ourselves wanting to answer both "Yes" and "No," even when all other facts are settled. This ambivalence is a signal that we need to distinguish different ways in which a belief may be said to be justified. These distinctions are often overlooked, but the failure to draw them creates countless confusions in moral epistemology and in everyday life. Let's try to do better.

His distinctions are a series of dichotomies. I'm not sure this is the best way to divide up types of justification, but it's certainly better than using an ambiguous term when more precision would be helpful.

Instrumentally vs. Epistemically Justified

In philosophy lingo, "instrumental" has to do with whatever it takes to get something done, often to the exclusion of other concerns. Think about how tools or instruments help get things done, without regard for whether it's something that should be done from a broader perspective.

Suppose an Atheist is married to a Theist and this is a source of conflict and unhappiness. The Atheist wishes she could manage to believe in God because her family life would be much improved. One day, an oddly credible stranger offers her a pill which will chemically alter the way she thinks so that she will believe God exists. She takes the pill, which works as advertised. Her new belief would be instrumentally justified because it's held for a reason that's good for the instrumental goal of having a happier family life.

The problem with instrumentally good reasons is that they are totally independent of truth. Sinnott-Armstrong's own example was a drug to make a person believe there are aardvarks on Mars for the goal of winning ten million dollars. Most people use "justified" to mean good reasons that have something to do with improving a belief's chance to be true; they intend epistemic justification. Epistemology is the philosophical study of knowledge, which requires beliefs to be true...not just useful.

Permissively vs. Positively Justified

I'm reluctant to call permissive justification a form of justification at all. A permissively justified belief is one that is not held for a bad reason, but not necessarily for any good reason either. If I have no reason to believe an external world exists outside my mind and no reason to believe the opposite, I would be permissively justified in believing either side. Why not just say it's permissible — but not justified — to believe something so long as it's not believed for a bad reason?

Positively justified beliefs are those I indicated in the first sentence of this post: beliefs held for a good reason, not merely beliefs that aren't held for a bad reason.

Slightly vs. Adequately Justified

A belief is slightly justified when there is some good reason in favor of its being true, but not enough — given the context — to outweigh good reasons in favor of something else being true instead.

Suppose a fingerprint is found at a crime scene. If it's matched to a man who isn't likely to have ever been to the scene otherwise, the fingerprint is a good reason to think he is the perpetrator. However, it might not be a good enough reason if there's also reason to think the perpetrator is very smart and the print was on something which could have been brought in to throw off the police. For a very serious crime like murder, the print may not adequately justify the belief the print-owner committed the crime compared to the belief he is being framed.
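
One way to make the slight/adequate distinction concrete is Bayesian updating. This framing is my own gloss, not Sinnott-Armstrong's, and every number below is invented purely for illustration:

# A minimal sketch of "slightly vs. adequately justified" in Bayesian
# terms. All probabilities are invented for illustration.

prior_guilt = 0.01          # before the print: one suspect among many
p_print_if_guilty = 0.95    # perpetrators usually leave prints behind
p_print_if_innocent = 0.30  # a clever framer might plant the print

# Bayes' theorem: P(guilty | print)
numerator = p_print_if_guilty * prior_guilt
posterior = numerator / (numerator + p_print_if_innocent * (1 - prior_guilt))

print(round(posterior, 3))  # 0.031: the print raises the probability of guilt
                            # (slight justification) but falls far short of the
                            # bar for a murder conviction (adequate justification)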

Slight and adequate justification could probably be split up into several finer-grained distinctions.

Personally vs. Impersonally Justified ...and Wholly Justified

The basic idea here is that a person can be personally justified if she does all the things we can reasonably expect a person to do in order to have good reasons for her beliefs. Unfortunately and despite our best efforts, we are always somewhat vulnerable to false or limited information.

Suppose a practical joker went around to every clock and electronic device in the house and set them all one hour ahead, hoping to get a laugh when you leave for work early. You might be personally justified in believing it's time to go to work, but not impersonally justified. The third-person narrative of the facts shows a problem with your belief.

Sinnott-Armstrong gets into Gettier problems, which complicate matters. For this post I'll just say beliefs are wholly justified when they're both personally and impersonally justified.


1. Oxford University Press, 2006. Pg. 63.

Monday, November 22, 2010

Lingo: Propositions

Mark each sentence True or False:

__ "Where are my shoes?"
__ "Ouch!"
__ "Beware the Jabberwock, my son!"

Seems inappropriate to call any of these true or false, right? What about this:

__ "He is the tallest man in town."

Out of context, neither answer would fit. If we knew who "he" is and which town we're talking about, then we'd have an assertion which is either true or false (even if we don't know which), in other words: a proposition. Example propositions:

__ "There were more cows than horses in the United States on November 1, 2010."
__ "Gold is an element."
__ "Mental states are fully determined by brain states."

Think of propositions as declarations that something is the case (or is not the case). Or think of them as statements of fact as opposed to emotional expressions, advice, questions, commands, etc. There's a fair amount of controversy about what exactly counts as a proposition, but this is the general idea.

Sunday, November 21, 2010

Comparing Worlds

Imagine a parallel world which is exactly like ours, except Mars has an Earth-like atmosphere of nitrogen, oxygen, and argon. There's no more (or less) life on Mars than there is in our world, but it could much more easily support transplanted life from Earth.

Which is the better world?

My first inclination is to say, "The world in which Mars has an Earth-like atmosphere is the better world!" After all, the Earth keeps getting more crowded. We could benefit from a nearby planet with breathable air to colonize. But wait! The question wasn't: "Which world is better for humans?" If the two questions were identical, it wouldn't be possible to say one human-less world is better than any other human-less world.

Two other questions which aren't identical to the original:

"Which world is better for conscious beings?" (Including dogs, aliens, etc.)
"Which world is better for living beings?" (Including plants, bacteria, etc.)

Or variations of this sort:

"Which world is better for allowing complex structures?"
"Which world is better for maximizing happiness?"

Even when we can answer these more specific questions, the answers may conflict. A better world for humans might be a worse world for wolves, buffaloes, and whales. A better world for maximizing happiness might be a worse world for maximizing social equality. Can the original question be answered without changing it into something more specific? How could a world be simply better rather than better for some things or ideas?

I've become suspicious of this "simply better" idea. It might be a mistake which comes from so often hearing "good" and "better" without an explicit qualification. We usually communicate accurately anyway because the intended meaning is clear from context. But when the meaning isn't clear, I think it's most appropriate to ask "better how?" If that can't be answered, I would assume the other person is merely expressing a preference rather than making a claim.

So my answer to "Which is the better world?" is "Better how?"

Wednesday, November 17, 2010

Intuitions and Algorithms

Sentence A — My dog chased the ice cream truck.

Sentence B — My chased dog the ice cream truck.

Native English speakers immediately recognize Sentence A as valid and Sentence B as invalid. But if you ask them what exactly is wrong with Sentence B, their answers won't be so quick or unified; we can't necessarily explain how our intuitions give the results they do. Noam Chomsky referred to this intuitive ability as linguistic competence, then set out to discover explicit rules which generate the same answers. A more concrete way to think about this is:

What would it take to write a computer program that makes the same "valid" / "not valid" judgments as native English speakers?

Since computers don't operate on intuition, they have to be told how to process candidate sentences in a very explicit, step-by-step way until a solution is reached. Programmers call this process an algorithm. There are algorithms to sort numbers, compare dates (usually of the calendar sort), and simulate physics in video games. If anyone ever writes an algorithm which kicks out the same linguistic answers as intuition, we still might not know how the intuition works in our minds (there may be multiple ways to generate the same answers), but we would at least know what all the relevant factors are and how they interact with each other. Plus, non-native English speakers could ask a computer for any number of validity judgments without the computer getting annoyed. A computer could even be set up to generate millions of random but valid sentences.
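
Here's a minimal sketch of such a program in Python. The tiny lexicon and single hard-coded sentence pattern are invented stand-ins; a real program would need the explicit rules Chomsky was after:

# A toy "validity judgment" program. The lexicon and the one sentence
# pattern (Det Noun Verb Det Noun...) are invented stand-ins for the
# explicit rules a full algorithm would need.

LEXICON = {
    "my": "Det", "the": "Det",
    "dog": "Noun", "ice": "Noun", "cream": "Noun", "truck": "Noun",
    "chased": "Verb",
}

def is_valid(sentence):
    tags = [LEXICON.get(word.lower()) for word in sentence.split()]
    if None in tags or len(tags) < 5:
        return False
    return (tags[0] == "Det" and tags[1] == "Noun" and tags[2] == "Verb"
            and tags[3] == "Det" and all(t == "Noun" for t in tags[4:]))

print(is_valid("My dog chased the ice cream truck"))   # True
print(is_valid("My chased dog the ice cream truck"))   # False

Even this toy shows the contrast: the program's rules are fully explicit, while our intuitions remain opaque.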


There is another kind of intuitive judgment that can be hard to explain: moral judgments.

Situation A — Bill sees Charlene poison David's drink, but declines to tell David. David dies.

Situation B — Bill poisons David's drink. David dies.

Most people will immediately judge Bill's inaction in Situation A as crummy, but Bill's action in Situation B as significantly worse. Why? As with the language judgment at the beginning of this post, the answer to "Why?" won't come as quickly and it won't be as unified.

Imagine if we had a morality algorithm: a step-by-step procedure for making moral judgments that match up with our intuitions. We would know what all the morally relevant factors are and how they interact with each other. We could even program a computer to calculate moral judgments for any given situation.

Wait a minute! What about conflicts between moral intuitions? There are a few ways to handle that.
  • It could be the case that everyone's moral intuition works the same, but we disagree about morally relevant facts. We would just need to make sure we put the true non-moral facts into the computer. For example, we might need to determine the truth of some religious claims.
  • Once we see the morality algorithm works great for most of our deeply held intuitions and tuning it further doesn't seem to help, we might start to trust the algorithm over our own intuitions for the remaining cases. (Philosophers will recognize this state as reflective equilibrium.)
  • If our moral intuitions do sometimes operate in fundamentally opposed ways even when processing the same beliefs about non-moral facts, it might make sense to talk about different moralities. The algorithm could still accurately predict an individual's judgments if it were given the additional inputs of which moralities are under consideration and how they interact with each other.
As I will explain in a forthcoming post,1 I think the last option is the only realistic one. It would require something more like a set of algorithms depending on which kind[s] of morality an individual is using to draw conclusions.
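
Here's a minimal sketch in Python of what a (tiny) set of moral algorithms could look like, applied to the Bill situations above. The scoring functions and the doubling weight are invented purely for illustration, not meant as real moral theories:

# Two toy "moralities" scoring the Bill situations from above.
# Higher scores mean greater wrongness. The weights are invented.

def consequentialist(situation):
    # Only net harm matters; acting vs. allowing is irrelevant.
    return situation["harm"] - situation["good"]

def act_omission_sensitive(situation):
    # Harm actively caused counts double compared to harm merely allowed.
    weight = 2 if situation["acted"] else 1
    return situation["harm"] * weight - situation["good"]

situation_a = {"harm": 10, "good": 0, "acted": False}  # Bill stays silent
situation_b = {"harm": 10, "good": 0, "acted": True}   # Bill poisons the drink

print(consequentialist(situation_a), consequentialist(situation_b))              # 10 10
print(act_omission_sensitive(situation_a), act_omission_sensitive(situation_b))  # 10 20

Only the second algorithm reproduces the common intuition that Situation B is worse than Situation A, which illustrates how different moralities can diverge even given the same non-moral facts.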

1. http://wordsideasandthings.blogspot.com/2010/11/what-is-morality-anyway-pt-1.html

Sunday, November 14, 2010

Consequence-Based Morality

What makes a particular action good or bad, morally speaking? Consider lying. Is lying "just wrong" with no further explanation for what makes it wrong? Is lying always wrong? If not, then what makes it sometimes permissible or even right?

One answer is that lying is usually wrong because it usually has bad consequences, but sometimes lying isn't wrong because any bad consequences are outweighed by good consequences. This makes sense of the intuition that it's wrong to lie on tax forms, but not wrong to lie about hiding Jews in the attic.

This view of moral judgments is called consequentialism. While many philosophers and non-philosophers are attracted to the basic idea of consequences determining right and wrong, the devil is in the details.

What counts as a good consequence (or bad consequence)?

If we use consequences to determine moral judgments, the consequences themselves must not ultimately call out for moral evaluation. Otherwise, we'd be stuck in a loop! There are two ways out of the loop:
  • The consequences which ultimately determine the morality of our actions must be so unquestionably morally good (or bad) that there's no need to justify them. Pleasure and pain have long been put forward as moral basics. From Plato's Protagoras: "Then you think that pain is an evil and pleasure is a good: and even pleasure you deem an evil, when it robs you of greater pleasures than it gives, or causes pains greater than the pleasure." In other words, the only time we do question the goodness of pleasure is when its consequence is less pleasure or more pain overall.
  • The determining consequences could be morally neutral in themselves. It strikes me as a little weird to call pain "morally bad" or pleasure "morally good." Sure, we all have strong motivation to seek pleasure and avoid pain for ourselves, but is this a moral motivation? I don't avoid pain for myself because pain is morally bad! It's possible to characterize morally good acts as those which bring about more pleasure than pain overall (and morally bad acts as predominantly causing pain) without putting a moral label on pleasure and pain themselves.
Philosophers have used other consequences in place of pleasure and pain: well-being and suffering, equality and inequality, beauty and ugliness, high information and low information (in a technical sense), desire satisfaction and desire thwarting, companionship and alienation, and others. These are things people have considered always good to increase (or decrease) all other things being equal. Which of these — if any — are appropriate determiners of moral judgments and how we're even supposed to decide between them are perennial hot topics.

Consequences for whom?

Suppose we do settle on pleasure versus pain as the consequences by which true moral judgments are determined. One particularly self-centered way of judging actions is whether they ultimately bring about mostly pleasure or mostly pain for one's self: ethical egoism. As flimsy as that sounds, it is enough to evaluate some desired actions as morally bad and some undesired actions as morally good. For example, I may want to drink whiskey at every opportunity, but if this would bring about more pain than pleasure for me in the long run, it's wrong for me to drink whiskey at every opportunity. Or maybe I find exercise completely unpleasant, but if regular exercise would bring me more pleasure than pain in the long run, it's right for me to exercise.

Morality that's purely self-regarding strikes a lot of people as a contradiction. The Golden Rule isn't, "Do unto others...if it helps you out." Most forms of consequence-based morality take into account consequences to other people. The trouble is figuring out how to do this without extremely counter-intuitive results. Jeremy Bentham's classical utilitarianism draws an analogy between the way an individual seeks to increase happiness over pain and the way a community of individuals might increase group happiness over group pain. This sounds reasonable at first, but it has troublesome implications, like it being better for a few people to be in extreme pain if that means a small increase in happiness for enough other people. Modern utilitarians try to find new ways of counting up consequences that don't have such immoral-seeming results, or in some cases they'll question whether our gut reactions are justified.
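
To see the trouble concretely, here's a minimal sketch in Python of Bentham-style summing, with utility numbers invented purely for illustration:

# Classical utilitarianism (on this simple reading) just sums utility
# across the group. All numbers are invented for illustration.

def group_utility(utilities):
    return sum(utilities)

status_quo = [0] * 1000                      # a thousand people, neutral lives
trade_off = [-100, -100, -100] + [1] * 997   # three in extreme pain, the rest slightly happier

print(group_utility(status_quo))  # 0
print(group_utility(trade_off))   # 697: "better" by the group sum, despite
                                  # three people suffering terribly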

What we should and shouldn't expect.

Consequence-based morality has a hard time justifying natural rights, as opposed to legally granted rights. Jeremy Bentham famously called natural rights "nonsense upon stilts" (insult standards have fallen in the modern age!). This is because rights are only good so long as they have good consequences and we can always make up a special circumstance in which respecting a right has bad consequences.

Another basic concept in other views of morality which isn't handled as easily by consequentialism is giving people what they deserve. If a particular action will benefit a habitually bad person at the expense of a habitually good person — and the benefit is slightly greater than the expense — then it would be a good action, all else being equal.

It may normally lead to better consequences if we respect rights and give people what they deserve, but these can't be fundamental moral elements if consequences are the only things that ultimately determine right and wrong.