What’s so Special about Morality?

25 02 2010

It might seem a facetious question to ask: what’s so special about morality? But it’s a question I think worth asking in a serious way before embarking on any in-depth discussion of the nature of morality. And one that I don’t think is asked seriously often enough.

Plenty of discussions in ethics begin by stating that the starting question is, as Bertrand Russell puts it: “what sort of actions ought men to perform” (Russell, 1910). So it often begins, with ‘ought’ as the starting point.

But not any old ‘ought’. This is no simple ‘should’ of prudence or expedience. No ‘should’ in the sense that one ‘should’ do something but is not obliged to do so. This is a bare-knuckled ‘ought’, implying a certain obligation, nay, inescapability.

Richard Joyce (2006) calls this feature of morality practical clout, a term I’m happy to adopt to describe the apparent gravity of ‘ought’ (although, if it was up to me, I would have called it practical mojo).

I’ve cobbled together a list of some of the ‘special’ features of the moral ought – these aren’t stated as being necessarily ought-worthy, nor would they be agreed upon by all thinkers, but they do appear often enough throughout the literature to bear a mention:

  • Impartial – morality is beyond self-interest
  • Universal – morality applies to everyone (or everyone within a particular group, or every moral agent, depending on your bent)
  • Objective – something makes moral propositions true other than our subjective whim or personal preferences
  • Justifiable – a moral norm is defendable, there are reasons for accepting it, it’s not just arbitrary
  • Overriding – moral norms are more important than norms of custom or individual preferences, they override them when there’s an apparent conflict
  • Non-negotiable – moral norms are binding, obligatory, you can’t just weasel out of them if you don’t want to obey them
  • Action-guiding – moral norms steer behaviour, they’re not just matters of theoretical interest, where one might proclaim a moral principle but feel no inclination to follow it in practice
  • Imperatives – imbued with the force of necessity rather than contingency
  • Other-focused – moral norms are not just about individuals in isolation but are about the interaction between individuals
  • Harm – many moral norms are centred around preventing harm to others
  • Fairness – many moral norms are also centred around preventing injustice
  • Altruism – many moral thinkers believe that even if morality doesn’t spring from altruism, it’s compatible with altruism and the Golden Rule
  • Happiness – moral norms either directly or indirectly lead to happiness or wellbeing
  • Rational – moral norms come from rational agents, they’re justifiable using reason
  • Concerns ‘value’ – as opposed to ‘fact’

I’m sure this isn’t an exhaustive list, but I think it captures a decent proportion of the features that many thinkers believe make morality ‘special’.

But I have my doubts about some of these features. I’ve read altogether too many papers in ethics that take some (or all) of these things as being so obvious as to be beyond question. And I hardly believe that’s a healthy way of beginning an ethical enquiry.

Now, I don’t doubt that many of these features are intuitively appealing. But it’s the very fact that they seem obvious in everyday experience that ‘ought’ to raise an eyebrow.

Compounding this concern is my growing conviction that morality is best understood as an evolved device for encouraging a social animal to behave itself in a social context. And if that’s the case, then many of these obvious ‘special’ features of morality are illusory.

It might have been useful to treat morality as if it was inescapable, rational, non-negotiable etc, but that doesn’t necessarily mean that it is. In fact, moral norms might be entirely prudential ‘shoulds’ driven by instrumental values that simply carry a greater emotional weight because of the gravity of the implications of ‘moral’ decisions compared to non-moral decisions.

That doesn’t really make morality special, but it makes it no less important.


11 responses

25 02 2010
John Wilkins

I fully agree. While I don’t want to license the is-ought fallacy, it seems to me that we can treat moral obligation as an evolved disposition to follow norms, and norms as evolved social rules, which roughly track prior success (but also track socioeconomic exigencies that may themselves not be terribly fitness-enhancing, but are great from their own evolutionary perspective).

26 02 2010
Tim Dean

I hear ya. Although I’d suggest the is-ought fallacy is only a problem if the ‘ought’ is elevated to some special realm above the ‘is’. If there are no intrinsic values, only instrumental ones, then the ought loses much of its specialness. Given evolution, anti-realism, game theory and a social contract, all your is’s are all there is – and that includes your oughts.

26 02 2010
James Gray

Have you been reading The Normative Web? He talks about how our common-sense intuitions about morality are relevant, and it sounds like you are thinking about that here.

John Wilkins,

By “is-ought fallacy” do you mean naturalistic fallacy? It’s not always clear how to get an ought from an is, but it’s not necessarily fallacious to do so.

28 02 2010
Tim Dean

Haven’t read The Normative Web. Have read *about* it though. Looks intriguing (had I not a reading list as long as both my arms for my own studies, I’d give it a spin). However, from what I’ve read about it, it represents – not surprisingly – pretty much the diametric opposite view to mine.

Cuneo might be right that if epistemic facts exist, then moral facts exist. But I have deep troubles with his rendition of epistemology, so I’d say the kinds of epistemic facts he’s talking about don’t exist. That doesn’t mean knowledge of sorts can’t be had (i.e. knowledge-how), but trying to link abstract propositional facts to the world in perfect correspondence is folly. I’m happy for it to be anti-realism all the way down.

And I don’t subscribe to Foot’s characterisation of the bounds placed on what counts as moral. Seems like the tail wagging the dog to me. What we don’t need is a restrictive definition of morality before we investigate what morality is. What we need is a theory of morality that moves with and adapts to what we discover about morality. Like how some biologists are giving up ‘defining’ life and instead are ‘theorising’ about life in a looser, more vague way.

On the naturalist fallacy, there are a few flavours to choose from (all unpalatable to me). There’s Hume’s is-ought, but that only precludes deducing an ought from a list of is-es. Not a problem if you’re not looking to find deductive oughts.

Then there’s Moore’s naturalistic fallacy, which seems to me to be deeply problematic, not least because it begs the question. If I don’t think the good is an irreducible non-natural property, then it’s not really an issue. In fact, I don’t believe there is any real ‘good’ of this type, only instrumental goods, and they can frolic with natural properties/facts freely.

Then there’s the ‘it’s natural so it’s good’ fallacy. That one is pervasive in the general public, but it’s pretty trivial to tear down. Arsenic is natural, after all…

I’m still working on my chapter dealing with the various naturalistic fallacies. I’ll post more about it when I’m done.

28 02 2010
James Gray

On the naturalist fallacy, there are a few flavours to choose from (all unpalatable to me). There’s Hume’s is-ought, but that only precludes deducing an ought from a list of is-es. Not a problem if you’re not looking to find deductive oughts.

I asked about the is-ought fallacy precisely because it sounds different from what I think of as the naturalistic fallacy:

Just because it happens (is the case) also means it should be the case.

Then there’s Moore’s naturalistic fallacy, which seems to me to be deeply problematic, not least because it begs the question. If I don’t think the good is an irreducible non-natural property, then it’s not really an issue. In fact, I don’t believe there is any real ‘good’ of this type, only instrumental goods, and they can frolic with natural properties/facts freely.

Moore’s open question argument makes sense, but it is merely a challenge. Hume’s is-ought problem is also a challenge. It is possible to show that morality is reducible (good means pleasure) and therefore arrive at an ought from an is (non-moral fact) in that sense.

Moore’s open question argument has not somehow proven a timeless feature of rationality itself that “morality is irreducible.” Instead, he merely pointed out an unargued assumption. His question was previously ignored, and it can expose a fallacy of assumption. Some people assume reduction is possible before establishing that fact, and such an assumption is precisely a form of begging the question. We need to know the pros and cons of reductive moral concepts.

I’m not convinced that you only believe in “instrumental goods.” Desire itself seems to imply a non-instrumental good whether or not moral realism is true. (Even Hume admitted that pain and pleasure are “ultimate ends.”) The fact that you desire X is what seems to make it worth achieving (all things equal). Instrumental goods themselves need to be judged. This is why Aristotle’s final ends are relevant, even if you don’t believe in intrinsic value. The question is: do I desire it because it’s good, or is it good because I desire it? Either way, “happiness” is a good.

1 03 2010
Tim Dean

On values. I would tilt things in favour of saying “it (appears to be) good because I desire it”. I’d also say “it’s sweet because I like it”, not the other way around (Dan Dennett has a nice riff on this that you may have seen on TED: http://www.ted.com/talks/lang/eng/dan_dennett_cute_sexy_sweet_funny.html).

‘Desirable’ simply means ‘attract’. Likewise, pleasure means ‘attract’. Pain means ‘avoid’. There’s valence here, but that’s explainable without calling upon non-instrumental values.

Also, pleasure and pain are heuristics, and there are very good evolutionary reasons for why they are wired the way they are – because the things that trigger these ‘attract’ or ‘avoid’ responses tend to be (instrumentally) good or bad for our survival. These mechanisms aren’t perfect, though. As I said, they’re heuristics.

So, if you’re a hedonist, and take pleasure as the ‘ultimate end’, then you’re really a closet evolutionary ethicist. Being a hedonist means you’re promoting those things that our pleasure-seeking heuristics are promoting, which are those things we’ve evolved to seek out because they aided fitness.

My argument is that there are no ultimate ends. Unless you want to make the ultimate end fitness. And I don’t think we’d want to – nor would we have to. That’s the chewy nihilism at the centre of this gobstopper. But there are still very good reasons to want to be nice to each other.

1 03 2010
James Gray

‘Desirable’ simply means ‘attract’. Likewise, pleasure means ‘attract’. Pain means ‘avoid’. There’s valence here, but that’s explainable without calling upon non-instrumental values.

No, we hate pain “for its own sake.” Instrumental values are never good for their own sake. I wrote about final ends several times, including in my essay on anti-realism.

Also, pleasure and pain are heuristics, and there are very good evolutionary reasons for why they are wired the way they are – because the things that trigger these ‘attract’ or ‘avoid’ responses tend to be (instrumentally) good or bad for our survival. These mechanisms aren’t perfect, though. As I said, they’re heuristics.

Pain is instrumentally valuable. It’s “good” in the sense that it helps us survive. But that is to miss my point entirely. My point is that we see pain as “bad” in the sense that we hate pain “for its own sake.”

I have already discussed these sorts of issues to death. I’m not sure what is going on here.

So, if you’re a hedonist, and take pleasure as the ‘ultimate end’, then you’re really a closet evolutionary ethicist. Being a hedonist means you’re promoting those things that our pleasure-seeking heuristics are promoting, which are those things we’ve evolved to seek out because they aided fitness.

I have no idea what this means. We can decide that pain is bad from common sense. I already argued this idea to death. If you don’t like my arguments, then you will have to tell me where they are faulty.

And yes, evolution could have a great deal to do with how we experience the world. Even moral realists agree with that. But we would decide pain is bad with or without evolution. We can certainly hate pain “for its own sake” whether or not evolution is the cause. When I have a headache I don’t care about survival. I just want it to go away.

My argument is that there are no ultimate ends. Unless you want to make the ultimate end fitness. And I don’t think we’d want to – nor would we have to. That’s the chewy nihilism at the centre of this gobstopper. But there are still very good reasons to want to be nice to each other.

Where is your argument? I think you are reading “ultimate ends” to mean “intrinsic values.” If you read my anti-realist perspective essay, you would see that I find “final ends” to be essential for making sense out of anti-realism (and realism alike). You are missing out on an essential category for morality that no one really disputes. We can value something for its own sake. We can like pleasure without having to say “I would stop wanting pleasure if it wasn’t good for survival!” Just the opposite. Sometimes we are willing to die to stop feeling pain. Sometimes we are willing to harm ourselves for pleasure.

It is absurd to think everything is an instrumental end because then we are always left with the question, “So what? It has to be good for something else!” You would end up in an infinite regress. Everything is useful, but for what purpose? Absolutely no purpose would matter. It is instrumentally valuable for me to use a gun to murder people, but that doesn’t mean it’s “good.” An action is only good when it leads to something we value for its own sake.

I am only talking about the fact that anti-realists and realists alike agree that we can value something for its own sake. Whether or not intrinsic values exist is irrelevant to the point entirely.

1 03 2010
Tim Dean

I’ll re-read your posts and consider them more deeply, then respond to them more comprehensively in another post.

In brief, I don’t think it’s useful to call the valence of pleasure and pain either good or bad ‘for its own sake’.

And yes, I believe instrumental ends do end with nothing. But that’s only a problem if you’re seeking intrinsic values, or moral facts, or you try to elevate ‘ought’ to be some ‘special’ thing above prudence. Because I think prudence is all you need for morality.

A gun can be instrumentally valuable to use in murdering people. But it’s also prudent to enter into a social contract with people and agree not to murder them if they don’t murder you back. That’s all you need. I think Joshua Greene argues this well in his dissertation.

1 03 2010
James Gray

It’s not prudent to do anything unless it “satisfies desires” or something of the sort. Otherwise murder is perfectly fine. You are basically saying that more desires would be satisfied by entering into a social contract and we value our desires in general. You can’t make sense out of morality without referring to desires.

To say something is good for its own sake can mean nothing more than “I personally desire it for its own sake.” I can tell that you don’t want to say it’s “good for its own sake” but that’s only because you are assuming that “good” is given an irreducible meaning. If you don’t like Moore’s naturalistic fallacy then we should have no problem reducing “good” to something else, such as, “maximizes desire satisfaction.”

If you are against desire satisfaction, then you need to explain why. So far you have only mentioned why one desire could be ignored (pain) in favor of another (not going to prison/freedom in general).

One common perspective is that “good” just means “would be approved by fully informed and rational agents.” We might not know for sure what sorts of behavior would be approved or disapproved of, so we can “test out” various ethical systems to see what works best. My point here is that “approval” and “disapproval” are either about desires or intrinsic values. The anti-realist answer tends to be “desires.”

1 03 2010
Tim Dean

Heh. And we’re at it again…

First, I want to divorce pleasure and pain from morality. At least sever any necessary link between them.

Oughts, in my picture, are agreed conventions that encourage social behaviour. There’s nothing ‘necessary’ about that. But if you *didn’t* buy into the social contract, I’d suggest you wouldn’t survive long. Nothing bad about that, but those who do survive carry the genes and social conventions (and morality) into the future. And here we are.

Pleasure and pain are evolved mechanisms to encourage/discourage behaviour that promotes fitness – including some social behaviours.

Talk of desire is complicated because there’s the psychological (proximate) desire and the rational (ultimate) desire. So, yes, morality can be thought of as (ultimate) desire satisfaction, because entering into the social contract is a prudent way of me satisfying my other (ultimate) desires (not foolproof, but on average, very effective). But that moral system might not end up satisfying my psychological desires. It’s messy, this talk of desires.

And, sure, I can talk about wanting something ‘for its own sake’, but I don’t believe there’s anything intrinsic about that ‘sake’. So I end up either as an error theorist, a fictionalist or a nihilist. Maybe a universal prescriptivist or subjectivist. I’m not sure how I’d divvy these up in detail yet in my picture.

And I don’t put much stock in the ‘fully informed rational agents’ thing. I don’t know (and will probably never know) what fully informed rational agents would approve/disapprove of, and it might be that they’d all value the same core thing, but I’d still suggest that in the real world there’s no one right answer to moral problems, even if you are a fully informed Vulcan. Even Vulcans are vulnerable to defection in the Prisoner’s Dilemma.
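The quip that “even Vulcans are vulnerable to defection in the Prisoner’s Dilemma” is the standard dominance argument from game theory. A minimal sketch in Python, using the conventional textbook payoff values (the numbers are an assumption for illustration, not anything specified in this thread):

```python
# One-shot Prisoner's Dilemma with the conventional textbook payoffs.
# Maps (my move, their move) -> my payoff.
PAYOFFS = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

def best_response(their_move):
    """Return the move that maximises my payoff against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda mine: PAYOFFS[(mine, their_move)])

# Whatever the other player does, defecting pays more for me, so a purely
# self-interested rational agent defects -- even though mutual cooperation
# (3, 3) leaves both players better off than mutual defection (1, 1).
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
```

The point the code makes concrete: full rationality alone doesn’t rescue cooperation here, which is why conventions like a social contract have to do the work instead.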

So, morality. It’s an offshoot of the evolved psychological mechanisms that result from us being social animals. These mechanisms promote fitness. But unless you want to make fitness your ‘final end’, and I don’t think you need to or should do, then you’re left hanging. But you’re only ‘left hanging’ if you assume there *is* something objective, real and necessary about morality. And I don’t think you need any of those things to have a corker of a moral system that works pretty well. And it might even make people happy.

It’s like chasing qualia thinking that until you find it and ground it somewhere, consciousness is impossible. Or chasing aesthetics worrying that nothing is beautiful.

1 03 2010
James Gray

I agree that we can’t know what fully rational and informed agents would want, but that’s why morality is so difficult. This definition is impractical, but it could be correct.

In order to relate desires and final (ultimate) ends to intrinsic values and anti-realism, here are a couple of the relevant posts:

Is there a Meaning of Life? (What is “intrinsic value?”)
http://ethicalrealism.wordpress.com/2009/12/29/is-there-a-meaning-of-life/

A Moral Anti-Realist Perspective
http://ethicalrealism.wordpress.com/2009/09/22/a-moral-anti-realist-perspective/
