Rethinking Moral Universality

6 02 2010

It’s a popular notion: that morality is somehow universal. If murder is wrong, then it’s wrong for everyone. Or, more pointedly, if murder is wrong, then regardless of what I might think on the matter, it’s wrong for me.

There certainly seems to be something about morality that sets it apart from other nominally prescriptive things, like etiquette (chewing with your mouth open is wrong) or instrumental value (using a hammer to drive in a screw is wrong). These appear to be more contingent, dependent on the situation at hand, the culture of the day and the objective to be achieved.

Not so morality. Murder is wrong. That’s all there is to it. It’s not a matter of convention, nor a matter of instrumental expedience. It’s just plain wrong.

But the pickle in the ointment is the seemingly intractable problem of getting everyone to agree on the universal moral norms that apparently apply to everyone. Moral disagreement is a real bugger for the notion of universality.

There are a few responses to the problem of moral disagreement, many of which are summed up tidily in a paper by John M. Doris and Alexandra Plakias called ‘How to Argue about Disagreement: Evaluative Diversity and Moral Realism’, which appears in volume two of the wonderful Moral Psychology series, edited by Walter Sinnott-Armstrong.

One common response is to deny that there is, in fact, any moral disagreement. Many moral realists contend that all moral disagreements would dissolve if the agents occupy optimal conditions, i.e. they’re furnished with all the relevant facts and have the rational capacity to understand them.

I don’t know about you, but this response troubles me. If we’re dependent on agents occupying optimal conditions to resolve moral disagreements, then we’re pretty much screwed when it comes to building a real-world moral system. There’s no guarantee that anyone ever has, or ever will, occupy the optimal conditions, which might, in real terms, relegate moral facts to some unknowable realm about which we can only speculate. That might be acceptable to some, but to me it just reinforces the notion that we need to develop a practical system of morality that actually works in the real world.

Another reason I’m wary of this notion of universality of norms is that I think it misconstrues the notion of universality in the first place. This is because it focuses on norms, the rules and principles themselves. However, this isn’t the only way to think of universality.

If it turns out that the norms are just strategies for solving a deeper problem – such as the problem of how to get large unrelated groups of individuals to live and work together without screwing each other over – then we can see that there might be many strategies for solving this problem. However, no one strategy is going to be best in every circumstance.

As such you’d expect there to be a variety of norms – or strategies – for solving the problem, and you’d expect that many of the norms would be in conflict. This is because sometimes it’s better to forgive and sometimes it’s better to punish and sometimes it’s better to make an example of someone, for example.
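As a rough sketch of this environment-dependence, consider a noisy iterated prisoner’s dilemma. The payoff values, the two strategies and the error rate below are illustrative assumptions only, not anything argued for above: an unforgiving ‘always punish’ rule and a forgiving rule do equally well when no one makes mistakes, but once moves occasionally misfire, the unforgiving rule locks into mutual punishment.

```python
import random

# One round of the prisoner's dilemma: (my move, their move) -> my payoff.
# 'C' = cooperate, 'D' = defect; standard Axelrod-style payoffs assumed.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def grim(my_hist, their_hist):
    # Unforgiving: punish forever after a single defection.
    return 'D' if 'D' in their_hist else 'C'

def generous(my_hist, their_hist):
    # Forgiving: retaliate against a defection only two times in three.
    if their_hist and their_hist[-1] == 'D' and random.random() < 2 / 3:
        return 'D'
    return 'C'

def play(strat_a, strat_b, rounds=200, noise=0.0):
    """Total payoffs for both players; `noise` turns an intended move into 'D'."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        if random.random() < noise:
            move_a = 'D'  # a well-meant move misfires as a defection
        if random.random() < noise:
            move_b = 'D'
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

random.seed(0)
# In a mistake-free world, the punishing and forgiving pairs do equally well...
clean_grim = sum(play(grim, grim, noise=0.0))
clean_generous = sum(play(generous, generous, noise=0.0))
# ...but with a 5% error rate, the unforgiving pair spirals into mutual punishment
# after the first accidental defection, while the forgiving pair recovers.
noisy_grim = sum(play(grim, grim, noise=0.05))
noisy_generous = sum(play(generous, generous, noise=0.05))
```

Which strategy counts as ‘best’ depends on the environment – here, on whether mistakes happen – which is just the point about norms above.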

So, when we’re confronted with moral disagreement, instead of looking upon the conflicting norms and shaking our heads with resignation, we should look at the problem the norms are trying to solve. It might turn out that one norm is more effective at solving that problem in the circumstances at hand. Or it might turn out that both norms are effective in different environments, and we lack the information to know which environment we’re in, so we must be agnostic as to which one to prefer. We could also reconcile those who hold the opposing norms by reminding them that they’re both trying to achieve the same end – or solve the same problem – but they have different strategies for doing so.

I’m not suggesting this will be easy. The notion that norms are universal is deeply rooted in our psychology. It takes a step to see that it’s not the norms that are universal, but the problems they’re trying to solve. Even then, the problems may not be objective, as such, but they are likely to be fewer than the norms that arise to solve them. Still, I think this approach has a lot more going for it than hoping agents will hit upon the optimal conditions and discover that one set of norms rules above all others. For one, I think it might actually work.


19 responses

6 02 2010
scaryreasoner

It’s completely obvious that there is no universal morality, so obvious, I’m surprised it’s worth talking about. It’s obviously a cultural thing.

You can talk about different cultures of humans, but, as you go back in time, there is a continuous chain of living organisms, and the further you go back, the less human the beings you encounter will be, though never will you encounter a discontinuity. Fit morality into that idea, the idea that all living things evolved from a common ancestor, and that your ancestors, and your culture extend back in time without discontinuity, and suddenly you have to think about, for example, the morality of the Black Widow spider, the females of which *kill and eat* the males immediately after copulation.

*DUH*

*OF COURSE* there’s no universal morality. It’s not even worth talking about such a thing.

6 02 2010
Tim Dean

Hi scary. Good to see you back here.

I agree that there is no set of universal norms, as I’ve argued above, but where I differ from your perspective is that I think it’s not at all obviously a cultural thing, and I do think it’s worth talking about.

Many philosophers dismiss cultural variation in moral norms – one reason why is mentioned in my post. Others, such as Kant, suggest it’s quite possible that every culture to date has got morality wrong, and that there is a universal set of moral norms, we’re just yet to figure out what it is.

If we just dismiss these views as ‘obviously’ wrong, then I think we do a disservice to the debate, and therefore, to our own arguments on the matter as well.

6 02 2010
Paul

If it is true that different moralities and polities try to solve the same problems in different ways, then it seems that there should be an honest way to evaluate their relative effectiveness. If this is true, then it could help in the design of new constitutions, and also help diffuse knowledge about what constitutions and institutions are even supposed to be doing.

6 02 2010
Bjørn Østman

Tim, what is the value of coming up with a universal system? Why do you think it’s a good idea to devise a moral system? For what end?

(I’m not trying to be a troll – I just really don’t know what the point is, and am eager to know.)

6 02 2010
James Gray

Universality and unconditionality are two different possible elements of morality. Morality is universal if it is somehow wrong to demand that it’s OK for you to murder others but not OK for anyone else.

Universality in the sense of agreement among cultures is a separate sort of universality from the one I am speaking of here.

Morality is unconditional if nothing can override a moral obligation.

I am a moral realist and I agree with Tim that there can be more than one way to have an equally effective moral system, but this is an extremely confusing and complicated topic. A simple example would be whether society should drive on the right or left side of the road. For the most part that is arbitrary in the sense that they are equally good — but they are both good. To have no such rule would be a bad idea.

Also note that two values can conflict and there might be no way to resolve which should be given priority. We can put a suffering individual “out of their misery” by valuing the avoidance of pain over life itself, but it isn’t obvious that this is the correct decision. The values might be incommensurable.

To say that moral obligations are unconditional is merely to say that non-moral considerations can’t override them to tell us what should be done. There could be more than one equally moral course of action open to us.

6 02 2010
James Gray

I should also note that anti-realists often accept that morality is universal and unconditional in the way that I described. R.M. Hare has developed one of the most detailed and accepted accounts of anti-realist morality, but he calls it “universal prescriptivism” because he finds it so important that morality is universal.

However, I personally am not convinced by Hare, and I suspect that a kind of egoism might be more plausible for anti-realists.

7 02 2010
Tim Dean

Hi Paul. That’s precisely where my thesis is heading. If there is no one set of moral norms that is ‘best’, then it makes morality more like politics – at least liberal democratic politics. So you set up a social contract where everyone agrees to buy in to a broad framework where different values and norms can exist in tension with some mechanism for choosing the ones to adopt in any particular circumstance. David Gauthier (who I’m reading at the moment) suggests something very similar.

And hi Bjorn. I don’t think universal morality is such a good idea (well, it’s a nice idea, but I don’t think it can work), but I think it is a good idea to have some way to mediate between conflicting values and moral norms. Without that, we risk conflict between individuals and groups. Doesn’t mean we need to force everyone to one particular morality, we just bring them in to the one broad moral framework.

And James, welcome back. As I’ve said before, I think the moral systems we have in mind would look quite similar from the surface, even if the metaethical foundations are different. One could be a realist and still buy in to the framework I’m talking about – you’d have to be a realist about the value of sociality/cooperation rather than a realist about particular norms, but it’d work.

7 02 2010
Bjørn Østman

I don’t think universal morality is such a good idea (well, it’s a nice idea, but I don’t think it can work), but I think it is a good idea to have some way to mediate between conflicting values and moral norms. Without that, we risk conflict between individuals and groups. Doesn’t mean we need to force everyone to one particular morality, we just bring them in to the one broad moral framework.

Is that what happened in the past when groups of people met for the first time, that someone sat down and made rules for them to abide by? If not, then why is it needed now?

Additionally, depending on the particular broad moral framework you mention, don’t we all have that instinctively in the first place? I think we do. Many moral sentiments that are shared across cultures are innate (killing and stealing is bad), so it seems to me that things will work out on their own. Aren’t the potential conflicts also lessened by laws?

7 02 2010
Tim Dean

Hi Bjorn. Moral disagreement abounds, not only between cultures but within cultures. Consider issues of abortion, euthanasia, the death penalty etc. Or honour killings, treatment of women and freedom of expression. I think most people would agree that it would be desirable to have a framework for mediating between different moral perspectives in order to manage disagreement in as peaceful a way as possible.

Yet one of the barriers to making this happen is the broadly held view that morality – and one’s accepted moral norms – is universally applicable. Certainly there are relativists who would deny this, but they, too, have difficulty reconciling conflicting moral views.

I would agree with you that there are some universal or near-universal moral sentiments. I’ve written extensively about that on this blog (see the Evolutionary Ethics link at the top). However, these intuitions are just evolved heuristic solutions to the problems of cooperation. We shouldn’t necessarily rely on these intuitions to provide our moral code today – instead, I’d suggest we look at the problems these intuitions evolved to solve and try to answer them directly, drawing on evolved intuitions where possible, and abandoning them when necessary.

I sometimes use the term ‘moral diet’. We evolved a taste for sweet and fatty foods because it helped our ancestors seek out high energy foods. Yet today this same taste is contributing to an obesity ‘epidemic’. Likewise we’ve evolved to discriminate between in-groups and out-groups, or to favour family and friends over non family and friends (nepotism). These evolved intuitions might lead to harmful behaviours in today’s world, so we might need to go on a moral diet and actively work against our evolved intuitions.

7 02 2010
James Gray

I think there is still some confusion about what “universal morality” means. Hare is a cultural relativist, but he thinks each culture will have to adopt a kind of universal framework. This is also compatible with Kant, who thinks you have to be able to “will” your action as a universal law. Some people might will different universal laws than others, but it might still be important to have this level of universality in morality. I will have to try to clarify that in my post.

From the practical point of view I have no problem with a social contract. We need to create political systems and civilizations despite the lack of agreement in ethics. Rawls offers one such way to construct political systems that account for moral diversity without becoming overly self-destructive.

7 02 2010
Bjørn Østman

Tim, thanks for clarifying that. Very helpful.

One more question: To what extent are we talking about laws rather than (I suppose) voluntary morals?

7 02 2010
Tim Dean

James – yes, I think the term ‘universal’ is ambiguous. I guess I’m mainly referring to the *feeling* of universality that we have about moral judgments. Many (but not all) philosophers then run with that to try to build moral systems that apply to all humans, or in Kant’s case, all rational beings.

(I thought for that reason that Kant wasn’t remotely relativist. I was under the impression his riff was that if you’re a rational being, then by virtue of that and the categorical imperative you arrive at the ‘don’t treat people like means to an end’ thing, and that was the only universal categorical imperative, and it applies universally. Although I could be wrong. Kant is steeped in hidden complexities that never fail to bamboozle me.)

But I’d be concerned about any cultural relativism that allowed each culture to have its own set of universal laws – kind of like religious pluralism does today. Sure, the religions agree not to stomp on each other and to respect their perspectives, but by their very nature, they preach that they have the one and only truth, and the other religions are wrong. That causes conflict all over the world. I’d rather erode some of our faith in universality – except for the notion that the problem we’re all trying to solve is how to live together in peace.

And I’m sympathetic with Rawls, although I’d be coming from a very non-Kantian perspective. I think the ‘framework’ I’m talking about is somewhat similar to Rawls’ except it’s based on instrumental value and a kind of ultimate (not psychological) egoism. Not sure – have to think through the similarities/differences to Rawls more.

Bjorn – in a way I’m talking about making morals more like laws, i.e. contingent, hypothetical imperatives arrived at through a mediated process and open to revision. Most people think of morals as being necessary, universal, categorical and not open to revision once settled.

8 02 2010
Paul

If it is true that the problems that morality are trying to solve, based on a cursory reading of research into the natural history of humanity, are problems of cooperation and pro-sociality among in-group members, won’t your arguments against universality also be arguments against peace on earth? That is to say, morality isn’t just about finding stuff to eat, it is also about killing people who are not part of your tribe, as martial effectiveness is one of the problems that all moralities try to solve.

8 02 2010
Tim Dean

Hi Paul. Our moral sense might have evolved to apply mainly to in-group members, but just because it evolved that way doesn’t mean we need to apply it that way today. Hence my ‘moral diet’ metaphor. The in-group favouring might have been beneficial thousands of years ago, but not today.

I’m in favour of peace on Earth, because such peace on aggregate benefits everyone more than conflict, even if conflict might benefit a few. So, as a potential sufferer of the ill effects of conflict, it’s in my interests to step in to a kind of social contract to encourage everyone towards peace.

We should use our evolved moral intuitions as much as possible where they will aid us in furthering this end, but mitigate them when they work against it, such as our in-group preference. Not easy, but I’d suggest necessary.

9 02 2010
James Gray

Tim,

It is possible for rational beings to differ in their universal will. The most important thing for Kant is that you aren’t a hypocrite. He seems to think that all rational beings will agree in their moral judgment, but that’s not the universality he was talking about (as far as I know). R. M. Hare and other anti-realists think morality is universal in pretty much the same way Kant talked about, but they think rational people can disagree without being hypocritical. One rational being might say that we should be put out of our misery if we will feel too much pain in our final remaining hours, but a different rational agent might think the final remaining hours are still worth living.

In his book on Kant, Ermanno Bencivenga makes it clear that this sort of question is not resolved. We don’t know what an “ideal rational agent” is, and we have no way to know the right thing to do given a situation’s indefinite complexity.

9 02 2010
Tim Dean

Very interesting. Thanks for the clarification on Kant, James. I still have my doubts about Kantianism – it seems too divorced from reality for mine – but interesting none the less.

9 02 2010
James Gray

I didn’t try to suggest that Kantianism was true, but it is worth considering. What is it about Kantianism that makes it seem divorced from reality? Practical everyday ethical decisions might not be explained or justified by it, but Bencivenga seems to take it as a humbling reminder about our uncertainty more than anything else.

It is possible for an anti-realist to accept hypocrisy (or even deny being hypocritical) because they might just want to “look out for themselves” but that doesn’t seem to do the job of an everyday social morality that you are interested in.

10 02 2010
Tim Dean

I’ve always been suspicious of Kant’s reliance on reason and rejection of moral psychology – I worry that it can lead to us building a rational framework that appears nice and consistent, but doesn’t actually relate to the messy world around us.

Certainly, rationality can help in justification after the fact, but there’s not much in Kant that can help me direct moral behaviour before the fact.

I also suspect that if Kantianism was applicable to the real world, a Kantian would be devoured by defectors (in the game theory sense). Give it a couple of generations, and there’ll be no Kantians left. Say what you will about sacrifice for the good, but that doesn’t make a good moral system in my books.
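To make that game-theory intuition concrete, here’s a toy replicator-dynamics sketch. The payoff values and starting mix are illustrative assumptions: unconditional cooperators earn less than defectors in every mixed population, so their share shrinks each generation.

```python
# Replicator dynamics for a one-shot prisoner's dilemma: a population of
# unconditional cooperators against defectors.
# Payoffs assumed: C/C -> 3, C/D -> 0, D/C -> 5, D/D -> 1.

def next_generation(p):
    """Share of cooperators after one generation of payoff-proportional growth."""
    f_coop = 3 * p              # a cooperator's expected payoff in this mix
    f_defect = 5 * p + (1 - p)  # a defector's expected payoff in this mix
    mean = p * f_coop + (1 - p) * f_defect
    return p * f_coop / mean

p = 0.99  # start with 99% unconditional cooperators
for generation in range(50):
    p = next_generation(p)
# Within a few dozen generations the cooperators are all but gone.
```

Since the defectors out-earn the cooperators at every mix, even a 99% cooperator population collapses – the ‘devoured by defectors’ dynamic.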

That said, the usual caveats. I have a fairly coarse grained appreciation of Kant, and I often underestimate the complexity of his transcendental idealism…

10 02 2010
James Gray

I’ve always been suspicious of Kant’s reliance on reason and rejection of moral psychology – I worry that it can lead to us building a rational framework that appears nice and consistent, but doesn’t actually relate to the messy world around us.

I don’t know that Kant was completely uninterested in moral psychology. He wanted to make it clear that practical reason as he understands it does require the real world to be taken into consideration. It can be very messy to try to be moral in this world.

Some of his points about morality are purely conceptual because certain moral beliefs seem to depend on various assumptions. That’s what free will is about and so on. It is possible that we don’t have the ability to make decisions based on practical reason, but he seems right that this is very relevant to our everyday understanding of morality.

Certainly, rationality can help in justification after the fact, but there’s not much in Kant that can help me direct moral behaviour before the fact.

If my understanding is correct, then you are right. As I said, Kant doesn’t seem to answer everyday moral decision making.

I also suspect that if Kantianism was applicable to the real world, a Kantian would be devoured by defectors (in the game theory sense). Give it a couple of generations, and there’ll be no Kantians left. Say what you will about sacrifice for the good, but that doesn’t make a good moral system in my books.

Not sure what you mean. My sacrifice of an aspirin to relieve a stranger’s headache seems pretty good, but I think a utilitarian would agree with me. Utilitarianism itself doesn’t require anything egoistic.

That said, the usual caveats. I have a fairly coarse grained appreciation of Kant, and I often underestimate the complexity of his transcendental idealism…

I don’t fully understand it all. I used to think Kant’s theory was utterly mysterious and strange, but I just decided to give Kant’s books a reading recently after finishing Ermanno Bencivenga’s book, Ethics Vindicated: Kant’s Transcendental Legitimation of Moral Discourse. Bencivenga does a good job at making Kant sound like an intelligent person, but I am still confused about Kant to some extent. If we could narrow down specific objections to Kant that might help.

My current understanding of the categorical imperative is posted here: http://www.facebook.com/pages/Ethical-Realism/108132484137#!/notes/ethical-realism/kants-categorical-imperative/272445378383
