It’s a popular notion: that morality is somehow universal. If murder is wrong, then it’s wrong for everyone. Or more pointedly, if murder is wrong, then regardless of what I might think on the matter, it’s wrong for me.
There certainly seems to be something about morality that sets it apart from other nominally prescriptive things, like etiquette (chewing with your mouth open is wrong) or instrumental value (using a hammer to drive in a screw is wrong). These appear to be more contingent, dependent on the situation at hand, the culture of the day and the objective to be achieved.
Not so morality. Murder is wrong. That’s all there is to it. It’s not a matter of convention, nor a matter of instrumental expedience. It’s just plain wrong.
But the fly in the ointment is the seemingly intractable problem of getting everyone to agree on the universal moral norms that apparently apply to everyone. Moral disagreement is a real bugger for the notion of universality.
There are a few responses to the problem of moral disagreement, many of which are summed up tidily in a paper by John M. Doris and Alexandra Plakias called ‘How to Argue about Disagreement: Evaluative Diversity and Moral Realism’, which appears in volume two of the wonderful Moral Psychology series, edited by Walter Sinnott-Armstrong.
One common response is to deny that there is, in fact, any genuine moral disagreement. Many moral realists contend that all moral disagreements would dissolve if the agents occupied optimal conditions, i.e. if they were furnished with all the relevant facts and had the rational capacity to understand them.
I don’t know about you, but this response troubles me. If we’re dependent on agents occupying optimal conditions to resolve moral disagreements, then we’re pretty much screwed when it comes to building a real-world moral system. There’s no guarantee that anyone ever has, or ever will, occupy the optimal conditions, which might, in real terms, relegate moral facts to some unknowable realm about which we can only speculate. That might be acceptable to some, but to me it just reinforces the notion that we need to develop a practical system of morality that actually works in the real world.
Another reason I’m wary of this notion of universality of norms is that I think it misconstrues the notion of universality in the first place. This is because it focuses on norms, the rules and principles themselves. However, this isn’t the only way to think of universality.
If it turns out that the norms are just strategies for solving a deeper problem – such as the problem of how to get large unrelated groups of individuals to live and work together without screwing each other over – then we can see that there might be many strategies for solving this problem. However, no one strategy is going to be best in every circumstance.
As such, you’d expect there to be a variety of norms – or strategies – for solving the problem, and you’d expect that many of these norms would be in conflict. This is because sometimes it’s better to forgive, sometimes it’s better to punish, and sometimes it’s better to make an example of someone.
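The idea that no one strategy wins in every environment can be made concrete with a toy simulation. The iterated prisoner’s dilemma below is my own illustrative sketch – it’s not drawn from the Doris and Plakias paper, and the strategy names and payoff numbers are the standard textbook conventions, not anything the argument depends on. It pits a strict punishing norm (tit-for-tat) against a forgiving one (generous tit-for-tat) in two different worlds: one where agents occasionally defect by honest mistake, and one containing a committed exploiter.

```python
import random

# Standard one-shot Prisoner's Dilemma payoffs for the row player.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(partner_history):
    """Punishing norm: cooperate first, then copy the partner's last move."""
    return partner_history[-1] if partner_history else "C"

def generous_tft(partner_history):
    """Forgiving norm: like tit-for-tat, but forgive a defection 30% of the time."""
    if partner_history and partner_history[-1] == "D" and random.random() > 0.3:
        return "D"
    return "C"

def always_defect(partner_history):
    """Exploitative norm: defect unconditionally."""
    return "D"

def average_score(a, b, rounds=200, noise=0.0):
    """Average per-round payoff for strategy a playing b; `noise` is the
    chance that a's intended move is accidentally flipped."""
    hist_a, hist_b, total = [], [], 0
    for _ in range(rounds):
        move_a, move_b = a(hist_b), b(hist_a)
        if noise and random.random() < noise:
            move_a = "D" if move_a == "C" else "C"  # an honest slip-up
        total += PAYOFF[(move_a, move_b)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return total / rounds

random.seed(0)
# In a noisy world, forgiveness recovers from accidental defections,
# while strict tit-for-tat gets stuck in cycles of retaliation...
print(average_score(generous_tft, generous_tft, noise=0.05))
print(average_score(tit_for_tat, tit_for_tat, noise=0.05))
# ...but against a committed exploiter, strict punishment loses less
# than forgiveness does.
print(average_score(tit_for_tat, always_defect))
print(average_score(generous_tft, always_defect))
```

Which norm scores better flips depending on the environment – which is exactly why you’d expect a plurality of conflicting norms, each well suited to the circumstances in which it arose.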
So, when we’re confronted with moral disagreement, instead of looking upon the conflicting norms and shaking our heads in resignation, we should look at the problem the norms are trying to solve. It might turn out that one norm is more effective at solving that problem in the circumstances at hand. Or it might turn out that both norms are effective in different environments, and we lack the information to know which environment we’re in, so we must be agnostic as to which one to prefer. We could also reconcile those who hold the opposing norms by reminding them that they’re both trying to achieve the same end – or solve the same problem – they just have different strategies for doing so.
I’m not suggesting this will be easy. The notion that norms are universal is deeply rooted in our psychology. It takes a shift in perspective to see that it’s not the norms that are universal, but the problems they’re trying to solve. Even then, the problems may not be objective, as such, but they are likely to be fewer than the norms that arise to solve them. Still, I think this approach has a lot more going for it than hoping agents will hit upon the optimal conditions and discover that one set of norms rules above all others. For one, I think it might actually work.