I’ve finally had an opportunity to read through the challenge to Jon Haidt’s Moral Foundations Theory by Craig Anderson, of the University of Wisconsin, and I’ve found some notable differences from my own similar challenge.
Craig suggests that truth/right-belief could be considered as a sixth moral foundation:
The central aspect of this morality is that people tend to moralize the beliefs that they hold to be true. Not only do individuals care that they themselves have proper beliefs, but they further feel that others should share those same beliefs.
On the surface, this is similar to my claim that truth/honesty could be a sixth moral foundation. However, there is a significant difference that becomes apparent when you dig deeper into what Anderson is suggesting.
First of all, I’d disagree that “people tend to moralize the beliefs that they hold to be true.” That includes too much that isn’t moral. I believe it’s true that ‘in Australia we drive on the left-hand side of the road’. If someone thinks we drive on the right, I don’t experience moral outrage, although I may be prompted to correct them. Further, if they think we should drive on the right, again I don’t feel moral outrage in the same sense as if they’d said we should cheat on our friends or eat our children. We don’t moralise beliefs simply because we hold them to be true; something closer to the opposite is going on, which brings me to Anderson’s next point.
I think the key point that Anderson is making is encapsulated in the second sentence in the quote above, particularly that people “feel that others should share those same beliefs.” Anderson offers the following example:
A very clear example of this form of morality can be seen in the clash between religions, or even somewhat in the clash between the religious and the secular. People of each religious group see their own canon of beliefs as the truth, and that others are wrong, immoral, or sometimes even inhuman, just for not believing in the right god(s).
However, I would suggest that Anderson is not talking about a moral foundation as such, but he’s adroitly recognised an underlying characteristic of all moral discourse. What makes moral matters different to matters of convention is that morals feel categorical: they feel as though they should apply everywhere and to everyone.
Driving on the left or the right is a norm of convention. A relevant authority could overturn the norm and none of us would be left reeling in moral outrage. On the other hand, inflicting harm on others is a moral norm, and cannot be arbitrarily overturned by an authority. Morals are different in that we feel that they’re somehow universal, and that if they apply to me, they should also apply to everyone else. As a consequence, it’s fully expected that individuals would “feel that others should share those same beliefs.”
Why is this not a moral foundation? Haidt proposes that “innate and universally available psychological systems are the foundations of ‘intuitive ethics’.” He’s talking about things like harm/care as one vector, in-group/loyalty as another, and so on. But they are all categorical: if I believe it’s wrong to harm person X in situation Y, or to be disloyal to authority Z, then I believe it’s also wrong for anyone else to behave that way.
The categorical nature of moral norms cuts across all the moral foundations, and thus cannot be a discrete moral foundation in itself. Instead, it is an underlying functional characteristic that defines moral norms as moral in the first place.
To test this, we could try to tease right-thinking/categoricalness apart from the other moral foundations. If we could find issues that triggered on the right-thinking/categoricalness scale without triggering any of the other moral foundations, then it could be a discrete moral foundation. For example, we might look for issues where it was judged impermissible for person X to do Y but permissible for person Z to do Y. But I’d expect it would be found that there are no moral issues that are categorical without also relating somehow to another moral foundation. Further, I expect it would be found that all the other moral foundations also trigger right-thinking/categoricalness.
In contrast, my challenge suggests truth/honesty as a moral foundation, distinct from the other moral foundations. As I mention in my earlier post, an example might be that it’s judged wrong for person X to lie even if it has a positive outcome according to one of the other moral foundations.
Now, Anderson does mention truth and lying, so there is some crossover with my challenge, but truth is really incidental to Anderson’s main claim about right-thinking. His point is that people promote right-thinking/categoricalness only about things they hold true, not that being truthful is morally obligatory in and of itself.