Evolution and Moral Ecology

This is a broad outline of the theory I’m developing for my PhD thesis. Consider it a work in progress. All comments and criticism are welcomed.

If humans have an evolved moral sense – and there’s mounting evidence that we do (Wright, 1994; Fehr et al., 2002; Boyd et al., 2003; Gintis et al., 2003; Lieberman et al., 2003; Fowler, 2005; Joyce, 2006; Haidt, 2008; Haidt et al., 2009) – then we should not expect it to function in an identical way between individuals.

Instead, we should expect a diversity in the function of the moral sense between individuals, and a corresponding diversity of moral intuitions and moral judgements. This is because there is no one solution to the problems that our moral sense evolved to solve that yields the best outcome in every environment or circumstance.

As such, our moral sense has evolved to produce a variety of strategies that increases the likelihood of our responding successfully to a wide range of situations and environments – with the ‘environment’ also including the strategies employed by other individuals.

The analogy for this view is one of an evolved ‘moral ecology’ – a diverse range of strategies that each perform well in their niches, while they last, but with no one strategy that dominates all others. In this essay I will employ the moral ecology metaphor to explain why our moral sense evolved the way it did, and how it produces the wide diversity of moral attitudes and beliefs we see today, not only between cultures, but within cultures, and discuss some of the implications for moral philosophy.

What is Morality?

The picture that emerges is one that sees morality not as a truth-seeking endeavour, as moral realists and objectivists claim (Smith, 1994; Bloomfield, 2001), but as a device that evolved to regulate social interaction, encourage pro-social behaviour and punish anti-social behaviour (Westermarck, 1932; Mackie, 1977; Greene, 2002; Haidt et al., 2009).

It evolved because those individuals that were able to benefit from the rewards of cooperation – while mitigating the risks of free-riders – had a selective advantage over those who engaged in less cooperation or who succumbed to exploitation by free-riders (Hamilton, 1964; Axelrod & Hamilton, 1981; Krebs, 1998; Gintis et al., 2003; Joyce, 2006; Haidt, 2008). As such, morality evolved because it aided reproductive fitness in our evolutionary past (although that doesn’t mean that morality needs to be predicated on advancing reproductive fitness today).

The problems that morality evolved to solve are essentially the problems inherent in fostering prosocial and cooperative behaviour, which benefits all agents who interact, but opens up opportunities for free-riders to take advantage of the cooperative venture for their own gain at the other cooperators’ expense.

This phenomenon can be modelled in a number of ways, one of the most effective being the Prisoner’s Dilemma, which will be used in this essay to illustrate different ‘strategies’ for fostering cooperation and protecting against free-riders (Binmore, 1990; Colman, 2003). In these models, particularly the Iterated Prisoner’s Dilemma, it emerges that no one strategy performs optimally in every environment, given the strategies employed by other individuals within a population (Trivers, 1971; Dawkins, 1975; Axelrod, 2001). Yet there are situations in which a plurality of strategies can co-exist in a dynamic equilibrium – an evolutionarily stable strategy (ESS) – that cannot be ‘invaded’ by any new strategy (Maynard Smith & Price, 1973).
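
To make this concrete, here is a minimal sketch of such a tournament in Python. It uses the standard Prisoner’s Dilemma payoff values (T=5, R=3, P=1, S=0); the three strategies and the particular population mixes are illustrative assumptions of mine, chosen to show that which strategy comes out ahead depends on the mix of strategies it faces.

```python
# Minimal Iterated Prisoner's Dilemma tournament. Standard payoffs
# (T=5, R=3, P=1, S=0); strategies and population mixes are
# illustrative assumptions, not taken from the essay.

from collections import defaultdict

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def always_cooperate(my_hist, their_hist):
    return 'C'

def always_defect(my_hist, their_hist):
    return 'D'

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then copy the partner's previous move.
    return their_hist[-1] if their_hist else 'C'

def play(s1, s2, rounds=200):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

def tournament(population):
    """Round-robin: average score per instance of each named strategy."""
    totals, counts = defaultdict(int), defaultdict(int)
    for i, (name_i, strat_i) in enumerate(population):
        counts[name_i] += 1
        for name_j, strat_j in population[i + 1:]:
            a, b = play(strat_i, strat_j)
            totals[name_i] += a
            totals[name_j] += b
    return {name: totals[name] // counts[name] for name in counts}

# A lone reciprocator among free-riders is (narrowly) outscored by them...
print(tournament([('defect', always_defect)] * 9 +
                 [('tft', tit_for_tat)]))
# ...but in a mostly-reciprocating population, defection does poorly.
print(tournament([('tft', tit_for_tat)] * 7 +
                 [('cooperate', always_cooperate)] * 2 +
                 [('defect', always_defect)]))
```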

I draw an analogy between this phenomenon and the existence of a diversity of moral attitudes held by individuals within a population. I invoke two theses concerning this diversity: ‘lesser moral diversity’, which suggests that there will always exist a mix of pro-social strategies within a population; and ‘greater moral diversity’, which suggests there will also always exist a proportion of self-interested/free-riding strategies in the mix of pro-social strategies within a stable population.

Evolution shaped our moral sense not by providing us with hardwired moral principles to follow, but by equipping us with a range of faculties that, when primed by the environment, tend to encourage pro-social behaviour and punish anti-social behaviour (Haidt, 2009).

Many of these faculties operate like heuristics – fast and frugal decision-making processes that can direct behaviour quickly and with minimal effort, yet are also prone to error (Gigerenzer, 2008) – such as the moral emotions (Haidt, 2003). But we also have a highly plastic reflective reasoning faculty that can produce a wide range of behaviours in response to an equally wide range of environmental circumstances (Wilson, Tumminelli & Sesma, 2009).

Moral Psychology

Our moral sense has three main components – perception (Murdoch, 1970; Blum, 1991; Lakoff, 1987 & 1996; Hauser, 2006), emotion (Hume, 1739/1985; Westermarck, 1906 & 1932; Haidt, 2001) and reason (Kant, 1787/1993; Sidgwick, 1907) – that together contribute to the production of moral attitudes and moral behaviour. When an individual confronts a scene, or is exposed to an idea, the first faculty engaged is perception, which provides structure and meaning to the stimulus, based primarily on prior experience as well as other innate mechanisms (Blum, 1991; Lakoff, 1987).

Almost immediately after the scene is perceived, the emotions are engaged, depending on the meaning and significance found in the scene (Hauser, 2006). If the scene contains morally charged elements, the moral emotions are engaged, such as empathy, guilt and disgust (Haidt, 2001). While these emotions are universal in all humans (with the interesting counter-example of psychopaths), the stimuli that trigger them are largely primed by culture.

As such, one individual might find the idea of eating a dog disgusting, while the same idea won’t elicit a disgust response in an individual from a different culture. The emotional response to situations such as this plays a large role in determining whether an individual finds a particular behaviour or idea morally permissible or impermissible, and it manifests as a moral intuition – an immediate sense of approval or disapproval, giving an impression of the rightness or wrongness of the scene or its content (Haidt & Joseph, 2004).

It’s only at this point that conscious reflection is engaged – and with it the application of rational principles and moral beliefs – to yield a conscious, utterable moral judgement. Often only one moral intuition is elicited by a scene, and this intuition will conform to an individual’s moral beliefs. Sometimes, however, multiple intuitions might emerge, and they might conflict with each other, such as in moral dilemmas.

At this point reason can play a role in mediating between the opposing intuitions to yield a conscious moral judgement about the permissibility or impermissibility of the content in the scene. Sometimes the moral intuition or intuitions will also conflict with the consciously held moral principles, with reason working to find a reconciliation between the two. Finally, the agent is led to performing some action (or withholding from performing some action), motivated by emotion, steered by reason.

Evolved Moral Sense

Evolution has influenced many steps in this process. First of all, each of these individual faculties is the product of evolution, from perception to emotion to reason. Many of the emotions also evolved in response to adaptive problems from our evolutionary past, such as fear promoting an aversion to the triggering stimulus; many dangers from our evolutionary past – such as spiders, snakes, heights and the dark – still elicit a fear response disproportionate to the danger they represent in the contemporary world (Pinker, 1997). The very existence of certain emotions – such as embarrassment or guilt – also suggests we have evolved specific mechanisms to regulate social and moral behaviour (Haidt, 2003).

The moral diversity theses suggest that individuals will vary in their moral responses to certain situations due to environmental, developmental and experiential differences, as well as an innate variability in emotional responses or cognitive styles.

For example, two individuals might have different anger responses, with one being naturally quicker to anger and experiencing a more intense anger sensation, while the other might be naturally slower to anger and experience a less intense sensation. These two individuals might respond differently to the same scene as a result of this variation in their emotional responses, even if their developmental and experiential backgrounds are identical. Over time, the variation in their responses will lead to different experiences, which may further compound the variation in their future reactions. In time, these two individuals might come to hold widely disparate moral beliefs and employ very different moral behaviours even in very similar situations.
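
The compounding dynamic described here can be illustrated with a toy simulation. The feedback rule below – each angry response lowers the anger threshold slightly, each tolerant response raises it – is a purely hypothetical assumption chosen for illustration, not a claim about actual moral development; note that under this rule the gap between the two agents can only widen, never narrow.

```python
import random

def simulate(threshold, seed, steps=2000, step=0.01):
    """Toy model: provocations of random intensity arrive, and the
    agent reacts angrily whenever the intensity exceeds its current
    threshold. Each angry reaction lowers the threshold a little;
    each tolerant reaction raises it (the assumed feedback rule)."""
    rng = random.Random(seed)
    t = threshold
    for _ in range(steps):
        if rng.random() > t:
            t = max(0.0, t - step)  # anger begets quicker anger
        else:
            t = min(1.0, t + step)  # restraint reinforces restraint
    return t

# Identical streams of experience (same seed), slightly different innate
# starting points. Whenever a provocation falls between the two
# thresholds, the agents react differently and drift further apart:
print(simulate(0.48, seed=1))  # starts quicker to anger
print(simulate(0.52, seed=1))  # starts slower to anger
```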

Evolved Moral Diversity

If this account is plausible, then there needs to be a mechanism that can enable such a stable polymorphism – i.e. a persistent diversity of alleles in the gene pool that can lead to such a diversity of emotional and cognitive responses – to evolve. In fact, other aspects of our genome already exhibit similar stable polymorphisms, one being the major histocompatibility complex (MHC).

The MHC is instrumental in our adaptive immune system, and functions to distinguish self cells from non-self cells, thus allowing our immune system to target foreign pathogens rather than attacking the body’s own cells (autoimmune disorders occur when this system fails in some way). However, should a pathogen evolve such that it is recognised by the body as ‘self’, it can elude our immune system. This means that had the MHC evolved into a single stable state, it would have been vulnerable to a single mutant pathogen bypassing immunity – a result that could prove fatal for the entire species.

As a result, the MHC is in a constant ‘arms race’ with pathogens, and the MHC has evolved to be highly polymorphic – varying between individuals by as much as 80 per cent – effectively becoming a ‘moving target’, such that it’s less likely for a single pathogen to bypass the immune system in a large number of individuals at one time. One of the mechanisms that enabled the evolution of this stable polymorphism is frequency-dependent selection (Takahata & Nei, 1990).

This kind of selection is significant when the fitness benefit of a particular gene or allele depends on the presence of other genes or alleles in the population or broader environment. For example, the more pathogens there are that can evade MHC allele x, the less adaptive allele x is, while allele y becomes more adaptive – at least until pathogens evolve to evade allele y, at which point allele x may once again become more adaptive, and so on. This situation can lead to the kind of stable polymorphism that we see in the MHC, and that I’m suggesting we find in various elements of our evolved moral sense.
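
The logic can be sketched with a simple replicator-style update in which each variant’s fitness declines as its own frequency rises. The linear fitness function below is an illustrative assumption, not a model of real MHC dynamics:

```python
# Negative frequency-dependent selection sketch: each variant's fitness
# falls as it becomes common, so rarity is an advantage and a stable
# polymorphism results. The linear fitness function is an illustrative
# assumption.

def step(freqs, base_fitness=2.0):
    # Fitness of variant i declines with its own frequency.
    fitness = [base_fitness - f for f in freqs]
    mean = sum(f * w for f, w in zip(freqs, fitness))
    return [f * w / mean for f, w in zip(freqs, fitness)]

# Start with one variant at 90% and three rare variants.
freqs = [0.90, 0.04, 0.03, 0.03]
for generation in range(200):
    freqs = step(freqs)

print([round(f, 3) for f in freqs])  # converges towards 0.25 each:
                                     # no variant can dominate
```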

Another example of stable polymorphism is found in other aspects of our psychology, such as personality (Bouchard & McGue, 2003; Bateson, 2004; Nettle, 2007; Buss, 2009). It appears that the diversity in some aspects of our psychology, such as the extent to which an individual is introverted or extroverted, is partly accounted for by genes.

One explanation for this is a frequency-dependent selection process similar to the one that produced the highly polymorphic MHC: if one sees each personality type as promoting a particular strategy of behaviour, then the same frequency-dependent selection mechanism can establish a stable diversity of these strategies within a population.

In addition to innate variation, humans also have a tremendously plastic mind that is able to adapt to changing circumstances in the world, although often without the individual being consciously aware of their changing behavioural dispositions.

There is evidence that our responsiveness to environmental cues also influences moral judgements. Individuals subtly exposed to stimuli that evoke negative emotions, such as disgust, are more critical when forming moral judgements (Schnall, 2008); and individuals become more or less trusting of others – engaging in correspondingly more or less cooperative or altruistic behaviour – depending on environmental cues such as their perception of the quality, or disrepair, of the neighbourhood they’re in (Wilson et al., 2008).

Moral-Political Diversity

One case study of this phenomenon of moral ecology can be found in the Liberal-Conservative political spectrum as it’s seen in the United States and many other liberal democracies. There is evidence from political psychology that an individual’s personality, emotional responses and cognitive style influence their political attitudes, even when accounting for environmental effects (Jost, 2003; 2007).

In this context, I liken the Conservative responses to a ‘suspicious’ strategy in the Iterated Prisoner’s Dilemma. This strategy is highly risk-averse and more concerned with avoiding free-riders, even at the expense of missing out on some potential cooperative ventures. The Conservative strategy is also more concerned with tight social cohesion amongst a group of individuals who can be trusted, and is less open to interacting with individuals outside the group. This kind of strategy performs well in an environment with many free-riders, although it sacrifices cooperative opportunities in environments with many cooperators.

The Liberal strategy is less risk-averse and is more concerned with fostering cooperation even at the expense of suffering some free-riding. It fosters social cohesion not by strict conformity amongst a core group, but by being tolerant of diversity and of outsiders, and being more ready to attribute trust. The Liberal strategy performs well in an environment with many cooperators but performs less well than the Conservative strategy in an environment with many free-riders. It is also less tightly cohesive, and is more prone to fragmenting.
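
A rough sketch of this trade-off in expected-value terms: the code below deliberately simplifies the Iterated Prisoner’s Dilemma framing into a one-shot decision about whether to extend trust, and the payoff values and the ‘verified’ in-group fraction are illustrative assumptions of mine, not figures from the essay.

```python
# Illustrative sketch of the Liberal/Conservative trade-off as a simple
# trust decision (a deliberate simplification of the essay's Iterated
# Prisoner's Dilemma framing; all numbers are assumptions).
#
# Each encounter is with a cooperator (prob 1 - q) or a free-rider
# (prob q). The 'liberal' strategy extends trust to everyone; the
# 'conservative' strategy extends trust only to the fraction of
# cooperators it can verify as in-group, so it never meets free-riders
# but also forgoes some genuine cooperation.

BENEFIT = 3.0   # payoff from a successful cooperative venture
LOSS = 4.0      # cost of being exploited by a free-rider
VERIFIED = 0.5  # share of cooperators the conservative strategy trusts

def liberal(q):
    return (1 - q) * BENEFIT - q * LOSS

def conservative(q):
    return (1 - q) * VERIFIED * BENEFIT

for q in (0.1, 0.3, 0.5):
    print(f"free-rider share {q:.0%}: "
          f"liberal {liberal(q):+.2f}, conservative {conservative(q):+.2f}")

# free-rider share 10%: liberal +2.30, conservative +1.35
# free-rider share 30%: liberal +0.90, conservative +1.05
# free-rider share 50%: liberal -0.50, conservative +0.75
```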

While environmental factors clearly play a strong role in determining which broad strategy an individual employs, so too do emotional responses and cognitive style, and the latter are influenced by our genes, suggesting an evolutionary influence on an individual’s chosen political ideology (Suedfeld et al., 1994). The result is a kind of political ecology, where a variety of strategies exist, working in tension with one another, but no one strategy is able to consistently dominate.

Moral Implications

While this thesis is descriptive, it does have some implications for moral philosophy. It sees morality, at least as it has evolved, as a device that regulates social interaction and encourages cooperation while punishing self-interested and free-riding behaviour.

As such, our moral faculty – including the moral intuitions and moral emotions that so influence moral philosophy – has not evolved to seek out truth per se, but is contingent on human biological interests and sociality. Even if we have a tendency to see moral judgements as being objective features of the world (perhaps also as a product of evolution), they are, in fact, subjective projections, in the spirit of Hume’s or Simon Blackburn’s projectivism.

However, they are not arbitrary; there is something about the way we are composed that influences the projections that an individual will make. This view is compatible with the error theory proposed by John Mackie (1977) and more recently reinforced by Richard Joyce (2001) and Joshua Greene (2002).

Instead of seeing morality as an objective feature of the world to be discovered, it might be more appropriate to understand it as a construct to be created by humans for our benefit. While there are no moral truths in the world – no ‘moral₁’ facts, as Greene puts it – this doesn’t eliminate morality of the ‘moral₂’ sort, which deals with issues of social interaction and concern for the interests of others.

This view is also compatible with social contract theories such as those by David Gauthier (1986) and Ken Binmore (1994; 1998). The underlying commonality is that morality exists to advance human interests through promoting pro-social and cooperative behaviour, and each individual agrees to sacrifice some of their freedoms – namely their freedom to free-ride – in order to benefit from the cooperation of others.

However, there is no one best strategy for promoting that cooperation, and there is no practical way to eliminate free-riders. As such, a moral system benefits from pluralism at the level of strategies: multiple strategies working in tension produce a diversity of responses to any particular situation, making it more likely that an effective strategy will be available than if society adhered to only one strategy – say, Liberal or Conservative – which would likely prove unstable in the long run.

This means that this picture of morality – at least as it has evolved, and given the problems it evolved to solve – leans towards a kind of pragmatic pluralist social contract theory, one that allows the moral ecology to flourish and is thus more able to remain in equilibrium rather than become unstable.

21 responses

25 11 2010
Mark Sloan

Tim, I like the idea that diversity in our moral biology is an adaptive trait. It occurred to me many years ago that a troop of chimpanzees with varying vigilance and aggression levels is likely to be, as a group, reproductively fit in more environments and circumstances than a troop with uniform levels of vigilance and aggression. I had not carried that thought over to biology-dependent moral perceptions and the (biological) psychological basis of conservatives and liberals.

However, this “diversity increases reproductive fitness” idea seems to me to run into problems if you try to apply it to cultural moral standards which I understand you to imply. Some cultural moral standards decrease reproductive fitness. There are a lot more selection forces for cultural moral standards than just human reproductive fitness. Moral behavior started coming untethered from reproductive fitness with the emergence of culture and, at least in advanced countries, is arguably largely now irrelevant to reproductive fitness.

I disagree with your statement there are no moral facts. I am almost certain there is at least one moral fact. That fact is something to the effect of “Moral behaviors are behaviors that increase, on average, the benefits of cooperation in groups by means of acts that are unselfish at least in the short term”.

Whether we are really disagreeing though is dependent on the kind of ‘ought’ we are thinking of when we say “moral fact”. The above claimed moral fact comes only with what I call a ‘rational choice ought’. That is, you ‘ought’ to accept the burdens of this claimed moral principle only if you expect it will better meet your needs and preferences (be your ‘rational choice’ in Rational Choice Theory terms) than your alternatives.

Another kind of ‘ought’ a moral fact might entail is what I call an ‘irrational choice ought’. That is, you ‘ought’ to accept the burdens of this claimed moral principle even if you expect it will NOT better meet your needs and preferences than your alternatives. I agree there are no moral facts that entail this kind of ‘ought’.

Note that an ‘irrational choice ought’ might be claimed to be justified by rational arguments such as Kant’s. The ‘irrational choice’ is in single quotes because it is in the terminology of Rational Choice Theory rather than philosophy.

25 11 2010
Tim Dean

Hi Mark. James made a similar point about reproductive fitness – I think I need to stress more clearly that morality evolved to promote fitness, but that’s not what it’s necessarily about today.

There’s a distinction between ‘adaptation’ and ‘adaptive’. The former is something that evolved because it lent an adaptive advantage in our evolutionary past. The latter is something that lends an adaptive advantage today. The appendix is an adaptation. Reading is adaptive.

I’d say morality is an adaptation, but that doesn’t mean it’s adaptive. It might be, but it’s not necessarily so. And there are aspects of morality that might be maladaptive today, such as our in-group/out-group categorisations. Likewise, our sweet tooth was an adaptation, but it’s maladaptive in today’s environment.

On moral facts: there are facts about morality, yes. But there are no normative facts; no facts that are intrinsically prescriptive. What you have stated is a descriptive fact about morality. It is very likely true. But accepting the truth of that fact doesn’t imply we ought to do anything. You’d need to add another premise that ‘we ought to increase cooperation’, or something similar. I don’t believe there are any objectively true prescriptive facts of this nature.

Instead, it’s up to us to agree what our fundamental oughts should be, then draw on descriptive facts about how best to achieve them. (Sam Harris is all about the second, and good on him – but he ignores/side-steps that first bit and just slaps down ‘well-being’ as the end of morality, like it’s incontestable.)

Not sure if I understand your ‘irrational choice ought’. You’re suggesting I should be moral even if it’s not in my best interests to be so? I think being moral requires sacrificing the option of free-riding, which probably will advance my interests more than being cooperative. But social contract theories suggest that it’s in my long-term interests to give up free-riding, and it’s in most people’s long-term interests to encourage me to give up free-riding. Is that kinda what you’re talking about?

25 11 2010
Mark Sloan

Tim, I can add the word descriptive (which is certainly true) to my claim about the existence of at least one moral fact. I might have said: …I am almost certain there is at least one DESCRIPTIVE moral fact. That fact is something to the effect of “Moral behaviors are behaviors that increase, on average, the benefits of cooperation in groups by means of acts that are unselfish at least in the short term”….

But I can defend the idea that in the case of this particular descriptive moral fact, there is no need for an additional assertion to the effect of “people ought (‘imperative ought’) to increase the benefits of cooperation” as is usually needed to produce a prescriptive statement. (An ‘imperative ought’ is an obligation that ‘should’ be accepted regardless of the individual’s needs and preferences.)

What makes this descriptive moral fact interesting is that, if practiced as a moral principle, it appears it would likely better meet the needs and preferences of people living in societies than any available secular alternative. Some people could accept those arguments and decide they ‘ought’ (‘rational choice ought’) to intellectually commit to the principle as their ‘rational choice’, the choice expected to best meet their needs and preferences. Under these circumstances, no assertions about imperative oughts are required.

No descriptive moral fact, including this one, inherently entails any oughts except the weakest kind, the ‘rational choice ought’. Fortunately for this moral fact’s potential future as part of a “workable secular moral system”, that may be sufficient.

When you say there are no moral facts, do you mean there are no moral facts either inherently entailing or otherwise accreting the force of ‘imperative oughts’? For that definition of moral facts, I fully agree.

But the fact that those kinds of moral facts do not exist says nothing about whether a moral fact that entails a rational choice ought (the weakest ought) could exist.

Example of a ‘rational choice ought’. I prefer to not burn my hand on a hot stove. My stove is hot. Therefore, I ‘ought’ (‘rational choice ought’) to not touch the hot parts of my stove. No source of imperative force is needed.

Finally, if you are defining morality by the right moral principle (I suggest the principle based on what the function of moral behavior is in human societies), then yes, acting morally will almost always be expected to better meet your needs and preferences in the long term. But our in-the-moment predictions about the long-term future are often wrong. Our predictions about moral behaviors are particularly flawed because they often must be made almost instantly, and because moral behaviors are unselfish, which our ‘base animal instincts’ motivate us to reject.

I look at it as: “Should I rely on the wisdom of the ages, or on my own flawed heat-of-the-moment perceptions and predictions about what action will actually best meet my needs and preferences over a lifetime?”

25 11 2010
Tim Dean

And again, we find ourselves in almost complete agreement.

I’d probably refer to your ‘rational choice ought’ as an instrumental or prudential ought (or a hypothetical imperative – even though they aren’t strictly ‘imperatives’). And they’re the only kind I (we) reckon exist.

So everything starts with an ‘if you want to x’, and the oughts follow based on how best (rationally) to achieve x.

Yet there are moral facts in the sense there are facts about how best to achieve x. Just not facts that say we bindingly ought to achieve x no matter what.

That’s anti-realism for ya!

26 11 2010
Mark Sloan

Tim, I have gone back to read about the definitions of instrumental or prudential oughts and hypothetical imperatives to see if I can use them directly. It seems like a mess to me. It is so much easier to just define what I mean and move on!

Of course, I would like to use standard terminology, but I am going to have to study it more to be able to do that.

For example, I am not sure my ‘rational choice ought’ is the same as your “‘if you want to x’, and the oughts follow based on how best (rationally) to achieve x”, because a ‘rational choice’ does not depend on our expectations, needs, or preferences being rational. Perhaps they are the same, but I like using the terminology of Rational Choice Theory because it specifies that expectations, needs, or preferences may be irrational.

Also, my understanding of hypothetical imperatives is that they are similarly focused on rational actions in a way that ‘rational choice’ oughts are not (I might be wrong about that).

In a related subject, I have written a paragraph that I thought might be of interest. It concerns how mere facts combined with ‘rational choices’ can provide useful moral guidance.

I prefer to meet my needs and preferences. I expect that committing to a moral principle that claims to be based on the function of almost all of cultural morality (the primary reason those cultural moralities exist in human societies) will better meet my needs and preferences than any alternative moral principle. Therefore, I ‘ought’ (‘rational choice ought’) to commit to that moral principle I expect will best meet my needs and preferences. By committing to it, I (may) accept that while cultural moral standards may be very useful in day-to-day life, they are only fallible heuristics for achieving the ‘ends’ of almost all of cultural moralities.

No source of imperative force (imperative oughts) is needed for these ‘mere facts’ to provide moral guidance.

26 03 2011
Richard

Comments from an interested amateur.

You have said what morality is not, but not what it is. Is this an enterprise of discovery, and if so, how do you know it when you see it? For my money I would still describe morality as a truth-seeking endeavour. More specifically, I would define it as acting in pursuit of philosophical justice. A sense of morality is a measure of behaving justly, with regard to the consequences to all living things, including regard to self, but only to the extent that a just independent arbiter would; an ideal that does not necessarily preclude any of your views as far as I can see.

You subscribe to the view that morality evolved as a survival benefit. I expect Darwinism has a bearing on this. What I read is that morality evolved for the purpose of surviving, or as an incidental observation that it assisted survival. It may well have done. However, I see that morality may have alternatively evolved from empathy, which then leads to Darwinian benefits. I think that is a significant distinction which does not contradict Darwinism. I may have misinterpreted.

I think I basically get what you are writing in Moral Psychology but I do not get what you are aiming at. If you are trying to say physiological predisposition determines morality then that is not explicit enough to come through to me. What I read is physiological and psychological impacts on morality.

27 03 2011
Richard

‘If you are trying to say physiological predisposition …’ was meant to be ‘If you are trying to say psychological predisposition …’.

11 01 2012
stonedead

Your subject matter and approach is quite interesting!

I wonder if you have considered works of so-called “ecopsychology”? It examines how the environment affects the mind and behavior, and therefore how the condition of the “natural world” is bound up with the human condition.

11 01 2012
Tim Dean

Hi stonedead. I hadn’t heard of ecopsychology before, but reading a bit on it now, it seems quite different to my broad approach. This might be because I use the term “environment” in a more narrow (or maybe more broad?) sense than in common parlance. Environment to me means the external circumstances relevant to some action or event. So the natural environment is one aspect, but so is the cultural and behavioural environment.

(BTW, I also use “natural” in a technical philosophical sense – meaning “of the natural world”, which basically includes everything, including humanity and the artefacts we’ve produced. So the opposite of “natural” is not “artificial”, as in the common definition, but “supernatural” or “non-natural”, meaning not governed by the laws of the natural universe.)

So moral ecology is saying moral norms adjust to the environment – including the cultural context and the actions and norms promoted by others in that culture.

Ecopsychology appears to want to bring people into closer connection with the natural (i.e. non-artificial) world. That’s a different programme, although one that, on the surface, I’m sympathetic towards.

12 01 2014
GTChristie

Tim, I like the fact that reason plays a role in making moral judgments or decisions at some point in the process, even if it’s dead last (after impression and emotion have occurred). At least there’s room for reason there, where most of the 20th century left it all emotive. Reason is something an agent would need to engage on purpose, though. And many agents might never reason at all, going on the gut. Maybe that’s what marks the difference between village ethos and civilization: the latter depends on convincing people to stop and think before they hurt somebody.

Re: moral facts. The tentative fact Mark wrote out “…something to the effect of ‘Moral behaviors are behaviors that increase, on average, the benefits of cooperation in groups by means of acts that are unselfish at least in the short term’” contains too many cognates to be a naturally occurring fact of nature. It’s full of concepts and relationships among them that may or may not be facts themselves. Tim’s exposition is more of the form “moral behaviors evolved because” rather than “moral behaviors are” — and the “because” is fairly simple and direct. (After all, this is Ockham territory, right?)

Having said that (and always the skeptic), I think the “cooperation” vocabulary is useful (“good cooperators survived”) and that strategy is compatible with evolutionary theory (“certain emotions evolved”). But when I see “embarrassment” and “guilt” listed among the relevant emotions, I think we’re making things more elaborate than they really are. We have to be careful not to project our expected conclusions onto our premises. Embarrassment needs physically unique characteristics to distinguish it from “just another form of fear.” I get suspicious when definitions become too rarefied. “What has to be true, to call this a fact?” is the question we need to ask repeatedly.

13 01 2014
Mark Sloan

Hi GT,

I hope Tim does not mind my using his thread to reply to your objection that my proposed descriptive fact about morality is incompatible with being a “naturally occurring fact”.

Its complexity is only superficial. It is due to my necessary use of existing language which was defined for other purposes.

I could equivalently state my claimed descriptive fact about what morality ‘is’ much more simply as “Moral behaviors are altruistic cooperation strategies”. But few people know what altruistic cooperation strategies are. (See Nowak “Evolution, Games, and God: The Principle of Cooperation” and Gintis “The Bounds of Reason: Game Theory and the Unification of the Social Sciences”.) The complexity you object to just defines altruistic cooperation strategies.

Alternatively, I could state my claim as “Moral behaviors are strategies that solve the cross-species universal cooperation/exploitation dilemma”. This is the social dilemma of how to reliably obtain the benefits of cooperation – which commonly exposes one to exploitation – without being exploited, which would destroy future benefits of cooperation. Finding biological and cultural solutions to this universal dilemma is what made our ancestors such successful social animals.

I argue the ‘truth’ of my claim based on standard criteria for truth in science, but principally explanatory power for 1) the origins of the biology underlying our ‘moral’ emotions such as empathy, loyalty, shame, guilt, and indignation, 2) virtually all past and present enforced moral norms, no matter how diverse, contradictory, and bizarre, and 3) Jonathan Haidt’s empirically discovered six cross-culturally universal bases for making moral judgments: harm, fairness, freedom, loyalty, authority, and purity.

Also, including indignation, shame, and guilt as ‘moral emotions’ is not “projecting our expected conclusions”. Game theory shows that punishment is a necessary part of all altruistic cooperation strategies. This is why we commonly feel indignation when people violate moral norms we hold dear and think violators deserve punishment (at least social disapproval). But punishing other people’s bad behavior is tricky and can backfire due to cycles of retribution. That is why our ancestors evolved shame and guilt: to efficiently provide internal punishment when we violate moral norms.

Biology makes no sense except in the light of evolution. Morality makes no sense either, except in the light of evolution.

Tim’s moral ecology seems to me to be consistent with my position. But Tim first adds the straightforward observation that, due to different circumstances, no single set of moral norms (altruistic cooperation strategies) will be able to maximize the benefits of cooperation in all societies, and then he describes that observation’s implications for cultural moral codes.

Mark

13 01 2014
GTChristie

“Moral behaviors are altruistic cooperation strategies”
Well I know what that means. And it is succinct. And I admire it; it’s clever and better than the original statement. But in the succinct version “altruistic” is still an extra cognate there.

If this is a statement of empirical fact, then cooperation strategies that are not altruistic would falsify the definition.

To get out of that, you’d have to say that only altruistic cooperation strategies are moral behaviors. If you want to go with that, okay. But I’d have to disagree, if you go there.

The only point I was making: a complex process is going on, admittedly, but even the improved succinct description is preloading “cooperation” and “altruism” and “behavior is a strategy” into the definition of “moral behavior” before the process is understood. What if moral behavior is not actually about cooperation, but cooperation is just a happy by-product when everyone has the same moral framework? (Just to throw out an alternate possibility.)

I’m not criticizing your formulation (I tend to agree with what you’re after here). But you are stating a complex definition with too much in the bundle to verify as a single fact.

13 01 2014
Mark Sloan

GT,

Thanks for your reply.

I can readily defend the claim that “Moral behaviors are altruistic cooperation strategies” is factually ‘true’ in the normal provisional sense of science as a description of what morality ‘is’. However, “Moral behaviors are cooperation strategies” (which you are suggesting might be true?) includes behaviors, such as mutualism and coordination, which can increase the benefits of cooperation but are not moral behaviors, because they do not risk exploitation or require bearing some other cost. Risking exploitation in order to increase the benefits of cooperation defines the necessary ‘altruistic’ aspect of morality.

So far as I know, every single aspect of my claim about what morality ‘is’ is required in order for the claim to be true in the normal sense of science. There are no extra parts. Leaving off any part, such as the altruistic aspect, would trash its explanatory power and lead to contradictions with known facts about past and present enforced moral codes and evolutionary game theory.

It all hangs together, or it all falls apart. I don’t see how to prove it ‘true’ element by element as you seem to be suggesting.

Mark

13 01 2014
Tim Dean

Interesting discussion. I’ll only add a few comments into the mix.

I’m sympathetic with Mark’s formulation of morality as “altruistic cooperation strategies”, but there are some problems to be overcome. The first is moral norms that have existed but that are either poor at promoting cooperation or that actually promote injustice or harm. The former might include norms that encourage such harsh punishments or strong retribution that they erode cooperation on average. The latter might be norms that entrench privilege in a minority at the expense of a majority.

To account for these I first distinguish between the normative form of moral norms and their content. On the first, a moral norm is any norm that appears to carry binding and inescapable authority (like a categorical imperative) – i.e. it can’t be overridden due to subjective preferences or prudential concerns. All cultures appear to have them, and they concern everything from proscriptions against harm to rules about dress codes and ways of speaking.

As for the content, I use a functional definition, suggesting the function of morality has been to help solve the problems of social living and help facilitate prosocial and cooperative behaviour. Function here is in the biological sense – the function of an organ is the activity that explains why it was selected by evolution (i.e. the heart’s function is to pump blood, not to make a thumping noise).

However, few moral systems have been created with this function explicitly in mind; rather, they have been innovated and spread through cultural evolution – what has worked has tended to survive over time. Due to this messy cultural evolutionary process, we would expect there to be many sub-optimal moral norms – i.e. norms that are not terribly good at satisfying the function of morality. We would also expect there to be “corrupt” moral norms, created by those in power to entrench their power.

These sub-optimal and corrupt norms are moral only to the extent they possess the categorical normative form. However, in terms of their satisfying their function, they are not good moral norms, in the same way that a law enforced by a state that promotes injustice is still a law, it’s just not a good law.

As for moral emotions like guilt and empathy, these are the product of evolution, and do motivate moral behaviour. However, we have also evolved self-interested emotions, and the moral emotions are also error prone. That’s why we needed to invent normative systems enforced by punishment to elevate cooperation above the levels achievable through our limited built-in altruistic tendencies.

Don’t know if that clarifies any issues.

14 01 2014
GTChristie

Although it’s primarily a discussion of Sam Harris’ “The Moral Landscape,” a major section in the middle of the following blog post discusses altruism and norms and some other related issues mentioned here. In the last paragraph of Tim’s reply above, the phrasing “we needed to invent normative systems … to elevate cooperation above the levels achievable through our limited built-in altruistic tendencies” reminded me that I’ve blogged about this in the past. There is a point where “moral emotions” (remember, they are primitive) leave off, and norms (especially laws) begin. I have always said that a cultural process creates them (after all, that’s where the variation is, in human societies). Tim’s last points above point directly to a need to understand where biological evolution leaves off and cultural development begins. (I purposely avoid saying “cultural evolution,” mostly for clarity.) Any “science” of morality needs to explain as much as possible biologically, but at the same time attribute to biology only what can be selected by evolutionary process – and leave the rest to cultural processes. (I tend to see those as “cultural dynamics,” which explains my enthusiasm for Tim’s “moral dynamics” idea … if I’m brilliant on that point, so is he. LOL.)
