Liberalism and Value Pluralism

25 09 2010

Does a commitment to normative value pluralism logically entail a commitment to liberalism? Isaiah Berlin is a well-known proponent of both pluralism and liberalism, and at times he has appeared to suggest there is a logical connection between the two – although at other times he suggests the connection is only psychological: that it’s choice that makes human beings human, that we are constituted pluralistically, and that liberalism is the best system for enabling these two psychological forces to coexist.

Isaiah Berlin, from Steve Pyke's collection.

I had the opportunity last night to attend a debate between two leading Isaiah Berlin scholars on this notion of the link between liberalism and value pluralism, Beata Polanowska-Sygulska from Jagiellonian University in Poland, and George Crowder from Flinders University.

I have to profess a vast ignorance when it comes to Berlin’s work – he doesn’t feature prominently (or at all) in the philosophical texts I’ve been wading through, although his views about pluralism and liberalism appear to be remarkably close to my own. I shall have to read his work more thoroughly before I complete the normative chapter of my own thesis on evolution and moral pluralism.

From the outset, I can pick one stark point of difference between my views and Berlin’s: he believes in a plurality of incommensurable objective values, such as equality and liberty, whereas I don’t believe any objective values exist. However, that’s not a show-stopper, as I think you can extricate the objectivity from Berlin’s values without too much trouble (and call it a fictionalism, if you will) and the crux of his argument will remain largely the same – the only difference lies in some distant metaphysical justification for his pluralism.

On the connection between pluralism and liberalism, I have a slightly different take from those advanced last night. Mine is, unsurprisingly, informed by evolution and moral psychology. It goes a little something like this:

Humans have been grappling with the problems of social coordination for hundreds of millennia. These problems are effectively: how do you get large numbers of unrelated individuals to live and work together in a way that advances their interests (biological and psychological), without suffering the ill effects of socially disruptive behaviour, like cheating or ‘defection’ (in game theory terms), and without succumbing to invasion by outsiders?

Evolutionary forces have worked such that those individuals who were able to solve these problems more effectively were able to leave a greater number of offspring for future generations. Thus, those individuals who evolved the psychological mechanisms that promote prosocial behaviour and censure socially disruptive behaviour were lent a selective advantage. These psychological mechanisms include the moral emotions, problem-solving heuristics and moral reasoning.

However, there is no one solution to the problems of social coordination that works best in every environment, particularly as the environment is also made up of the ‘strategies’ employed by others, and is thus dynamic rather than static. As such, evolution has not settled upon one set of psychological mechanisms or predispositions, for to do so would have been unstable and left that population prone to ‘invasion’ by other strategies – invasion from within, through mutation, or from without, by other individuals with different psychological makeups.
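To see why no single strategy is stable, consider a toy simulation of the textbook Hawk–Dove game (a hedged sketch of my own, not anything from Berlin; the payoffs and strategy names are standard game-theory conventions): a population of pure ‘doves’ is invaded by ‘hawks’, but hawks never reach fixation either – the dynamics settle on a mixed population.

```python
# Toy Hawk-Dove game: no pure strategy is evolutionarily stable,
# so the population settles on a mix (illustrating the 'invasion' point above).

V, C = 2.0, 3.0  # V = value of the contested resource, C = cost of fighting (C > V)

# Payoff to the row strategy when playing the column strategy.
payoff = {
    ("hawk", "hawk"): (V - C) / 2,  # fight: split the value, pay the cost
    ("hawk", "dove"): V,            # hawk takes everything
    ("dove", "hawk"): 0.0,          # dove retreats with nothing
    ("dove", "dove"): V / 2,        # share peacefully
}

def step(p_hawk, rate=0.1):
    """One round of replicator-style dynamics on the hawk frequency."""
    f_hawk = p_hawk * payoff[("hawk", "hawk")] + (1 - p_hawk) * payoff[("hawk", "dove")]
    f_dove = p_hawk * payoff[("dove", "hawk")] + (1 - p_hawk) * payoff[("dove", "dove")]
    mean = p_hawk * f_hawk + (1 - p_hawk) * f_dove
    # A strategy grows in proportion to how much it out-earns the average.
    return min(1.0, max(0.0, p_hawk + rate * p_hawk * (f_hawk - mean)))

# Start from an almost-pure dove population: hawks invade...
p = 0.01
for _ in range(2000):
    p = step(p)

# ...but stop short of fixation, at the mixed equilibrium p = V / C.
print(round(p, 2))  # 0.67
```

The point is just the equilibrium structure: whenever the cost of conflict exceeds the value of the contested resource, neither pure strategy can resist invasion by the other, so variation persists.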

The upshot – and Edward Westermarck acknowledged this as early as 1906 – is that human psychology varies (genetically as well as a result of environmental influences), and this variation yields a broad spectrum of moral outlooks and values. Westermarck’s passage is as follows:

The emotional constitution of man does not present the same uniformity as the human intellect. Certain cognitions inspire fear in nearly every breast; but there are brave men and cowards in the world, independently of the accuracy with which they realise impending danger. Some cases of suffering can hardly fail to awaken compassion in the most pitiless heart; but the sympathetic dispositions of men vary greatly, both in regard to the beings with whose sufferings they are ready to sympathise, and with reference to the intensity of the emotion. The same holds good for the moral emotions. The existing diversity of opinion as to the rights of different classes of men, and of the lower animals, which springs from emotional differences, may no doubt be modified by clearer insight into certain facts, but no perfect agreement can be expected as long as the conditions under which the emotional dispositions are formed remain unchanged.

As such, value pluralism is an empirical fact about human psychology. But it’s more than just psychological; just because evolution has primed us with this variation, it doesn’t mean it’s good. There could be one moral value that actually serves our interests better than others. However, I don’t believe this is the case. As I mentioned above, there is no one solution to the problems of social coordination. And if you take solving the problems of social coordination to be important, then you will also find pluralism to be important, as it allows a range of solutions to arise, one of which might be the best in any particular situation.

On to the strengths of political liberalism: I don’t actually think promoting autonomy is the fundamental justification for liberalism; rather, it’s that promoting autonomy allows the plurality of values (or ‘strategies’) to work in tension with one another, thus preventing any one strategy from dominating and causing the society to become unstable (or to fall into disequilibrium). Autonomy is a second-order value, but one that enables first-order values to be promoted most effectively.

Thus liberalism is the most effective political framework (that we know of to date) for allowing the various strategies for social coordination to balance each other out, and it best enables a society to meet the challenges of social coordination. It’s not without cost; as Berlin states, there will be situations where values conflict, and you’ll inevitably get dilemmas with no perfect solution. But that’s the price you pay, and it’s a smaller price than adopting a monist approach and placing, say, egalitarianism or order above all other values.

I suspect that Berlin would have disagreed with several points in my account of liberalism, but I think they’re largely matters of detail. On the whole, I think my account is very similar in action to Berlin’s, although I’ll have to read a great deal more of his work to know for sure.


14 responses

25 09 2010
James Gray

You think that we got our values from evolution, but is there any basis to them? You suggest some values might be better than others. Are the “better” ones not actually judged by a highest value — human flourishing?

But that value itself is up for interpretation. Is human flourishing merely personal reproductive advantage? Or can personal fulfillment count for something even if it has nothing to do with reproductive advantage?

25 09 2010
Tim Dean

Hi James. Essentially, I don’t believe there are any objective values, for a number of reasons, but one important one is Mackie’s argument from queerness.

But, I believe as a matter of empirical fact, that most people value advancing their intrinsic interests – those interests we have by virtue of having the properties we do. Things like avoiding harm, seeking good health, reproducing etc. Square pegs seek square holes.

People might think their values are otherwise, and they might pursue the values they think they have, or might pursue proxies of their intrinsic interests like chasing sensory pleasure, but often their psychological values are guesses about their intrinsic interests.

However, crucially, there’s no metaphysical necessity for people to pursue their intrinsic interests. They could choose to pursue other interests if they want. They’re not obliged to pursue interest x by virtue of being rational beings, for example. However, if they do pursue their chosen interests in a way that interferes with the interests of others, it’s pretty likely the others are going to push back. That’s just a matter of empirical fact.

Morality is the tool we use – the system we agree to buy in to – in order to regulate these conflicts of interest. And if we don’t agree to buy in, we’re ‘persuaded’ to buy in. And evolution has helpfully equipped us with a bunch of tools to aid us in behaving morally, although those tools aren’t perfect.

So there’s no true or false or logical necessity about which values we ought to pursue, but if we pursue our intrinsic interests, then having an effective moral code is a pretty good idea.

That said, we both end up promoting ‘human flourishing’, in a sense, so in practice our moral systems basically look the same, they just have different metaphysical justifications.

26 09 2010
Paul

I think that this post might contain an error of fact. At the very least, it is arguable whether liberal democratic government actually promotes a real diversity of value strategies. In my cursory reading of Jonathan Haidt’s work I remember there being a very strong correlation between liberal values and liberal democratic government. Hasn’t the trend been for these countries, regardless of their environment, to become less concerned with purity, deference to hierarchy, and in-group cohesion, and to become more concerned with fairness and compassion towards others?

Maybe the strength of liberal government isn’t that it allows for a radical range of variation in values strategies, but that instead it moves the entire range of variation in a particular direction that is much more stable than others. It at least seems plausible that societies that were less concerned with the maintenance of hierarchy, purity, and in-group cohesion would be much more flexible and adaptable to changing circumstances; so long as they had a minimum amount of in-group cohesion, one might even imagine that they would be better at fighting wars precisely because of their openness.

Just laying out an alternate hypothesis. Also, to cite my inspiration, if not actually my source, I think Machiavelli made a similar sort of argument about the success of ancient Rome as opposed to the Greek cities in the “Discourses on Livy”.

26 09 2010
Mark Sloan

Tim,

I largely agree with your position as described above, from:

“It goes a little something like this:
Humans have been grappling …”

To:

“….There could be one moral value that actually serves our interests better than others. However, I don’t believe this is the case.”

One exception to our agreement is based on the relative uniformity of moral standards within single cultures compared to the great diversity between past and present cultures. This makes Edward Westermarck’s focus on biological sources for differences in moral values appear overstated. That is, reality supports, I believe, the idea that biological differences are not nearly so powerful as cultural differences in determining moral values – but perhaps I have misunderstood Westermarck’s position.

But what I want to address here is your expectation that it is NOT the case that “There could be one moral value that actually serves our interests better than others”. I am puzzled by this statement and believe your expectation is unjustified on two levels.

First, some cultural moral standards produce objectively ‘poor’ results. Here ‘poor’ means results contrary to the needs and preferences of the individuals in the culture (the results the individuals in the culture would prefer as rational choices based on their needs and preferences).

Second, there is a growing consensus in the field of evolutionary morality that science supports the idea that past and present moral behaviors are heuristics and strategies for increasing the benefits of cooperation in groups by acting unselfishly (unselfishly at least in the short term). Since some moral behaviors are objectively ‘better’ in terms of meeting the needs and preferences of the individuals in the group as an empirical fact, then there ARE moral values that actually serve our interests (our needs and preferences) better than others by increasing the benefits of cooperation by acting unselfishly.

I agree with your point that “… crucially, there’s no metaphysical necessity for people to pursue their intrinsic interests” where I would say their relevant intrinsic interests will be met by adopting and practicing the moralities expected to best meet their needs and preferences over a lifetime. We differ in that I see this lack of metaphysical necessity as irrelevant for making moral choices. So far as I can see, such a metaphysical necessity is not part of reality – it demands the existence of “magic oughts”.

Mark

26 09 2010
James Gray

Tim,

But, I believe as a matter of empirical fact, that most people value advancing their intrinsic interests – those interests we have by virtue of having the properties we do. Things like avoiding harm, seeking good health, reproducing etc. Square pegs seek square holes.

Does evolution give us morally relevant information or not? If so, why?

People might think their values are otherwise, and they might pursue the values they think they have, or might pursue proxies of their intrinsic interests like chasing sensory pleasure, but often their psychological values are guesses about their intrinsic interests.

Why is that relevant?

However, crucially, there’s no metaphysical necessity for people to pursue their intrinsic interests.

Right. We couldn’t even say you ought to do something if you had to do it anyway.

They could choose to pursue other interests if they want. They’re not obliged to pursue interest x by virtue of being rational beings, for example. However, if they do pursue their chosen interests in a way that interferes with the interests of others, it’s pretty likely the others are going to push back. That’s just a matter of empirical fact.

Yes, it’s an empirical fact, but what’s it have to do with morality? Is morality rational? If so, then what we “ought” to do might have something to do with rationality.

I don’t know that the word “ought” needs to have a non-conventional meaning any more than the word “health” — even for a moral realist. Such things can have a relation to reality, even if they are conventional.

Morality is the tool we use – the system we agree to buy in to – in order to regulate these conflicts of interest. And if we don’t agree to buy in, we’re ‘persuaded’ to buy in. And evolution has helpfully equipped us with a bunch of tools to aid us in behaving morally, although those tools aren’t perfect.

So is morality a good thing? Should we pretend to buy in but be secretly evil?

So there’s no true or false or logical necessity about which values we ought to pursue, but if we pursue our intrinsic interests, then having an effective moral code is a pretty good idea.

Is it a “good idea” because it’s rational? Is a moral code that stones homosexuals to death not a rational code? Is it an equally good moral code to any other?

Let’s say that there is no such thing as morality at all. Would anything be different in the world? People would find ways to fulfill their desires and find a good life anyway.

Would rational people want an official moral code to help them achieve their goals? Would some moral codes be better than others?

That said, we both end up promoting ‘human flourishing’, in a sense, so in practice our moral systems basically look the same, they just have different metaphysical justifications.

Let’s say that evolution is primarily interested in reproduction, so it gives us values that help promote reproduction. Is that the appropriate definition of “human flourishing” or does it make sense that other interests, such as avoiding pain, also figure into it? Can we “discover” relevant interests other than instinctive ones provided by evolution?

Hume thought that we could reason about means but not about ends. Is that your position or do you think it is possible to reason about ends? Do our interests begin and end with instincts or can we discover or create new interests?

I would argue that we can reason about ends. We can have many experiences that might not be relevant to evolution or instincts, and yet we might find them to be enjoyable (or unpleasant) or valuable. I find this to be a limitation to the evolution standpoint. We could have experiences we find have value (to ourselves if nothing else) and these “interests” of ours can be worthy of consideration.

Various enjoyments we have (of learning, of power, of love) might be instinctive chemical induced rewards to promote certain behaviors, but it might be that we can value things for non-instinctive reasons as well. More importantly, we don’t have to know anything about instincts to realize what it is we value and why.

A while back ( https://ockhamsbeard.wordpress.com/2010/02/25/whats-so-special/ ) I mentioned my post on the anti-realist perspective and alluded to this issue because you were saying that there were only instrumental values. Now you are using different terminology (intrinsic interests), so it’s not clear to me what you think. The term “intrinsic interest” reminds me of “basic desire” and “final end.”

26 09 2010
Tim Dean

@Paul
I think you’re right that fairness and compassion are more successful strategies in today’s massively social world, and that liberalism promotes them above other more intensely conservative values, such as you find in tribal societies. However, should the circumstances change, such as the prospect of invasion from outside, I would expect a liberal democracy to begin favouring more conservative strategies – rigid social cohesion, nationalism, suspicion of outsiders, militarism, hierarchy etc. We saw this, to some extent, post September 11 in US politics. But you’re right to say that modern liberal democracies are heavily tilted towards Haidt’s liberal pole compared to tribal society, and these values are more successful in today’s world.

@Mark
I actually think we do agree more than you’ve suggested. My comment about it being possible there is one moral value that serves our interest is not to suggest that I believe there is one – just that it’s possible, and that I can’t logically rule out that possibility, no matter how remote. And I agree that the lack of metaphysical necessity is largely irrelevant to directing moral behaviour.

@James
Evolution gives us morally relevant information in the sense that it tells us facts about us and about the world, and these facts are relevant in making moral decisions. It doesn’t give us fundamental values, however. We need to decide on those values – but I do believe, as a matter of empirical fact, that the core value of advancing human interests will be more or less universal.

On interests, I just want to avoid saying we should chase pleasure, a la hedonism. Pleasure is a heuristic that encourages us to seek out certain stimuli because, in our evolutionary past, such stimuli often advanced our interests. So pleasure is only the heuristic; the stimulus, and whether it actually does or does not advance our interests, is what’s really important.

I actually think moral goodness is a lot like health. As with moral goodness, we define what health is, but it so happens, as a matter of empirical fact, that we largely agree on what health is – i.e. things that suit our biological interests, such as undisrupted metabolism or the absence of disease. So there are objective facts about what makes someone healthy or not, but there’s no logical necessity surrounding the way we define health.

“So is morality a good thing? Should we pretend to buy in but be secretly evil?”

I don’t believe there is any real, objective, intrinsic ‘good’ or ‘evil’ – no external barometer by which we can measure such a statement. I would say that someone who behaves morally, but secretly does so for purely proximate self-interested reasons, is perfectly moral in regard to their behaviour. However, I wouldn’t call them a moral person, because secretly harbouring proximate self-interest would tend to lead them to behave immorally. Sure, I can imagine a logically possible world in which everyone always behaves morally yet is ‘secretly evil’, but that doesn’t tell us much, except that such a strange logically possible world is not this world.

Morality is rational, in that reason can best help us produce a code that advances our interests. But reason cannot dictate that we ‘ought’ to pursue our interests.

A moral code that stones homosexuals to death would be a poor moral code – I can’t see any reason why stoning homosexuals to death could advance our interests in the modern world. Greater cooperation and tolerance would be a better code.

I think I’ve said before that a world without objective values would look identical to this one. Morality would still exist, as I define it.

On interests (I really need to do a more detailed post on this): what I mean by intrinsic interests is almost an Aristotelian idea. We are built in a certain way, and by virtue of that, some things ‘fit’ us better, or are more ‘good’ for us than others. We tend to seek those things out. Pleasure is often an indicator of something that ‘fits’; pain is often an indicator of something that doesn’t ‘fit’, like damage to the body. We, as a matter of empirical fact, tend to advance our intrinsic interests – but there’s no logically necessary reason for doing so. And, as far as evolution can inform us about how we’re made, it can inform us about our intrinsic interests. If we choose to advance them, then a moral code that promotes cooperation is an instrumentally good idea.

Hope that’s answered everything. Probably hasn’t, and I need to elaborate on these points in more detail, but do keep the arguments coming.

30 09 2010
James Gray

Tim,

I think that your view makes a great deal of sense, and I agree with the Aristotelian idea of “intrinsic interest.” That is what I mean by “final end” (or close to it.)

At the same time, much more needs to be said. I think pleasure can be rationally sought for its own sake. There can be a rational sort of hedonism, but an egoistic, short-sighted, foolish sort of hedonism neglects other goals and can harm others.

30 09 2010
Mark Sloan

Tim, I understand we are both value pluralists based on our claims of evolutionary origins for cultural moral standards. We both (?) claim that human moral behaviors and virtually all past and present cultural moral standards are diverse and even contradictory strategies and heuristics for increasing either, as you say, “coordination” in cultures or, as I insist, only that special kind of coordination that produces, on average, specifically the “benefits of cooperation instigated and maintained by unselfish behavior”.

Consistent with the above claims, and based on empirical historical evidence showing the success of liberalism in creating cultures that meet the needs and preferences of their populations, I understand we are also both liberals in the sense of believing in the importance of individual liberty and equality.

Where we may differ is in the importance of liberalism – specifically, how the importance of individual liberty with regard to moral questions and equality of moral beliefs should be weighed against the relative ‘badness’ of a subculture’s or an individual’s moral standards. Here ‘badness’ is judged by how well the moral standard fulfills the objective evolutionary function of moral behavior to, as you say (?), increase “coordination” or, as I insist, to increase a special kind of “cooperation”. (Here ‘function’ describes the dominant reason that moral behavior exists in human cultures.)

Your position that “no objective values exist” would seem to emphasize the primacy of individual liberty with regard to moral questions and equality of moral beliefs. Whereas, my position that there is an objective evolutionary function of moral behavior (to increase the benefits of cooperation in groups) would emphasize the primacy of the effectiveness of a cultural standard in increasing the benefits of cooperation within that sub-culture.

So I expect that my view of the evolutionary origins of moral behavior leads me to a less liberal liberalism than does your understanding of the evolutionary origins of moral behavior.

For readers not familiar with the arguments of evolutionary morality, I should point out that evolution and science can provide no justificatory force beyond rational choice for accepting the burdens of any morality. Further, someone who accepts the burdens of any morality as a rational choice (the choice expected to best meet their needs and preferences) is in no danger of committing the is/ought fallacy because no “magic oughts” are involved.

10 10 2010
Joe

Interesting discussion. I thought I would drop in (as a stranger) just two points.

1. Berlin’s understanding of the relationship between value-pluralism and liberalism is a key point of contention. I would suggest John Gray’s book as a counterpoint to Crowder, in particular. In his own writing, Berlin responds (with Bernard Williams) to Crowder (who at the time suggested that value-pluralism undermined the case for liberalism) by suggesting that the two were compatible but not necessarily related. That is, Berlin thought that a belief in value-pluralism could find many outlets in a liberal society that gave individuals space to make their own choices – this is essentially where his views on liberty and pluralism meet. But his commitment to pluralism leads him to acknowledge that there are other ways of living that would also be good – because liberty is a value that will conflict with others, and as much as he may have been a partisan of liberty, Berlin suggests there are other ways of creating a good society that, while preserving some space for human freedom, would be committed to other central values. His main concern in addressing the question of universal values was in preventing cruelty, meaning that a society that didn’t preserve some measure of space for freedom, love, justice, etc. would be inhumane.

Also, Berlin’s defense of universal values is distinctive: it essentially reflects two claims, (1) that there are certain values that are central to our understanding of what it is to be human (an analytic but historical necessity – as he would acknowledge, the boundaries of this definition have changed and will change), and (2) that there are particular values that most of humanity has valued most of the time (he’s quite clear that this is an empirical claim subject to verification, and to refutation by those defending rationalist accounts of value). You can pick up the main strands of this argument in his essay ‘Does Political Theory Still Exist?’, as well as the extensive critical introduction to “Liberty”.

(2) There strikes me as a large but unaddressed problem in the various attempts to link morality and evolution.

First, what are the definitions of, and the relationships between, impulses/drives, interests, and values?

I can see how genetic selection, which determines the physical processes in our body/development, affects our impulses. But given that genetic selection takes place through reproduction, all we can safely say is that human evolution has eliminated those impulses that made it very unlikely for human beings to be able to procreate (which would largely imply those that resulted in death before puberty – as there is no clear reason various anti-social behaviors would do much to limit the occurrence of procreation).

So, it would seem to me the claim that evolution has led to the expression of interests and values is questionable on two fronts. First, how would natural selection have controlled for impulses that didn’t tend to result in early death, in a way that is not essentially social – such as certain behaviors being socially stigmatized and punished? Second, I have not yet seen clear evidence linking genetic variations to complex psychological interests, much less to intersubjective social values, in a way that would make the claim that ‘cooperation is a product of our evolution’ plausible. What is the mechanism whereby a gene that expresses some variation in the physical structure of the organism gives rise to interests and values?

Within the evolutionary morality literature (modest though my knowledge is), it seems what they are really talking about is some form of “social evolution”, in which particular behaviors are selected out within the group through exclusion, or in which groups with different strategies survived while others perished. But this then makes the talk of evolution a social and not a physical phenomenon – and all the old questions of morality come back again.

Even if one wants to claim that humans want to pursue their interests, or want to be healthy – which seem on some level basic – there are actually some massive assumptions at work. First, these sorts of interests and values are so vaguely stated that they would have to be far more specific before they could be usefully action-guiding. And once we have introduced that degree of plurality, what is the take-away from a claim that ‘there’s a universal interest in pursuing one’s goals’? If the type of psychological subject, the form of communal life, and the means and ends pursued are massively varied, then the significance of the initial claim falls out. Both the idea of interest and that of health are tautological – which is fine if you see morality as a social phenomenon in which we have to construct the meaning of interest and health, as that’s where the interesting work is to be done. But that’s rather more of a problem for the suggestion that evolution results in interests and values, whether universal or not. I can see how the facts of our physical existence play into moral questions, but I don’t see how they actually answer fundamental moral questions.

Sorry to go on like that.

10 10 2010
Mark Sloan

Joe,

I am an ardent supporter of the idea that there is a universal basis of ‘moral behaviors’.

As an exercise for myself, which may or may not be of interest to you, I will attempt to answer your Part 2 questions from the perspective of what I understand to be a growing consensus in the field of evolutionary morality.

You are rightly skeptical of the most commonly presented hierarchy of causes regarding evolution and morality. Specifically, the hierarchy described as: 1) genetic evolution exploits genetic variations, which 2) create genetically determined specific psychological tendencies that, on average and in environments where there are synergistic benefits of cooperation, increase reproductive fitness, and 3) these specific psychological tendencies cause people to preferentially select cultural moral standards (wildly diverse and even contradictory as past and present moral standards are) as a part of cultural evolutionary processes which are parallel to but separate from genetic evolution.

There are many logical problems with the above hierarchy. I could describe them, but prefer to describe an alternate hierarchy of causes I am comfortable defending. That hierarchy of causes is:

1. Game theory shows that in environments where there are synergistic benefits of cooperation for independent agents, unselfish behaviors that initiate and maintain cooperation can be evolutionarily selected for and be stable in competition with agents using purely selfish strategies. For example, in such environments, unselfish behaviors can spontaneously appear even among simple, self-interested computer programs set up to ‘evolve’ by randomly changing behaviors.

2. There are many biological examples of organisms, even at the level of bacteria, behaving apparently unselfishly in order to exploit the benefits of cooperation. (In bacteria, acting unselfishly means acting in ways that eliminate or reduce their own ability to reproduce in order to increase the likelihood of genetically similar bacteria reproducing.) In people, apparently genetically grounded emotions such as empathy, a sense of fairness, a willingness to risk injury and death to defend family and friends, shame, and guilt all appear to motivate behaviors that, on average, will increase the benefits of cooperation. In people, the behaviors motivated by these emotions are commonly called ‘moral’ and can be understood as strategies to exploit the reproductive fitness benefits of cooperation.

3. But cultural moral standards also appear chosen and molded by the same selection force, the benefits of cooperation. Groups can choose or mold the cultural moral standards they will enforce based on the material and emotional benefits they expect will result. For instance, the Matthew 7:12 version of the Golden Rule is wonderfully apt advocacy for initiating and maintaining both direct reciprocity (I do something for you and you may do something for me) and indirect reciprocity (I do something for someone who is not expected to reciprocate and others who observe this may do something for me in the future). Both are well-established strategies from game theory. Sometimes reproductive fitness may actually be decreased by acting according to a moral standard that otherwise is expected to increase material or emotional well-being. The evolution of cultural moral standards thus disconnected ‘moral behavior’ from reproductive fitness.
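The indirect-reciprocity idea in point 3 can also be sketched in a few lines. This is a deliberately tiny illustration of the ‘image scoring’ mechanism; the agent names, payoff numbers, and helping rule are my illustrative assumptions, not anything taken from Matthew 7:12 or from a specific published model:

```python
# Indirect reciprocity in miniature: a discriminating agent helps only
# partners in good standing, and helping is observed, which keeps the
# helper's own reputation good. COST/BENEFIT values are assumptions.
COST, BENEFIT = 1, 3  # helping costs the donor 1, gives the recipient 3

def donation_round(pairs, reputation, payoff):
    """Run one (donor, recipient) donation round for discriminating agents."""
    for donor, recipient in pairs:
        if reputation[recipient]:     # help only those seen as 'good'
            payoff[donor] -= COST
            payoff[recipient] += BENEFIT
            reputation[donor] = True  # observers record the good deed

agents = ["A", "B", "C"]
reputation = {a: True for a in agents}
payoff = {a: 0 for a in agents}

# "I do something for someone who is not expected to reciprocate, and an
# observer later does something for me": A helps B, B helps C, C helps A.
donation_round([("A", "B"), ("B", "C"), ("C", "A")], reputation, payoff)
assert all(payoff[a] == BENEFIT - COST for a in agents)  # everyone nets +2
```

Because BENEFIT exceeds COST, every agent playing the discriminating strategy ends up ahead even though no one is repaid directly by the person they helped, which is the toy analogue of indirect reciprocity.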

Both our genetically determined ‘moral emotions’ and cultural moral standards appear to have been selected for based on their effectiveness as strategies to exploit the benefits of cooperation in groups.

If the above hierarchy of causes of moral behaviors is to be accepted as an established part of science, then we should expect to be able to show that something like the following assertion is provisionally ‘true’: “Virtually all past and present cultural moral standards advocate behaviors that are unselfish (at least in the short term) and, on average, increase the benefits of cooperation in groups”.

I believe I can show that this is the case. Virtually all past and present cultural moral standards ARE strategies (some much more effective than others) from game theory for increasing the benefits of cooperation in groups. The wild diversity and contradictions in cultural moral standards are almost all due to different choices about 1) who is in the group that will receive most benefits (men, your family, your race) and who will be exploited (women, strangers, other races), and 2) what markers of membership in the privileged ‘in’ group will be used (untrimmed beards, not eating certain foods, only heterosexual sex, and so forth).

Further, we would expect such a hierarchy of causes to explain moral puzzles such as “Why has morality been so difficult to understand?” I am happy to say it can, and the answer is, to me, both simple and entertaining.

Finally, the above understanding of morality (roughly “Moral behaviors increase, on average, the benefits of cooperation in groups by being unselfish at least in the short term.”) describes a strategy from game theory that reflects the fundamental nature of reality. This definition of morality is thus universal in the fullest sense of the word for all agents from the beginning of time to the end of time (assuming physical reality as it affects game theory remains constant).

Mark

11 10 2010
Mark Sloan

Joe,

Rereading my last comment, I see I only provided a background for answering your questions rather than answering them directly. I would be delighted to answer your questions directly based on my perspective if you have any interest.

Mark

12 10 2010
Tim Dean

Hi Joe. Thanks for the comment. On your first point, I’ll give Berlin’s essay a read – I’m keen to get a better grasp on what he has to say, as it seems fairly similar to my own inclinations.

On your second point: evolutionary ethics is often poorly constructed, and is often riddled with errors or problems. I’m attempting to provide a descriptive account of the origins of our moral sense and for the moral phenomena in the world (namely that morality feels objective, yet there’s tremendous moral diversity in the world) drawing on evolution. But I’m careful not to draw strict normative conclusions from the facts of evolution – at least, not in a trivial sense.

“First, What is the definition and relationship between impulses/drives, interests, and values?”

Interests are interesting (excuse the pun). I’ve written about them elsewhere on this blog (and need to write more), but basically I suggest there are several types of interests:
1) genetic (or ‘ultimate’) interests, which our moral sense – like our other psychological faculties – evolved to serve;
2) intrinsic interests, which are the interests we have by virtue of our physical and psychological properties – such that damage to our bodies is against our intrinsic interests while proper nutrition is for our intrinsic interests. They’re not necessarily our genetic interests because they’re about us as an organism rather than our genes;
3) psychological (or ‘proximate’) interests, which are what we believe our interests to be. Many of the mechanisms that feed psychological interests are heuristics evolved to serve our ultimate interests first, our intrinsic interests second – things like pleasure or hunger.

I would suggest that evolution can help explain all of these interests – but it cannot justify us pursuing any of them by itself. I would add, though, that as an empirical fact, most people pursue their intrinsic interests, and a moral system that serves intrinsic interests would probably be the most successful.

“I have not yet seen clear evidence that links genetic variations to complex psychological interests, much less intersubjective social values.”

Evolution shapes faculties, and these faculties give rise to complex behaviours and interests. Emotion is one example. Moral emotions, such as empathy, guilt and disgust, are universal, although the triggers that set them off are culturally primed. Evolution doesn’t deliver us pre-equipped with any moral beliefs or psychological interests though.

“I can see how the facts of our physical existence play into moral questions, but I don’t see how it actually answers fundamental moral questions.”

I should add that on the ‘fundamental’ level, I’m a moral anti-realist. I don’t believe there are any metaphysically binding oughts that can be derived from facts about nature, or non-natural facts (since I don’t believe there are any non-natural facts).

In Joshua Greene’s language: I dismiss moral(1) – the belief that morality is truth-seeking and that moral facts exist; but I embrace moral(2) – the belief that morality is about social coordination and serving the interests of others.

There’s nothing binding – in the Kantian or moral realist sense – about moral(2), but a moral(2) system would look and work much like a moral(1) system, just without the metaphysical baggage, and with more room to negotiate. In fact, I reckon it’d look a lot like Berlin’s liberal pluralism.

13 10 2010
Joe

Interesting responses – and I’m glad to say much clearer thinking on evolution and morality than is sometimes the case.

In response to Mark:

What does it mean to say “game theory shows”?

If the idea is that it reveals a conceptual aspect of evolution – namely that cooperation can be stable in competition with selfishness – without suggesting that it defines a logic of evolution, such that evolution does select for cooperation, then I think I both understand what it means to say “game theory shows” and agree.

I think cooperative behavior among organisms is unsurprising and predictable, so no disagreement there. But what I wonder about is some conceptual fuzziness as we move out from cooperative versus competitive behaviors that can be selected for as they are expressed through genetic variations. First, there’s a jump from strategies based on cooperation to “genetically defined behaviors” such as empathy, fairness, sacrifice, shame, and guilt – it’s not at all clear to me what it would even mean to say these are genetically defined, but my skepticism only extends as far as my ignorance of the link between gene expression and complex social behaviors. I’m fascinated to know if and how that gap would be filled in.

Second, it’s not clear to me that your list is accurately described as either behaviors or emotions – I’d need to know a bit more about how you understand those terms to be comfortable with your next claim that they motivate (further?) behaviors conventionally called “moral”, which exploit the reproductive benefits of cooperation.

I appreciate the distinction between genetically based “moral emotions” (even as I remain uncertain about what that actually refers to) and cultural moral standards. And I think the distinction of “moral behaviors” that don’t necessarily serve reproduction is useful – but I am unclear what it means to then say both genetically based “moral emotions” and cultural “moral behaviors” are selected for based on effectiveness at exploiting the benefits of cooperation, as the previous distinction would suggest that “benefits of cooperation” actually has two very different meanings. On one hand the benefit of cooperative behavior is the same for bacteria and homo sapiens – better chances of reproduction; while on the other the benefits of cooperation are… what? Social survival, equal provision of social goods, meeting basic needs, etc.

This makes the conclusion you want to draw problematic. You suggest that “Virtually all past and present cultural moral standards advocate behaviors that are unselfish (at least in the short term) and, on average, increase the benefits of cooperation in groups.” But this generates a series of questions:

(1) What is the nature of this claim? Is it empirical, such that we could only answer it by an extensive historical examination of moral standards? Or is it formal in some way – such that we would say an absence of such standards is a case of maladaptation?

(2) Where has cooperation become “unselfish”? I bring this up because I think it highlights the broader problem in moving from organic behavior that we would describe as cooperative rather than competitive to cognitive behavior that we evaluate as selfish or unselfish – basically, cooperation is moralized by moving to a language of selfishness.

(3) What are the benefits of cooperation provided to the group? This restates my question, but it seems vital since you suggest that you can show this is the case – what exactly is being shown to be the case? My worry would be that this requires extending the meaning of “benefits of cooperation” to the point that it becomes morally (if not descriptively) vacuous.

Further, I think clarifying what the benefits of cooperation are in cultural “moral behaviors” will complicate the claim that the contradictions/diversity can be explained through (1) who receives benefits and who is excluded and (2) how privilege is marked out.

Finally, I return to the first question: what does it mean to say that strategies from game theory reflect the fundamental nature of reality? Your reference to adaptive computer simulations suggests an understanding of the world that would be usefully described through such a metaphor – so it seems to me that your claim about game theory would amount to something along the lines that these strategies are the necessary outcomes of the hardware and software at work.

I’ll restrict myself to questions, rather than advocating any alternative – as I’m interested in this way of thinking about morality, but my own views would start from a different place.

In response to Tim:

I think we share more ground here, and I’m sympathetic to the question of how morality is developed as a part of our existence as biological creatures.

The distinction between interests is informative. A few further questions:

(1) In regard to ‘ultimate’ interests, what does it mean to say that our moral sense evolved to serve them? Is it anything more than a drive toward reproduction or maintaining ourselves as organisms? Even in that case I would have some trepidation about using the language of interests/service, as I often worry that the way we describe biological/evolutionary processes influences our understanding. ‘Biological imperatives’ might be my preferred term.

(2) Regarding intrinsic interests – again a useful distinction, but it leads me to wonder what space there is between the genetic and the social. This might be clarified by asking which organisms have interests beyond the genetic – do they require cognition, for example? At what level?

(3) Finally, regarding proximate interests, my only question would be to what degree are these social as well as psychological?

I appreciate the distinction between clarifying the source of interests and the justification for pursuing them, and that, practically, a moral system that meets these interests is more likely to be successful (stable) – which raises the question: successful for what?

I think we’re in pretty much complete agreement about what genetics/biology provides – I might be inclined to be a bit more careful and specific about the faculties that evolution shapes, but only because of my limited knowledge of how genes are related to cognitive social behavior. I think it’s obvious that our biological nature (physical and cognitive) is the medium in which we develop capacities for language, memory, self/other-perception, etc. – but is the universalism of specific emotions like empathy, guilt and disgust based in biology or in shared cognitive/social development? Which of course presumes an adequate answer to the question: are these emotions in fact universal, and how do we know?

I share your anti-realism in the terms you describe – and I wish more people that I know interested in (though not necessarily seriously working on) the relationship between morality and evolution would be clearer about these sorts of commitments (the same approbation goes to Mark as well).

The further question I would have, then, is this: your view seems to imply that the most interesting moral questions (to me at least) have to be answered on a different ground. If a moral system needs to take account of our biological nature to be successful, what other standards do we judge/develop moral systems/principles/values by?

I find Berlin convincing on this point – he suggests there are many values in human life, though not an unlimited number; that they can, though will not always, conflict; and that morality is essentially about both what it means to live well by our own standards and how to understand/make the choices endemic in life.

Thanks for the clear and thoughtful responses to this slightly ignorant skeptic.

Joe


16 10 2010
Mark Sloan

Joe,

Your above questions addressed to me are good ones and fully answerable.

I have started four different times to answer them. Unfortunately, I have not yet been successful in producing a compact, clear response that I am satisfied with.

A significant part of this problem is that the field of evolutionary morality is not yet settled science. To answer many of your questions, I cannot simply refer to the literature. Thus, as part of my answers, I have to provide supporting arguments for why my assertions are likely provisionally ‘true’ as a part of science.

It appears that the time has come for me to start a blog with references and supporting arguments that I can refer to in discussions in forums like this.

While it is clear to me that studying morality as part of science will prove to be both culturally useful and intellectually satisfying, I am still in the process of figuring out how to make that case to people who are knowledgeable about moral philosophy.

Mark
