One of us bought a Volkswagen Turbo Diesel Sportwagen on August 24. The car seemed too good to be true, offering three features that are usually incompatible: great performance, great gas mileage, and low tailpipe emissions. A few weeks later, he learned that the impressive specs described to him in the showroom were not correct. Volkswagen diesel engines change their behavior radically when they undergo emissions testing.

Similarly, Gray and his colleagues have presented a bold theory – Dyadic Morality – which claims to offer three features that are usually incompatible. As they put it in their recent post on this blog: “compared with the popular MFT, dyadic morality is more parsimonious, more consistent with recent experimental evidence, and more embracing of cultural differences.” But are these claims also too good to be true?

Dyadic Morality (DM) is certainly parsimonious. It promises to reduce the multitude of moral constructs to one: harm.

“A dyadic template suggests not only that perceived suffering is tied to immorality, but that all morality is understood through the lens of harm” (Gray, Young, & Waytz, 2012, p. 108).

“. . . moral disagreements can be understood with one simple question: ‘What do liberals and conservatives see as harmful?’” (Schein & Gray, from their SPSP blog post)

This is an exciting proposition. If true, it could sweep away more complicated pluralist theories like Moral Foundations Theory (MFT; Graham, Haidt, Koleva, Motyl, Iyer, Wojcik, & Ditto, 2013) and Relational Models Theory (Fiske, 1991; Rai & Fiske, 2011). But let’s look closely at how DM has been tested. The impressive specs we read about in the showroom (the introduction and discussion sections of the journal articles) may not really be justified by what happens in the testing garage (the methods and results sections).

Preparing for the tests (theoretical issues)

On the way from the showroom to the testing garage, Gray and his colleagues make a number of modifications to the theories in question. The main change is that the strong “harm hypothesis” described above (that all morality can be reduced to harm perception) morphs into a very weak and uncontroversial one: “we test six predictions of dyadic morality, which can be summarized as follows: Harm is central in moral cognition for both liberals and conservatives” (Schein & Gray, 2015a, p. 1147). The concept of “centrality” is operationalized in their papers as being merely more important, frequent, or accessible than other moral concerns, which still exist as independent concerns explaining substantial variance. The stunning parsimony promised to us in the showroom is lost. Nor is the claim that harm is “central” or most important new; we have been reporting since 1993 that educated Western cultures have a mostly “harm-based” morality (Haidt, Koller, & Dias, 1993). But harm is not all there is to morality, even among Westerners (Koleva, Graham, Iyer, Ditto, & Haidt, 2012).

Gray and his colleagues make a second questionable move prior to testing: they amplify the contrast between DM and MFT still further by morphing MFT in the opposite direction – from weak to strong. They present MFT as a theory that employs the strong modularity of Jerry Fodor, in which modules are completely distinct, domain-specific, and fully encapsulated processing systems. Modules (in their version of MFT) work in isolation to produce binary outputs (yes or no) on the presence of harm, unfairness, betrayal, disrespect, or degradation. Gray et al. then claim that if a purity violation (such as masturbating to a picture of one’s sister) produces judgments of harm as well as degradation, then the strong modularity of MFT has been contradicted.

But in fact MFT employs the weak notion of modularity developed by the anthropologists Dan Sperber and Larry Hirschfeld (2004). As we explained in our main statement on the topic of modularity, the foundations are developmental constructs – they refer to what is innately given as part of the “first draft” of the evolved human mind, which then gets elaborated in culturally specific ways:

“Each of these five [sets of concerns] is a good candidate for a Sperber-style learning module. However, readers who do not like modularity theories can think of each one as an evolutionary preparedness (Seligman, 1971) to link certain patterns of social appraisal to specific emotional and motivational reactions. All we insist upon is that the moral mind is partially structured in advance of experience so that five (or more) classes of social concerns are likely to become moralized during development” (Haidt & Joseph, 2007, p. 381, emphasis added).

Gray and his colleagues use this bait-and-switch approach in a number of recent papers (e.g., Gray, Schein, & Ward, 2014; Gray & Keeney, 2015; Schein & Gray, 2015a): a weak version of the harm hypothesis – in which harm (expansively defined to include any kind of personal, group, institutional, or spiritual offense) is merely “central” to moral judgment – is contrasted against a straw-man version of MFT featuring five fully encapsulated, biologically based Fodorean processing modules. From these tests, Gray and colleagues claim that the strong version of the harm hypothesis (that all moral concerns reduce to harm, and that all moral judgments come from a single template-matching process) has been vindicated. It has not. The studies said to pit MFT against DM fail to test the claims actually made by either theory. It is as though a software switch is flipped and the harm hypothesis morphs, but just for the duration of the testing.

In the testing garage (empirical problems)

Let’s look more closely at what happens once DM is pulled into the testing garage. In their recent post on this blog, Schein and Gray focus on their paper challenging MFT’s account of the moral differences between liberals and conservatives (Schein & Gray, 2015a). Their provocative claim is that deep down, conservatives think about morality no differently than do liberals: everyone cares only about harm. But there are many different kinds of harmful actions (e.g., impurity is just soul-harm, disrespect is just authority-harm), and so this essentially monist theory is relabeled “harm pluralism.” Conservatives are more likely than liberals to see harm in some places (for example, when someone burns a flag in private), but if you focus directly on perceptions of harmfulness, then the differences between liberal and conservative morality vanish.

One could argue that this simply shifts the explanatory burden out of the purview of their theory and onto a different question (Why do liberals and conservatives see harm in different places?), but let’s focus instead on how they chose to test their claim. How did they show that liberals and conservatives are really the same, despite the many differences reported in the moral psychology literature? Schein and Gray (2015a) conducted seven experiments in which liberals and conservatives could have differed in their judgments. For example, in Study 2, participants rated the immorality of short vignettes describing actions that were harmful, unfair, disobedient, disloyal, or impure. In this study – and six others – they found few cases in which the differences between liberals and conservatives reached statistical significance. This is said to support DM over MFT.

But in fact, the tests were engineered to be extremely easy for DM to pass. Their relatively small MTurk samples (ns from 79 to 111) were subjected to an unprecedented binary split in which self-described liberals were placed in one group, and everyone else – including moderates – was lumped into a group labeled “conservatives.” This is extremely problematic, not only because moderates are not conservatives, but because participants who do not know where to place themselves (including many libertarians and people who are simply nonpolitical) have little choice but to pick the midpoint of the scale, making the “moderates” a hodge-podge of political views. To make matters worse, Schein and Gray justified lumping moderates in with conservatives by saying that it was “consistent with past work,” citing Haidt (2012) as their precedent. But when we asked them by email to clarify what passage in The Righteous Mind justified such a procedure, they could not point to any particular part of the book.
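To see why these design choices matter statistically, consider a minimal simulation sketch in Python. The group means, rating scale, and sample size below are entirely hypothetical assumptions of ours, chosen only to illustrate the logic, not a reconstruction of Schein and Gray’s data: when moderates are lumped in with “conservatives” and the sample is small, even a real liberal-conservative difference will often fail to reach significance, and that null result can then be misread as evidence that the two groups share the same morality.

```python
# A minimal sketch (hypothetical numbers, not Schein & Gray's data) of how a
# liberals-vs.-everyone-else split, combined with a small sample, can hide a
# real liberal-conservative difference behind non-significant results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Assumed population means on a moral-judgment rating (purely illustrative).
TRUE_MEAN = {"liberal": 3.0, "moderate": 3.2, "conservative": 3.9}
SD = 1.5

def one_study(n=100):
    # Roughly equal thirds of liberals, moderates, and conservatives (an assumption).
    groups = rng.choice(list(TRUE_MEAN), size=n)
    ratings = rng.normal([TRUE_MEAN[g] for g in groups], SD)

    lib = ratings[groups == "liberal"]
    con = ratings[groups == "conservative"]
    not_lib = ratings[groups != "liberal"]           # moderates lumped with conservatives

    p_lumped = stats.ttest_ind(lib, not_lib).pvalue  # liberals vs. everyone else
    p_clean = stats.ttest_ind(lib, con).pvalue       # liberals vs. actual conservatives
    return p_lumped < .05, p_clean < .05

hits = np.array([one_study() for _ in range(2000)])
print("Detection rate, liberals vs. moderates + conservatives:", hits[:, 0].mean())
print("Detection rate, liberals vs. conservatives only:       ", hits[:, 1].mean())
```

Under these assumed numbers, the lumped comparison detects the built-in difference less often than the clean liberal-versus-conservative comparison, because the moderates dilute the contrast; that is the sense in which such a test is easy to pass for a theory that predicts no difference.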

There are many other problems with the procedures used in the DM testing garage. For example, fairness repeatedly emerges as another important foundation of morality, not reducible to harm, but these unexpected moral “emissions” do not seem to be fully reported. We describe these problems in detail in a separate blog post. But the core problem running through all the studies is the same: none of them provides evidence against MFT, or for the strong hypothesis that all morality reduces to harm, despite the authors’ claims to the contrary.

What now?

Like Volkswagen, DM has made some excellent innovations. We think Gray and his colleagues are right that judgments of harmfulness are more common than judgments based on other moral foundations – certainly in Western cultures, and perhaps even in more traditional ones. We acknowledge that our prior writings have sometimes given the impression that all moral foundations are equally important in everyday morality. In Western cultures that is surely not the case, even among conservatives. Furthermore, we like and accept DM’s claims about dyadic completion (e.g., Gray, Schein, & Ward, 2014) – the tendency of people to “fill in” a victim or villain automatically when a harmed patient or intentional agent is not obviously present. But just because DM can explain some things does not mean that it can explain everything. Like Volkswagen, DM is a solid product that has become too ambitious, trying to conquer markets for which it is ill-suited (such as political psychology).

Gray and colleagues claim to have “disconfirmed MFT on its own terms” (Gray & Keeney, 2015; Schein & Gray, 2015b), but what they have really done is test it on their own terms, pitting an expansive and forgiving version of their own theory against a cartoonish version of ours. They offer a theory that is too good to be true, and they support their assertions with reports from a testing garage rigged in their favor.

It may be time for a recall.


References

Fiske, A. P. (1991). Structures of social life: The four elementary forms of human relations: Communal sharing, authority ranking, equality matching, market pricing. New York: Free Press.

Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S., & Ditto, P. H. (2013). Moral Foundations Theory: The pragmatic validity of moral pluralism. Advances in Experimental Social Psychology, 47, 55-130.

Gray, K., Schein, C., & Ward, A. F. (2014). The myth of harmless wrongs in moral cognition: Automatic dyadic completion from sin to suffering. Journal of Experimental Psychology: General, 143(4), 1600–1615.

Gray, K., & Keeney, J. E. (2015). Disconfirming Moral Foundations Theory on its own terms: Reply to Graham (2015). Social Psychological and Personality Science.

Gray, K., Young, L., & Waytz, A. (2012). Mind perception is the essence of morality. Psychological Inquiry, 23, 101-124.

Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. New York: Pantheon.

Haidt, J., & Joseph, C. (2007). The moral mind: How 5 sets of innate intuitions guide the development of many culture-specific virtues, and perhaps even modules. In P. Carruthers, S. Laurence & S. Stich (Eds.), The Innate Mind, Vol. 3 (pp. 367-391). New York: Oxford.

Haidt, J., Koller, S., & Dias, M. (1993). Affect, culture, and morality, or is it wrong to eat your dog? Journal of Personality and Social Psychology, 65, 613-628.

Koleva, S., Graham, J., Iyer, R., Ditto, P. H., & Haidt, J. (2012). Tracing the threads: How five moral concerns (especially Purity) help explain culture war attitudes. Journal of Research in Personality, 46, 184-194.

Rai, T. S., & Fiske A. P. (2011). Moral psychology is relationship regulation: Moral motives for unity, hierarchy, equality, and proportionality. Psychological Review, 118, 57–75.

Schein, C., & Gray, K. (2015a). The unifying moral dyad: Liberals and conservatives share the same harm-based moral template. Personality and Social Psychology Bulletin, 41, 1147–1163.

Schein, C., & Gray, K. (2015b). Making sense of moral disagreement: Liberals, conservatives and the harm-based template they share. SPSP Blog post, August 12, 2015.

Seligman, M. E. P. (1971). Phobias and preparedness. Behavior Therapy, 2, 307-320.

Sperber, D., & Hirschfeld, L. A. (2004). The cognitive foundations of cultural stability and diversity. Trends in Cognitive Sciences, 8, 40-46.


Jonathan Haidt (NYU)

Jesse Graham (USC)

Peter Ditto (UC Irvine)