How do humans make moral judgments? This has been an ongoing and unresolved debate in psychology, and with good reason. Moral judgments aren’t just opinions; they are the decisions with which we condemn others to social exclusion, jail, and even violent retaliation. Given their weight, moral judgments are often assumed to be rational, though recent psychological research suggests that they may be more like gut feelings. While debates over whether moral judgments are deliberate, conscious attributions or automatic intuitions have been fruitful both theoretically and practically, the next direction in moral research needs to take a pragmatic turn. Rather than continue to ask whether moral judgments are deliberate or affective, it’s time to ask when they are deliberate, when they are affective, and how these two types of reasoning jointly inform judgment.

Evidence for deliberate moral judgment comes from research such as Cushman and Young’s (2011), which argues for a model in which people use rational cognitive processes to assess the cause of a potential moral wrong and that wrong’s effects. Their research presented participants with situations in which one person acted in a way that affected another. The researchers manipulated information about who caused a wrong, whether they intended to cause it, and what harm was done. These facts were shown to change participants’ moral judgments, suggesting that judgment is deliberate. This theory emphasizes the ‘correct’ assessment of moral facts, with minimizing harm and maximizing well-being serving as the two aims of morality. It is ultimately a theory of rational decision-making: people observe the world, determine whether the facts match their overall moral code, then make moral judgments. This model probably matches how many people think about their own moral stances: reasonable, fact-based, and most importantly, correct.

Intuitionist theorists such as Jonathan Haidt, however, have demonstrated that our moral reasoning is sometimes quite unreasonable. Haidt argued that affective responses, such as feelings of disgust, serve as the primary motivator for moral judgment (Haidt, 2001). In this theory, natural or socially learned intuitions about what is right or wrong prompt snap judgments of a given moral situation. Rather than deliberating over the causes, intent, and effects of harmful actions, Haidt argues, people often issue prohibitive moral judgments in response to harmless but hard-to-justify situations. For example, a scenario in which a man has sex with a dead chicken and then cooks and eats it provokes negative moral judgments despite no clear harm being caused. Participants, when faced with such an evocative but harmless scenario, couldn’t come up with a rational justification beyond “it’s wrong”. This moral dumbfounding is used as evidence that affect is primary in moral judgment, and that rational justifications are just post hoc reasoning.

If this debate sounds philosophical, that’s not a coincidence. In some ways, these two theoretical camps mirror the philosophical traditions of deontology and consequentialism. Deontology is the philosophical view that morality comes from a central tenet or rule, which is then applied invariably to an observed moral situation to produce a judgment. A deontological view of morality would hold that lying is always wrong, even if the lie would protect someone. This contrasts with consequentialism, which holds that moral assessment follows from events: the causes, effects, and other circumstances are evaluated together to form a judgment. In a consequentialist view of morality, a lie is only bad if it hurts someone. If the lie would protect someone’s feelings or safety, then the positive consequences of the act make it acceptable or even obligatory. For Cushman and Young, the logical assessment of intent, cause, and effect is all part of the harm calculation that consequentialist morality is based upon. By contrast, Haidt’s moral dumbfounding is meant to demonstrate that people have emotional prohibitions that operate even when harm is explicitly absent. A moral prohibition that keeps its moral valence regardless of circumstance has the hallmarks of deontological moral reasoning, the rules of which Haidt argues come from cultural history and emotional reactions rooted in human evolutionary history. So while this debate is contemporary and relevant to our day-to-day lives, it has a long history. No surprise it’s still unresolved!

While it may seem like we’re doomed to debate these points into eternity, there may be another way. Ditto and Liu (2012) complicate this theoretical dichotomy with work that focuses on moral conflict and the relationship between moral convictions and moral facts. While agreeing with the premise that affect plays a strong role in moral judgment, Ditto and Liu argue that consequentialist moral judgments, like the kind studied by Cushman and Young, do require factual assessment in order to justify themselves. When the facts fail to support a conviction, the result is cognitive dissonance, which in turn can alter moral affect. One way this dissonance is resolved is through a disputation of facts: people with strong moral convictions tend to be highly invested in discounting or ignoring evidence that might undermine their sense that their position is factually as well as morally superior. The example they give is the death penalty. A person with a vested interest in ending the death penalty opposes it not just because they believe it is the right thing to do (killing is wrong), but because they believe it is the best thing to do (the death penalty does not deter crime). Ditto and Liu call situations in which deontological intuition clashes with consequentialist fact ‘moral conflict’. In their view, morality may indeed be the product of deontological intuitions, but humans do not perceive or assess their morals as simple rules they have chosen; rather, they perceive them as reflections of the factually best way to live. In this way, moral rules move from simple prohibitions to a collection of prudential, logical ways of living and not living.

This accounts for how we may anecdotally experience our moral beliefs. It also suggests that the facts we consider central to our moral judgments might be subject to motivated reasoning: the biased consumption of facts. Further research has shown how this moral realism can be manipulated, providing some evidence for a moral system based on deontological rules that are rationalized post hoc. By manipulating the deontological rules on which affective moral judgments are hypothesized to be based (e.g. killing a person as punishment is wrong), researchers could lead participants to temporarily alter or soften their positions. Specifically, participants were randomly assigned to read essays that argued for or against capital punishment, but did so in ways that did not engage the facts about capital punishment. For example, a pro-capital-punishment essay would dwell on the importance of justice, casting people guilty of premeditated murder as subhuman monsters and stating that the death penalty was the only closure good enough for families. When participants were later asked their views on capital punishment, those who read the pro essay were more favorable to the practice than those who read the anti essay. More importantly, these participants then discounted evidence that contradicted the position they had been manipulated into supporting. For example, people in the pro-death-penalty condition expressed that the death penalty was a good deterrent to crime, and downplayed its harms. This suggests we’re quite deliberate in our moral reasoning, but only when the facts make us look right.

If we believe that this model of deontological fact-seekers is a fitting one, what then? Are deontological judgments just affective feelings? While Ditto and Liu’s work suggests that they may be, that question is still somewhat open. One clue may lie in the impact of moral wrongs on emotional expression, as studied by Paul Rozin and colleagues. Rozin et al. (1999) found evidence that specific types of moral violations, such as harming someone or lying, provoked predictable emotional responses in participants, such as anger. Rozin and colleagues argue that these findings underscore the importance of affect in moral reasoning. If moral emotions are linked to moral violations, and moral judgments are based on intuitive deontological stances, could manipulating the emotional state of a participant manipulate their moral judgment? Would this manipulation also alter the way individuals make attributional assessments about relevant facts, and how those facts contribute to the moral justification of their judgment? Research into these questions would not only contribute to the debate about how deliberate and intuitive reasoning initiate moral judgment, but also suggest a new way of assessing moral decision-making. If being angry can alter the assessment of facts about an important moral issue, are those judgments really as informed and rational as we would like to hope? If a lawmaker has a bad flight, might they be more likely to disregard new facts about a contraception bill? Better understanding how affect influences moral decision-making and the assessment of relevant facts can help us understand how much our day-to-day context alters moral decisions we treat as core to religious, civic, and personal identity.


Joseph Tennant is a PhD student in Comparative Human Development at the University of Chicago. His research focuses on the cultural psychology of religion and its effects on morality, learning, and theories of causality. His upcoming dissertation is a comparative study of Evangelical Christians and Atheists, and the differences in their moral reasoning.


Cushman, F., & Young, L. (2011). Patterns of moral judgment derive from nonmoral psychological representations. Cognitive Science, 35(6), 1052-1075.

Ditto, P. H., & Liu, B. (2012). Deontological dissonance and the consequentialist crutch. In M. Mikulincer & P. R. Shaver (Eds.), The social psychology of morality: Exploring the causes of good and evil (pp. 51-71). Washington, DC: American Psychological Association.

Ditto, P. H., & Lopez, D. F. (1992). Motivated skepticism: Use of differential decision criteria for preferred and nonpreferred conclusions. Journal of Personality and Social Psychology, 63(4), 568.  

Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814.

Rozin, P., Lowery, L., Imada, S., & Haidt, J. (1999). The CAD triad hypothesis: A mapping between three moral emotions (contempt, anger, disgust) and three moral codes (community, autonomy, divinity). Journal of Personality and Social Psychology, 76(4), 574.

Shweder, R. A., & Haidt, J. (1993). The future of moral psychology: Truth, intuition, and the pluralist way. Psychological Science, 4(6), 360-365.