By: Elizabeth Gilbert

When things go wrong, people look for someone to blame, but whom, how much, and why?  We blame others more when they cause the bad outcome (“he crashed my car” is more blameworthy than “he was hit while driving my car”).  And we blame others more when they intend the bad outcome (“he crashed my car on purpose” is more blameworthy than “he crashed it by accident”).  Under the law, legal responsibility is generally based on two separate “elements”: (1) whether you actually caused the outcome (actus reus; for example, did you crash the car?); and (2) your mental state (e.g., intent, knowledge) when the outcome occurred (mens rea; for example, did you do it on purpose?).

The law asks people to consider these factors separately, but people might not be doing what the law asks of them.  For example, our recent studies show that we judge people to be more causal of a bad outcome if they merely should have known that things might go wrong, even when they acted exactly the same way as people who had no reason to believe things might go wrong.  Why?  One hypothesis is that such people are seen as truly having wanted the bad outcome to occur.  Another is that they are seen as bad people and therefore as deserving of more blame.  Our hypothesis was that the effect arises from counterfactual thinking – that is, from thinking about the ways the world could have been different.  If someone has knowledge relevant to a bad outcome, we can imagine that they could have, or should have, acted differently to prevent the outcome.  They therefore seem more causal of it (and so more worthy of blame).

Our studies showed that merely having relevant knowledge can affect causality judgments (Gilbert, Tenney, Holland, & Spellman, 2015).  In the first study, participants read about Julia, whose lawnmower malfunctioned, destroying her own prize-winning tulips.  (The fact that they were her own tulips suggested that she did not intend to destroy them.) Half of the participants also read that Julia’s mechanic had told her that her mower was prone to malfunctions.  When asked what was the main cause of the tulips’ destruction, these participants were more likely than the others to name Julia (rather than, for example, the lawnmower).  They also rated Julia as more causal on a continuous scale.  Thus, Julia’s mental state (knowing her mower might malfunction) affected judgments about her actions – even in a case where she obviously did not want the outcome to occur.

Our next two studies tested whether counterfactual reasoning explains how knowledge influences causation judgments.  If someone has knowledge (e.g., of the potentially defective mower), then she could have (counterfactually) prevented the destruction of the tulips by buying a new mower or having hers fixed.  If the person takes a preventive action, then even if the bad outcome happens anyway, an obvious counterfactual has been eliminated.  Under our hypothesis, eliminating the counterfactual should thus decrease the effect of knowledge on causation judgments.

Using a new scenario, Experiment 2 found just this effect.  Participants read about Josh, who let his friend Sarah borrow his car.  Sarah then got into a serious accident.  We varied whether Josh knew that his car’s brakes were having problems and also whether he warned Sarah about the brake problems.  As in Experiment 1, when Josh knew his brakes were having problems, he was judged to be more causal of the accident than when he did not have that knowledge.  However, when he took preventive action by warning Sarah about the brakes in advance of the accident, thereby eliminating an obvious counterfactual (“if only he had warned Sarah”), ratings of his causality decreased to nearly the level of not having had the knowledge in the first place.  It thus seems that if you act appropriately given your knowledge, making it harder for people to think, “if only you’d done something about it,” you may protect yourself from increased blame.

Experiment 2 thus tested the counterfactual hypothesis indirectly: we eliminated an obvious counterfactual and found an effect on judgments.  But we did not know for sure that participants would have generated that counterfactual on their own.  Experiment 3, in contrast, tested the counterfactual hypothesis directly, by explicitly prompting participants to generate counterfactuals and then evaluate them.

As in Experiment 1, participants read about Julia, whose lawnmower malfunctioned and destroyed her tulips.  This time, however, before rating her causality, participants were asked to generate counterfactuals: to “list four ways that things could have happened differently, so that the tulips would not have been destroyed.”  Participants had no problem generating these counterfactuals, and their examples ranged from the ordinary (e.g., “if only she had fixed the lawnmower”) to the highly amusing (e.g., “if only she had asked her boyfriend to mow the lawn” and “if only she were out solving world peace rather than mowing”).

Participants then rated their counterfactuals for potency – that is, how likely it is that each counterfactual actually could have happened and, if it had, how likely it is that it would have changed the outcome (see Petrocelli et al., 2011).  They also rated the counterfactuals for controllability – that is, how much Julia actually could have controlled each one.  As in Experiment 1, Julia was rated as more causal when she had knowledge than when she did not.  Moreover, the relationship between her knowledge and her causality appeared to be due to counterfactual thinking.  Regardless of whether Julia had knowledge, participants generated counterfactuals that were highly potent.  When Julia had knowledge, however, participants generated counterfactuals about preventing the outcome that they rated as more controllable by Julia.  These more controllable counterfactuals in turn predicted higher causal ratings.
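To make that last step concrete, here is a minimal sketch of how such a mediation pattern (knowledge → controllable counterfactuals → higher causal ratings) could be tested with simple regressions.  It uses simulated data, and the variable names and effect sizes are hypothetical illustrations of the pattern described above, not the study’s actual analysis or data.

import numpy as np
import statsmodels.api as sm

# Hypothetical illustration of the mediation logic described above,
# using simulated data; nothing here comes from the actual study.
rng = np.random.default_rng(0)
n = 200

# 0 = Julia had no knowledge of the faulty mower, 1 = she had been warned
knowledge = rng.integers(0, 2, size=n)

# Simulated mediator: rated controllability of the generated counterfactuals,
# assumed (for illustration) to be higher when Julia had knowledge
controllability = 3.0 + 1.2 * knowledge + rng.normal(0.0, 1.0, size=n)

# Simulated outcome: causal ratings of Julia, driven mostly by controllability
causal_rating = 2.0 + 0.8 * controllability + 0.1 * knowledge + rng.normal(0.0, 1.0, size=n)

def ols(y, X):
    """Ordinary least squares regression with an intercept."""
    return sm.OLS(y, sm.add_constant(X)).fit()

total = ols(causal_rating, knowledge)                                     # path c
a_path = ols(controllability, knowledge)                                  # path a
full = ols(causal_rating, np.column_stack([knowledge, controllability]))  # paths c' and b

print(f"total effect of knowledge (c):        {total.params[1]:.3f}")
print(f"knowledge -> controllability (a):     {a_path.params[1]:.3f}")
print(f"controllability -> causal rating (b): {full.params[2]:.3f}")
print(f"direct effect of knowledge (c'):      {full.params[1]:.3f}")

# A mediation pattern is suggested when the direct effect (c') is much
# smaller than the total effect (c) and the indirect path (a * b) is reliable.

In the simulated output, the effect of knowledge on causal ratings shrinks once counterfactual controllability is accounted for, which is the signature of the mediation pattern described above.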

These results add to a growing body of research suggesting that people’s intuitions do not always align with the law when assigning causation and blame.  But whereas previous research has generally suggested that increased causation and blame judgments arise from a top-down process in which people first assess morality or bad intent (Alicke, 1992; Knobe, 2005), our results extend to merely having knowledge and are consistent with a more cognitive, bottom-up process based on counterfactual thinking.  This suggests that perhaps Sir Francis Bacon and Spider-Man were both right: knowledge is power, and with great power comes great responsibility.

As a 6th-year social psychology graduate student at the University of Virginia, Elizabeth studies the reasoning strategies that people use in everyday sense-making and decision-making.  Much of her research is inspired by her time practicing law (she used to be a litigator) and her travel to East Asia (she will happily go almost anywhere).  She is particularly interested in the different strategies that people use to understand how the world "fits together," including causal reasoning, analogical reasoning, and blame assignment.  Email: [email protected]. Website: elizabethagilbert.com