Has Society Changed in Implicit and Explicit Gender Stereotypes?


Over the past year, there has been nothing more certain than change. The COVID-19 global pandemic has forced us apart, blurring the divisions between our workplaces and homes, and threatening our health; the Black Lives Matter movement has highlighted the need for systemic transformation in racial justice; and the rise of populism and misinformation has challenged the tenets of democracy. Although 2020 (and 2021!) are unprecedented in many ways, the fact of change is not just a quirk of the current times. Rather, change is part of the very nature of human society and human psychology. As humans, we adapt to these changes in our world by updating our beliefs and attitudes—changing what we think and feel about the world around us.

But changing our society’s beliefs and stereotypes is hard. It can take generations (or more) for a society to change how it thinks about issues like interracial marriage or climate change, or, as we study in our newest research, gender stereotypes that associate men with careers and women with family, and men with science and women with arts. Moreover, the possibility of stereotype change faces yet another stumbling block: even if we can change what we explicitly report on typical surveys and questionnaires, the more implicit associations in our minds (less controllable, more automatic) may resist long-term change. That is, it may be hard to break the link that immediately arises between the ideas “women” and “family” (or “arts”) or “men” and “work” (or “science”).

In our research, we set out to document whether, and if so at what speed, explicit and implicit gender stereotypes (about career/family and science/arts) have changed over the past decade. As in our previous work documenting long-term changes in other implicit biases (for example, attitudes about race, sexuality, or age), we used the massive dataset collected via the Project Implicit demonstration website, where anyone can have their attitudes or beliefs measured (and learn where they stand). Specifically, we used data from 1.4 million tests of implicit and explicit gender stereotypes for two topics—men-career/women-family and men-science/women-arts—collected continuously from volunteer respondents over twelve years (2007-2018).

Good news: over this time, explicit gender stereotypes have been decreasing in bias. Stereotypes associating men-career/women-family and men-science/women-arts have lost about 19% and 14% of their original bias, respectively. These rates of change are similar to what we have previously seen in some other slower-changing explicit biases, such as attitudes about body weight.

But what about implicit gender stereotypes? Psychologists (and the general public) have long thought of these implicit, automatic biases as particularly difficult to change durably. Like habits, implicit biases are thought to be so deeply ingrained in our minds and environments that they resist updating. Despite these expectations, we found that—more good news—implicit career/family and science/arts stereotypes have dropped by 13% and 17%, respectively, over twelve years. This rate of change is quite similar to what we saw in explicit gender stereotypes.

Now, the next question we inevitably have to ask ourselves is who is changing? Is this stereotype change isolated to just a few groups in society (perhaps liberals, or younger folk)? Or is it actually more widespread across people and places?

To test the spread of change, we looked at differences across groups defined by gender, race, politics, age, education, and religion—that is, we compared the rate of change in men versus women, Black Americans versus White Americans, liberals versus conservatives, and so on, to see whether the two groups were moving in parallel or not. Furthermore, because of the international nature of the Project Implicit data, we were able to test whether the rate and direction of change differed across U.S. states, different countries, and different geographic regions of the world.

Gender stereotype change was not an isolated phenomenon. Nearly every demographic group, as well as 92-98% of states (depending on the stereotype in question), 82-91% of countries, and every single UN geographic region, was moving (largely in parallel) towards less bias over time. This surprisingly widespread pattern focuses our attention on forces operating at the broadest level of society, such as social movements like #MeToo, widespread increases in women’s representation in science-related fields and the workforce, or changes in social norms. Change is not just the product of the idiosyncratic motivations or actions of a few people; rather, it is a truly societal transformation.

These results are hopeful. They show that gender stereotype change is possible and, thus, that continued efforts to promote change in society—such as widespread interventions to get more men to be interested in caregiving roles or more women to persist in science—are worthwhile and important.

However, the results also offer cautions. Implicit (and explicit) gender stereotypes remain far from eradicated. With our statistical models, we were able to estimate how long it might take for these stereotypes to go away. Shockingly, if change continues at past rates, it will take at least 134 years for implicit male-career/female-family stereotypes to disappear and 37 years for implicit male-science/female-arts stereotypes to do the same. Clearly, we still have much work ahead to accelerate our individual and collective change towards equality in our gender stereotypes.
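
How do such projections work? In the simplest version, you fit a trend line to the yearly bias estimates and ask where the line crosses zero. The sketch below illustrates that logic with made-up numbers; the published analyses used more sophisticated time-series models, so treat this as the idea rather than the paper’s method.

```python
# Minimal sketch of projecting when a declining bias trend reaches zero,
# assuming a simple linear trend. All numbers are made up for illustration;
# the published analyses used formal time-series models.
import numpy as np

np.random.seed(0)

years = np.arange(2007, 2019)                 # the 12 years of data, 2007-2018
# Hypothetical yearly mean IAT D scores for one stereotype (positive =
# stereotype-consistent bias), drifting slowly downward.
bias = 0.40 - 0.003 * (years - 2007) + np.random.normal(0, 0.003, years.size)

slope, intercept = np.polyfit(years, bias, 1)  # least-squares linear fit
year_of_zero = -intercept / slope              # where the fitted line hits 0

print(f"Estimated change per year: {slope:+.4f}")
print(f"Projected year bias reaches zero: {year_of_zero:.0f}")
print(f"Years remaining after 2018: {year_of_zero - 2018:.0f}")
```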


For Further Reading

Charlesworth, T. E. S., & Banaji, M. R. (2021). Patterns of implicit and explicit attitudes and stereotypes III. Long-term change in gender stereotypes. Social Psychological and Personality Science. https://doi.org/10.1177/1948550620988425

Charlesworth, T. E. S., & Banaji, M. R. (2019). Patterns of implicit and explicit attitudes: I. Long-term change and stability from 2007 to 2016. Psychological Science, 30(2), 174–192. https://doi.org/10.1177/0956797618813087

Charlesworth, T. E. S., & Banaji, M. R. (2019). Gender in science, technology, engineering, and mathematics: Issues, causes, solutions. The Journal of Neuroscience, 39(37), 7228–7243. https://doi.org/10.1523/JNEUROSCI.0475-18.2019

 

Tessa Charlesworth recently received her PhD from the Department of Psychology at Harvard University with Professor Mahzarin R. Banaji. She studies the patterns and sources of long-term change in implicit social cognition.

 

Having Essentialist Beliefs Predicts People’s Attitudes about Social Groups

With which statement do you agree more?

Statement A: People can behave in ways that seem ambiguous, but the central aspects of their character are clear-cut.

Statement B: It is never possible to judge how someone will react in new social situations.

These statements tap into a belief known as essentialism, the tendency to believe that the differences that we see between individual people – and between social groups – are natural and unchangeable. Just as people differ in their levels of extraversion or agreeableness, they also differ in the degree to which they believe in essentialism. Those who agree with statement A (and other statements like it) are considered high in essentialism. Those who agree with statement B and others like it are considered low in essentialism. People with essentialist views tend to use more stereotypes, are less likely to pursue interactions with people of other races, and report less positive feelings toward mixed-race individuals.

We were interested in how essentialism might relate to people’s levels of racial prejudice. We reasoned that people high in essentialism may be more likely to take their observations about another person’s behavior to the “next level” by inferring that his or her behaviors reflect what that person, and other people who are similar to him or her, are truly like. For example, after learning about a Muslim person who committed an act of terrorism, a person high in essentialism might be especially likely to view Muslim people as terrorists. We tested this general idea in two studies. 

In our first study, we examined the link between essentialism and people’s racial attitudes towards Black Americans. We asked 500 Americans to fill out a measure of essentialism (from which statements A and B above were borrowed). Then we measured their attitudes towards Black Americans in two ways. The first measure was a survey with questions like, “How negative or positive do you feel toward Black people?” Our second measure was a less obvious measure of racial bias called the Implicit Association Test (IAT). Our version of the IAT measured people’s tendency to associate “Black people” with negative, emotionally laden words. We found that people’s levels of prejudice towards Black Americans, as measured by both the survey questions and the IAT, were associated with higher levels of essentialist thinking. People higher in essentialism seem to harbor more racial prejudice towards Black Americans.
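
For readers curious how a reaction-time task becomes a bias score: the conventional summary is a “D score,” the difference in average response speed between the two pairing conditions divided by the variability of responses. Below is a minimal sketch with fabricated latencies; real IAT scoring (for example, Greenwald and colleagues’ improved algorithm) also filters trials and penalizes errors, so this shows only the core computation.

```python
# Sketch of IAT-style scoring: the D score is the mean latency difference
# between the two pairing conditions, scaled by the pooled standard
# deviation of all trials. The latencies below are fabricated; production
# scoring also filters trials and penalizes errors.
import statistics

# Reaction times (milliseconds) for one hypothetical respondent.
compatible = [612, 655, 590, 630, 601, 644]      # stereotype-consistent pairing
incompatible = [798, 742, 815, 760, 783, 801]    # stereotype-inconsistent pairing

pooled_sd = statistics.stdev(compatible + incompatible)
d_score = (statistics.mean(incompatible) - statistics.mean(compatible)) / pooled_sd

# A larger positive D means faster responses on the "consistent" pairing,
# i.e., a stronger automatic association.
print(f"D score: {d_score:.2f}")
```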

But why was this the case? Essentialism might lead to stronger prejudice because people high in essentialism are more likely to generalize the behaviors of one group member to other members of the group. If this is the case, then essentialist thinking should be linked to generalizing about groups even in ways that result in more positive attitudes about groups. For instance, after learning that some Muslims engaged in charitable behaviors, people high in essentialism might associate Muslims with generosity more strongly than people low in essentialism.

In a second study, we measured the essentialism levels of 3,300 Americans, then described an imaginary group called the Laapians to them. Some people were told that Laapian individuals had engaged in 20 bad behaviors (such as parking in a space reserved for the handicapped). Others learned that Laapians had engaged in 20 good behaviors (such as helping an elderly man who dropped some packages). A final group learned that Laapians had engaged in 20 neutral behaviors (such as going to work). Participants then rated their attitudes towards Laapians as a whole.

As we expected, people higher in essentialism formed stronger attitudes towards Laapians after learning that individual Laapians had done good or bad things. Thus, among people high in essentialism, those who learned that some Laapians behaved positively had more positive attitudes towards Laapians as a whole, and those who learned that some Laapians behaved negatively had more negative attitudes. After learning exactly the same information, people low in essentialism formed weaker attitudes about the Laapian group, whether the behavior was nice or nasty.

Our results shed light on why two people can draw different conclusions after witnessing a person perform the same behavior. People high in essentialism are more likely to see the behaviors of individual group members as indicative of what all members of that group are like. As a result, they are more likely to form both negative and positive attitudes about the members of social groups. Essentialism is not the same as prejudice.

If you’re concerned about forming unfair biases toward groups on the basis of the behaviors of a few individuals, here are a few strategies you could try. First, when you witness someone’s behavior and find yourself thinking that it is representative of their group, you could remind yourself that human behavior can be caused by many things, including things that have little or nothing to do with the person’s characteristics. When you witness a person acting a certain way, it also helps to think about the situational pressures that might have led the person to act in that manner. It could also help to imagine yourself acting in a similar way and to identify some of the pressures that could lead you to behave that way. Finally, if you ever see a Laapian behave poorly, you can remind yourself of a counterexample – in which a Laapian behaved kindly or heroically. In general, thinking more deeply about the events and people that you witness may help reduce your likelihood of harboring unfair biases, be they positive or negative.


For Further Reading

Chen, J. M., & Ratliff, K. A. (2018). Psychological essentialism predicts intergroup bias. Social Cognition, 36(3), 301-323.

Jacqueline Chen is an Assistant Professor of Psychology at the University of Utah, where she directs the Social Cognition and Intergroup Perception Lab. Her research examines issues related to social perception, diversity, and intergroup relations.

Kate A. Ratliff is an Associate Professor of Psychology at the University of Florida and Executive Director of Project Implicit. Her research focuses on prejudice, stereotyping, and other intergroup biases.

Unconscious and Unaccountable: What Happens When We Attribute Discrimination to Implicit Bias?

Lately, the terms “unconscious” and “implicit” bias have been getting a great deal of attention in the news and popular culture when prejudice and discrimination are discussed. Generally, these terms refer to the tendency for people to associate a group of people with a set of stereotypes or attitudes without being aware that they are doing so. For example, you may make automatic, unconscious inferences about other people based on your knowledge of their race, nationality, marital status, or preference for certain kinds of music.

Researchers often determine whether people have unconscious biases through tests that measure how quickly people associate a group of people (such as women) with a trait or category (such as mathematics). For example, an Implicit Association Test often reveals that people are faster to pair men with science terms (Chemistry) and women with liberal arts terms (English). Furthermore, these implicit associations can affect people’s behavior—for example, leading them to hire men over equally qualified women for jobs in the sciences—without their being aware that this unconscious bias is affecting their decisions.

In the past couple of years, psychologists have been trying to educate the public about these subtle yet pernicious biases. Indeed, maybe you have heard or read about implicit bias somewhere before. This attempt to raise awareness about implicit bias was meant to encourage people to understand how they may act in discriminatory ways without realizing it. And, making people aware of implicit bias might lessen its impact. But, could teaching people that behavior can be affected by implicit biases also have a downside? My research suggests yes. When people believe discrimination was caused by implicit bias, they hold those who behave in discriminatory ways less accountable for their behavior.

In this research, my colleagues and I had participants read evidence that one group of people (such as doctors or police officers) discriminated against another group of people (such as elderly people or Black Americans). For example, one of these articles explained that recent research determined that doctors’ bias against older people affects their patient care. Participants read that doctors who have a strong bias against older people spend less time with elderly patients and exhibit more dismissive body language toward them, which affects whether patients get the care they need.

Critically, in these articles, half of the participants read that the discrimination was due to implicit bias—that is, due to biased attitudes and beliefs that the doctors were unaware they held. The other half of the participants were led to believe that the discrimination occurred due to typical, explicit bias—that is, it was due to biased attitudes and beliefs that the doctors were aware they held.

We then asked participants how much they thought the doctors (or police officers) should be held accountable for their discriminatory behavior. When they read that the discrimination was due to implicit rather than explicit bias, participants held the doctors and police officers less accountable for their behavior. Further, participants also thought the perpetrators should be punished less for their discriminatory behavior when it was due to implicit bias.

Note that in these studies, the behavior was the same and always resulted in the same outcomes. The only difference was whether the article said that the behavior was caused by implicit or explicit bias. Perhaps most concerning, people held others less accountable even when the outcomes of the discrimination were particularly harmful (such as premature death for elderly patients).

What is it about discrimination that arises from implicit bias that makes it different from discrimination borne of explicit bias? Because implicit biases are held unconsciously, people may believe that their effects on behavior are unintentional. Intentional, deliberate discrimination certainly seems worse than unintentional, unconscious discrimination. However, it is important here to distinguish awareness of biases from awareness of behaviors. Even if people are unaware of their biases, they are still aware of their discriminatory behavior.

Imagine a child who runs recklessly around the house and breaks some fine china. The child may not have intended to break the china, but he also wasn’t trying not to break anything. He was aware of his reckless behavior. A similar idea applies here. Regardless of their unconscious biases, people can make an intentional effort not to discriminate. For example, people can make an effort to notice any differences in their behavior toward men and women or toward White and Black people to ensure they are treating everyone the same regardless of race, gender, age, sexual orientation, and other characteristics. In other words, we are still responsible for our behavior, even when it is driven by implicit biases.

Ultimately, whether or not a person meant to discriminate does not change much for the person who experienced the discrimination. Discrimination is hurtful. It has psychological, physical, and even financial costs, such as missing out on promotions or being denied premier loans. Perhaps, instead of holding people less accountable for discrimination attributed to implicit bias, we should be thoughtful about whether or not implicit bias was the cause and what we can do to discourage discriminatory behavior caused by such biases.


For Further Reading:

Daumeyer, N. M., Onyeador, I. N., Brown, X., & Richeson, J. A. (2019). Consequences of attributing discrimination to implicit vs. explicit bias. Journal of Experimental Social Psychology, 84. https://doi.org/10.1016/j.jesp.2019.04.010

Cameron, C. D., Payne, B. K., & Knobe, J. (2010). Do theories of implicit race bias change moral judgments? Social Justice Research, 23, 272–289. https://doi.org/10.1007/s11211-010-0118-z

Redford, L., & Ratliff, K. A. (2016). Perceived moral responsibility for attitude-based discrimination. British Journal of Social Psychology, 55, 279–296. https://doi.org/10.1111/bjso.12123

Simon, S., Moss, A. J., & O’Brien, L. T. (2019). Pick your perspective: Racial group membership and judgments of intent, harm, and discrimination. Group Processes & Intergroup Relations, 22, 215–232. https://doi.org/10.1177/1368430217735576

 

Natalie Daumeyer is a graduate student at Yale University studying how people make sense of persistent discrimination and inequality.

Where You Live Can Affect Your Biases

Of all the things people think about when moving to a new city, their likelihood of becoming more biased is not one of them. Maybe it should be.

Implicit biases—the immediate inferences we all make about other people—can inadvertently result in discrimination, even by those of us who are motivated to treat others fairly and equally. And these automatic biases can have negative consequences. For example, in places where the average implicit bias against Black individuals is high, Black infants suffer worse health at birth than White infants. In those places too, police are more likely to disproportionately use lethal force against Black citizens.

How can we combat implicit biases? Researchers who have searched for the answer to this question have made a critical assumption, leading some of them to conclude that implicit bias cannot be changed. The assumption is this: Implicit bias is a characteristic of the person, something like a personality trait. In this view, some people are inherently more implicitly biased than others, so it makes sense to focus on designing trainings or “interventions” that could reduce people’s biases.   

That is exactly what a large group of researchers attempted to do in a collaborative effort that tested nine interventions designed to reduce implicit bias across 18 university campuses.  Their research found that all of the interventions reduced implicit bias immediately, but the effect did not last even a few days. They concluded that implicit bias is a stable feature of the person that is difficult to change.

But what if implicit bias is not a characteristic of individual people, but rather of the places where they live? We automatically make certain inferences about the people in various social groups because we have knowledge of the stereotypes in our culture. However, some places are more likely to remind us of those stereotypes than other places.

In particular, places that have more social inequalities are more likely to reinforce negative stereotypes. Automatically associating “Black” with “poor,” for example, is more likely to occur in cities where economic racial disparities are obvious due to high levels of segregation and high poverty rates among Black individuals. So people who live in places that perpetuate racial disparities through unfair policies and unequal access to resources (often called “structural racism”) are more likely to encounter situations that cue these stereotypic associations.

However, any given person has many different experiences and social encounters from day to day, and these experiences may cause people’s automatic, stereotypic associations—and biases—to fluctuate. Someone might score high on implicit bias one day but not the next. But when you take the average implicit bias of all the people living in a certain place, you get a sense of the true level of bias in that location.
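
To see why averaging can recover a stable signal from unstable individuals, consider a toy model in which each person’s score on a given day is a stable place-level component plus a large dose of person-level noise. In the sketch below (all numbers invented), individual scores are nearly uncorrelated from one day to the next even though the place average barely moves.

```python
# Simulation sketch of the idea above: each individual score is a stable
# place-level component plus large person/occasion noise, so any one
# person's score bounces around while the place average stays put.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)

n_people = 10_000
place_bias = 0.35          # stable component contributed by the place
noise_sd = 0.40            # large day-to-day, person-level fluctuation

day1 = place_bias + rng.normal(0, noise_sd, n_people)
day2 = place_bias + rng.normal(0, noise_sd, n_people)  # fresh noise each day

# Individual scores are nearly uncorrelated across days...
r = np.corrcoef(day1, day2)[0, 1]
print(f"Person-level stability (r): {r:.2f}")          # close to 0

# ...but the place-level average barely moves.
print(f"Day-1 mean: {day1.mean():.3f}  Day-2 mean: {day2.mean():.3f}")
```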

We call this phenomenon the “Bias of Crowds” because it is conceptually similar to the “wisdom of crowds,” which refers to the fact that averaging the collective knowledge of several people is more likely to yield the true answer than surveying any one individual in that group. With the Bias of Crowds theory in mind, we re-analyzed the data from the large intervention study that was conducted across 18 university campuses. The researchers who conducted that study assumed that because the average reduction in implicit bias disappeared a few days after the interventions, individuals’ implicit biases had returned to their original levels. When we looked at the data, though, we found that this was not the case.

A few days after receiving the intervention to reduce implicit bias, individuals’ levels of implicit bias had changed mostly randomly—some people’s bias went up, some people’s went down, and some were largely unchanged—but these changes didn’t have much to do with people’s original level of bias.

However, when we looked at the average implicit bias on any given campus, those scores did return to levels similar to those before the interventions took place. But we also found that campuses that displayed a Confederate monument, had less faculty diversity, or had less economic mobility among students were more likely to have higher average levels of implicit bias. These campus characteristics potentially acted as visible reminders of racial stereotypes and activated a stronger wave of biases on some campuses than on others.

So how can we combat implicit bias? Our results suggest that places are stubbornly biased and can contribute to the bias of individual people. Restructuring the places where we live to be more inclusive, equitable, and welcoming of diversity might be a more effective and longer-lasting way to reduce implicit bias than interventions aimed at individual people. The results imply that the effort cannot be concentrated on a few “biased people” and then set aside. Changing a place requires shared social commitment, ongoing and active effort, and tangible investment. In the case of college campuses, for example, removing Confederate monuments and increasing the diversity of the faculty are concrete steps that may effectively weaken the collective bias that, at first, seemed so resistant to change.

If we are more likely to be biased at certain places, then it is time to start thinking about trying to change not just people’s minds, but also their surroundings.


For Further Reading:

Lai, C. K., Skinner, A. L., Cooley, E., Murrar, S., Brauer, M., Devos, T., . . . Nosek, B. A. (2016). Reducing implicit racial preferences: II. Intervention effectiveness across time. Journal of Experimental Psychology: General, 145, 1001–1016. doi:10.1037/xge0000179

Payne, B. K., Vuletich, H. A., & Lundberg, K. B. (2017). The bias of crowds: How implicit bias bridges personal and systemic prejudice. Psychological Inquiry, 28, 233–248. doi:10.1080/1047840X.2017.1335568

Vuletich, H. A., & Payne, B. K. (2019). Stability and change in implicit bias. Psychological Science, 30(6), 854–862. https://doi.org/10.1177/0956797619844270

 

Heidi A. Vuletich is a social and developmental psychologist who studies how social and economic inequalities affect people’s perceptions of themselves and others.  

Celebrity Fat Shaming Has Ripple Effects on Women’s Implicit Anti-Fat Attitudes

Washington, DC and Montreal, Quebec - Celebrities, particularly female celebrities, are routinely criticized about their appearance—indeed, celebrity “fat-shaming” is a fairly regular pop-cultural phenomenon. Although we might assume that these comments are trivial and inconsequential, their effects can extend well beyond the celebrity target and ripple through the population at large. Comparing 20 instances of celebrity fat-shaming with women’s implicit attitudes about weight before and after each event, psychologists from McGill University found that instances of celebrity fat-shaming were associated with an increase in women’s implicit negative weight-related attitudes. They also found that from 2004 to 2015, implicit weight bias was on the rise more generally.

Explicit attitudes are those that people consciously endorse and, based on other research, are often influenced by concerns about social desirability and presenting oneself in the most positive light. By contrast, implicit attitudes—which were the focus of this investigation—reflect people’s split-second gut-level reactions that something is inherently good or bad.

“These cultural messages appeared to augment women’s gut-level feeling that ‘thin’ is good and ‘fat’ is bad,” says Jennifer Bartz, one of the authors of the study. “These media messages can leave a private trace in people’s minds.”

The research is published in Personality and Social Psychology Bulletin, a journal of the Society for Personality and Social Psychology.

Bartz and her colleagues obtained data from Project Implicit participants who completed the online Weight Implicit Association Test from 2004 to 2015. The team selected 20 celebrity fat-shaming events that were noted in the popular media, including Tyra Banks being shamed for her body in 2007 while wearing a bathing suit on vacation and Kourtney Kardashian being fat-shamed by her partner in 2014 for not losing her post-pregnancy baby weight quickly enough.

They analyzed women’s implicit anti-fat attitudes 2 weeks before and 2 weeks after each celebrity fat-shaming event.
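
The logic of this comparison is straightforward to express in code: for each event date, collect scores from the 14 days before and the 14 days after, then compare the means. Here is a minimal sketch with invented column names, dates, and scores; the published analysis involved far more data and statistical control.

```python
# Sketch of the before/after windowing: for each fat-shaming event,
# compare mean implicit-bias scores from the two weeks before with the
# two weeks after. Column names, dates, and scores are invented.
import pandas as pd

# Hypothetical respondent-level records: one IAT D score per test date.
scores = pd.DataFrame({
    "date": pd.to_datetime(["2014-06-20", "2014-06-25", "2014-07-05", "2014-07-10"]),
    "d_score": [0.41, 0.43, 0.47, 0.49],
})
events = pd.to_datetime(["2014-07-01"])     # hypothetical event date

window = pd.Timedelta(days=14)
one_day = pd.Timedelta(days=1)
for event in events:
    before = scores.loc[scores["date"].between(event - window, event - one_day), "d_score"]
    after = scores.loc[scores["date"].between(event, event + window), "d_score"]
    print(f"{event.date()}: before={before.mean():.3f}, after={after.mean():.3f}")
```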

The results showed that fat-shaming events led to a spike in women’s (N = 93,239) implicit anti-fat attitudes, with more “notorious” events producing greater spikes.

 

[Chart: Implicit anti-fat bias jumped two weeks after a celebrity fat-shaming event, and levels remained elevated for another four weeks.]

Although their data cannot definitively link increases in implicit weight bias to specific negative real-world incidents, other research has shown that culture’s emphasis on the thin ideal can contribute to eating disorders, which are particularly prevalent among young women.

“Weight bias is recognized as one of the last socially acceptable forms of discrimination; these instances of fat-shaming are fairly widespread not only in celebrity magazines but also on blogs and other forms of social media,” says Amanda Ravary, PhD student and lead author of the study.

The researchers’ next steps include lab research, where they can manipulate exposure to fat-shaming messages (vs. neutral messages) and assess the effect of these messages on women’s implicit anti-fat attitudes. This future research could provide more direct evidence for the causal role of these cultural messages on people’s implicit attitudes.


Citation: Amanda Ravary, Mark W. Baldwin, and Jennifer A. Bartz. Shaping the Body Politic: Mass Media Fat-Shaming Affects Implicit Anti-Fat Attitudes. Personality and Social Psychology Bulletin. Online before print April 15, 2019.

Open Access: The data reported in this paper are available in the Supplemental Materials and archived at the public database Open Science Framework (https://osf.io/iay3x). 

Funding: This research was supported by a Fonds de recherche du Québec—Société et culture (FRQSC) Team Grant (FRQ-SC SE-#210323).

Personality and Social Psychology Bulletin (PSPB), published monthly, is an official journal of the Society for Personality and Social Psychology (SPSP). SPSP promotes scientific research that explores how people think, behave, feel, and interact. The Society is the largest organization of social and personality psychologists in the world.

How to Overcome Unconscious Bias

Imagine playing a game where you’re seated in front of four decks of cards. On the back of two decks are pictures of puppies; on the other two are pictures of spiders. Each deck has some cards that win points and others that lose points. In general, the puppy decks are “good” in that they win you more points than they lose, while the spider decks are “bad” in that they lose you more points than they win. You repeatedly select cards in hopes of winning as many points as possible. This game seems pretty easy—and it is. Most players favor the puppy decks from the start and quickly learn to continue favoring them because they produce more points.

However, if the pictures on the decks are reversed, the game becomes a little harder. People may have a tougher time initially favoring spider decks because it’s difficult to learn that something people fear, like spiders, brings positive outcomes and something people enjoy, like puppies, brings negative outcomes.
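
The task’s structure is simple enough to sketch in code. The toy version below uses invented payoffs and a naive learner that tracks each deck’s average return; swapping the payoff lists between the puppy and spider decks gives the reversed condition. This illustrates the task’s logic, not the study’s actual parameters.

```python
# Sketch of a four-deck task like the one described above: two "good"
# decks win more than they lose, two "bad" decks lose more than they win.
# A simple learner tracks each deck's average payoff and usually picks
# the best-looking deck. Payoffs and the learning rule are invented.
import random

random.seed(1)

decks = {
    "puppy_1": lambda: random.choice([+10, +10, -5]),   # good deck
    "puppy_2": lambda: random.choice([+10, +10, -5]),   # good deck
    "spider_1": lambda: random.choice([-10, -10, +5]),  # bad deck
    "spider_2": lambda: random.choice([-10, -10, +5]),  # bad deck
}

totals = {name: 0.0 for name in decks}
counts = {name: 0 for name in decks}
score = 0

for trial in range(120):                    # 120 selections, as in the face version
    if trial < 8 or random.random() < 0.1:  # sample everything early, explore a bit
        choice = random.choice(list(decks))
    else:                                   # otherwise pick the best average so far
        choice = max(decks, key=lambda d: totals[d] / max(counts[d], 1))
    payoff = decks[choice]()
    totals[choice] += payoff
    counts[choice] += 1
    score += payoff

print(f"Final score: {score}")
print(f"Picks per deck: {counts}")
```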

Performance on this learning task is best when one’s attitudes and motivations are aligned. For instance, when puppies earn you more points than spiders, people’s preference for puppies can lead them to select more puppies initially, and the motivation to earn as many points as possible leads them to select more and more puppies over time. But when spiders earn you more points than puppies, people have to overcome their initial aversion to spiders in order to perform well.

This potential conflict between attitudes and motivations is not reserved for puppies and spiders. There are social domains where attitudes and motivations point in competing directions. Race is a clear example. On average, white people associate black people with negativity. These anti-black attitudes can exist both in consciously controlled explicit attitudes and in less consciously controlled implicit attitudes. At the same time, many white people also value appearing and being racially unprejudiced. For instance, data from a 2015 volunteer sample found that while 80 percent of white people had an easier time pairing black than white faces with negative words, 73 percent also agreed with statements such as “I am personally motivated by my beliefs to be non-prejudiced.”

What happens to race-related behavior when our attitudes and motivations conflict with one another? My co-author Sophie Trawalter and I examined this question in a series of studies recently published in the Journal of Experimental Social Psychology. We found that white participants strongly resisted displaying anti-black behavior, even if this meant sacrificing a chance for a financial reward.

Our studies adapted a tool called the Iowa Gambling Task, the learning measure described in the opening paragraph. Our version of the Iowa Gambling Task asked people to repeatedly select one face from an array of black or white faces. Participants were told that it was their job to win as many points as possible over 120 selections, and that people in the top 10 percent of points earned would win a gift card.

Across conditions, we manipulated whether black or white faces represented the good or bad decks. In one condition, selecting black faces generally led to gaining points and selecting white faces generally led to losing points. In the reverse condition, selecting black faces generally led to losing points and white faces generally led to gaining points.

Our results highlighted the impact of both attitudes and motivations on behavior. At the beginning of the task, we saw the influence of racial attitudes. Participants performed better in the condition that aligned with anti-black attitudes, earning more points when black faces were tied to losses. Participants had a much harder time initially earning points when black faces were tied to gains, and this was particularly true among those reporting higher levels of consciously preferring white to black people.

However, as the task progressed, we saw the influence of racial motivations. While people were initially better at earning points when black faces were tied to losses, performance in this condition did not improve over time. That is, participants appeared to avoid reinforcing that black faces were associated with negative outcomes like losing points. Conversely, while people had a harder time initially learning that black faces were associated with gains, they showed a great deal of improvement in this condition throughout the task.

In fact, by the end of the study, participants tasked with learning that black faces led to point gains were outperforming those tasked with learning that black faces led to point losses. Moreover, this ability to learn that black faces led to point gains was weakly but reliably related to a greater desire to avoid racial prejudice. In other words, participants highest in reporting a motivation to appear unprejudiced were best able to acquire the association between selecting black faces and positive outcomes.

One remaining issue is whether participants simply could not or chose not to reinforce anti-black associations. While our data cannot definitively answer that question, we have some reason to believe participants were “playing dumb” and choosing not to perform well when the task paired black faces with losses. For instance, remember that when the task paired puppies with gains and spiders with losses, performance was good initially and improved significantly over time. That is, performance suffered only when the task supported potentially unwanted racial associations.

Do these studies prove that people who are motivated to be unprejudiced need not worry about racial bias in their behavior? No. After all, even people motivated to appear unprejudiced still had a much easier time earning points initially when black faces were paired with losses. But, this work does highlight how people can work against undesired attitudes given the right motivation. Our white participants valued acting unprejudiced so much that they were willing to forego possible reward to avoid strengthening any anti-black associations. As we say in the paper, attitudes may have the first word but not the final say in behavior.


By Jordan Axt. This post was first published on Scientific American and is shared with the editor's permission.

What is the Secret to Success?

By Melissa J. Ferguson, Cornell University and Clayton R. Critcher, University of California, Berkeley

At hundreds of colleges and universities across the country, thousands of students are in the midst of the fall semester, trying to manage the academic tasks of studying, exams, papers and lectures. A lot is riding on their academic performance – earning (or just keeping) scholarships, landing summer internships, gaining employment and of course acquiring new skills and knowledge.

The vast majority of students will tell you they intend to do well, that they know it takes hard work to succeed. But some students will end up hitting more bars and parties than books. That is, not everyone ends up putting in that hard work.

In our own work, we have found that asking college students questions like, “How important do you think it is to do well at college?” gives us essentially no information about who will do well in terms of grades.

College students are hardly unique in not following through on their intentions and goals. Frustrated parents might do well to look to their own unused gym memberships or perennial weight-loss resolutions to realize that intentions are not always sufficient to ensure steady progress toward one’s goals.

Why is there such a disconnect between our intentions and our actions? And, how can we predict who has the grit to succeed, if we can’t depend on what people tell us?

Explicit or implicit beliefs?

When people are directly asked how important they think it is to succeed at some goal, they are reporting their “explicit beliefs.” Such beliefs may largely reflect people’s aspirations, such as their sincere intentions to buckle down and study hard this semester, but these may not always map onto their subsequent inclination to persist.

Rather than depend on people’s explicit beliefs, in our research we looked instead to people’s implicit beliefs.

Implicit beliefs are mental associations that are measured indirectly. Rather than asking the person to state what they think about some topic, implicit measures use computerized reaction-time tasks to infer the strength of someone’s implicit associations. For instance, a great deal of research by psychologists Brian Nosek, Tony Greenwald and Mahzarin Banaji over the last two decades has shown that people often hold negative implicit associations about members of stigmatized racial and ethnic groups.

Even though many participants in these studies explicitly stated they believed in fairness and equality among racial groups, they nevertheless showed implicit biases toward racial and ethnic groups. In other words, whereas people “said” they were egalitarian, they in fact possessed strong negative associations in their mind when it came to certain racial groups.

Implicit associations are critical to understand because they can predict a range of everyday behaviors, from the mundane (what foods people eat) to the monumental (how people vote).

But do implicit associations predict who has the grit to succeed at life’s difficult goals?

Here’s what we did

To find out, instead of measuring people’s explicit beliefs about the importance of their goals, we measured people’s implicit beliefs about the importance of an area (e.g., schoolwork, exercise) and then measured their success and persistence at relevant tasks (e.g., grades, gym regimens).

We used a computer-based test called the “Implicit Association Test (IAT)” to measure our participants’ implicit beliefs. The test takes about seven minutes to complete. Participants have to don noise-canceling headphones and sit in a distraction-free cubicle.

In five of our studies, we used this test to measure students’ cognitive association between “importance” and “schoolwork.” Student participants were asked to indicate, as quickly as they could, using computer keys, whether each of a series of words was related to “schoolwork,” was a synonym of “importance” or was a synonym of “unimportance.” Examples of such words included “exam,” “critical” and “trivial.”

The test was set up in such a way that even a slight difference in the speed of response (at the level of milliseconds) could reveal differences in the strength of the association between schoolwork and importance.

In short, it allowed us to measure the extent to which people implicitly believed that schoolwork was important.

Multiple studies to corroborate

Could millisecond differences in reaction times meaningfully capture people’s beliefs and predict success in their goals? For instance, could this seven-minute-long measure of milliseconds predict who would earn straight A’s in their college classes?

We found that they did. And we didn’t observe this relationship just once. We found that again and again – across seven different studies, run in different labs, with different populations and predicting different types of persistence and success. Across five studies, we found that college students’ implicit belief in the importance of schoolwork predicted who got higher grades.

We didn’t limit our study to college performance. We also tested other goals, such as going to the gym. We found that those who had a stronger association between importance and exercise were significantly more likely to exercise more often and more intensely.

Then we conducted a test to find out how implicit beliefs predicted test-taking abilities. We tested college students’ implicit beliefs about the importance of the GRE (Graduate Record Examination), a widely used exam that helps determine graduate school admissions and scholarships. Those who showed a stronger association between importance and the GRE scored significantly better on a practice GRE test.

A unique measure of likelihood of success

Like any measure, ours wasn’t perfect. We couldn’t always predict in every instance who would succeed or fail. But our brief computerized test provided new insight into who was likely to succeed – an insight not captured by more traditional measures.

For example, higher SAT scores are taken to be a measure of who will likely do better at college and better on the GRE. Our data did show that SAT scores are a good predictor of both. However, knowing participants’ implicit beliefs in the importance of school or the GRE predicted success over and above what SAT scores could tell us. In other words, even when two people scored the same on the SAT, the one with the stronger implicit belief about the importance of the GRE tended to score better on the practice exam.

One interesting finding in our studies was that implicit beliefs predicted some people’s success more than others. Closer examination showed that those for whom exerting self-control was difficult – those who said they have trouble completing assignments on time, who could be easily dissuaded from making it to spin class or who have difficulty maintaining focus during long reading comprehension passages – were those who most benefited from having a strong implicit belief that the goal was important.

In other words, it was those individuals in need of a boost who most clearly benefited from the implicit nudge that their pursuits were important.

What exactly is the role of implicit beliefs?

Our work adds to a growing body of evidence that the ordinarily hidden-from-view, implicit associations in our mind offer new insights about many everyday decisions and behaviors.

For example, just as implicit associations can predict intergroup behavior, first impressions of other people and voting behavior, our new findings show that they also predict success at some of life’s most challenging tasks.

However, there are still some questions that remain. For example, do implicit beliefs in the importance of working hard actually cause people to do better, or do they simply identify who is likely to succeed? Could changing people’s implicit beliefs have real effects on their prospects for success?

To be clear: It is certainly not the case that what people say about how much they care about something does not matter at all. Indeed, we would guess that people who say they care nothing about exercising will not be heading to the gym, regardless of their implicit associations between exercise and importance.

But, especially among those who say they do care about something – such as the vast majority of college students caring about their performance at school – a measure of their implicit beliefs may give us a better idea about how likely they are to succeed.


Melissa J. Ferguson, Professor of Psychology, Cornell University and Clayton R. Critcher, Associate Professor of Marketing, Cognitive Science, & Psychology, University of California, Berkeley

This article was originally published on The Conversation. Read the original article.

What’s Different (or Special) About Women in STEM Majors?

Once upon a time (not that long ago), it was widely assumed that most women just didn't have what it takes to be scientists or mathematicians. Most people now know that this isn't the case. Girls and women often outperform boys and men in math and science classes and some women do become famous engineers. And yet fewer women than men still pursue careers in science, technology, engineering, or math (STEM) fields. The gender gap in STEM is a problem for women, who may limit themselves to certain career options due to stereotypes that women don't belong in STEM. And unfortunately, a lack of diversity in the STEM workforce also translates to a lack of diversity of ideas in these fields, which may limit technical innovation as well.

Identifying barriers to women's participation in STEM fields is important. Better understanding of these barriers could lead to interventions that create greater occupational opportunity for women and racial minorities (who face similar issues) and foster greater diversity of ideas in areas that matter to society.

I wondered whether women who choose STEM majors are more likely to associate themselves automatically or unconsciously with STEM compared to women who choose more traditional majors.  These mental associations could be one barrier that influences women's participation in STEM, and therefore might provide an opportunity for intervention.

Implicit Associations of Women in STEM

Implicit associations are fast, automatic, or unconscious connections people make in their minds between two mental concepts. They are called "implicit" because people may be unaware that they have these associations.  As a result, they can be difficult to measure.

Implicit association tests (or IATs) are, according to research, the best available measure of these elusive “unconscious” associations. People taking an IAT see words representing a concept such as “female” (for example, a typical female name like “Emily”) and then respond as quickly as possible, matching the word to one of two pairs of categories they might (or might not) associate with it, such as “science career/male” or “person career/female.” They do this repeatedly during an experimental session, with varied combinations of the categories and target words. When people have implicit associations between the categories in a pair, they respond faster when the target words match both categories in that pair. Although the differences are small (measured in thousandths of a second), they reveal the connections our minds make between categories.

Scores

I used the IAT to measure associations between the self and others (“me” or “them”) and STEM careers. Scores on this IAT provide a unique index of unconscious self and career stereotypes. My study involved 240 women at different stages of their college experience who completed an online survey, including two IATs as well as questions about their STEM educational experiences and their explicit (i.e., openly acknowledged) associations between gender and STEM careers.

Women majoring in STEM had stronger implicit associations between themselves and STEM careers, as measured by a self-career IAT, than did women in female-dominated majors, specifically those that prepare students for careers that involve working with or helping people. In addition, STEM women had weaker male-STEM career associations, as measured by a second IAT that assessed the relative strength of associations between male and female names (“Emily” or “Benjamin”) and these same careers. Differences on both IATs emerged in first-year as well as third- and fourth-year students, suggesting that implicit associations were present before women started college and therefore may have influenced their career plans from the start.

Further, these implicit associations were related to parents and teachers "pushing" STEM throughout childhood and adolescence. Women in STEM majors were more likely to have had parents or teachers who encouraged them in science and math, while also introducing more "hands-on" activities or math and science information with real-world applications. And these same women had stronger self-STEM career and weaker male-STEM career associations. These findings suggest that early support for STEM might shape STEM implicit associations, which in turn affect women's choice of college major.

Finally, and somewhat unexpectedly, results for explicit associations in this study were the opposite of the findings for implicit associations. Explicit associations are more consciously accessible connections that people make when asked directly about them. Women in STEM majors held stronger traditional explicit gender-STEM associations than women in female-dominated majors, and these differences were greatest for students further along in college. In other words, while women who choose STEM implicitly associate themselves with STEM, they are still very aware that men are more likely to work in science, technology, engineering, or math. Nevertheless, they persist!

How Can Implicit Associations Be Used to Engage More Women in STEM?  

This study suggests that women's implicit associations between themselves, their own gender, and STEM careers may influence their educational and career decisions, and that early positive experiences and support in STEM likely play a role in the development of these associations. Although short-term interventions rarely change people's existing implicit associations, interventions focused on encouraging young girls' interest in science and math throughout their school years are likely well-timed and long-lasting enough to shape implicit associations between the self and STEM. Parents and educators looking to encourage girls into the exploration of STEM fields should seek and support these programs as part of their efforts.

My results also suggest that the IAT could be used to assess the effectiveness of these early interventions. Educators aiming to encourage women toward STEM careers would be wise to include IATs among the measures they use to evaluate their efforts.


For Further Reading

Dunlap, S.T. & Barth, J.M. (2023). STEM Identities and Gender-STEM Stereotypes: When and Why STEM Implicit Associations of Women Emerge and How They Affect College Major Choice. Sex Roles, 89, 19-34. https://doi.org/10.1007/s11199-023-01381-x

Dunlap, S.T. & Barth, J.M. (2019). Career stereotypes and identities: Implicit beliefs and major choice for College Women and Men in STEM and Female-Dominated Fields. Sex Roles, 81, 548-560. https://doi.org/10.1007/s11199-019-1013-1

Greenwald, A.G., Banaji, M.R., Rudman, L.A., Farnham, S.D., Nosek, B.A., & Mellott, D.S. (2002). A unified theory of implicit attitudes, stereotypes, self-esteem, and self-concept. Psychological Review, 109, 3-25. https://doi.org/10.1037/0033-295X.109.1.3


Sarah T. Dunlap is a Research Associate at the Institute for Social Science Research at the University of Alabama and studies factors that influence women's career choices.

New Research Reveals Historic Migration’s Link to Present-Day Implicit Racial Bias

Roughly six million Black people moved away from the American South during the Great Migration between 1910 and 1970, hoping to escape racial violence and discrimination while pursuing economic and educational opportunities. Now, research has uncovered a link between this historic event and present-day inequalities and implicit biases.

In a new Social Psychological and Personality Science article, researchers report that current implicit bias among White people at the county level is associated with the proportion of Black residents living in that county during the Great Migration (circa 1930). The research supports the Bias of Crowds theory—which emphasizes the role of unequal environments or situations in contributing to collective levels of implicit bias.

"Our work suggests that the consequences of historical racism are not confined to the past," says lead author Heidi Vuletich of the University of Denver. "The systems and structures that we all navigate can often go unquestioned and unchanged—the theory inspiring this work says that maintaining the status quo can mean allowing negative historical legacies to continue."

Researchers analyzed over 1.6 million responses from White people visiting Project Implicit and taking the Implicit Association Test, which measures people's associations between the racial categories "Black" and "White" and evaluations "Good" and "Bad." Respondents were spread across 37 states and 1,981 counties in the Northern and Western United States. In counties that had larger Black populations in the middle of the 20th century, present-day White people showed a stronger implicit preference for White over Black people.

Researchers also analyzed data from nearly 215,000 Black people who completed the Implicit Association Test and did not find the same link between historical Black population and present-day bias that White respondents showed. Dr. Vuletich notes that understanding these responses can help researchers identify the psychological processes and circumstances under which environmental factors relate to bias.

Dr. Vuletich explains that this data can help inform strategies for combatting racial inequity going forward.

"Even as explicit forms of racism have become less prevalent, implicit biases remain common and manifest even in people who value equity and inclusion. Organizations, governments, and other institutions pursue solutions that focus almost exclusively on how to change individuals' thoughts and behaviors," Dr. Vuletich says. "Our results corroborate the need to spotlight structures and systems as contributors to bias in our communities."

Despite the community-level focus of this research, Dr. Vuletich also emphasizes the need for people to examine their own prejudices on an individual level.

"Our findings do not exonerate people from responsibility to reduce their biases, but they do exhort them to pursue structural solutions and change."

--

Press may request an embargoed copy at [email protected].

Study: Vuletich, Heidi; Sommet, Nicolas; Payne, Keith. The Great Migration and Implicit Bias in the Northern United States. Social Psychological and Personality Science.

Black Americans More Strongly Associate Threat with Black Men Than with White Men

In the U.S., media often prominently feature Black individuals as dangerous, and widespread cultural stereotypes link Black people to concepts like violence, aggression, and criminality. In research I recently conducted exploring the likely consequences of this pernicious and problematic portrayal, I found that White Americans more strongly link Black men than White men with "dangerous," and more strongly link White men than Black men with "positivity." White Americans also equally link Black and White men with general "negativity," and they more strongly link Black (but not White) men with "dangerous" than with "negative."

So, not only do White Americans hold an automatic Black-threat association, but the Black-threat link is distinct from and stronger than a Black-negative link. In fact, in that work, there was no automatic Black-negative link. These results suggest that White people's quick and automatic evaluation of Black men may be that they appear to pose a threat, rather than that they are disliked.

What About Black Americans' Response to Stereotypes?

Yet Black Americans are exposed to many of the same cultural stereotypes as White Americans. Consequently, like White Americans, Black Americans may come to hold a mental association linking Black to threat. Although it may seem surprising to suggest that members of a group can hold negative attitudes toward their own group, research has shown that members of historically disadvantaged and minoritized groups can sometimes show associations that favor the outgroup. For example, on measures of automatic associations of a group with good versus bad, members of marginalized groups tend to favor the outgroup: elderly people favor the young, people with disabilities favor people without disabilities, and people with obesity favor people of normal weight. And, along these same lines, Black Americans sometimes favor White Americans. Yet recent research suggests that liking another group does not necessarily imply disliking one's own. In other words, Black people may hold a White-positive association without holding a Black-negative association.

Threat and Negativity Are Not the Same

My work shows that threat associations are distinct from, and processed with priority over, negative associations, which means that Black Americans can hold a Black-threat association even in the absence of a Black-negative association. To be clear, if Black Americans associate threat with Black people, that would not suggest the association is accurate, or that Black Americans believe, endorse, or are responsible for it. Nor does it suggest that Black Americans do not also hold a coexisting but weaker White-threat association. Instead, a stronger Black- versus White-threat association may be one consequence of the pernicious presence and influence of the Black-threat stereotype in America. Indeed, my most recent work found that, like White Americans, Black Americans:

  • more strongly link Black than White men with "dangerous"
  • equally link Black and White men with general "negativity"
  • more strongly link Black (but not White) men with "dangerous" versus "negative"

These results suggest that cultural stereotypes linking Black Americans to danger-related concepts like violence, aggression, and criminality may lead to a Black-threat association with outgroup but also ingroup members. Finding a unique Black-threat association implies that, even among Black Americans, Black men may evoke an automatic threat process and rapid threat responses aimed at self-defense.

Individuals raised in the same society are likely to internalize some of the same stereotypes, regardless of whether a stereotype is about their own group or another group. Considering bias from all angles may help explain why, for example, disproportionate police use of force against Black individuals is not limited to encounters with White officers. Instead of revealing implicit disdain or dislike, such occurrences may reflect threat responses that result from the activation of threat associations. Although one might expect Black individuals to be especially motivated to control the effects of bias against their own group, threat-evoking stimuli are extremely impactful, and a person's responses to them can be hard to control. Consequently, the impact of threat-based anti-Black bias may appear in numerous social domains, including jury decisions, hiring decisions, and school discipline decisions.

We can better understand bias in the U.S. by considering threat-based anti-Black bias to be the result of a societal-level issue that shapes associations. These associations can be held by White people, but also by members of other ethnic and racial groups, and indeed even by other Black Americans. Viewed this way, the harmful consequences of such bias can be traced to a systemic societal problem and begin to be addressed at various levels.


For Further Reading

March, D. S. (2022). Perceiving a danger within: Black Americans associate Black men with physical threat. Social Psychological and Personality Science. doi.org/10.1177/19485506221142970

March, D. S., Gaertner, L., & Olson, M. A. (2021). Danger or dislike: Distinguishing threat from valence as sources of automatic anti-Black bias. Journal of Personality and Social Psychology, 121, 984-1004. doi.org/10.1037/pspa0000288.

March, D. S., & Gaertner, L. (2021). A method for estimating the time of initiating correct categorization in mouse-tracking. Behavior Research Methods, 53, 2439-2449. doi.org/10.3758/s13428-021-01575-9.

March, D. S., Gaertner, L., & Olson, M. A. (2018). On the prioritized processing of threat in a Dual Implicit Process model of evaluation. Psychological Inquiry, 29, 1-13. doi.org/10.1080/1047840X.2018.1435680.


David March is an Assistant Professor at Florida State University and studies how people process information, particularly threatening information, and how the unique processing of threatening information affects judgments and behaviors.