It has been just over a year since a group of radicalized individuals stormed the U.S. Capitol on January 6, 2021. The crucial role of social media in this incident is undeniable: far-right platforms such as Gab and Parler were used to communicate exactly which streets to take to avoid the police, and some users posted about carrying guns into the halls of Congress. Bolstered by former President Trump, far-right groups had organized on their trusted social-media networks and invited others to join their “rightful” cause. Eventually, this online activism became real-world violence.

My colleagues and I at the University of Southern California delved into social-media postings to see whether we could find underlying psychological clues to what motivates people to use extreme and violent tactics. A close look at the history of hate crimes and radical groups, together with our research, points to common ground among them: a shared moral vision, that is, adherence to a set of guiding principles perceived to be held by all group members. This vision then motivates individuals to use radical or violent strategies to achieve it. In other words, people embedded in morally homogeneous environments may develop dichotomous thinking (a “friend or foe” mindset) and tunnel vision, focusing all their efforts exclusively on the destruction of their opponents for a sacred purpose.

We studied nearly 25 million posts on Gab using computational methods for analyzing language use, and found that the more a person’s language aligned with their group’s moral values, the more prone that person was to use hateful, derogatory language toward oft-targeted minoritized social groups. In other words, we found that the more people are embedded in morally homogeneous “bubbles,” the more likely they are to resort to radical means and verbal violence against others in pursuit of their prejudicial vision.

Studying radicalized networks like Gab is particularly important for understanding the underpinnings of radicalization. As mainstream networks such as Twitter and Facebook began to limit the activity of groups such as QAnon, adherents of these ideologies slowly resurfaced on other networks that allowed them to openly call for violence under the guise of “freedom of speech.”

To make sure that our results were not limited to the idiosyncratic features of Gab, we repeated our analyses on a different social-media network, “Incels,” founded for “involuntary celibates.” While at first sight Incels might seem less harmful than Gab, that may not be the case: the incel ideology has inspired multiple instances of deadly violence. Elliot Rodger, for example, killed six people and injured fourteen (before killing himself) in 2014 to instigate a “War on Women” for, in his words, “depriving me of sex.” Examining over 900,000 posts in this online community, we again found that Incels users in a “bubble,” where their beliefs and values are strongly reinforced, are more prone to post hate speech calling for radical acts to defend heterosexual men and for violence against women. In these morally homogeneous environments, individuals feed one another’s moralized visions of the world and come to feel that others in their group are like family, a “band of brothers.”

Taking Our Questions Into The Laboratory

After we uncovered these antecedents of hate speech and calls for violence on Gab and Incels, we wanted to better understand the mechanism at play, so we designed several experiments. We asked people to imagine that they had been invited to a Facebook group and that others in this community shared their moral values. In a comparison group, we described the same Facebook group but told participants that few members shared their moral values. We then asked participants about their intentions to use extreme and illegal measures to protect this hypothetical group. People led to believe that they were in a group with shared values reported stronger radical intentions to protect the group at any cost, even accepting violent means.

In another experiment, we asked U.S. participants to first choose the moral value most important to them among five: care and compassion, fairness and justice, loyalty to the group, respect for authority, and physical and spiritual purity. We then led them to believe that other Americans also shared their selected value; a comparison group was told that few Americans shared it. Again, people told that the majority of Americans shared their particular value showed increased radical intentions, and they even became slightly more willing to “fight and die” for the United States and the values it stands for. This was the case regardless of which moral value people chose and whether they were liberal or conservative in their political views. We learned two important things:

  • Morality is unique in motivating extreme behavioral expressions of prejudice. Non-moral views (those about mere preference rather than principles of right and wrong) may not have the power to drive people to extremes. Diversity of moral worldviews within social networks may therefore be a good next step toward preventing the formation of moral echo chambers.
  • Social-media networks have rewired our social lives, and they can give us a false image of our social world. Too much similarity of views in our feeds can create the impression that “everyone thinks like me” and that “everyone who does not think like me is evil,” which can worsen political polarization and erode our ability to tell truth from falsehood.

Real-World Threats Of Online Radicalization

The storming of the U.S. Capitol is a good example of how online radicalization breeds physical violence: those who were convinced that the 2020 presidential election was stolen from former President Trump organized online using the hashtag #StoptheSteal (signaling their effort to stand up to injustice) on Gab and other platforms, which served as hubs for organizing the insurrection.

These Trump supporters acted because they were presumably deeply convinced that the presidential election had been stolen, a moral transgression. They thought that someone needed to do something to restore order to their country, and that they should “go in.” The insurrection demonstrated how moralization in echo chambers can lead to violence, and to death chants for the Vice President of the United States in the halls of Congress.


For Further Reading

Atari, M., Davani, A. M., Kogon, D., Kennedy, B., Ani Saxena, N., Anderson, I., & Dehghani, M. (2021). Morally homogeneous networks and radicalism. Social Psychological and Personality Science. https://doi.org/10.1177/19485506211059329

Hoover, J., Atari, M., Davani, A. M., Kennedy, B., Portillo-Wightman, G., Yeh, L., & Dehghani, M. (2021). Investigating the role of group-based morality in extreme behavioral expressions of prejudice. Nature Communications, 12(1), 1–13. https://doi.org/10.1038/s41467-021-24786-2

Fiske, A. P., & Rai, T. S. (2015). Virtuous violence. Cambridge, England: Cambridge University Press.


Mohammad Atari is a social psychologist and currently a Postdoctoral Fellow in the Department of Human Evolutionary Biology at Harvard University, where he studies cultural change and moral values using experimental methods and natural language processing.