Eliot Smith

Eliot R. Smith is Distinguished Professor Emeritus of Psychological and Brain Sciences, Indiana University, Bloomington. He moved to Indiana in 2003 after 21 years on the faculty at Purdue University. His research interests include prejudice and intergroup relations, especially the role of group-based emotions; person perception and stereotyping; and computational modeling. He has served as Editor of Personality and Social Psychology Review and JPSP: Attitudes and Social Cognition, and is currently Associate Editor of Journal of Experimental Psychology: General.

Do you have a favorite conference memory or story?

At the third SPSP conference, in Savannah in 2002, I met with the executive committee to present a plan for a Summer Institute in Social Psychology. The idea was based on, frankly, the envy that some of us felt for the overwhelmingly successful biennial Summer School sponsored by the European Association for Social Psychology, which has played a huge role in both transmitting knowledge and building friendship and collaboration networks among European social psychologists. The plan was accepted, and some of us wrote a grant proposal that SPSP submitted to the National Science Foundation. That funding launched the ongoing series of Summer Institutes (now SISPP), the first of which was held at the University of Colorado in 2003. Graduate students, consider attending SISPP!

Do you have a favorite course to teach and why?

Besides specialized graduate courses, my favorite has been an advanced undergraduate course on the social psychology of public opinion. I designed it based on a course taught by Tom Pettigrew, my PhD advisor, that I TA’d for in the early 1970s. It lets me combine methodological material like writing questions, the science of self-report, and survey sampling with substantive content from social psychology like social influence and the role of ideology in judgments. I also present and critique current polling data in virtually every class session. Teaching this course makes me realize how much fascinating work on political psychology is going on in our field today.

What are your current research interests?

My long-time collaboration with Diane Mackie on emotions and intergroup relations continues, and we are working together on two papers right now (to be submitted or resubmitted this summer). In a relatively new direction, my colleague Selma Šabanović and I have an NSF-funded project examining human-robot interaction within the theoretical framework of intergroup relations. That has led to a number of publications, the most recent being Smith, Sherrin, Fraune, and Šabanović (PSPB, 2020)—a paper examining whether positive or negative emotions are stronger predictors of willingness to interact with robots. I also have major research interests in person perception and social cognition, illustrated by a recent paper by Smith and Mackie (PSPR, 2016) that presents a novel model of social influence.

Outside of psychology, how do you like to spend your free time? 

I retired at the end of 2018, imagining that I would have plenty of free time to garden, travel, visit family, and enjoy music (perhaps brush up my rusty piano skills).  In reality, I am keeping as busy as ever.  I am still doing a lot of research and writing—perhaps even more given my freedom from teaching and administrative responsibilities.  Many days are still filled with working on manuscripts with colleagues, as well as some journal editing and reviewing duties.  In the pandemic I do find time to maintain a small garden, and hope to get to that piano playing someday.

How Do Robots and Humans Interact?

To what extent do people identify with or against robots? Can we take a robot’s perspective? Do we see robots as moral beings?

Xuan Zhao, who studies perspective-taking, empathy, and prosocial behaviors, launched the session by highlighting the theoretical and practical relevance of examining human-robot interaction.

Arguing that studying robots provided “a great opportunity for us to understand what makes us fundamentally human — and perhaps what isn't so unique about humans,” Zhao described her research with Bertram Malle at Brown University, which examines the appeal of human resemblance.

Zhao said, “Previous research at the intersection of psychology, robotics, and design found that robots with more human-like appearance and movements are often evaluated as having more mind, intelligence, and positive human characteristics. However, we still know fairly little about the deeper psychological mechanisms triggered by human-like appearance.”

Thus, Zhao studied how likely people are to see the world from the vantage points of a variety of different agents: human agents, a highly human-like android robot, a highly human-like mannequin, two moderately human-like robots, a non-human-like mechanical robot, and a cat.

Over seven studies, Zhao found that the more an agent appears human-like, the more people consider and adopt its perceptual point of view. Even when participants were made to believe that the human-like robot was simply a mannequin, they still took its perspective, despite knowing it had no mind.

Thus, human-like appearance is a strong cue that directly triggers visual perspective-taking. Human visual perspective-taking may then help with understanding robots, predicting their behavior, and interacting with them in a shared space.

If we can take a robot’s perspective based on appearance, how do we categorize ourselves in a world of robots? Kurt Gray spoke on people’s perceptions of robot workers and how those perceptions affected individuals’ perceptions of other people.

Gray and his team, Joshua Jackson and Noah Costello, had participants rate their anxiety about the rise of robot workers, and then rate their anxiety about other human groups, such as people of a different race, gender, or religion.

Gray found that people were more likely to place themselves in the same category as humans who were different from them when they reported more anxiety about the rise of robot workers. He posited that humans were perhaps uniting against the superordinate threat of robot workers.

Can robots make moral decisions?

Bertram Malle spoke on “evaluating the morality of artificial agents’ decisions.” Malle wrote a hypothetical scenario in which a drone, an artificial intelligence, or a human pilot had to decide whether to launch or cancel a strike that was likely to prevent terrorists from killing civilians but also had an 80% chance of killing a civilian child. The agent was given permission by military command to launch the strike and then decided either to launch it or to cancel it. After reading the scenario, participants evaluated this decision on a sliding scale of blame, from no blame at all to the most blame possible.

Participants were then asked, “Why do you feel the [agent] deserves this amount of blame?” Between 25% and 50% of participants denied that either machine, the AI or the drone, had moral agency and thus could bear any blame. Breaking the results down by agent and by decision, Malle said, “The human received significantly more blame for canceling than launching the strike whereas artificial agents received equal amounts of blame.”

Malle hypothesized that humans, but not artificial agents, are seen as embedded in the command structure, and therefore the human received less blame for launching (justified by the commanders’ recommendation) and more blame for canceling (going against the recommendation).

Thus, the human pilot was judged within a chain of command. In the second study, when the justifications (from the command structure) were removed from the scenario, humans and artificial agents received similar amounts of blame.


Written By: Elisa Rapadas (elisarapadas.me)

Presentation: Merging Psychology and Robotics: Evidence for How Humans Perceive Robots, a symposium held Friday, March 2, 2018.

Speakers: Dr. Xuan Zhao, University of Chicago Booth School, Dr. Melissa Ferguson, Cornell University, Dr. Kurt Gray, UNC Chapel Hill, Dr. Bertram Malle, Brown University

Friend or Foe: Do Humans Like Robots?

It looks like a scene straight out of a sci-fi movie—you go to work, and report to your robot boss. Based on your work last month, it tells you, in beeps and boops, your performance is subpar. You go for lunch and order a coffee. This time, another robot gets it for you, brewed to the perfect milk-to-coffee ratio you like.

Though it may seem crazy, robots are increasingly part of everyday life. Robots help serve food, man hotels, and even make hiring decisions in the workplace. But how humans feel about them remains complicated.

Robot Bosses

Take, for instance, robot bosses that have the power to evaluate your job performance. In a recent study, I tested how people reacted to receiving negative feedback from a robot supervisor. Participants completed a task and then had their performance criticized by either an anthropomorphized robot (which had more humanlike characteristics like a human name and voice and an animated face), a mechanistic robot, or a human supervisor.

What I found was that people did not like the anthropomorphized robot or human supervisor very much, and, given the option to power down the robot supervisor in retaliation, most participants did. Interestingly, participants reacted much less negatively toward the mechanistic robot, because they felt it did not intend to hurt them by delivering negative feedback.

But you may think, "Wait, a robot can't actually intend to do anything. It doesn't have a mind." This is an example of how we do (or don't) attribute a mind to another entity. Whether or not you perceive a robot as having a mind of its own depends on your judgments of its agency (can it think and act as it wishes?) and subjective experience (can it feel emotions?).

In my study, anthropomorphized robots were more likely to be seen as possessing agency—and behaving intentionally—than non-anthropomorphized robots. In other words, having a humanlike name and face made a robot supervisor seem more deliberate in its abuse, which in turn evoked bitter, retaliatory behaviors from the person under its supervision.

What About Robot Co-Workers?

Does working with them—as opposed to for them—help? The short answer is no. In another study, I found that mere exposure to the idea of robots in workplaces made people feel insecure about their jobs.

In one study, participants working at an automobile manufacturing company reported that the more they interacted with robots on a daily basis, the more insecure they felt about keeping their jobs. Job insecurity also led to burnout and uncivil workplace behaviors. Another study in the United States revealed that areas with the highest rates of robots also had the highest rates of people searching job recruiting sites (a proxy for how insecure people might feel about their jobs), even though unemployment rates weren't higher in those areas. We replicated these findings across industries and countries. Interestingly, we also found that robots elicit greater fear among laypeople, above and beyond known job threats such as immigrants, younger employees, and even intelligent algorithms.

Seemingly, we do not like robot co-workers much either. Despite the popular, spectacular claims that robots will improve our lives, the reality of our relationship with robots looks bleak. If we perceive some robot bosses as potential abusers and robot co-workers as job-stealers, then where can we co-exist with robots at all?

Robots that Serve Us?

Turns out, there is a type of robot that humans like more: robot servers. Think of robots that bring your food order, and robots that take your luggage to your hotel room. Known as service robots, these robots are designed specifically to help humans.

In an earlier study conducted at Henn na Hotel in Japan, the first robot-staffed hotel in the world, I surveyed hotel guests on how satisfied they were with their stay. Here, I found that guests were not only generally satisfied, but more so when they anthropomorphized the robots that serve them (by thinking about or treating them as if they were human). 

This time around, seeing a robot as more humanlike made people think it had feelings and the capacity to experience them. That, in turn, made them like the robots more and made them more forgiving of any errors the robots made.

So, 'Do humans like robots?' The answer depends not so much on the type of robot as on the context in which we interact with it. In contexts where robots can criticize us or potentially replace us, we see them as threats. In contexts where robots can serve us, we see them as "friends." Ultimately, whether we cower at robots or welcome them with open arms seems largely to depend on what we believe they can do to us or for us.


For Further Reading

Yam, K. C., Goh, E. Y., Fehr, R., Lee, R., Soh, H., & Gray, K. (2022). When your boss is a robot: Workers are more spiteful to robot supervisors that seem more human. Journal of Experimental Social Psychology, 102, 104360. https://doi.org/10.1016/j.jesp.2022.104360

Yam, K. C., Tang, P. M., Jackson, J. C., & Gray, K. (2023). The rise of robots increases job insecurity and maladaptive workplace behaviors: Multimethod evidence. Journal of Applied Psychology. https://doi.org/10.1037/apl0001045

Yam, K. C., Bigman, Y. E., Tang, P. M., Ilies, R., De Cremer, D., Soh, H., & Gray, K. (2021). Robots at work: People prefer—and forgive—service robots with perceived feelings. Journal of Applied Psychology, 106(10), 557. http://dx.doi.org/10.1037/apl0000834


Kai Chi (Sam) Yam is an Associate Professor of Management at the National University of Singapore Business School. His research focuses on behavioral ethics, leadership, humor, and the future of work.