From health care professionals to political pundits, policy advisors to sports commentators, advisors are often portrayed as experts in their respective fields. These experts can make surprisingly inaccurate predictions about the future, yet people continue to trust their predictions. For example, investors make stock investments based on the advice of financial “gurus” whose forecasts are no better than chance, and consumers adopt questionable health practices recommended by medical talk shows. This behavior is puzzling – why do people hurt themselves by repeatedly listening to the advice of “experts” who perform no better than chance?

In a series of three experiments, we explored the causes and consequences of people’s undue optimism about advisors. We had participants predict the price fluctuations of a fictitious stock in an experimental task that mimicked real-world financial decision-making. Participants could rely on trends in the stock as well as advice from financial advisors to make their predictions. The advisors varied in their accuracy, performing reliably above, at, or below chance. Despite encountering each advisor multiple times, participants consistently overestimated the advisors’ expertise, following their advice more than was warranted by their actual performance.
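To make the setup concrete, here is a minimal sketch of how an advisor of a given accuracy could be simulated (in Python, with made-up parameters; this is an illustration, not the actual experimental code): on each trial the stock moves up or down, and the advisor recommends the correct direction with a fixed probability.

```python
import random

def simulate_trial(advisor_accuracy, p_up=0.5):
    """One trial: the stock goes up or down, and the advisor recommends
    the correct direction with probability advisor_accuracy."""
    outcome = "up" if random.random() < p_up else "down"
    if random.random() < advisor_accuracy:
        advice = outcome                              # advisor is correct
    else:
        advice = "down" if outcome == "up" else "up"  # advisor is wrong
    return outcome, advice

# Hypothetical advisors performing above, at, and below chance
for accuracy in (0.75, 0.50, 0.25):
    trials = [simulate_trial(accuracy) for _ in range(1000)]
    hit_rate = sum(o == a for o, a in trials) / len(trials)
    print(f"advisor accuracy {accuracy:.2f} -> observed hit rate {hit_rate:.2f}")
```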

Using computational models, we dynamically tracked participants’ beliefs about the advisors over the course of the experiment. This allowed us to tease apart two distinct sources of bias – (i) optimistic initial expectations about the advisors and (ii) confirmation bias in how those expectations are updated. Both components were necessary for participants to develop persistently inflated beliefs about the advisors. If participants’ initial beliefs were not optimistic, confirmation bias alone would not produce systematically biased beliefs. Conversely, if participants had optimistic initial beliefs but updated them appropriately, they would arrive at an accurate estimate of the advisors’ expertise over time.
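As a rough illustration of this logic (a simplified sketch, not the actual model we fit to participants’ choices), consider a delta-rule learner whose starting belief encodes initial expectations and whose learning rate is larger for belief-confirming than for belief-disconfirming evidence. With an optimistic starting belief, such a learner keeps overestimating an at-chance advisor; with a neutral starting belief, the same confirmation bias produces no systematic optimism.

```python
import random

def update_belief(belief, advisor_correct, lr_confirm=0.20, lr_disconfirm=0.05):
    """Delta-rule update of the estimated probability that the advisor is correct.
    Confirmation bias: evidence consistent with the current impression (a hit when
    the advisor is believed to be good, a miss when believed to be bad) is weighted
    more heavily than disconfirming evidence."""
    confirms = (advisor_correct and belief > 0.5) or (not advisor_correct and belief < 0.5)
    lr = lr_confirm if confirms else lr_disconfirm
    outcome = 1.0 if advisor_correct else 0.0
    return belief + lr * (outcome - belief)

def simulate(prior, true_accuracy=0.5, n_trials=100):
    belief = prior
    for _ in range(n_trials):
        belief = update_belief(belief, advisor_correct=(random.random() < true_accuracy))
    return belief

random.seed(0)
# An optimistic prior (0.8) about an at-chance advisor stays inflated under
# confirmation bias; a neutral prior (0.5) is not systematically biased upward.
for prior in (0.8, 0.5):
    avg = sum(simulate(prior) for _ in range(500)) / 500
    print(f"prior {prior:.1f} -> average belief after 100 trials: {avg:.2f}")
```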

Having explored the cognitive processes underlying optimism biases in advice-taking, we wanted to investigate strategies to help people incorporate advice more optimally. We reasoned that if optimistic initial expectations indeed drove the unrealistic optimism in advice-taking, manipulating participants’ initial expectations could mitigate these biases. In Experiment 2, we confirmed this prediction by showing that participants were no longer optimistically biased when we provided them with accurate expectations about each advisor.  

In Experiment 3, we tried an alternative approach to calibrating participants’ expectations. Previous work suggests that the aggregated opinion of many individuals can come remarkably close to the truth, a phenomenon commonly known as “the wisdom of the crowd”. We thus collected Yelp-like star ratings of each advisor from an initial group of participants who completed the task, and presented the average ratings to a second group of participants at the start of the experiment. The hope was that these aggregated, crowdsourced ratings would provide an accurate impression of the advisors’ expertise, helping the second group avoid developing overly optimistic beliefs about the advisors.

We found, however, that the ratings from the first group were themselves optimistically biased. Instead of calibrating the second group’s expectations, they propagated and even exaggerated the over-optimism about the advisors. As a result, participants in the second group were even more optimistically biased than the initial group. These results suggest one explanation for the longevity of “expert” advisors – the media propagates optimistic expectations about them, expectations that can be wrong yet resistant to change.

How, then, can people guard against becoming optimistically blinded when taking advice? In the book “Superforecasting”, Tetlock and Gardner found that experts made more accurate predictions when they had to make quantitative forecasts for which they were then held accountable. Here, we argue that similar strategies can help advice-takers decide when and from whom to take advice. Relying on hearsay, intuition and general impressions is likely to result in bias. Instead, we ought to tabulate and make public quantitative metrics of advisor performance, so that advice-takers can consider them when deciding whether to act on a piece of advice. Expert advisors can often be helpful, but knowing when they are not will help advice-takers discern how to incorporate advice when making choices.


Yuan Chang Leong is a graduate student in Professor Jamil Zaki’s Stanford Social Neuroscience Lab. His research combines behavioral experiments, computational modeling and neuroimaging to study how motivations and expectations influence learning and decision-making.

Jamil Zaki is an Assistant Professor of Psychology at Stanford University. His research focuses on the cognitive and neural bases of social behavior, and in particular on how people respond to each other's emotions (empathy), why they conform to each other (social influence), and why they choose to help each other (prosociality).