When I was working as a secondary school teacher, I realized that even seemingly tiny changes in my class could significantly improve students’ motivation and behavior. Take, for example, one of my students who did not seem to be interested in any subject at all. In my class introducing moral philosophy to high schoolers (a challenging subject, even for college students), I employed several strategies (e.g., using easy and informal language) to present myself as an attainable and relevant exemplar for students, based on my own and others’ past research on exemplars. A couple of months later, that student’s homeroom teacher thanked me for the dramatic change in the student’s behavior, saying: “Hyemin, my student stopped sleeping in class! You made such a huge change!” Eventually, this once-disengaged student earned a very good grade in my tricky moral philosophy class.

As this example and other psychological research show, even a small, short-term intervention can change students’ developmental trajectories in the long term. This also means that if an intervention backfires and causes negative outcomes, it could be a long-term disaster for students. Educators and policymakers should therefore be careful when planning to implement newly developed interventions in the real world. That is, they may need to conduct long-term, large-scale experiments in addition to small lab-level experiments in order to gather more data and make better decisions. Unfortunately, conducting such long-term, large-scale experiments is very difficult due to limited time and resources as well as ethical issues.

How can we address this conundrum while developing educational interventions? My new research suggests a potential solution: simulating the potential outcomes of interventions using small datasets collected from lab-level experiments in advance of field experiments. In two recently published studies (Han, Lee, & Soylu, 2016, 2018), conducted in collaboration with a computer scientist and an educational neuroscientist, I developed a simulation tool for predicting the outcomes of educational interventions. In these studies, our team implemented Evolutionary Causal Matrices (ECM), a method proposed in the field of evolutionary psychology, in Python, a widely used programming language. In studies on cultural evolution, ECM have been utilized to simulate long-term transitions among different states in a population, based on matrices representing state transitions between t and t+1. By repeating such a transitional calculation multiple times, we can estimate the long-term outcomes of transitions in a specific system (e.g., at t+100).
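The core of this repeated transitional calculation can be sketched in a few lines of Python. The state labels and matrix values below are invented for illustration only; in our actual studies, the matrices were estimated from experimental data.

```python
import numpy as np

# Hypothetical illustration: entry [i, j] of the transition matrix is the
# probability that a student in state i at time t is in state j at time t+1.
# The three states here are made up for the example:
# 0 = disengaged, 1 = occasionally engaged, 2 = regularly volunteering.
transition = np.array([
    [0.70, 0.25, 0.05],
    [0.10, 0.70, 0.20],
    [0.05, 0.15, 0.80],
])

# Initial distribution of the student population across the three states.
population = np.array([0.6, 0.3, 0.1])

# Repeating the transition step many times estimates the long-term
# distribution of the population (e.g., at t+100).
for _ in range(100):
    population = population @ transition

print(population)
```

Because each row of the matrix sums to 1, the population vector remains a valid probability distribution at every step, which is what lets the repeated multiplication stand in for a long-term forecast.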

In our studies, we used a dataset collected from short-term lab and classroom experiments testing moral educational interventions. The simulations were performed to examine, first, which types of moral educational interventions promote students’ voluntary service engagement more effectively than others, and second, how often such interventions should be conducted to produce a large effect. In addition, we developed Python classes that enable potential users to conduct similar simulations with their own datasets. We reported the simulated outcomes of different types of interventions conducted at different frequencies over years.
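A user-facing class for this kind of comparison might look like the sketch below. The class name, method names, and parameters are hypothetical, not the actual API from our published code; the sketch assumes each intervention condition is summarized by its own transition matrix, with a baseline matrix applied in periods when no intervention occurs.

```python
import numpy as np

class InterventionSimulator:
    """Hypothetical sketch of a simulation class for comparing
    intervention conditions and frequencies (illustrative API only)."""

    def __init__(self, baseline, interventions):
        # baseline: transition matrix for periods without an intervention.
        # interventions: dict mapping condition name -> transition matrix
        # estimated from that condition's experimental data.
        self.baseline = np.asarray(baseline, dtype=float)
        self.interventions = {
            name: np.asarray(m, dtype=float) for name, m in interventions.items()
        }

    def run(self, initial, condition, every=1, steps=100):
        # Apply the chosen condition's matrix every `every` steps and the
        # baseline matrix in between, for `steps` total steps.
        state = np.asarray(initial, dtype=float)
        matrix = self.interventions[condition]
        for t in range(1, steps + 1):
            state = state @ (matrix if t % every == 0 else self.baseline)
        return state
```

Running `run` with the same dataset but different `condition` and `every` values is one way to compare intervention types and frequencies, which mirrors the two questions our simulations were designed to answer.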

Of course, because simulated outcomes are estimated from relatively small experiments, errors are inevitable. In addition, the simulation method that we developed can only take simple factors into account, such as intervention type and frequency, due to the limitations of ECM. So simulated outcomes do not necessarily show exactly what will happen in reality. However, they offer researchers and educators who intend to test intervention outcomes in large-scale educational settings insights into how to formulate hypotheses and design experiments. Future research can help to address these limitations. For instance, applying machine learning and deep learning methods would allow simulations to take diverse human and environmental factors into account with improved prediction accuracy. Once we have a more sophisticated simulation tool, researchers and educators will have better knowledge about how to design more effective educational programs.


Dr. Hyemin Han is Assistant Professor in Educational Psychology and Educational Neuroscience at the University of Alabama. As an interdisciplinary researcher interested in the improvement of education, Dr. Han conducts research projects in Social, Emotional, and EDucational (SEED) Neuroscience. His research interests include the neuroscience of morality, socio-moral development, growth mindset, educational intervention, computational simulation, and professional ethics education.