Silvia Saccardo (CMU)

Location
1113 Social Science and Humanities Blue Room

Assessing Nudge Scalability: Lessons from Large-Scale RCTs (with Hengchen Dai, Maria Han, Naveen Raja, Sitaram Vangala, and Daniel Croymans)

Field experimentation and behavioral interventions have the potential to inform policy. Yet many initially promising ideas show substantially lower efficacy at scale, reflecting the broader issue of the instability of scientific findings. Here, we identify two important factors that can explain variation in estimated intervention efficacy across evaluations and help policymakers better predict behavioral responses to interventions in their settings. To do so, we leverage data from (1) two randomized controlled trials (RCTs; N=187,134 and 149,720) that we conducted to nudge COVID-19 vaccinations, and (2) 111 nudge RCTs involving approximately 22 million people that were conducted by either academics or a government agency. Across these datasets, we find that nudges’ estimated efficacy is higher when outcomes are more narrowly (vs. broadly) defined and measured over a shorter (vs. longer) horizon, which can partially explain why nudges evaluated by academics show substantially larger effect sizes than nudges evaluated at scale by the government agency. Further, we show that nudges’ impact is smaller among individuals with low baseline motivation to engage in the target behavior—a finding that is masked when focusing only on average effects. Altogether, we highlight that considering how intervention effectiveness is measured and who is nudged is critical to reconciling differences in effect sizes across evaluations and assessing the scalability of empirical findings.
