Thanks for visiting! I’m an assistant professor in the Department of Psychological and Behavioural Science at the London School of Economics, where I study the nexus between technology, persuasive communication, and attitude and behaviour change. I’m also a research affiliate of the Human Cooperation Lab at the Massachusetts Institute of Technology, and I currently hold an early career research fellowship from the Leverhulme Trust, under which I’m studying the impact of “microtargeted” persuasive communication on voters’ attitudes and behaviour in democracies. If you’d like to know more, please get in touch.
The world could witness another pandemic on the scale of COVID-19 in the future, prompting calls for research into how social and behavioral science can better contribute to pandemic response, especially regarding public engagement and communication. Here, we conduct a cost-effectiveness analysis of a familiar tool from social and behavioral science that could potentially increase the impact of public communication: survey experiments. Specifically, we analyze whether a public health campaign that pays for a survey experiment to pretest and choose between different messages for its public outreach has greater impact in expectation than an otherwise-identical campaign that does not. The main results of our analysis are threefold. First, we show that the benefit of such pretesting depends heavily on the values of several key parameters. Second, via simulations and an evidence review, we find that a campaign that allocates some of its budget to pretesting could plausibly increase its expected impact; that is, we estimate that pretesting is cost-effective. Third, we find that pretesting has potentially powerful returns to scale; for well-resourced campaigns, we estimate pretesting is robustly cost-effective, a result that underscores the benefit of public health campaigns sharing resources and findings. Our results suggest that survey experiment pretesting could cost-effectively increase the impact of public health campaigns in a pandemic; they also have implications for practice and establish a research agenda to advance knowledge in this space.
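To give a flavour of the logic behind this kind of analysis, the sketch below simulates a campaign that either deploys a randomly chosen message or pretests its candidate messages first and deploys the winner with a slightly reduced budget. All of the parameter values (number of messages, effect sizes, pretest noise, budget shares) are illustrative assumptions made for the sketch, not the values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions (not the paper's parameter values):
K = 5                 # candidate messages
MEAN_EFFECT = 0.01    # average true per-person persuasive effect
SIGMA_TRUE = 0.02     # s.d. of true effects across messages
SIGMA_NOISE = 1.0     # s.d. of an individual pretest response
N_PRETEST = 1_000     # pretest respondents per message
REACH = 1_000_000     # people reached if the full budget goes to deployment
PRETEST_SHARE = 0.05  # share of the budget diverted to pretesting

def simulate_once():
    effects = rng.normal(MEAN_EFFECT, SIGMA_TRUE, K)  # true effect of each message
    # Campaign A: no pretest; picks a message at random and spends the full budget.
    impact_no_pretest = REACH * effects[rng.integers(K)]
    # Campaign B: pretests all K messages, picks the one with the highest
    # estimated effect, then deploys with the remaining (smaller) budget.
    estimates = effects + rng.normal(0.0, SIGMA_NOISE / np.sqrt(N_PRETEST), K)
    impact_pretest = (1 - PRETEST_SHARE) * REACH * effects[np.argmax(estimates)]
    return impact_no_pretest, impact_pretest

results = np.array([simulate_once() for _ in range(20_000)])
print("Expected impact without pretesting:", results[:, 0].mean())
print("Expected impact with pretesting:   ", results[:, 1].mean())
```

Under assumptions like these, the pretesting campaign trades a small loss of reach for an above-average message; whether that trade pays off in expectation depends on exactly the kinds of parameters the analysis varies (how much messages differ, how noisy the pretest is, and what the pretest costs).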
Large language models can now generate political messages as persuasive as those written by humans, raising concerns about how far this persuasiveness may continue to increase with model size. Here, we generate 720 persuasive messages on 10 U.S. political issues from 24 language models spanning several orders of magnitude in size. We then deploy these messages in a large-scale randomized survey experiment (N = 25,982) to estimate the persuasive capability of each model. Our findings are twofold. First, we find evidence of a log scaling law: model persuasiveness is characterized by sharply diminishing returns, such that current frontier models are barely more persuasive than models an order of magnitude or more smaller. Second, mere task completion (coherence, staying on topic) appears to account for larger models’ persuasive advantage. These findings suggest that further scaling model size will do little to increase the persuasiveness of static LLM-generated messages.
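As a rough illustration of what a log scaling law implies (the coefficients below are invented for the sketch and are not estimates from the study), persuasiveness grows linearly in the logarithm of model size, so each further tenfold increase in parameters buys the same small absolute gain:

```python
import numpy as np

# Hypothetical log scaling law: persuasive effect (percentage points of
# attitude change) as a linear function of log10(parameter count).
# beta0 and beta1 are made-up values for illustration only.
beta0, beta1 = 2.0, 0.8

for params in [1e8, 1e9, 1e10, 1e11, 1e12]:
    effect = beta0 + beta1 * np.log10(params / 1e8)
    print(f"{params:12.0e} parameters -> predicted effect: {effect:.1f} pp")
```

Under this toy curve, going from 10^8 to 10^9 parameters yields the same 0.8-point gain as going from 10^11 to 10^12, so the relative improvement at the frontier is small — the pattern of sharply diminishing returns described above.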
Political campaigns increasingly conduct experiments to learn how to persuade voters. Little research has considered the implications of this trend for elections or democracy. To probe these implications, we analyze a unique archive of 146 advertising experiments conducted by US campaigns in 2018 and 2020 using the platform Swayable. This archive includes 617 advertisements produced by 51 campaigns and tested with over 500,000 respondents. Importantly, we analyze the complete archive, avoiding publication bias. We find small but meaningful variation in the persuasive effects of advertisements. In addition, we find that common theories about what makes advertising persuasive have limited and context-dependent power to predict persuasiveness. These findings indicate that experiments can compound money’s influence in elections: it is difficult to predict ex ante which ads will persuade; experiments help campaigns do so; and the gains from doing so accrue principally to campaigns well-financed enough to deploy the most persuasive ads at scale.
Much concern has been raised about the power of political microtargeting to sway voters’ opinions, influence elections, and undermine democracy. Yet little research has directly estimated the persuasive advantage of microtargeting over alternative campaign strategies. Here, we do so using two studies focused on U.S. policy issue advertising. To implement a microtargeting strategy, we combined machine learning with message pretesting to determine which advertisements to show to which individuals to maximize persuasive impact. Using survey experiments, we then compared the performance of this microtargeting strategy against two other messaging strategies. Overall, we estimate that our microtargeting strategy outperformed these strategies by an average of 70% or more in a context where all of the messages aimed to influence the same policy attitude (Study 1). Notably, however, we found no evidence that targeting messages by more than one covariate yielded additional persuasive gains, and the performance advantage of microtargeting was primarily visible for one of the two policy issues under study. Moreover, when microtargeting was used instead to identify which policy attitudes to target with messaging (Study 2), its advantage was more limited. Taken together, these results suggest that the use of microtargeting—combining message pretesting with machine learning—can potentially increase campaigns’ persuasive influence and may not require the collection of vast amounts of personal data to uncover complex interactions between audience characteristics and political messaging. However, the extent to which this approach confers a persuasive advantage over alternative strategies likely depends heavily on context.
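The abstract above describes the microtargeting strategy only at a high level; a minimal sketch of the general approach — pretest data plus a machine-learning model used to pick a message for each individual — might look like the following. The data, covariates, and model choice here are illustrative assumptions for the sketch, not the study's implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Hypothetical pretest data: covariates X, the randomly assigned message each
# respondent saw, and their post-treatment attitude. Shapes and effects are toy.
n, n_messages = 5_000, 4
X = rng.normal(size=(n, 3))                 # e.g. age, ideology, education (standardised)
message = rng.integers(n_messages, size=n)  # randomly assigned message id
attitude = (0.2 * X[:, 1] * (message == 2)  # toy heterogeneous effect
            + 0.1 * (message == 0)
            + rng.normal(scale=1.0, size=n))

# Fit one outcome model with message id as a feature, allowing interactions
# between message and covariates to be learned from the pretest data.
features = np.column_stack([X, message])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(features, attitude)

def best_message(x):
    """Predict the attitude under each candidate message and return the best one."""
    candidates = np.column_stack([np.tile(x, (n_messages, 1)),
                                  np.arange(n_messages)])
    return int(np.argmax(model.predict(candidates)))

print(best_message(np.array([0.0, 1.5, -0.3])))  # message chosen for one new person
```

The targeting rule is simply to predict the outcome under each candidate message for a given person and serve the message with the highest prediction; the random assignment in the pretest is what licenses reading those predicted differences as persuasive effects.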
It is widely assumed that party identification and loyalty can distort partisans’ information processing, diminishing their receptivity to counter-partisan arguments and evidence. Here we empirically evaluate this assumption. We test whether American partisans’ receptivity to arguments and evidence is diminished by countervailing cues from in-party leaders (Donald Trump or Joe Biden), using a survey experiment spanning 24 contemporary policy issues and 48 persuasive messages containing arguments and evidence (N = 4,531; 22,499 observations). We find that, while in-party leader cues influenced partisans’ attitudes, often more strongly than the persuasive messages did, there was no evidence that the cues meaningfully diminished partisans’ receptivity to the messages, even though the cues directly contradicted the messages. Rather, persuasive messages and countervailing leader cues were integrated as independent pieces of information. These results generalized across policy issues, demographic subgroups, and cue environments, and they challenge existing assumptions about the extent to which party identification and loyalty distort partisans’ information processing.