Thanks for visiting! I'm an assistant professor at the London School of Economics and Political Science in the Department of Psychological and Behavioural Science, where I study the nexus between technology, persuasive communication, and attitude and behaviour change. I'm also a research affiliate of the Human Cooperation Lab at the Massachusetts Institute of Technology and the Public Opinion Analytics Lab in the United Kingdom. I currently hold an early career research fellowship from the Leverhulme Trust, under which I'm studying the potential impact of "microtargeted" persuasive communication on voters' attitudes in democracies. If you'd like to know more, feel free to get in touch.
There are widespread fears that conversational AI could soon exert unprecedented influence over human beliefs. Here, in three large-scale experiments (N = 76,977), we deployed 19 LLMs, including some post-trained explicitly for persuasion, to evaluate their persuasiveness on 707 political issues. We then checked the factual accuracy of 466,769 resulting LLM claims. Contrary to popular concerns, we show that the persuasive power of current and near-future AI is likely to stem more from post-training and prompting methods, which boosted persuasiveness by as much as 51% and 27% respectively, than from personalization or increasing model scale. We further show that these methods increased persuasion by exploiting LLMs' unique ability to rapidly access and strategically deploy information, and that, strikingly, where they increased AI persuasiveness they also systematically decreased factual accuracy.
Large language models can now generate political messages as persuasive as those written by humans, raising concerns about how far this persuasiveness may continue to increase with model size. Here, we generate 720 persuasive messages on 10 US political issues from 24 language models spanning several orders of magnitude in size. We then deploy these messages in a large-scale randomized survey experiment (N = 25,982) to estimate the persuasive capability of each model. Our findings are twofold. First, we find evidence that model persuasiveness is characterized by sharply diminishing returns, such that current frontier models are only slightly more persuasive than models an order of magnitude or more smaller. Second, we find that the association between language model size and persuasiveness shrinks toward zero and is no longer statistically significant once we adjust for mere task completion (coherence, staying on topic), a pattern that highlights task completion as a potential mediator of larger models' persuasive advantage. Taken together, and given that current frontier models are already at ceiling on this task-completion metric in our setting, our results suggest that further scaling of model size may do little to increase the persuasiveness of static LLM-generated political messages.
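To make the adjustment in the second finding concrete, here is a minimal regression sketch on simulated data. It is only an illustration of the covariate-adjustment idea: the variable names, effect sizes, and saturating "coherence" curve are hypothetical, not the study's actual data or analysis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 720  # one row per message, mirroring the 720 messages in the study

# Simulated data (illustrative only): task completion improves with log model
# size but saturates, and persuasiveness depends on task completion, not size.
log_size = rng.uniform(8, 13, n)                       # hypothetical log model size
task_completion = 1 / (1 + np.exp(-(log_size - 9.5)))  # saturating coherence score
persuasion = 2.0 * task_completion + rng.normal(0, 1, n)

# Unadjusted: persuasiveness regressed on model size alone.
unadjusted = sm.OLS(persuasion, sm.add_constant(log_size)).fit()

# Adjusted: add the task-completion mediator as a covariate.
X = sm.add_constant(np.column_stack([log_size, task_completion]))
adjusted = sm.OLS(persuasion, X).fit()

print("size coefficient, unadjusted:", round(unadjusted.params[1], 3))
print("size coefficient, adjusted:  ", round(adjusted.params[1], 3))
```

Under these made-up assumptions, the unadjusted coefficient on model size is positive but attenuates toward zero once the task-completion score it operates through enters the model, which is the qualitative pattern the abstract describes.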
The world could witness another pandemic on the scale of COVID-19 in the future, a prospect that has prompted calls for research into how social and behavioral science can better contribute to pandemic response, especially regarding public engagement and communication. Here, we conduct a cost-effectiveness analysis of a familiar tool from social and behavioral science that could potentially increase the impact of public communication: survey experiments. Specifically, we analyze whether a public health campaign that pays for a survey experiment to pretest and choose between different messages for its public outreach has greater impact in expectation than an otherwise-identical campaign that does not. The main results of our analysis are threefold. First, we show that the benefit of such pretesting depends heavily on the values of several key parameters. Second, via simulations and an evidence review, we find that a campaign that allocates some of its budget to pretesting could plausibly increase its expected impact; that is, we estimate that pretesting is cost-effective. Third, we find that pretesting has potentially powerful returns to scale: for well-resourced campaigns, we estimate pretesting is robustly cost-effective, a finding that emphasizes the benefit of public health campaigns sharing resources and findings. Our results suggest that survey-experiment pretesting could cost-effectively increase the impact of public health campaigns in a pandemic; they also have implications for practice and establish a research agenda to advance knowledge in this space.
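The expected-value logic behind this comparison can be sketched in a few lines of simulation. Everything below (the spread of message effects, the pretest noise, the budget, and the cost figures) is a hypothetical illustration, not the parameter values or model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters -- illustrative only, not the paper's estimates.
N_MESSAGES = 5           # candidate messages available to the campaign
TRUE_EFFECT_SD = 0.02    # spread of true per-message persuasive effects
PRETEST_NOISE_SD = 0.01  # sampling error in each pretest estimate
BUDGET = 1_000_000       # total campaign budget (arbitrary units)
PRETEST_COST = 50_000    # cost of the survey-experiment pretest
IMPACT_PER_UNIT = 1.0    # impact generated per unit of deployment spend

def simulate(n_sims=100_000):
    # True effects of each candidate message, drawn fresh each simulation.
    true_effects = rng.normal(0.0, TRUE_EFFECT_SD, size=(n_sims, N_MESSAGES))

    # No pretest: deploy an arbitrary message with the full budget.
    impact_no_pretest = true_effects[:, 0] * BUDGET * IMPACT_PER_UNIT

    # Pretest: noisy estimates of each message, pick the apparent best,
    # then deploy it with the remaining budget.
    estimates = true_effects + rng.normal(0.0, PRETEST_NOISE_SD, size=true_effects.shape)
    best_idx = estimates.argmax(axis=1)
    chosen = true_effects[np.arange(n_sims), best_idx]
    impact_pretest = chosen * (BUDGET - PRETEST_COST) * IMPACT_PER_UNIT

    return impact_no_pretest.mean(), impact_pretest.mean()

no_pretest, pretest = simulate()
print(f"Expected impact without pretest: {no_pretest:,.0f}")
print(f"Expected impact with pretest:    {pretest:,.0f}")
```

Under these assumptions, the pretested campaign deploys the apparent best of several candidate messages and comes out ahead in expectation despite spending part of its budget on the pretest; and because the pretest cost is fixed while deployment impact scales with the remaining budget, the advantage grows with campaign size, mirroring the returns-to-scale point above.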
Political campaigns increasingly conduct experiments to learn how to persuade voters. Little research has considered the implications of this trend for elections or democracy. To probe these implications, we analyze a unique archive of 146 advertising experiments conducted by US campaigns in 2018 and 2020 using the platform Swayable. This archive includes 617 advertisements produced by 51 campaigns and tested with over 500,000 respondents. Importantly, we analyze the complete archive, avoiding publication bias. We find small but meaningful variation in the persuasive effects of advertisements. In addition, we find that common theories about what makes advertising persuasive have limited and context-dependent power to predict persuasiveness. These findings indicate that experiments can compound money's influence in elections: it is difficult to predict ex ante which ads will persuade; experiments help campaigns do so; but the gains from these findings principally accrue to campaigns well-financed enough to deploy the winning ads at scale.
It is widely assumed that party identification and loyalty can distort partisans' information processing, diminishing their receptivity to counter-partisan arguments and evidence. Here we empirically evaluate this assumption. We test whether American partisans' receptivity to arguments and evidence is diminished by countervailing cues from in-party leaders (Donald Trump or Joe Biden), using a survey experiment with 24 contemporary policy issues and 48 persuasive messages containing arguments and evidence (N = 4,531; 22,499 observations). We find that, while in-party leader cues influenced partisans' attitudes, often more strongly than the persuasive messages did, there was no evidence that the cues meaningfully diminished partisans' receptivity to the messages, even though the cues directly contradicted them. Rather, persuasive messages and countervailing leader cues were integrated as independent pieces of information. These results generalized across policy issues, demographic subgroups and cue environments, and they challenge existing assumptions about the extent to which party identification and loyalty distort partisans' information processing.