Research
Under Review
The Unintended Consequences of Labeling AI-Generated Media Online Gabrielle Péloquin-Skulski, Kai Zhou, Ziv Epstein, Adam J. Berinsky, and David G. Rand. Preprint can be accessed here.
Abstract
Media platforms have recently introduced initiatives to label AI-generated media, aiming to increase transparency about content creation. Yet such efforts may carry unintended consequences. AI-generated media often accompany informational content that can vary in veracity. However, labeling may confound perceptions of the media’s authenticity and the content’s veracity, reducing belief in true information. Moreover, since it is not feasible to label all AI-generated media, partial labeling may lead people to assume that the absence of a label implies authenticity and/or veracity. We test for these labeling and implied effects in two survey experiments (N = 11,044), where respondents evaluated political news posts. Labeling decreased perceptions of the authenticity of AI-generated images but also lowered belief in and willingness to share posts—even when the associated claims were true. Furthermore, exposure to partial labeling increased the perceived authenticity of unlabeled content. These results highlight the need for carefully designed labeling practices online.
A Double-Edged Sword: Unpacking the Relationship between Political Sophistication and Belief in Political (Mis)Information Gabrielle Péloquin-Skulski, Chloe Wittenberg, Adam J. Berinsky, Gordon Pennycook, and David G. Rand. Available upon request.
Abstract
Past research offers competing perspectives on the role of political sophistication in receptivity to political (mis)information. The first perspective holds that political sophistication (defined as individuals’ knowledge and ability to reason about politics) serves a protective function, furnishing the domain-specific expertise people need to parse political truth from fiction. The second, by contrast, posits that political sophistication magnifies partisan differences in accuracy judgments, with individuals more likely to deem information true when it aligns with their political predispositions. However, previous work is limited in its ability to adjudicate between these perspectives; most notably, it does not account for the relationship between political and cognitive sophistication, the latter of which is often associated with better discernment of true versus false claims. To address this limitation, we re-analyze twelve observational studies assessing perceptions of nearly 400 true, false, and hyper-partisan news headlines (N = 7,879; 125,202 observations). Overall, we find support for both theoretical perspectives: political sophistication tends to be associated both with enhanced truth discernment—namely, the ability to distinguish true versus false headlines—and with more biased judgments of politically concordant versus discordant headlines. Together, these findings help to reconcile two dominant perspectives in the literature and shed new light on the predictors of political beliefs in a digital age.
The Limits of AI-Persuasion: How Voters React to Synthetic Campaign Media Gabrielle Péloquin-Skulski, Victor Livernoche, Aarash Feizi, Svetlana Zhuk, Kellin Pelrine, André Blais, Reihaneh Rabbany, and Jean-François Godbout. Available upon request.
Publications
Labeling AI-Generated Media Online Chloe Wittenberg, Ziv Epstein, Gabrielle Péloquin-Skulski, Adam J. Berinsky, and David G. Rand. 2025. PNAS Nexus. Paper can be accessed here.
Abstract
Recent advancements in generative AI have raised widespread concern about the use of this technology to spread audio and visual misinformation. In response, there has been a major push among policymakers and technology companies to label AI-generated media appearing online. It remains unclear, however, what types of labels are most effective for this purpose. Here, we evaluate two (potentially complementary) strategies for labeling AI-generated content online: (i) a process-based approach, aimed at clarifying how content was made, and (ii) a harm-based approach, aimed at highlighting content’s potential to mislead. Using two preregistered survey experiments focused on misleading, AI-generated images (total N = 1,759 Americans), we assess the consequences of these different labeling strategies for viewers’ beliefs and behavioral intentions. Overall, we find that all of the labels we tested significantly decreased participants’ belief in the presented claims. However, in both studies, labels that simply informed participants that content was generated using AI tended to have little impact on respondents’ stated likelihood of engaging with their assigned post. Together, these results shed light on the relative advantages and disadvantages of different approaches to labeling AI-generated media online.
Party Preference Representation André Blais, Eric Guntermann, Vincent Arel-Bundock, Ruth Dassonneville, Jean-François Laslier, and Gabrielle Péloquin-Skulski. 2022. Party Politics. Paper can be accessed here.
Abstract
Political parties are key actors in electoral democracies: they organize the legislature and form governments, and citizens choose their representatives by voting for them. How citizens evaluate political parties, and how well the parties they evaluate positively perform, thus provide useful tools to estimate the quality of representation from the individual’s perspective. We propose a measure that can be used to assess party preference representation at both the individual and aggregate levels, both in government and in parliament. We calculate the measure for over 160,000 survey respondents following 111 legislative elections held in 38 countries. We find little evidence that the party preferences of different socio-economic groups are systematically over- or underrepresented. However, we show that citizens on the right tend to have higher representation scores than their left-wing counterparts. We also find that whereas proportional systems do not produce higher levels of representation on average, they reduce variance in representation across citizens.
What Are the Consequences of Snap Elections on Citizens’ Voting Behavior? Jean-François Daoust and Gabrielle Péloquin-Skulski. 2021. Representation. Paper can be accessed here.
Abstract
In some democracies, the ruling party can strategically call a ‘snap’ (or ‘early’) election before the end of its mandate in order to maximise its chances of re-election. Little is known about the consequences of calling such an election. In this article, we contribute to this literature by analysing whether snap elections affect citizens’ voting behaviour. Does anger at the incumbent government’s decision to call an early election affect whether citizens vote and/or their vote choice calculus? To answer these questions, we make use of two different and independently conducted surveys fielded in Canada during a snap election. We do not find evidence that calling an early election reduces citizens’ likelihood of voting. However, when they do decide to vote, citizens who resent the decision to call an early election are substantially more likely to punish the incumbent government.
What Do Voters Do When They Prefer a Leader From Another Party? Jean-François Daoust, André Blais, and Gabrielle Péloquin-Skulski. 2021. Party Politics. Paper can be accessed here.
Abstract
There is little research on voters who display incongruent preferences, that is, those whose preferred leader comes from a party other than their preferred party. We address two questions. How many voters prefer a leader from another party? Do these incongruent voters vote for their preferred party or their preferred leader? We use the Comparative Study of Electoral Systems (CSES) data sets, covering 83 legislative elections over a 20-year period (1996–2016). We find that 17% of the electorate typically prefer a leader from another party. Within that group, the vast majority (80%) end up supporting their preferred party, while 20% support their preferred leader. We find that partisans and those located at the extremes of the political spectrum tend to have more congruent preferences. Moreover, the proportion of incongruent voters who cast their vote for their preferred leader is higher in less established and less polarized countries, as well as among non-partisans. We discuss the implications of our findings for our understanding of the role of parties and leaders in contemporary democracies.
Do (many) voters like ranking? André Blais, Carolina Plescia, John Högström, and Gabrielle Péloquin-Skulski. 2021. Party Politics. Paper can be accessed here.
Abstract
Do (many) voters like ranking? We address this question through an experimental study performed in four countries: Austria, England, Ireland and Sweden. Respondents were invited to participate in three successive elections. They were randomly assigned to one of four possible voting scenarios and asked to vote. The voting scenarios differed in terms of party supply (three or five parties) and the type of vote choice (voting for one party only or the possibility of ranking all parties). After they had voted, respondents were asked about their satisfaction with the party supply and the voting system (using a scale from 0 “not at all satisfied” to 10 “very much satisfied”). We find little difference in overall satisfaction between those elections where people could rank-order the parties and those where they could not.
Logarithmic versus Linear Visualizations of COVID-19 Cases Do Not Affect Citizens’ Support for Confinement Semra Sevi, Marco Mendoza Aviña, Gabrielle Péloquin-Skulski, Emmanuel Heisbourg, Paola Vegas, Maxime Coulombe, Vincent Arel-Bundock, Peter John Loewen, and André Blais. 2020. Canadian Journal of Political Science. Paper can be accessed here.
What is the Cost of Voting? André Blais, Jean-François Daoust, Ruth Dassonneville, and Gabrielle Péloquin-Skulski. 2019. Electoral Studies. Paper can be accessed here.
Abstract
Despite a wealth of literature on the determinants of electoral turnout, little is known about the cost of voting. Some studies suggest that facilitating voting slightly increases turnout, but what ultimately matters is people’s subjective perceptions of how costly voting is. This paper offers a first comprehensive analysis of the subjective cost of voting and its impact on voter turnout. We use data from an original survey conducted in Canada and from the Making Electoral Democracy Work project, which covers 23 elections in five countries. We distinguish between direct costs, which relate to the act of voting itself, and information/decision costs, which relate to the effort required to make an (informed) choice. We find that the cost of voting is generally perceived to be very small, but that those who find voting more difficult are indeed less prone to vote, controlling for a host of other considerations. That impact, however, is relatively small, and the direct cost matters more than the information/decision cost.
Conference Proceedings
A Guide to Misinformation Detection Data and Evaluation Camille Thibault, Jacob-Junqi Tian, Gabrielle Péloquin-Skulski, Taylor Lynn Curtis, James Zhou, Florence Laflamme, Yuxiang Guan, Reihaneh Rabbany, Jean-François Godbout, and Kellin Pelrine. KDD Datasets and Benchmarks Track 2025. Paper can be accessed here.
▸ Runner-Up, Best Paper Award
Abstract
Misinformation is a complex societal issue, and mitigating solutions are difficult to create due to data deficiencies. To address this, we have curated the largest collection of (mis)information datasets in the literature, totaling 75. From these, we evaluated the quality of the 36 datasets that consist of statements or claims, as well as the 9 datasets that consist purely of paragraph-form data. We assess these datasets to identify those with solid foundations for empirical work and those with flaws that could result in misleading and non-generalizable results, such as spurious correlations, or examples that are ambiguous or otherwise impossible to assess for veracity. We find the latter issue is particularly severe and affects most datasets in the literature. We further provide state-of-the-art baselines on all these datasets, but show that regardless of label quality, categorical labels may no longer give an accurate evaluation of detection model performance. Finally, we propose and highlight Evaluation Quality Assurance (EQA) as a tool to guide the field toward systemic solutions rather than inadvertently propagating issues in evaluation. Overall, this guide aims to provide a roadmap for higher-quality data and better-grounded evaluations, ultimately improving research in misinformation detection.