Artificial Intelligence
Post-Doctoral Fellowships
Australia
Confronting the Challenges of AI-Generated Misinformation
Misinformation has emerged as one of the most critical global challenges of the past decade, with far-reaching consequences for democracy, public health, and societal trust. Recent advancements in artificial intelligence (AI), particularly the rise of large language models (LLMs), have amplified these concerns. LLMs, while powerful, pose unique risks in the context of misinformation for several reasons. First, their conversational style can make their outputs appear highly trustworthy. Second, their responses often reflect human biases and stereotypes embedded in their training data, and they can even fabricate convincing but false information. Third, malicious actors can exploit LLMs to generate persuasive yet misleading content at scale, further exacerbating the spread of misinformation.
Given the destabilizing effects of misinformation, it is crucial to understand the potential impacts of AI-generated content and develop strategies to mitigate its influence.
Project Aim and Methods
The primary goal of this project is to systematically investigate how the effects of AI-generated misinformation on human cognition can be mitigated through targeted intervention strategies. The research will focus on three key areas:
1. Trust and Decision-Making: The project will explore how people’s trust in AI-generated information can be influenced—either bolstered or reduced—and how this trust impacts their reasoning and decision-making processes.
2. Exposure Training: Both active and passive forms of exposure training will be tested to determine their effectiveness in fostering resilience to misleading AI content.
3. Source Credibility and Social Influence: The research will evaluate interventions that emphasize the credibility of information sources and leverage the influence of social norms to counteract the effects of misinformation.
The study will examine both textual and audio-visual AI-generated content, including material on contentious topics, to provide a comprehensive understanding of the issue.
Expected Impact
This project aims to make a significant contribution to addressing the growing threat of AI-generated misinformation. By identifying strategies to reduce trust in misleading content and enhance public resilience, it will provide actionable insights for policymakers, educators, and technology developers. The findings are expected to help safeguard democratic processes, protect public health, and restore trust in information ecosystems shaped by AI.
The research will be disseminated through publications, news articles, presentations at international conferences, and the lab's online platforms, as well as relevant social media channels such as ResearchGate, LinkedIn, and Twitter. A project-specific website will also be created (https://www.emc-lab.org/) for greater visibility. These efforts aim to ensure the research effectively reaches key stakeholders and the public.

Paul
MCILHINEY
Institution
University of Western Australia
Country
Australia
Nationality
United Kingdom