Artificial Intelligence
Post-Doctoral Fellowships
France
Addressing Misinformation on Natural Hazards in Social Media (NatHazFake)
Flash floods, earthquakes, volcanic eruptions, and other natural hazards often trigger a surge of social media posts that could be used to mitigate their impact. However, social media also carries an ever-increasing amount of misinformation and fake posts, which spread alarm and erode public trust. During Hurricane Sandy in 2012, for instance, fake images were shared twice as often as real ones, causing widespread chaos. Similarly, during the 2023 Turkey–Syria earthquakes, misinformation prompted the Turkish government to block access to social media, hindering communication for those in need. While social media platforms are valuable for disseminating safety guidelines and real-time updates, they are also prone to the rapid spread of deceptive content, which undermines disaster management efforts and public trust.
Project Aim, Methods and Deliverables
This project aims to address this problem by developing innovative methodologies and practical tools for identifying and managing false content. Using a decade's worth of verified data from the Global Disaster Alert and Coordination System (GDACS), the project will analyze historical social media posts to identify patterns of misinformation. To achieve this, it will employ multimodal foundation models. Built on the technology behind AI systems such as GPT, these models can analyze text and images simultaneously. By combining different types of data, they should outperform traditional methods, making misinformation detection faster, more accurate, and more effective at larger scales.
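To make the idea of jointly analyzing a post's text and image more concrete, the minimal sketch below uses a publicly available multimodal model (CLIP, via the Hugging Face Transformers library) to score how well an image matches candidate textual claims. This is only an illustration of the general technique; the model name, threshold logic, and example inputs are assumptions, not the project's actual pipeline.

```python
# Illustrative sketch: scoring image-text consistency with a public CLIP model.
# A low score for a post's own caption could flag the post for closer review.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

# Model choice is an assumption for demonstration purposes only.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def image_text_consistency(image_path: str, claims: list[str]) -> list[float]:
    """Return a probability-like score for each claim against the image."""
    image = Image.open(image_path)
    inputs = processor(text=claims, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # logits_per_image: how strongly the image matches each candidate text.
    return outputs.logits_per_image.softmax(dim=1).squeeze(0).tolist()

# Hypothetical usage: compare the post's caption against a neutral alternative.
scores = image_text_consistency(
    "post_image.jpg",
    ["Flood waters reaching the city centre today", "An unrelated stock photo"],
)
print(scores)
```

In practice, a system of the kind described above would combine such signals with textual cues and verified event data rather than rely on image-text similarity alone.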
The final deliverable will be a user-friendly tool, such as a Google Chrome extension, that integrates fact-checking capabilities into daily web browsing, allowing users to quickly verify the accuracy of social media posts about extreme natural events.
Expected Impact
The impact of this project could be transformative. By improving the speed and accuracy of misinformation detection, it will help disaster management teams respond more effectively. Communities will be better equipped to identify credible information, reducing panic and confusion during emergencies. The project will also strengthen public trust in authorities and organizations by ensuring accurate communication throughout a crisis.
The integration of advanced AI technologies and a validated dataset represents a significant innovation in the field of digital misinformation management, moving beyond theoretical research to deliver a tangible, community-focused solution. Ultimately, the project aspires to create a safer and more informed digital environment during extreme natural events.

Damià BENET
Institution: Université Paris Cité
Country: France
Nationality: Spain