Artificial Intelligence
Cyber Security
Post-Doctoral Fellowships
Australia
Trust in the Fediverse: Leveraging Community Protocols and Automation to Combat Online Harms
Online communication has grown rapidly, but effective governance frameworks for it are still evolving. Instead of becoming a "democratic utopia," digital communication has seen a sharp rise in online harm, posing threats to social, political, and financial institutions worldwide. A few dominant platforms now control much of online communication and are at the centre of these issues. While various regulations have been introduced to address online harm, achieving global agreement on platform governance and online safety remains a challenge.
At the same time, nontraditional actors (grassroots developers) have created alternative social media (ASM) platforms to address the shortcomings of mainstream platforms. However, these alternatives often struggle with sustainable business and governance models. Despite these challenges, various communities strive to sustain these prosocial alternatives: spaces that amplify marginalized voices and foster discussions often censored or sidelined on mainstream platforms, as well as environments that encourage social bonding over polarization. Recently, decentralized technologies and artificial intelligence have gained attention as tools to create polycentric governance systems, in which platform content and participation are governed by decentralized groups rather than by centralized private entities. Decentralized social networking platforms like Mastodon and Bluesky, which have attracted millions of users, aim to balance free speech, trust, and safety. As their networks and infrastructure expand, they offer valuable insights into new governance models.
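One way this polycentric governance becomes directly observable is that each Mastodon instance publishes its own community rules. The minimal sketch below, assuming a server running Mastodon 3.4 or later (which exposes a public, unauthenticated rules endpoint) and using mastodon.social purely as an illustrative host, retrieves those self-declared rules; it is an illustration of how federated platforms surface their governance policies programmatically, not part of the project's stated toolkit.

```python
"""Fetch the published community rules of a Mastodon instance.

A minimal sketch: federated servers each declare their own moderation
rules, and Mastodon (>= 3.4) serves them at a public, unauthenticated
endpoint. The instance name below is illustrative only.
"""
import requests

INSTANCE = "mastodon.social"  # any Mastodon instance; an assumption for illustration


def fetch_rules(instance: str) -> list[dict]:
    """Return the instance's self-declared rules as {id, text} dictionaries."""
    resp = requests.get(f"https://{instance}/api/v1/instance/rules", timeout=10)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    for rule in fetch_rules(INSTANCE):
        print(f"{rule['id']}: {rule['text']}")
```

Because every instance answers this endpoint independently, comparing the responses across servers gives a simple empirical window onto how moderation norms vary across the federated network.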
Dr Ashwin Nagappa’s postdoctoral research project focuses on identifying the key features of constructive "polycentric governance" models, including network size, content moderation, harm mitigation strategies, and community participation. The project aims to explore the optimal conditions under which these models minimize harm while building social connections, and thereafter to develop frameworks for understanding and addressing online harm, such as misinformation, hate speech, and polarization, on emerging decentralized social media platforms.
The research employs a four-phase methodology: (1) an extensive literature review of academic works, user-generated content, and trade press materials; (2) data collection through surveys, semi-structured interviews with stakeholders (e.g., users, moderators, and policy experts), and analysis of user-generated discussions on federated platforms; (3) data analysis using concept mapping, a mixed-methods approach that combines computational tools like Leximancer with manual analysis to identify themes and connections; and (4) dissemination of findings through academic publications, conferences, and a hybrid seminar.
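Leximancer, named in phase (3), is proprietary software, but the core idea behind its concept mapping is that candidate concepts emerge from terms that frequently co-occur. As a loose, open-source illustration of that idea only, the sketch below counts term co-occurrences across a few placeholder posts; the sample texts, the use of scikit-learn, and the thresholds are all assumptions, not the project's actual pipeline.

```python
"""A toy analogue of concept mapping: rank term co-occurrences in a corpus.

This only illustrates the underlying idea (concepts emerge from terms
that frequently appear together); the sample posts are placeholders.
"""
import itertools
from collections import Counter

from sklearn.feature_extraction.text import CountVectorizer

posts = [  # placeholder stand-ins for user-generated discussions on federated platforms
    "moderators removed hate speech after community reports",
    "community reports help moderators act on misinformation quickly",
    "federated instances set their own rules for hate speech",
]

# Binary bag-of-words: does each term appear in each post?
vectorizer = CountVectorizer(stop_words="english", binary=True)
doc_term = vectorizer.fit_transform(posts)
terms = vectorizer.get_feature_names_out()

# Count how often each pair of terms appears in the same post.
pair_counts = Counter()
for row in doc_term.toarray():
    present = [terms[i] for i, flag in enumerate(row) if flag]
    pair_counts.update(itertools.combinations(sorted(present), 2))

# The most frequent pairs suggest candidate "concepts" for manual review,
# mirroring the mixed computational-and-manual approach described above.
for pair, count in pair_counts.most_common(5):
    print(count, pair)
```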
By compiling the optimal characteristics of polycentric governance models, the research aims to provide actionable policy guidelines for minimizing online harm on future decentralized platforms. The findings will be disseminated through academic conference presentations and peer-reviewed journal publications, and a final report will be released through a hybrid seminar at the end of the project. These findings will contribute to scholarly discourse, inform public policy, and guide the development of safer and more trustworthy digital communication systems. Additionally, the project aspires to influence the future design and governance of online platforms, ensuring they are more inclusive, resilient, and capable of addressing complex challenges like misinformation and polarization.
June 2025

Ashwin
NAGAPPA
Institution
Queensland University of Technology
Country
Australia
Nationality
Indian