Tools and online services are continuously being developed to help social media users, journalists, and policymakers limit the harm caused by misinformation.

As we have experienced, the expected impact of online solutions is often hampered by limitations such as lack of explainability, complex user interfaces, limited datasets, restricted accessibility, and biased algorithms, each of which can confuse, overwhelm, or mislead users in its own way. These limitations carry ethical implications that are typically neglected when new digital solutions to tackle misinformation are conceived. As a wicked problem, limiting the harm caused by misinformation requires merging multiple perspectives in the design of digital interventions, including an understanding of human behaviour and motivations in judging and promoting false information, as well as strategies to detect and stop its propagation without unduly infringing on rights or freedom of expression.

This hands-on workshop proposes to unpack the state of the art in social, societal and political studies, as well as socio-technical solutions to stop misinformation, challenging participants first to critically reflect upon the limitations of existing approaches, and then to co-create a future that integrates these perspectives with a focus on ethical aspects and societal impact.

A socio-technical issue with multiple perspectives

Among other social transformations, responses to the COVID-19 pandemic have modified the workflow adopted by major social platforms to assess misinformation, shifting from human judgement to an approach relying heavily on automated, machine learning-based detection. If, on the one hand, this shift copes better with the volume and speed of information propagating online, on the other hand it exposes the fragility and limitations of existing technical solutions, even when they include humans at multiple points in the loop.

Conversely, some social platforms have also started challenging power structures, for example by promptly and boldly deleting posts by world leaders that violate their policies on disinformation regarding COVID-19. These ad-hoc approaches to regulating speech may have unintended consequences for overall trust in the governance of any kind of media and authority.

This recent global scenario clearly shows that purely technical solutions are not only limited, they can also be harmful by obfuscating the problem, when they disregard the social structures that lead to the spread of misinformation, ignore the different ways societal groups are affected (for instance, according to digital skills), and provide information assessment without promoting digital literacy.

More comprehensive answers to the problem can only emerge through an articulation of diverse perspectives and ideas, requiring an interdisciplinary approach that includes social scientists, computer scientists and technology designers in the co-creation of features and delivery methods considering both local and global contexts.

Who we are

The team of organisers brings together three EU-funded projects on misinformation: Co-inform, EUNOMIA and HERoS. With similar socio-technical approaches but distinct perspectives, the four organisers share the common challenge of building technology to support credibility assessment and foster critical thinking and media information literacy.

Lara Schibelsky Godoy Piccolo

Lara Schibelsky Godoy Piccolo is a Research Fellow who investigates interaction design from a socio-technical and inclusive perspective, considering how technology can trigger a positive impact on people's lives. For Co-inform, she is investigating how to communicate credibility signals.

Tracie Farrell

Tracie Farrell is a social scientist studying the proliferation of malicious content online, including misinformation and hate speech, at The Open University (as part of the HERoS project). With a background in critical education, her research interests focus on how technology can be used to promote critical awareness and reflection, at individual and societal levels.

Pinelopi Troullinou

Pinelopi Troullinou is a Research Analyst with an interdisciplinary background in social sciences. Her work focuses on the intersection of technology and society, aiming to explore and foster awareness of socio-technical issues such as privacy. For EUNOMIA, she leads the privacy, social and ethical impact assessment, user engagement and liaison with stakeholders.

Diotima Bertel

Diotima Bertel is a Researcher and Project Coordinator with a background in social sciences and philosophy. In her work, she focuses on the individual, social and societal impacts of technology. In EUNOMIA, she is responsible for analysing user behaviour, as well as pilot evaluations.


The goal of the workshop is to propose an agenda for interdisciplinary research that incorporates robust knowledge of societal, political and psychological models, which can not only help explain reactions to the tools that we build, but also help us design them to be useful and ethical.

Participants will be invited to:

  • Discuss challenges and obstacles related to misinformation from human and socio-technical perspectives;
  • Challenge existing approaches to tackling misinformation and identify their limitations in socio-technical terms, including underlying assumptions, goals (e.g., preventing vs. correcting misinformation) and targeted users;
  • Co-create innovative future scenarios with socio-technical solutions.

Aiming for a small, diverse group of participants from different disciplines to foster interaction and the exchange of ideas, the agenda proposes engaging activities that challenge the status quo and promote creative thinking towards effectively advancing the state of the art. Participants will be encouraged to experience the workshop as an opportunity to initiate synergies.

[Image: Examples of challenges and obstacles related to source data]

[Image: Misinformation Challenges - Things to start striving for now!]

Call for Papers and Participation

We invite researchers and practitioners who aim to actively engage with the social, societal and ethical problems of current socio-technical solutions to tackle misinformation to join us at the CHI 2021 Workshop - Opinions, Intentions, Freedom of Expression, ... , and Other Human Aspects of Misinformation Online.

Information disorder online is a multilayered problem; no single, comprehensive solution is capable of stopping misinformation, and existing approaches are limited for different reasons, such as poor end-user engagement, inadequate interaction design, lack of explainability, etc. Purely technical solutions that disregard the social structures that lead to the spread of misinformation can be harmful by obfuscating the problem.

More comprehensive, ethical and impactful answers to the problem can only emerge through an interdisciplinary approach that includes computer scientists, social scientists and technology designers in the co-creation of features and delivery methods. Topics of interest include, but are not limited to:

  • Censorship and freedom of speech
  • Social and political polarisation, partisanship
  • Disinformation campaigns and propaganda
  • Conspiracy theories and rumours
  • (Limitations of) automated tools for misinformation detection and notifications
  • Nudging strategies and persuasion
  • Social network analysis
  • Impact on communities or social groups
  • Fact-checking
  • Explainable AI
  • Credibility of online content
  • Behavioural studies
  • Human values
  • Legal and ethical aspects of socio-technical solutions

Participants will actively engage in activities for:

  • Identifying challenges and obstacles related to misinformation from human and socio-technical perspectives;
  • Challenging existing approaches and identifying their limitations in socio-technical terms, including underlying assumptions, goals, and targeted users;
  • Co-creating innovative future scenarios with socio-technical solutions addressing impact and ethical aspects.

How to submit

Prepare a motivation statement describing your approach to fighting misinformation, its acknowledged limits, or an envisioned future in which societies are resilient to mis/disinformation supported by socio-technical solutions. The 2-4 page paper should follow the ACM Master Article Submission Template. Send your submission by email to the Misinformation 2021 Group.

Submissions will be reviewed by the organisers according to their relevance to the problem and their motivation to address ethical aspects and societal impact. Accepted papers will be published on the workshop website.

Important Dates

First Submission Date: 7th February 2021 (for early bird registration)

Notification: 21st February 2021

Second Submission Date: 15th March 2021

Notification: 1st April 2021

Workshop date: 7th May 2021 CET 15:00-19:00 / EST 09:00-13:00 / JST 21:00-02:00 (next day).

For more information, email the Misinformation 2021 Group

Programme (preliminary)

This online workshop will last 4 hours (with breaks!).

The activities will be split into two main parts: we will first debate and critically analyse existing approaches and solutions to tackle misinformation; in the second part, activities will include short presentations by participants to inspire future scenarios in which digital innovations support misinformation resilience:

Part I

  • Personal stories and Introductions: 1st round (15 min): Participants share a personal experience of misinformation that they found particularly tricky.
  • Setting the stage (15 min): A short inspirational talk by a keynote speaker that frames the discussions of the day.
  • Personal stories and Introductions: 2nd round (15 min)
  • Limits of existing tools (25 min): Group activity in which we will experiment with some existing tools that support credibility assessment of information by social media users and fact-checkers. We will share our perceptions of the limitations of existing solutions in terms of accuracy and accessibility, but also in terms of their general approach, underlying assumptions, platform specifics, etc.
  • Limits of behavioural studies (25 min): Debate on existing approaches and data-based investigations to understand human behaviour in misinformation spreading.

Part II

  • Lightning Talks (50 min, with a break): Authors of accepted papers will be invited to present a 5-minute lightning talk on their area of research, providing inspirational content for building future scenarios.
  • Future Making (30 min): Within a scenario of global emergency (pandemic, climate change, etc.), in which citizens need to understand the new reality and how daily choices impact society, the need for reliable information is urgent and constant. We will discuss how to move forward with the existing solutions from technical and social perspectives, addressing the challenges and limitations previously identified, as well as ethical aspects and societal impact.
  • Wrap up and Next Steps (10 min).

The programme will be adjusted to best fit the number (and perhaps timezone) of participants.

Accepted Papers

To be announced in February 2021