ABOUT

Online political discussions are increasingly perceived as negative, aggressive, and toxic. This is worrying, because exposure to toxic content undermines trust and fosters cynicism, contributing to a polarized society. Defining what counts as “toxic” content and how it should be regulated online is therefore one of the most pressing challenges for researchers today, because such definitions can inform (semi-)automated content moderation systems that ensure healthy political conversations on a global scale.

However, the available research on toxic content and its moderation is elite-driven and imposes top-down definitions of what is “good” or “bad” on users. This has resulted in biased content moderation models and has damaged the reputation of those who have implemented them. More importantly, a top-down approach removes agency from citizens at a time when many already feel they have too little influence on their daily information intake.

The TACo Project therefore proposes a novel user-centric approach to automated content moderation. We (a) conduct exploratory social science research to learn what citizens themselves want when it comes to content moderation. We then (b) develop toxicity detection systems and automated moderation models based on this knowledge, testing them for usefulness and reliability. Finally, we test whether what citizens “want” truly benefits them: we (c) conduct experiments on the effects of these models on citizens’ political trust, knowledge, engagement, and well-being.

TEAM

RESEARCH & NEWS



Latest Peer-Reviewed Publications of TACo


Schäfer, S., & Planitzer, A. M. (2025). User comments. In Nai, A., Grömping, M., & Wirz, D. (Eds.), Elgar Encyclopedia of Political Communication. Edward Elgar Publishing. Accepted version. DOI Link

Pachinger, P., Goldzycher, J., Planitzer, A. M., Kusa, W., Hanbury, A., & Neidhardt, J. (2024). AustroTox: A Dataset for Target-Based Austrian German Offensive Language Detection. In Findings of the Association for Computational Linguistics: ACL 2024. DOI Link

Schäfer, S., Rebasso, I., Boyer, M. M., & Planitzer, A. M. (2023). Can We Counteract Hate? Effects of Online Hate Speech and Counter Speech on the Perception of Social Groups. Communication Research, 51(5), 553-579. DOI Link

Pachinger, P., Hanbury, A., Neidhardt, J., & Planitzer, A. M. (2023). Toward Disambiguating the Definitions of Abusive, Offensive, Toxic, and Uncivil Comments. In Proceedings of the 1st Workshop on Cross-Cultural Considerations in NLP (C3NLP) at EACL 2023 (pp. 107-113). DOI Link

Stockinger, A., Schäfer, S., & Lecheler, S. (2023). Navigating the gray areas of content moderation: Professional moderators’ perspectives on uncivil user comments and the role of (AI-based) technological tools. New Media & Society. DOI Link



Latest Conference Activities of TACo


The TACo Project regularly presents its findings at leading conferences in Communication Science, Data Science, and their intersections – such as:

ICA (2022, 2023, 2025), ECREA (2024), ACL (2023, 2024), NAACL (2024), AoIR (2023), ECREA PolComm Section Interim Conference (2023), WAPOR (2023), EACL (2023), COMPTEXT (2025), and AlgoSoc (2025).



Latest Talks, Public & Policy Engagement of TACo (SELECTED)

Planitzer, A. M. (2025): Guest Lecture in the Political Communication lecture course at the University of Vienna – Platform Governance, Artificial Intelligence, and Power.

Planitzer, A. M. (2025): Hatred and Discrimination Online: What Is AI Really Capable Of? Public Event: „Junge Wissenschaft“ within the Science Program of the Vienna Adult Education Center (VHS Wien).

Planitzer, A. M. (2024): Project Presentation – Web@ngels, a counter-speech project in the comment sections of Austrian newspapers. Hosted by ZARA – Civil Courage & Anti-Racism Work, Vienna, Austria.

Planitzer, A. M. (2024): Invited Scientific Stakeholder at a two-day policy and research event on combating discrimination and hate speech online. Hosted by the European Union and the Council of Europe, Strasbourg, France.

Planitzer, A. M. (2024): Artificial Intelligence: Benefits and Risks in the Fight Against Online Harm. Public Event: AI – Opportunities and Risks. Hosted by the Vienna Adult Education Center (VHS Wien).

Neidhardt, J. (2024): Lecture Series: Digital Humanism – Rethinking Recommender Systems and AI for a Better Digital Future.

Lecheler, S. (2024): Lecture Series: Digital Humanism – Transparent Automated Content Moderation: Towards a User-Centric Approach.

Planitzer, A. M. (2023): Social Media Governance: Mitigating the Detrimental Effects of Hate Speech and Incivility. Hosted by the International Research Center for Social and Ethical Issues, Salzburg, Austria.

Pachinger, P. (2024): Natural Language Processing and Information Extraction – Faculty of Informatics, TU Vienna.

Pachinger, P. (2023): Advanced Information Retrieval – Faculty of Informatics, TU Vienna.

Pachinger, P. (2023): Language Technology and Language Data – Faculty of Linguistics, Paris Lodron University Salzburg.

Pachinger, P. (2023): Toxic Comment Detection in Social Media – Open Beauchef, University of Chile.