Apr 21
IYKYK (But AI Doesn't): Automated Content Moderation Does Not Capture Communities' Heterogeneous Attitudes Towards Reclaimed Language
significance 3/5
This research examines how automated content moderation tools fail to distinguish hateful language from the use of reclaimed slurs within marginalized communities. The study finds substantial disagreement among human annotators about when slur usage is acceptable — heterogeneity that automated systems do not capture — leading to the suppression of voices in LGBTQIA+, Black, and women's communities.
Why it matters
Algorithmic moderation failures risk systemic silencing of marginalized communities by misidentifying reclaimed language as prohibited speech.
Tags
#content moderation #reclaimed language #bias #nlp #social impact