Apr 24
Do LLM Decoders Listen Fairly? Benchmarking How Language Model Priors Shape Bias in Speech Recognition
significance 3/5
This research investigates how large language model decoders affect fairness and bias in speech recognition across demographic groups. The study evaluates nine models under a range of acoustic degradation conditions, finding that while LLM decoders do not necessarily amplify racial bias, some models, such as Whisper, hallucinate heavily under particular combinations of noise and accent.
Why it matters
Language model priors can introduce systematic, demographically skewed errors in speech-to-text accuracy, complicating the deployment of genuinely equitable voice interfaces.
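Fairness claims of this kind are typically grounded in comparing word error rate (WER) per demographic group. A minimal pure-Python sketch of that comparison follows; the group labels and transcripts are illustrative placeholders, not data from the paper:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard Levenshtein dynamic program over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def group_wer(samples):
    """Micro-averaged WER per group from (group, reference, hypothesis) triples."""
    errors, words = {}, {}
    for group, ref, hyp in samples:
        n = len(ref.split())
        errors[group] = errors.get(group, 0.0) + wer(ref, hyp) * n
        words[group] = words.get(group, 0) + n
    return {g: errors[g] / words[g] for g in errors}
```

Micro-averaging (total word errors over total reference words) keeps long utterances from being underweighted; a per-group WER gap under matched noise conditions is the kind of signal such benchmarks report.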
Tags
#speech recognition #llm bias #fairness #acoustic degradation #speech models