The 8088
arXiv cs.CL AI Research Apr 24

Do LLM Decoders Listen Fairly? Benchmarking How Language Model Priors Shape Bias in Speech Recognition

★★★☆☆ significance 3/5

This research investigates how large language model decoders affect fairness and bias in speech recognition across demographic groups. The study evaluates nine models under a range of acoustic degradation conditions, finding that while LLM decoders do not necessarily amplify racial bias, certain models, notably Whisper, exhibit significant hallucination under specific noise and accent conditions.
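Fairness benchmarks of this kind typically compare per-group word error rates (WER). As an illustrative sketch only (not the paper's code; group labels and transcripts below are toy data), per-group WER and the resulting disparity gap can be computed like this:

```python
# Illustrative sketch: per-group word error rate (WER) as a simple
# fairness signal for speech recognition. Toy data, not the paper's benchmark.

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # DP table: d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[-1][-1] / max(len(ref), 1)

def group_wer(samples):
    """Average WER per group; samples are (group, reference, hypothesis)."""
    buckets = {}
    for group, ref, hyp in samples:
        buckets.setdefault(group, []).append(wer(ref, hyp))
    return {g: sum(scores) / len(scores) for g, scores in buckets.items()}
```

The spread between the best- and worst-served group (max minus min per-group WER) is one simple disparity metric; the paper's analysis of hallucination under noise and accent shift goes beyond what raw WER captures.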

Why it matters: Language model priors can introduce systematic errors in speech-to-text accuracy, complicating the deployment of truly equitable voice interfaces.
Read the original at arXiv cs.CL

Tags

#speech recognition #llm bias #fairness #acoustic degradation #speech models
