Apr 22
Semantic Needles in Document Haystacks: Sensitivity Testing of LLM-as-a-Judge Similarity Scoring
significance 3/5
Researchers propose a framework for testing how large language models (LLMs) respond to subtle semantic changes when comparing documents. The study finds that LLM judges exhibit positional biases and model-specific scoring distributions when rating similarity between documents.
Why it matters
Positional biases and context sensitivity in LLM-as-a-judge frameworks threaten the reliability of automated quality benchmarks and evaluation pipelines.
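The kind of probe the paper describes can be illustrated with a minimal sketch: insert a contradicting "needle" sentence at each position in a document, score each variant against the original, and inspect whether the scores vary with position. The `judge_similarity`-style callable is an assumption here, stubbed with a simple set-difference heuristic; a real probe would call an LLM judge instead.

```python
def insert_needle(doc: list[str], needle: str, position: int) -> list[str]:
    """Return a copy of the document with a contradicting sentence inserted."""
    variant = doc.copy()
    variant.insert(position, needle)
    return variant

def positional_probe(doc, needle, judge):
    """Score the original against variants with the needle at each position.

    A position-invariant judge should return the same score for every key;
    spread across positions indicates positional bias.
    """
    return {
        pos: judge(doc, insert_needle(doc, needle, pos))
        for pos in range(len(doc) + 1)
    }

def stub_judge(a, b):
    """Hypothetical stand-in for an LLM judge: penalizes the symmetric
    difference between sentence sets, so it is position-invariant by design."""
    diff = len(set(a) ^ set(b))
    return max(0.0, 1.0 - 0.2 * diff)

doc = [
    "The system passed all tests.",
    "Latency stayed under 10 ms.",
    "No regressions were found.",
]
scores = positional_probe(doc, "Several regressions were found.", stub_judge)
```

With the stub, `scores` is flat across positions; replacing `stub_judge` with an actual LLM call and observing position-dependent scores is exactly the sensitivity the study reports.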
Tags
#llm-as-a-judge #semantic-sensitivity #positional-bias #document-similarity

Related coverage
- Global South Opportunities: Pivotal Research Fellowship 2026 (Q3): AI Safety Research Opportunity
- arXiv cs.AI: An Intelligent Fault Diagnosis Method for General Aviation Aircraft Based on Multi-Fidelity Digital Twin and FMEA Knowledge Enhancement
- arXiv cs.AI: PExA: Parallel Exploration Agent for Complex Text-to-SQL
- arXiv cs.AI: The Power of Power Law: Asymmetry Enables Compositional Reasoning
- arXiv cs.AI: On the Existence of an Inverse Solution for Preference-Based Reductions in Argumentation