JudgeSense: A Benchmark for Prompt Sensitivity in LLM-as-a-Judge Systems
Significance: 3/5
Researchers introduce JudgeSense, a new benchmark designed to measure how sensitive LLM-as-a-Judge systems are to paraphrasing of the judging prompt. The study reveals significant inconsistencies: different models' verdicts on tasks such as factuality and preference judging shift under slight rewordings that preserve the prompt's meaning.
Why it matters
Prompt-sensitive judges undermine the reliability of automated model-tuning and quality-assurance pipelines that depend on consistent LLM-based evaluation.
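To make the measured quantity concrete, here is a minimal sketch (not from the paper) of one way to quantify a judge's prompt sensitivity: the fraction of paraphrase pairs on which the judge's verdict flips. The helper `call_judge` is a hypothetical stand-in for whatever LLM API is actually used.

```python
# Minimal sketch: prompt sensitivity as the verdict flip rate across
# semantically equivalent judge prompts. `call_judge` is hypothetical.
from itertools import combinations
from typing import Callable, List


def call_judge(judge_prompt: str, answer: str) -> str:
    """Hypothetical judge call; replace with a real LLM API invocation."""
    raise NotImplementedError


def flip_rate(paraphrases: List[str], answer: str,
              judge: Callable[[str, str], str] = call_judge) -> float:
    """Fraction of paraphrase pairs whose verdicts disagree (0 = fully consistent)."""
    verdicts = [judge(p, answer) for p in paraphrases]
    pairs = list(combinations(verdicts, 2))
    if not pairs:
        return 0.0
    return sum(1 for a, b in pairs if a != b) / len(pairs)


# Example: three rewordings of the same factuality-judging instruction.
paraphrases = [
    "Is the following answer factually correct? Reply PASS or FAIL.",
    "Judge whether the answer below contains factual errors. Reply PASS or FAIL.",
    "Does this answer state only true facts? Reply PASS or FAIL.",
]
# sensitivity = flip_rate(paraphrases, candidate_answer)
```

A flip rate near zero indicates a judge that is robust to rewording; higher values signal the kind of inconsistency the benchmark is designed to surface.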
Tags
#llm-as-a-judge #benchmark #prompt-sensitivity #evaluations
Related coverage
- Global South Opportunities: Pivotal Research Fellowship 2026 (Q3): AI Safety Research Opportunity
- arXiv cs.AI: An Intelligent Fault Diagnosis Method for General Aviation Aircraft Based on Multi-Fidelity Digital Twin and FMEA Knowledge Enhancement
- arXiv cs.AI: PExA: Parallel Exploration Agent for Complex Text-to-SQL
- arXiv cs.AI: The Power of Power Law: Asymmetry Enables Compositional Reasoning
- arXiv cs.AI: On the Existence of an Inverse Solution for Preference-Based Reductions in Argumentation