Apr 27
Introducing Background Temperature to Characterise Hidden Randomness in Large Language Models
Significance: 3/5
Researchers introduce the concept of 'background temperature' to explain why large language models produce divergent outputs even when temperature is set to zero. The paper identifies implementation-level sources of non-determinism, such as floating-point non-associativity and kernel non-invariance, and proposes a protocol to estimate this effect.
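The paper's estimation protocol is not reproduced in this summary; the Python sketch below is only an illustration of the mechanism. It shows that float32 summation depends on evaluation order (the kind of reordering parallel GPU reductions perform), and a minimal divergence score one could compute over repeated temperature-0 runs. The helper `first_divergence` and the toy token sequences are assumptions for illustration, not the paper's method.

```python
import numpy as np

# 1. Floating-point addition is non-associative: the same float32 values
# summed in a different order can yield a different result. Parallel GPU
# kernels reduce in varying orders, one implementation-level source of
# divergence even at temperature zero.
rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000).astype(np.float32)

sum_original = x.sum()           # one summation order
sum_sorted = np.sort(x).sum()    # a different summation order
print(sum_original, sum_sorted, sum_original == sum_sorted)  # typically differ in the last bits

# 2. A simple divergence score (hypothetical, not the paper's protocol):
# given token sequences from repeated temperature-0 runs of the same prompt,
# return the first position where any two runs disagree.
def first_divergence(runs):
    for i, toks in enumerate(zip(*runs)):
        if len(set(toks)) > 1:
            return i
    return None  # all runs identical over the compared prefix

# Toy usage with dummy token IDs standing in for real model outputs:
runs = [[5, 17, 3, 99, 2], [5, 17, 3, 42, 7], [5, 17, 3, 99, 2]]
print(first_divergence(runs))  # -> 3
```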
Why it matters
Uncovering these implementation-level sources of non-determinism challenges the assumed reliability of zero-temperature sampling and, with it, the predictability of LLM outputs.
Tags
#llm #determinism #inference #reproducibility #stochasticity
Related coverage
- Global South Opportunities: Pivotal Research Fellowship 2026 (Q3): AI Safety Research Opportunity
- arXiv cs.AI: An Intelligent Fault Diagnosis Method for General Aviation Aircraft Based on Multi-Fidelity Digital Twin and FMEA Knowledge Enhancement
- arXiv cs.AI: PExA: Parallel Exploration Agent for Complex Text-to-SQL
- arXiv cs.AI: The Power of Power Law: Asymmetry Enables Compositional Reasoning
- arXiv cs.AI: On the Existence of an Inverse Solution for Preference-Based Reductions in Argumentation