Apr 20
LLM Reasoning Is Latent, Not the Chain of Thought
★★★★★
significance 3/5
This paper argues that LLM reasoning is better understood as latent-state trajectory formation rather than surface-level chain-of-thought. The authors propose a new framework to distinguish between latent dynamics, explicit traces, and serial compute to improve how reasoning is studied and evaluated.
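To make the distinction concrete, here is a minimal sketch (not the paper's method) that captures both views of a model's reasoning during a single generation: the explicit trace is the decoded chain-of-thought text, while the latent trajectory is the sequence of last-layer hidden states across decoding steps. The model name, prompt, and variable names are illustrative assumptions.

```python
# Minimal sketch: contrast a model's explicit chain-of-thought trace with
# its latent-state trajectory during generation. "gpt2" and the prompt are
# illustrative assumptions, not choices made by the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM works for this illustration
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Q: Alice has 3 apples and buys 2 more. How many now? Think step by step."
inputs = tok(prompt, return_tensors="pt")

out = model.generate(
    **inputs,
    max_new_tokens=32,
    do_sample=False,
    return_dict_in_generate=True,
    output_hidden_states=True,
)

# Explicit trace: the surface chain-of-thought the model emits as text.
trace = tok.decode(out.sequences[0, inputs["input_ids"].shape[1]:])

# Latent trajectory: last-layer hidden state of the newly generated token
# at each decoding step, stacked into a (steps, hidden_dim) matrix.
latent_traj = torch.stack(
    [step_states[-1][0, -1] for step_states in out.hidden_states]
)

print("explicit trace:", trace)
print("latent trajectory shape:", tuple(latent_traj.shape))
```

Under the paper's framing, analyses of reasoning would target something like `latent_traj` rather than treating `trace` as the reasoning itself.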
Why it matters
Redefining reasoning as a latent process challenges the assumption that surface-level chain-of-thought traces faithfully reflect how a model actually reasons, and by extension their use as evidence of model intelligence.
Tags
#llm #reasoning #chain-of-thought #latent-states #interpretability