Apr 21
Data Mixing for Large Language Models Pretraining: A Survey and Outlook
Significance: 3/5
This paper provides a comprehensive survey of data mixing techniques used during the pretraining of large language models. It introduces a taxonomy for static and dynamic mixing methods and analyzes how optimizing data composition impacts training efficiency and generalization.
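Concretely, static mixing fixes per-domain sampling weights before training begins, while dynamic mixing adjusts them during training based on feedback such as per-domain loss. A minimal Python sketch of both ideas follows; the domain names, weights, and update rule are illustrative assumptions, not taken from the paper:

```python
import math
import random

# Hypothetical domain corpora weights for a static mixture; the values
# here are illustrative, not drawn from the surveyed methods.
DOMAIN_WEIGHTS = {
    "web": 0.60,
    "code": 0.15,
    "books": 0.15,
    "academic": 0.10,
}

def sample_domain(weights, rng=random):
    """Static mixing: pick the next example's source domain from fixed
    mixture proportions that never change during training."""
    domains = list(weights)
    probs = [weights[d] for d in domains]
    return rng.choices(domains, weights=probs, k=1)[0]

def mix_batch(corpora, weights, batch_size, rng=random):
    """Assemble one training batch by sampling each example's domain
    independently according to the mixture weights."""
    return [rng.choice(corpora[sample_domain(weights, rng)])
            for _ in range(batch_size)]

def reweight(weights, domain_losses, lr=0.1):
    """Toy dynamic-mixing step (in the spirit of exponentiated-gradient
    approaches such as DoReMi): upweight domains with higher recent loss,
    then renormalize. An illustrative rule, not the paper's algorithm."""
    raw = {d: weights[d] * math.exp(lr * domain_losses[d]) for d in weights}
    total = sum(raw.values())
    return {d: v / total for d, v in raw.items()}
```

A dynamic method would call something like `reweight()` periodically during training, whereas a static mixture keeps `DOMAIN_WEIGHTS` fixed throughout.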
Why it matters
Optimizing data composition is becoming as critical as compute scaling for achieving superior model generalization and training efficiency.
Tags
#llm #pretraining #data-mixing #survey #machine-learning