The 8088
arXiv cs.CL AI Research Apr 21

Data Mixing for Large Language Models Pretraining: A Survey and Outlook

★★★☆☆ significance 3/5

This paper provides a comprehensive survey of data mixing techniques used during the pretraining of large language models. It introduces a taxonomy for static and dynamic mixing methods and analyzes how optimizing data composition impacts training efficiency and generalization.
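To make the taxonomy concrete, here is a minimal sketch of the static side of that distinction: sampling pretraining examples from domain corpora with fixed mixture weights chosen before training. The corpus names and weights below are hypothetical illustrations, not values from the paper; a dynamic method would instead update the weights during training.

```python
import random

# Hypothetical domain corpora (stand-ins for e.g. web, code, and book shards).
corpora = {
    "web":   ["web_doc_%d" % i for i in range(100)],
    "code":  ["code_doc_%d" % i for i in range(100)],
    "books": ["books_doc_%d" % i for i in range(100)],
}

# Static mixing: sampling weights fixed for the whole run.
# (Dynamic mixing methods would adjust these as training progresses.)
weights = {"web": 0.6, "code": 0.25, "books": 0.15}

def sample_batch(batch_size, rng=random):
    """Draw a batch by first picking a domain per example according to
    the static mixture weights, then a document from that domain."""
    domains = rng.choices(list(weights), weights=list(weights.values()), k=batch_size)
    return [rng.choice(corpora[d]) for d in domains]

batch = sample_batch(8)
```

Changing the weight vector is exactly the "data composition" knob the survey analyzes: over a long run, each domain's share of seen tokens converges to its weight.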

Why it matters: Optimizing data composition is becoming as critical as compute scaling for achieving superior model generalization and training efficiency.
Read the original at arXiv cs.CL

Tags

#llm #pretraining #data-mixing #survey #machine-learning
