Preserving Long-Tailed Expert Information in Mixture-of-Experts Tuning
Significance: 3/5
The paper introduces a supervised fine-tuning framework for Mixture-of-Experts (MoE) models that addresses the fragility of router layers. It proposes bias-driven sparsification together with always-active gated condenser experts to preserve knowledge held by rarely activated, long-tailed experts, avoiding the noise introduced by traditional load-balancing losses.
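To make the two mechanisms concrete, below is a minimal sketch of how a bias-shifted top-k router and an always-active gated condenser expert could be wired into a standard MoE feed-forward block. This is illustrative only: the class names (`BiasedTopKRouter`, `CondenserMoE`), the sigmoid condenser gate, and all hyperparameters are assumptions for the sketch, not the paper's reference implementation.

```python
# Minimal sketch (assumed structure, not the paper's code) of:
#  (1) bias-driven sparsification: a learnable per-expert bias added to the
#      router logits steers which experts win top-k, standing in for an
#      auxiliary load-balancing loss, and
#  (2) an always-active "condenser" expert that every token passes through
#      behind a learned gate, so shared knowledge survives sparse routing.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BiasedTopKRouter(nn.Module):
    """Top-k router whose logits are shifted by a learnable per-expert bias."""

    def __init__(self, d_model: int, n_experts: int, k: int = 2):
        super().__init__()
        self.proj = nn.Linear(d_model, n_experts, bias=False)
        self.expert_bias = nn.Parameter(torch.zeros(n_experts))  # bias-driven sparsification
        self.k = k

    def forward(self, x: torch.Tensor):
        logits = self.proj(x) + self.expert_bias           # (tokens, n_experts)
        topk_vals, topk_idx = logits.topk(self.k, dim=-1)  # (tokens, k)
        weights = F.softmax(topk_vals, dim=-1)              # renormalize over the top-k
        return weights, topk_idx


class CondenserMoE(nn.Module):
    """MoE block: sparsely routed experts plus an always-active gated condenser."""

    def __init__(self, d_model: int, d_ff: int, n_experts: int,
                 k: int = 2, condenser_dim: int = 64):
        super().__init__()
        self.router = BiasedTopKRouter(d_model, n_experts, k)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])
        # Condenser expert: small, dense, and gated; every token uses it.
        self.condenser = nn.Sequential(nn.Linear(d_model, condenser_dim),
                                       nn.GELU(),
                                       nn.Linear(condenser_dim, d_model))
        self.condenser_gate = nn.Linear(d_model, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        tokens = x.reshape(-1, x.shape[-1])                  # (T, d_model)
        weights, topk_idx = self.router(tokens)
        out = torch.zeros_like(tokens)
        # Dispatch each token to its top-k experts (loop kept simple for clarity).
        for slot in range(topk_idx.shape[-1]):
            idx = topk_idx[:, slot]
            w = weights[:, slot:slot + 1]
            for e, expert in enumerate(self.experts):
                mask = idx == e
                if mask.any():
                    out[mask] += w[mask] * expert(tokens[mask])
        # Dense condenser path: rarely routed tokens still read and update
        # shared parameters, regardless of the router's choices.
        gate = torch.sigmoid(self.condenser_gate(tokens))
        out = out + gate * self.condenser(tokens)
        return out.reshape_as(x)


if __name__ == "__main__":
    layer = CondenserMoE(d_model=32, d_ff=64, n_experts=8, k=2)
    x = torch.randn(4, 10, 32)            # (batch, seq, d_model)
    print(layer(x).shape)                 # torch.Size([4, 10, 32])
```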
Why it matters
Addressing router fragility in MoE fine-tuning is essential for maintaining specialized knowledge during model adaptation.
Tags
#moe #fine-tuning #sparse-routing #machine-learning #expert-models
Related coverage
- Global South Opportunities: Pivotal Research Fellowship 2026 (Q3): AI Safety Research Opportunity
- arXiv cs.AI: An Intelligent Fault Diagnosis Method for General Aviation Aircraft Based on Multi-Fidelity Digital Twin and FMEA Knowledge Enhancement
- arXiv cs.AI: PExA: Parallel Exploration Agent for Complex Text-to-SQL
- arXiv cs.AI: The Power of Power Law: Asymmetry Enables Compositional Reasoning
- arXiv cs.AI: On the Existence of an Inverse Solution for Preference-Based Reductions in Argumentation