An Analysis of Active Learning Algorithms using Real-World Crowd-sourced Text Annotations
Significance: 2/5
This study examines how active learning algorithms perform when labels come from real crowd workers rather than an idealized oracle. It compares eight common active learning techniques on text classification datasets to understand how annotation errors and refusals to label affect model training.
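To make the setup concrete, here is a minimal sketch of one widely used acquisition strategy, uncertainty sampling, run against a simulated annotator that sometimes flips labels or refuses to answer. The dataset, noise and refusal rates, and model below are illustrative assumptions, not the paper's benchmark or code.

```python
# Illustrative sketch only: uncertainty-sampling active learning with a
# simulated noisy annotator. All parameters here are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for a featurized text-classification dataset.
X, y_true = make_classification(n_samples=2000, n_features=50, random_state=0)

FLIP_RATE = 0.15   # assumed probability the annotator returns the wrong label
SKIP_RATE = 0.10   # assumed probability the annotator refuses to label

def noisy_annotator(idx):
    """Simulate a crowd worker: may flip the label or decline to answer."""
    if rng.random() < SKIP_RATE:
        return None                 # refusal: no label obtained
    label = y_true[idx]
    if rng.random() < FLIP_RATE:
        label = 1 - label           # binary label flip
    return label

labeled_idx, labels = [], []
pool = set(range(len(X)))

# Seed with a few random labeled points so the model can be fit.
for idx in rng.choice(len(X), size=20, replace=False):
    lab = noisy_annotator(idx)
    if lab is not None:
        labeled_idx.append(int(idx))
        labels.append(lab)
    pool.discard(int(idx))

model = LogisticRegression(max_iter=1000)
for _ in range(50):                 # 50 acquisition rounds
    model.fit(X[labeled_idx], labels)
    pool_list = np.array(sorted(pool))
    proba = model.predict_proba(X[pool_list])
    # Uncertainty sampling: query the point whose top-class probability is lowest.
    query = int(pool_list[np.argmin(proba.max(axis=1))])
    pool.discard(query)
    lab = noisy_annotator(query)
    if lab is not None:             # refusals still consume the query budget
        labeled_idx.append(query)
        labels.append(lab)

print("accuracy against clean labels:", model.score(X, y_true))
```

Note how a refusal still spends a query: this is one way label refusals can degrade active learning even when they introduce no wrong labels.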
Why it matters
Reliable model performance depends on understanding how an algorithm's choice of which examples to label interacts with the inherent noise of human-provided annotations.
Tags
#active learning #crowdsourcing #machine learning #text classification #noisy labels
Related coverage
- Global South Opportunities: Pivotal Research Fellowship 2026 (Q3): AI Safety Research Opportunity
- arXiv cs.AI: An Intelligent Fault Diagnosis Method for General Aviation Aircraft Based on Multi-Fidelity Digital Twin and FMEA Knowledge Enhancement
- arXiv cs.AI: PExA: Parallel Exploration Agent for Complex Text-to-SQL
- arXiv cs.AI: The Power of Power Law: Asymmetry Enables Compositional Reasoning
- arXiv cs.AI: On the Existence of an Inverse Solution for Preference-Based Reductions in Argumentation