FAIR_XAI: Improving Multimodal Foundation Model Fairness via Explainability for Wellbeing Assessment
significance 3/5
This research investigates the fairness and transparency of Vision-Language Models (VLMs) used for mental health and wellbeing assessment. The study evaluates how architectures such as Phi-3.5-Vision and Qwen2-VL exhibit biases related to gender and race, and tests whether Explainable AI (XAI) interventions can mitigate those biases.
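The study's exact fairness metrics are not described in this summary, but group bias in such evaluations is commonly quantified with a demographic-parity-style gap: the difference in positive-prediction rates across subgroups. A minimal sketch, using toy data and hypothetical labels (not the paper's actual method or dataset):

```python
# Hypothetical sketch of a demographic-parity gap: the difference in
# positive-prediction rates between subgroups. All data here is toy data.
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Fraction of positive (e.g. 'high wellbeing') predictions per subgroup."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, grp in zip(predictions, groups):
        counts[grp][0] += pred
        counts[grp][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two subgroups."""
    rates = positive_rate_by_group(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Toy example: binary wellbeing predictions for two subgroups.
preds   = [1, 1, 0, 1, 0, 0, 1, 0]
genders = ["f", "f", "f", "f", "m", "m", "m", "m"]
print(parity_gap(preds, genders))  # 0.75 - 0.25 = 0.5
```

A gap of 0 would indicate equal positive-prediction rates across subgroups; larger values indicate stronger disparity.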
Why it matters
Algorithmic bias in multimodal models poses significant risks for high-stakes applications like automated mental health diagnostics and wellbeing assessment.
Tags
#multimodal #fairness #xai #mental-health #vlm
Related coverage
- Global South Opportunities: Pivotal Research Fellowship 2026 (Q3): AI Safety Research Opportunity
- arXiv cs.AI: An Intelligent Fault Diagnosis Method for General Aviation Aircraft Based on Multi-Fidelity Digital Twin and FMEA Knowledge Enhancement
- arXiv cs.AI: PExA: Parallel Exploration Agent for Complex Text-to-SQL
- arXiv cs.AI: The Power of Power Law: Asymmetry Enables Compositional Reasoning
- arXiv cs.AI: On the Existence of an Inverse Solution for Preference-Based Reductions in Argumentation