Apr 22
Large Language Models Exhibit Normative Conformity
★★★★★
significance 3/5
This research examines how large language models exhibit both informational conformity (updating answers because peers may be better informed) and normative conformity (updating answers to fit in) within multi-agent systems. The study finds that LLMs may change their outputs to avoid conflict or gain acceptance, leaving them vulnerable to social manipulation.
Why it matters
LLMs' tendency to prioritize social harmony over factual accuracy creates critical vulnerabilities in multi-agent coordination and systemic reliability.
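The conformity effect described above can be made concrete with a toy simulation. The sketch below is purely illustrative and not from the paper: a stub agent mixes its private belief against peer pressure, and normative conformity shows up as the agent abandoning a correct answer once enough peers disagree. The function name, the `conformity_weight` parameter, and the decision rule are all hypothetical assumptions for illustration.

```python
from collections import Counter

def agent_answer(private_answer, confidence, peer_answers, conformity_weight=0.6):
    """Toy conformity model (hypothetical, for illustration only).

    With no peers, the agent reports its private answer. With peers, it
    switches to the majority answer whenever social pressure (fraction of
    peers backing the majority, scaled by conformity_weight) exceeds the
    agent's confidence in its own answer.
    """
    if not peer_answers:
        return private_answer
    majority, count = Counter(peer_answers).most_common(1)[0]
    if majority == private_answer:
        return private_answer
    pressure = conformity_weight * count / len(peer_answers)
    return majority if pressure > confidence else private_answer

# A confident agent resists a unanimous wrong majority...
print(agent_answer("A", confidence=0.9, peer_answers=["B", "B", "B"]))  # → A
# ...but a less confident agent conforms, despite holding the same belief.
print(agent_answer("A", confidence=0.3, peer_answers=["B", "B", "B"]))  # → B
```

Measuring conformity in a real multi-agent evaluation would follow the same pattern: compare each model's answer asked in isolation against its answer after seeing (possibly fabricated) peer responses, and report the flip rate.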
Tags
#llm #conformity-bias #multi-agent-systems #social-psychology