The 8088
arXiv cs.AI AI Safety 11h ago

When AI reviews science: Can we trust the referee?

★★★☆☆ significance 3/5

This paper investigates the reliability and security risks of using large language models for scientific peer review. It identifies vulnerabilities such as prompt injection attacks, authority bias, and hallucination, providing a taxonomy of risks across the review lifecycle.

Why it matters: Automating peer review introduces systemic vulnerabilities, such as authority bias and prompt injection, that could compromise the integrity of scientific validation.
Read the original at arXiv cs.AI

Tags

#peer review #llm security #prompt injection #scientific integrity
