The 8088
arXiv cs.CL AI Research Apr 20

CiPO: Counterfactual Unlearning for Large Reasoning Models through Iterative Preference Optimization

★★★☆☆ significance 3/5

Researchers introduce CiPO, a new framework designed to selectively remove unwanted information from Large Reasoning Models (LRMs) without degrading their reasoning capabilities. The method uses counterfactual reasoning traces and iterative preference optimization to ensure both effective unlearning and stable performance.
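The paper's exact objective is not given in this summary, but the combination of counterfactual traces and preference optimization suggests a DPO-style loss in which the counterfactual reasoning trace is the preferred ("chosen") response and the original, to-be-forgotten trace is the rejected one. The sketch below is a minimal illustration under that assumption; the function name, the `beta` temperature, and the toy log-probabilities are hypothetical, not taken from the paper.

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def unlearning_preference_loss(
    logp_cf: float,       # policy log-prob of the counterfactual trace (chosen)
    logp_orig: float,     # policy log-prob of the original trace (rejected)
    ref_logp_cf: float,   # reference-model log-prob of the counterfactual trace
    ref_logp_orig: float, # reference-model log-prob of the original trace
    beta: float = 0.1,    # hypothetical preference temperature
) -> float:
    """DPO-style loss preferring the counterfactual trace over the
    to-be-forgotten one; the reference terms keep the update anchored
    to the original model, which is what stabilizes reasoning."""
    margin = (logp_cf - ref_logp_cf) - (logp_orig - ref_logp_orig)
    return -math.log(sigmoid(beta * margin))

# Toy numbers: before unlearning the model still favors the original trace,
# so the loss is high; after, it favors the counterfactual, so the loss drops.
loss_before = unlearning_preference_loss(-12.0, -3.0, -10.0, -4.0)
loss_after = unlearning_preference_loss(-3.0, -12.0, -10.0, -4.0)
```

Minimizing this loss pushes probability mass from the memorized trace toward the counterfactual one, which is one plausible reading of how unlearning and reasoning stability are balanced here.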

Why it matters

Selective unlearning preserves core reasoning capabilities while mitigating the risk of unintended cognitive degradation during model alignment.
Read the original at arXiv cs.CL

Tags

#machine unlearning #large reasoning models #preference optimization #chain-of-thought
