GRU: Mitigating the Trade-off between Unlearning and Retention for LLMs

  • Yue Wang
  • Qizhou Wang
  • Feng Liu
  • Wei Huang
  • Yali Du
  • Xiaojiang Du
  • Bo Han

Research output: Contribution to journal › Conference article › peer-review

Abstract

Large language model (LLM) unlearning has demonstrated its essential role in removing privacy- and copyright-related responses, which is crucial for the legal and safe deployment of LLMs. However, the pursuit of complete unlearning often comes at a substantial cost to general model functionality, leading to a notorious trade-off between unlearning and retention. This motivates us to explore enhanced unlearning schemes that can mitigate this trade-off. Specifically, we propose Gradient Rectified Unlearning (GRU), an improved framework that regulates the directions of gradient updates during the unlearning procedure so that their side effects on other, unrelated responses are minimized. GRU is easy and general to implement, demonstrating practical effectiveness across a variety of well-established unlearning benchmarks. Our code is available at https://github.com/

Original language: English
Pages (from-to): 64690-64710
Number of pages: 21
Journal: Proceedings of Machine Learning Research
Volume: 267
State: Published - 2025
Event: 42nd International Conference on Machine Learning, ICML 2025 - Vancouver, Canada
Duration: 13 Jul 2025 - 19 Jul 2025

