Knowledge editing for large language models can offer an efficient solution to alter a model's behavior without negatively impacting its overall performance. However, current approaches suffer from limited generalizability across tasks, requiring a distinct editor for each task, which significantly hinders broader application. To address this, we take the first step toward analyzing the multi-task generalization issue in knowledge editing. Specifically, we develop an instruction-based editing technique, termed InstructEdit, which enables the editor to adapt to various tasks simultaneously using simple instructions. With only one unified editor per LLM, we empirically demonstrate that InstructEdit improves the editor's control, yielding an average 14.86% increase in Reliability in the multi-task editing setting. Furthermore, experiments on a held-out unseen task show that InstructEdit consistently surpasses previous strong baselines. To further investigate the underlying mechanisms of instruction-based knowledge editing, we analyze the principal components of the editing gradient directions, which reveals that instructions help control the optimization direction, yielding stronger OOD generalization.
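The core idea of instruction-based editing can be illustrated with a minimal sketch: each edit example is prefixed with a natural-language task instruction, so a single unified editor can condition on the task. The function name, instruction wording, and dictionary layout below are illustrative assumptions, not the paper's actual interface (the real InstructEdit editor is a trained meta-network built on MEND).

```python
# Minimal sketch of building an instruction-prefixed edit example.
# The instruction text and field names are hypothetical; InstructEdit's
# actual editor is a trained hypernetwork that consumes such inputs.
def build_edit_input(instruction: str, prompt: str, target: str) -> dict:
    """Prepend a task instruction to an edit example so one unified
    editor can adapt its behavior across multiple editing tasks."""
    return {
        "input": f"{instruction} {prompt}",  # instruction steers the editor
        "target": target,                    # desired post-edit output
    }

example = build_edit_input(
    "Please edit factual knowledge:",  # assumed instruction wording
    "The capital of France is",
    "Paris",
)
```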
Figure 1: The overview of our proposed method InstructEdit.
Table 3: Multi-Task Editing Setting: Editors are trained on a mixture of CounterFact, Recent, and ConvSent and tested on each dataset's test set. Hold Out Editing Setting: The aforementioned editors are tested on ZsRE (OOD data). All metrics are "the higher, the better". For each model, the best results are marked in bold and the second-best are underlined.
Figure 2: (a) Compares instruction effects on the knowledge editing gradient \( \tilde{\nabla}_{u_\ell} \). Recent (InstructEdit) and Recent (Multi-Task) illustrate \( \tilde{\nabla}_{u_\ell} \) on Recent using InstructEdit and MEND in the multi-task setting, respectively; Recent (Single-Task) shows MEND trained on Recent alone. (b) Demonstrates the impact of task scaling on InstructEdit, where Recent \( \rightarrow \) ZsRE denotes training on Recent and testing on ZsRE, and Recent&CF \( \rightarrow \) ZsRE denotes joint training on Recent and CounterFact and testing on ZsRE. (c) Illustrates reliability and generalization performance as the number of tasks scales. (d) Balances ConvSent by extracting 1,427 entries, denoted ConvSent (Balanced).
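The gradient analysis above projects editing gradients onto their principal components. A minimal, dependency-free sketch of extracting the top principal direction from a set of flattened gradient vectors is shown below; the power-iteration approach and all variable names are illustrative assumptions, not the paper's exact analysis pipeline (which applies standard PCA).

```python
# Sketch: top principal component of a set of editing-gradient vectors,
# computed by power iteration on the covariance matrix. Illustrative only;
# gradients here stand in for flattened \tilde{\nabla}_{u_l} vectors.
def principal_direction(grads, iters=100):
    """Return the unit top principal direction of mean-centered vectors."""
    n, d = len(grads), len(grads[0])
    mean = [sum(g[j] for g in grads) / n for j in range(d)]
    centered = [[g[j] - mean[j] for j in range(d)] for g in grads]
    v = [1.0] * d  # arbitrary starting vector
    for _ in range(iters):
        # Apply the covariance matrix: C v = (1/n) X^T (X v)
        xv = [sum(row[j] * v[j] for j in range(d)) for row in centered]
        cv = [sum(xv[i] * centered[i][j] for i in range(n)) / n
              for j in range(d)]
        norm = sum(c * c for c in cv) ** 0.5
        v = [c / norm for c in cv]  # renormalize each iteration
    return v

# Example: gradients dominated by the first coordinate
grads = [[3.0, 0.1], [-2.0, 0.0], [1.0, -0.1], [-1.0, 0.0]]
top_pc = principal_direction(grads)
```

With these example vectors the recovered direction aligns almost entirely with the first axis, mirroring how the figure compares whether gradients from different settings share a dominant direction.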
Figure 3: InstructEdit generalizes well to Unseen instructions, achieving results comparable to those with Seen instructions.
@misc{tian2024instructedit,
  title={InstructEdit: Instruction-based Knowledge Editing for Large Language Models},
  author={Bozhong Tian and Siyuan Cheng and Xiaozhuan Liang and Ningyu Zhang and Yi Hu and Kouying Xue and Yanjie Gou and Xi Chen and Huajun Chen},
  year={2024},
  eprint={2402.16123},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
This website is adapted from Nerfies, licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.