InstructEdit: Instruction-Based Knowledge Editing for Large Language Models

Bozhong Tian♠*, Siyuan Cheng♣*, Xiaozhuan Liang♠*, Ningyu Zhang♠†, Yi Hu, Kouying Xue, Yanjie Gou, Xi Chen, Huajun Chen♠†

♠Zhejiang University  ♣Tencent
*Equal Contribution  †Corresponding Author
InstructEdit enhances the multi-task editor by guiding it to choose the right "tool" for different tasks. On its own, the editor may not always pick the best approach. With InstructEdit, given clear instructions, the editor better understands what you need and acts more effectively. Think of it as adding a smart assistant to the editor: you tell it what to do, and it does the job more efficiently and accurately.


Knowledge editing for large language models offers an efficient way to alter a model's behavior without degrading its overall performance. However, current approaches suffer from limited generalizability across tasks, requiring one distinct editor per task, which significantly hinders broader application. To address this, we take the first step toward analyzing the multi-task generalization issue in knowledge editing. Specifically, we develop an instruction-based editing technique, termed InstructEdit, which enables the editor to adapt to various tasks simultaneously using simple instructions. With only one unified editor per LLM, we empirically demonstrate that InstructEdit improves the editor's control, yielding an average 14.86% increase in Reliability in the multi-task editing setting. Furthermore, experiments on a holdout unseen task show that InstructEdit consistently surpasses previous strong baselines. To further investigate the underlying mechanisms of instruction-based knowledge editing, we analyze the principal components of the editing gradient directions, which reveals that instructions help control the optimization direction and yield stronger OOD generalization.


Table 1: Motivating knowledge editing results on multi-task generalization. Directly transferring to unseen tasks (CounterFact and ZsRE) results in significant performance decay.


Figure 1: The overview of our proposed method InstructEdit.

As shown in Figure 1, assume access to multi-domain task data: Law, Geography, Medicine, and Math.

Single-Task Editing. Conventional editing is domain-specific: e.g., a Geography Editor edits geography-related knowledge but cannot transfer to Medicine.

Multi-Task Editing. Previous methods (Pre-Editor) trained across domains (Law, Geography, and Math) often misdirect in-distribution task editing; for OOD task editing (Medicine), the lack of guidance causes them to miss the correct edit region. Instructions enable precise editing and improve generalization.

Instruction Construction. We use GPT-4 to generate instructions through well-crafted prompts, evaluate the metrics with the Trial Editor, and then employ GPT-4 for continuous Instruction Optimization, refining the instructions until the metrics no longer improve.
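The Instruction Optimization loop above can be sketched as a simple hill-climbing procedure. This is a minimal illustration only: the helper callables stand in for the GPT-4 drafting/refinement calls and the Trial Editor evaluation, whose names and interfaces are assumptions, not the paper's actual API.

```python
def optimize_instruction(draft_fn, refine_fn, eval_fn, max_rounds=5):
    """Refine a task instruction until the Trial Editor metric stops improving.

    draft_fn  -- produces the initial instruction (GPT-4 draft in the paper)
    refine_fn -- rewrites an instruction given its current score (GPT-4)
    eval_fn   -- scores an instruction (Trial Editor metrics in the paper)
    """
    instruction = draft_fn()
    best_score = eval_fn(instruction)
    for _ in range(max_rounds):
        candidate = refine_fn(instruction, best_score)
        score = eval_fn(candidate)
        if score <= best_score:  # no further improvement: stop
            break
        instruction, best_score = candidate, score
    return instruction, best_score


if __name__ == "__main__":
    # Toy stand-ins for the GPT-4 / Trial Editor calls, for illustration only.
    drafts = iter(["v1+detail", "v1+detail+format", "v1+noise"])
    scores = {"v1": 0.60, "v1+detail": 0.70,
              "v1+detail+format": 0.75, "v1+noise": 0.72}
    best, s = optimize_instruction(
        draft_fn=lambda: "v1",
        refine_fn=lambda ins, sc: next(drafts),
        eval_fn=lambda ins: scores[ins],
    )
    print(best, s)  # -> v1+detail+format 0.75
```

The early-stopping condition (`score <= best_score`) mirrors the paper's criterion of halting once the instructions yield no further metric improvement.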

Table 2: Examples of the instructions. For ConvSent, [LABEL] and [TOPIC] are replaced according to the input.

Main Results

Table 3: Multi-Task Editing Setting: editors are trained on a hybrid of the CounterFact, Recent, and ConvSent datasets and tested on their respective test sets. Hold Out Editing Setting: the same editors are tested on ZsRE (OOD data). For all metrics, higher is better. The best result for each model is marked in bold and the second-best is underlined.


Figure 2: (a) Compares the effect of instructions on the knowledge editing gradient \( \tilde{\nabla}_{u_\ell} \). Recent (InstructEdit) and Recent (Multi-Task) show \( \tilde{\nabla}_{u_\ell} \) on Recent using InstructEdit and MEND in multi-task settings, respectively; Recent (Single-Task) shows MEND trained on Recent alone. (b) Shows the impact of task scaling on InstructEdit, with Recent \( \rightarrow \) ZsRE denoting training on Recent and testing on ZsRE, and Recent&CF \( \rightarrow \) ZsRE denoting joint training on Recent and CounterFact and testing on ZsRE. (c) Shows reliability and generalization performance across task scaling. (d) Balances ConvSent by extracting 1,427 entries for ConvSent (Balanced).
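The gradient-direction analysis behind Figure 2(a) can be sketched as follows: collect the editing gradient vector for each edit, then project the collection onto its top principal components to compare how tightly the directions cluster across settings. This is a generic PCA-via-SVD sketch with synthetic stand-in gradients, not the paper's actual analysis code.

```python
import numpy as np

def pca_project(grads, k=2):
    """Project row vectors (one editing gradient per row) onto their
    top-k principal components via SVD of the centered matrix."""
    X = np.asarray(grads, dtype=float)
    X = X - X.mean(axis=0)                     # center the gradients
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T                        # (n_edits, k) coordinates

# Synthetic stand-in gradients: 100 edits, 64-dim flattened gradient each.
rng = np.random.default_rng(0)
grads = rng.normal(size=(100, 64))
coords = pca_project(grads)
print(coords.shape)  # -> (100, 2)
```

Plotting `coords` per setting (single-task, multi-task, instruction-guided) would reproduce the qualitative comparison in Figure 2(a): tighter, better-aligned clusters indicate that instructions steer the optimization direction more consistently.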

Figure 3: InstructEdit demonstrates proficiency in generalizing to Unseen instructions, achieving results comparable to Seen instructions.


  title={InstructEdit: Instruction-based Knowledge Editing for Large Language Models}, 
  author={Bozhong Tian and Siyuan Cheng and Xiaozhuan Liang and Ningyu Zhang and Yi Hu and Kouying Xue and Yanjie Gou and Xi Chen and Huajun Chen},

This website is adapted from Nerfies, licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.