
MELO: Enhancing Model Editing with Neuron-Indexed Dynamic LoRA (AAAI 2024)

This repo contains the source code of our proposed MELO, a plug-in model editing method that routes the model's behavior by dynamically indexing LoRA blocks according to an inner vector database. Seamlessly integrated into PEFT, MELO supports multiple LLMs such as BERT, T5, and GPT.
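For intuition, here is a minimal conceptual sketch of that routing idea: an inner vector database maps edit keys to LoRA block indices, and an input activates a block only if it falls within some key's radius; otherwise the unedited base model answers. Every name, threshold, and data structure below is an illustrative assumption for this README, not the implementation shipped in peft_egg.

```python
# Conceptual sketch only; names, shapes, and the distance rule are assumptions.
import torch

class VectorDatabase:
    """Maps edit keys to LoRA block indices; a miss means 'fall back to the unedited model'."""
    def __init__(self, radius: float = 0.75):
        self.keys, self.block_ids, self.radius = [], [], radius

    def add(self, key: torch.Tensor, block_id: int) -> None:
        self.keys.append(key)
        self.block_ids.append(block_id)

    def lookup(self, query: torch.Tensor):
        if not self.keys:
            return None
        dists = torch.stack([torch.dist(query, k) for k in self.keys])
        nearest = int(torch.argmin(dists))
        return self.block_ids[nearest] if dists[nearest] <= self.radius else None

def edit_aware_forward(base_model, lora_blocks, db, query_repr, inputs):
    """Route the input to an edit-specific LoRA block when the query hits the database."""
    block_id = db.lookup(query_repr)
    if block_id is None:
        return base_model(inputs)          # outside every radius: behave like the unedited model
    return lora_blocks[block_id](inputs)   # inside a radius: apply the LoRA block holding that edit
```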

Updates

Experiments

Comparison of MELO to prior editing methods on sequential editing tasks (see the results table figure in the repo). Note that MELO edits all language models with a single RTX 3090 GPU.

Prepare Environments

The required CUDA environment and library dependencies are listed in requirements.txt.

Then install our modified PEFT:

🤗 PEFT-MELO

cd peft_egg
pip install -e .
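After the editable install, a quick import check confirms that Python picks up the local peft_egg copy rather than a PyPI release:

```python
# Sanity check: the editable install should resolve to the local peft_egg sources.
import peft
print(peft.__file__)     # expected to point somewhere under ./peft_egg/src
print(peft.__version__)
```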

The detailed implementation of MELO is in ./peft_egg/src/tuners/melo.py.
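As a rough mental model of what "neuron-indexed dynamic LoRA" can look like (an illustrative sketch only; the real logic in melo.py differs in detail), think of a LoRA adapter whose rank dimensions are partitioned into blocks, with only the block currently selected by the vector database contributing to the forward pass:

```python
# Illustrative sketch of a dynamic LoRA linear layer; names and shapes are assumptions.
import torch
import torch.nn as nn

class DynamicLoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, num_blocks: int = 8, block_rank: int = 4, scaling: float = 1.0):
        super().__init__()
        r = num_blocks * block_rank                  # total rank, partitioned into per-edit blocks
        self.base, self.block_rank, self.scaling = base, block_rank, scaling
        self.lora_A = nn.Parameter(torch.zeros(r, base.in_features))
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        nn.init.normal_(self.lora_A, std=0.02)       # B stays zero, so untouched blocks are no-ops
        self.active_block = None                     # set externally after the vector-database lookup

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.base(x)
        if self.active_block is None:                # no edit retrieved: pure base behaviour
            return out
        s = self.active_block * self.block_rank      # slice only the rank dims owned by this edit
        A = self.lora_A[s:s + self.block_rank]
        B = self.lora_B[:, s:s + self.block_rank]
        return out + self.scaling * (x @ A.t() @ B.t())
```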

Prepare Datasets

The zsRE experiments use data linked by the MEND repository. Download the NQ and zsRE data from their Google Drive link and unzip each sub-directory into ./melo/data. The SCOTUS and Hallucination data are loaded through the Hugging Face Hub.
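Before launching an experiment, it can help to check that the data landed where the configs expect it; the sub-directory names and the Hugging Face dataset identifier below are placeholders, so substitute whatever your experiment config actually references.

```python
# Quick data sanity check; paths and dataset IDs are placeholders, not guaranteed by this repo.
from pathlib import Path
from datasets import load_dataset   # pip install datasets

data_root = Path("melo/data")
for sub in ["zsre", "nq"]:          # hypothetical sub-directory names from the unzipped archive
    print(sub, "present:", (data_root / sub).exists())

# SCOTUS comes from the Hugging Face Hub; the dataset ID below is only an example.
scotus = load_dataset("lex_glue", "scotus", split="train")
print(scotus)
```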

Quick Start

The location of the inner vector database and the dynamic LoRA target modules can be modified in ./melo/model/config.
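The +alg/+experiment/+model arguments in the commands below look like Hydra-style overrides, so the YAML under ./melo/model/config can also be inspected or tweaked programmatically with OmegaConf; the file name and keys in this sketch are hypothetical, so check the actual config directory for the real ones.

```python
# Illustrative only: peek at / adjust a config with OmegaConf (the library Hydra builds on).
# The file name and keys are assumptions; see ./melo/model/config for the real ones.
from omegaconf import OmegaConf

cfg = OmegaConf.load("melo/model/config/gpt2xl.yaml")    # hypothetical file name
print(OmegaConf.to_yaml(cfg))

# Hypothetical keys for the vector-database location and the dynamic LoRA target modules.
cfg.vector_db_path = "outputs/vector_db"
cfg.target_modules = ["c_attn", "c_proj"]                # typical GPT-2 attention projections
OmegaConf.save(cfg, "melo/model/config/gpt2xl.yaml")
```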

Editing GPT2-XL on Hallucination with MELO

cd melo
python run.py +alg=lora +experiment=hallucination +model=gpt2xl

Editing BERT on SCOTUS with MELO

cd melo
python run.py +alg=lora +experiment=scotus +model=scotus-bert

Editing T5 on zsRE with MELO

cd melo
python run.py +alg=lora +experiment=qa +model=t5small

Important Tips

Acknowledgments

We would like to thank the following individuals and organizations for their contributions to this project:

Hugging Face: for their support of the PEFT community and their development of the PEFT framework (https://github.com/huggingface/peft)

GRACE: for the open-source GRACE library, which inspired our work (https://github.com/Thartvigsen/GRACE)