huggingface / peft

🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
https://huggingface.co/docs/peft
Apache License 2.0

Does PEFT support custom selection of trainable parameters (for example, some params in word_embeddings)? #2067

Closed: dongdongzhaoUP closed this 3 weeks ago

dongdongzhaoUP commented 2 months ago

Feature request

Does PEFT support custom selection of trainable parameters (for example, some params in word_embeddings)?

Motivation

To use the EP method.

Your contribution

Maybe.

BenjaminBossan commented 2 months ago

If I understand your request correctly, you would like the option to train only, say, a single entry of the embedding matrix. This is not possible at the moment, neither with PEFT nor with full fine-tuning. I'm sure there are some tricks out there, such as masking the gradient, but none of them are implemented in PEFT.

Having such a possibility would be nice for purposes other than the paper you linked. For example, it is common to add new tokens to the embedding matrix when fine-tuning a model, but users typically don't want or need to train the whole embedding matrix, just the new tokens. As such, if you have an idea or know of some code that shows how to achieve this, it would be appreciated.
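A minimal sketch of the gradient-masking trick mentioned above, assuming a plain PyTorch embedding layer and hypothetical token indices; this is not a PEFT API, just one way to keep only selected embedding rows trainable:

```python
import torch
import torch.nn as nn

# Hypothetical sizes and token ids (e.g. two newly added tokens).
vocab_size, hidden_dim = 32000, 768
embedding = nn.Embedding(vocab_size, hidden_dim)
trainable_ids = torch.tensor([31998, 31999])

# Mask with 1.0 on rows we want to train, 0.0 everywhere else.
mask = torch.zeros(vocab_size, 1)
mask[trainable_ids] = 1.0

# Zero out the gradient of frozen rows before the optimizer sees it.
embedding.weight.register_hook(lambda grad: grad * mask.to(grad.device))

# Usage: an optimizer step now only changes the selected rows.
optimizer = torch.optim.SGD(embedding.parameters(), lr=0.1)
out = embedding(torch.tensor([[31999, 5]]))
out.sum().backward()
optimizer.step()  # row 5 stays untouched, row 31999 is updated
```

Note that this only masks gradients: optimizers with weight decay or accumulated momentum (e.g. AdamW) may still move the frozen rows slightly, which is why plain SGD without momentum is used in the sketch; copying the frozen rows back after each step would be a more robust variant.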

dongdongzhaoUP commented 2 months ago

Thank you for your reply. I have also considered the scenarios you mentioned; it would indeed be a useful feature.

github-actions[bot] commented 1 month ago

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.