huggingface / peft

🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
https://huggingface.co/docs/peft
Apache License 2.0

Change layers_pattern logic #2158

Open BenjaminBossan opened 1 month ago

BenjaminBossan commented 1 month ago

Addresses part of #2155.

Description

So far, the layers_pattern argument only works if there is a prefix before the pattern. For example, if the module name is:

decoder.layer.0.attn.to_q

and we pass layers_pattern="layer", this matches. However, if the module name is:

layer.0.attn.to_q

i.e. without a prefix before "layer", it does not match.

Usually, when we create a model with AutoModelForFoo.from_pretrained, the "layer" part is never first in the module name. However, if we load a model class directly, e.g. through LlamaModel.from_pretrained, there is no such prefix, and as a consequence we get no match.

With this PR, the prefix is made optional, so that the second pattern also matches.
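The difference can be illustrated with a small sketch. The regexes below are illustrative approximations of the matching logic, not the exact PEFT source; `match_layer_index` is a hypothetical helper name:

```python
import re

def match_layer_index(key: str, layers_pattern: str = "layer"):
    """Return (old_match, new_match) for a module name, as a sketch of the change."""
    # Old behavior (sketch): a prefix ending in "." is required before the pattern,
    # so names starting directly with "layer." never match.
    old = re.match(rf".*\.{layers_pattern}\.(\d+)\.", key)
    # New behavior (sketch): the prefix is optional, so both forms match.
    new = re.match(rf"(?:^|.*\.){layers_pattern}\.(\d+)\.", key)
    return old, new

# With a prefix, both variants match and capture the layer index "0":
old, new = match_layer_index("decoder.layer.0.attn.to_q")
# Without a prefix, only the new variant matches:
old, new = match_layer_index("layer.0.attn.to_q")
```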

Status

I'm not sure yet whether this should be merged, as it is technically backwards incompatible. Users can still target the desired modules by carefully crafting a regex for target_modules so that it only matches the desired layer indices. However, this is tedious, and layers_pattern was introduced precisely to avoid having to do that.
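As a rough sketch of the workaround mentioned above, a user could restrict target_modules with a regex over the layer indices themselves. The module names and the layer range below are made up for illustration; a string target_modules is matched against the full module name:

```python
import re

# Hypothetical module names for a 12-layer model without a prefix before "layer".
keys = [f"layer.{i}.attn.to_q" for i in range(12)]

# A target_modules-style regex that only matches layers 0 through 3.
pattern = r"layer\.[0-3]\.attn\.to_q"
selected = [k for k in keys if re.fullmatch(pattern, k)]
# selected now contains only the to_q modules of layers 0, 1, 2, and 3.
```

Crafting such a regex by hand for every model layout is exactly the tedium that layers_pattern is meant to spare users.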

HuggingFaceDocBuilderDev commented 1 month ago

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.