After loading the model (Llama 2 70B) via `AutoDistributedModelForCausalLM`, the following line

```python
model = get_peft_model(model, config)
```

crashes with `ValueError: Target modules [] not found in the base model. Please check the target modules and try again.` at `peft/tuners/tuners_utils.py:222` in `inject_adapter`.
The documentation states that Petals supports the peft library, but none of the examples actually use it, from what I can tell.
Edit: the list is empty (`[]`) because a previous function that attempts to locate all LoRA-trainable modules comes up empty-handed. The modules and their corresponding classes are:

```
 is <class 'petals.models.llama.model.DistributedLlamaForCausalLM'>
model is <class 'petals.models.llama.model.DistributedLlamaModel'>
model.embed_tokens is <class 'torch.nn.modules.sparse.Embedding'>
model.layers is <class 'petals.client.remote_sequential.RemoteSequential'>
model.norm is <class 'transformers.models.llama.modeling_llama.LlamaRMSNorm'>
lm_head is <class 'petals.client.lm_head.LMHead'>
```
None of the usual LoRA target modules (e.g. `q_proj`, `v_proj`) are present, presumably because the transformer blocks sit behind `RemoteSequential` and are executed on remote servers, so their submodules never show up in the local module tree. I am still thoroughly stuck.
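For reference, peft decides which modules to wrap by matching each entry of `target_modules` against the dotted names from the base model's `named_modules()`. Below is a simplified, stdlib-only sketch of that matching (the real logic lives in `peft/tuners/tuners_utils.py`; the module names are taken from the listing above), which shows why the result here is inevitably empty:

```python
# Simplified sketch of how peft matches LoraConfig.target_modules against
# the base model's module names. peft wraps a module whose dotted name is
# either exactly a target or ends with ".<target>".

def matching_modules(module_names, target_modules):
    """Return the module names a LoRA target list would wrap."""
    matched = []
    for name in module_names:
        for target in target_modules:
            if name == target or name.endswith("." + target):
                matched.append(name)
                break
    return matched

# Module names reported by the distributed Llama model above; the
# transformer blocks hide behind "model.layers" (RemoteSequential):
petals_modules = [
    "model",
    "model.embed_tokens",
    "model.layers",
    "model.norm",
    "lm_head",
]

# The usual Llama LoRA targets never appear, so nothing matches:
print(matching_modules(petals_modules, ["q_proj", "v_proj"]))  # []

# With a regular (non-distributed) Llama, names like this would match:
print(matching_modules(["model.layers.0.self_attn.q_proj"], ["q_proj"]))
```

In other words, the empty-list error is a direct consequence of the remote layers never being enumerable locally, not of a misconfigured `target_modules`.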