vrmer closed this issue 3 weeks ago.
I'm honestly at a loss here. This appears to be an implementation issue deep down in the PyTorch MPS code. As the message says, this error should be reported to PyTorch.
If it's possible for you, you could share your adapter file (in safetensors format) and the loading code with me, and I can check whether I can load it successfully on my machine. However, I don't have a MacBook for testing, so I probably won't be able to reproduce the error.
One idea that comes to mind: Could you try loading on CPU and after it's finished move it to MPS?
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
System Info
System: MacBook M1 Pro (MPS)
accelerate==0.33.0
peft==0.12.0
torch==2.4.1
transformers==4.44.0
Who can help?
No response
Information

Tasks

Reproduction
I have trained LoRA adapters using MPS for both BERT and RoBERTa. While I can load the BERT LoRA without problems after training, loading the RoBERTa one fails with the following error messages:
For reference, this is where this error is thrown:
finetuned_model is a path that points to the folder containing the relevant adapter_config.json and adapter_model.safetensors files. The sizes of the adapter_model.safetensors files are 2.4 MB and 5 MB for BERT and RoBERTa respectively.

Expected behavior
The already trained RoBERTa adapters should load successfully, just as the BERT adapters already do.