Hello, I followed your tutorial for converting fine-tuned weights to the punica format [reference]. Then, while running:

```
python punica/examples/textgen_lora.py --lora-weight model/table-peft.punica.pt --prompt "Tell me about yourself"
```

I am getting this error:
```
Traceback (most recent call last):
  File "/home/bibekyess/punica_sandbox/punica/examples/textgen_lora.py", line 180, in <module>
    main()
  File "/home/bibekyess/anaconda3/envs/punica-12.1/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/bibekyess/punica_sandbox/punica/examples/textgen_lora.py", line 117, in main
    lora_weight.copy_from_tensors(tmp)
  File "/home/bibekyess/anaconda3/envs/punica-12.1/lib/python3.10/site-packages/punica/models/llama_lora.py", line 87, in copy_from_tensors
    self.q.copy_from_tensor(ts["q.A"], ts["q.B"])
  File "/home/bibekyess/anaconda3/envs/punica-12.1/lib/python3.10/site-packages/punica/utils/lora.py", line 33, in copy_from_tensor
    self.wa.copy_(a.to(self.wa.device).to(self.wa.dtype))
RuntimeError: The size of tensor a (4096) must match the size of tensor b (5120) at non-singleton dimension 2
```
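In case it helps with diagnosis, this is how I inspected the tensor shapes stored in the converted weight file. It's a minimal sketch that assumes the `.punica.pt` file is a plain dict of name → tensor (I'm not certain that's the exact format `convert_lora_weight.py` produces), and the path is just from my setup:

```python
import torch


def lora_tensor_shapes(path: str) -> dict:
    """Load a converted LoRA weight file and report each tensor's shape.

    Assumes the file is a plain dict mapping names to tensors.
    """
    ts = torch.load(path, map_location="cpu")
    return {name: tuple(t.shape) for name, t in ts.items()}


# Hypothetical usage with my converted file:
# print(lora_tensor_shapes("model/table-peft.punica.pt"))
```

The shapes it prints for `q.A` / `q.B` should make it obvious which side of the copy has hidden size 4096 and which has 5120.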
What might be the reason? I didn't use your suggested script for finetuning; I fine-tuned with the Hugging Face libraries and then converted the result to the punica format. I noticed that `convert_lora_weight.py` only takes the LoRA weight file, so do I need to pass the LoRA adapter config file as well?

Thank you for your help! :)