Closed liujianwei2023 closed 11 months ago
As you can see, your adapter_model.bin is only 4.0K. The probable cause for this issue might be related to the version of the "peft" package. You may need to modify the code accordingly based on the version you are utilizing, or alternatively, revert to a previous version.
Thanks for your patience, I'll try again
Thanks, problem solved. I have a new question, what is the meaning of 8241 and 3782 in the code? labels_index = torch.argwhere(torch.bitwise_or(labels == 8241, labels == 3782))
Here we took a shortcut: these values are the token IDs of "Yes" and "No".
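For reference, a minimal runnable sketch of what that line computes, using a toy label tensor (the IDs 8241/3782 are the ones from this thread; whether they match your tokenizer should be verified):

```python
import torch

# 8241 and 3782 are the "Yes"/"No" token IDs mentioned in this thread;
# verify them against your own tokenizer before relying on them.
YES_ID, NO_ID = 8241, 3782

# Toy label tensor: -100 marks positions ignored by the loss.
labels = torch.tensor([-100, -100, 8241, -100, 3782, -100])

# Indices of every position whose label is the "Yes" or "No" token.
labels_index = torch.argwhere(torch.bitwise_or(labels == YES_ID, labels == NO_ID))
print(labels_index.flatten().tolist())  # -> [2, 4]
```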
Thanks. I would like to ask further: how do I locate the index of a specific token ID ("Yes"/"No"), and must this index be in the response?
There are multiple "Yes"/"No" tokens in the LLaMA tokenizer; you should check the token IDs of your input & output.
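A hedged sketch of that check. The commented lines show the usual transformers calls (not run here); the toy vocabulary below is made up (only 8241/3782 come from this thread) and just illustrates why a word like "Yes" can map to more than one ID in a SentencePiece vocab, with and without the word-start marker '▁' (U+2581):

```python
# With the real tokenizer you would do something like (path is a placeholder):
#   from transformers import LlamaTokenizer
#   tok = LlamaTokenizer.from_pretrained("path/to/llama")
#   print(tok.convert_tokens_to_ids("Yes"), tok.convert_tokens_to_ids("\u2581Yes"))

# Toy stand-in vocabulary; 8241/3782 are from the thread, the other IDs are invented.
toy_vocab = {"Yes": 8241, "\u2581Yes": 11111, "No": 3782, "\u2581No": 22222}

def ids_for(word: str, vocab: dict) -> list:
    """All token IDs whose surface form is `word`, with or without
    the SentencePiece word-start marker '\u2581'."""
    return sorted(i for t, i in vocab.items() if t.lstrip("\u2581") == word)

print(ids_for("Yes", toy_vocab))  # -> [8241, 11111]
```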
thanks
As you can see, your adapter_model.bin is only 4.0K. The probable cause for this issue might be related to the version of the "peft" package. You may need to modify the code accordingly based on the version you are utilizing, or alternatively, revert to a previous version.
How do I go about fixing this? The files I save are also only 4 KB.
Make sure your peft version is 0.3.0, or you will need to fix the code accordingly.
Have you verified your stored LoRA model? Is the size of the model appropriate?
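A small sketch of such a sanity check. The ~1 MB threshold is an assumption (a LoRA adapter for a 7B model is typically tens of MB, while the broken save in this thread was 4 KB):

```python
def looks_truncated(size_bytes: int, min_bytes: int = 1_000_000) -> bool:
    """Heuristic: a saved LoRA adapter file far below ~1 MB (threshold assumed)
    is almost certainly an empty save, like the 4 KB adapter_model.bin here."""
    return size_bytes < min_bytes

# Usage against a real checkpoint (path is a placeholder):
#   import os
#   looks_truncated(os.path.getsize("output/adapter_model.bin"))
print(looks_truncated(4 * 1024))  # the 4 KB case -> True
```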
Originally posted by @SAI990323 in https://github.com/SAI990323/TALLRec/issues/20#issuecomment-1704585600