Open Nieysh opened 8 months ago
Thanks for pointing out this bug! Actually, alpaca_finetuning_v1 only supports training on the Alpaca dataset. For inference, you can use the model in our main directory: https://github.com/OpenGVLab/LLaMA-Adapter/blob/main/llama/model.py, in which the forward function is for inference.
Thanks! But I find that the model.py under https://github.com/OpenGVLab/LLaMA-Adapter/blob/main/llama/model.py is different from the one under alpaca_finetuning_v1/llama/model.py. I tried to use the forward function from there as the forward_only function so that I can run inference while training, but it failed because alpaca_finetuning_v1/llama/model.py does not seem to account for the case when seq_len > 1 and mask is None.
You can add the forward_only function from https://github.com/OpenGVLab/LLaMA-Adapter/blob/db9fb8fb214c59f5bcd17bee3329f5b8c907290c/llama_adapter_v2_chat65b/llama/model.py#L335-L365 to alpaca_finetuning_v1; we will then update the code.
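For reference, the shape of such a forward_only method follows the usual LLaMA-style pattern: embed the tokens, build a causal mask only when more than one token is being processed, and return the logits for the last position. The sketch below is a minimal, self-contained stand-in (the class, its dimensions, and the omitted transformer blocks are hypothetical), not the repository's actual implementation:

```python
import torch
import torch.nn as nn


class TinyLLaMALike(nn.Module):
    """Minimal stand-in for a LLaMA-style Transformer (hypothetical names/sizes)."""

    def __init__(self, vocab_size: int = 32, dim: int = 16):
        super().__init__()
        self.tok_embeddings = nn.Embedding(vocab_size, dim)
        self.norm = nn.LayerNorm(dim)
        self.output = nn.Linear(dim, vocab_size, bias=False)

    @torch.inference_mode()
    def forward_only(self, tokens: torch.Tensor, start_pos: int) -> torch.Tensor:
        _bsz, seqlen = tokens.shape
        h = self.tok_embeddings(tokens)
        # Build a causal mask only when decoding more than one token at once;
        # for single-token steps (seqlen == 1) the mask stays None.
        mask = None
        if seqlen > 1:
            mask = torch.full((1, 1, seqlen, seqlen), float("-inf"))
            mask = torch.triu(mask, diagonal=start_pos + 1)
        # ... the real model would run its transformer blocks here,
        # passing (h, start_pos, mask) through each layer ...
        h = self.norm(h)
        # Return logits for the last position only, as in generation loops.
        return self.output(h[:, -1, :]).float()
```

The key point is the `mask = None` branch: any attention layer downstream must tolerate receiving no mask at all.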
Thanks! I added it and still get the following bug. It seems that the attention module can't handle mask == None:
Traceback (most recent call last):
File "example_test_infer.py", line 114, in
Hello, could you please update the forward_only function?
I'm trying to use alpaca_finetuning_v1/llama to autoregressively generate text for validation during finetuning. However, alpaca_finetuning_v1/llama/generation.py line 42 calls: logits = self.model.forward_only(tokens[:, prev_pos:cur_pos], prev_pos), and I find no forward_only function in alpaca_finetuning_v1/llama/model.py. Could you please release the forward_only function? Or is there any way to use alpaca_finetuning_v1/llama/model to generate text for validation? It is not convenient to load the trained model during training via llama/model.py under the root folder.
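For context, the call shape quoted from generation.py implies the usual incremental decoding loop: the first step feeds the whole prompt (seq_len > 1), and every later step feeds a single new token. A hedged sketch of such a loop (greedy decoding; the function name and lengths are illustrative, not the repository's actual code):

```python
import torch


def greedy_decode(model, tokens: torch.Tensor, start_len: int,
                  max_gen_len: int) -> torch.Tensor:
    """Autoregressive greedy decoding matching the generation.py call shape.

    tokens: (bsz, total_len) buffer pre-filled with the prompt up to start_len.
    """
    prev_pos = 0
    total_len = start_len + max_gen_len
    for cur_pos in range(start_len, total_len):
        # First iteration: seq_len > 1 (whole prompt). Later iterations:
        # seq_len == 1, which is exactly when the model sees mask == None.
        logits = model.forward_only(tokens[:, prev_pos:cur_pos], prev_pos)
        next_token = torch.argmax(logits, dim=-1)
        tokens[:, cur_pos] = next_token
        prev_pos = cur_pos
    return tokens
```

This is why forward_only (and the attention layers it calls) must handle both the masked multi-token case and the mask-free single-token case.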