Closed: FUTUREEEEEE closed this issue 6 months ago
This seems strange; perhaps you need to debug and check if the code for saving model parameters was successfully executed during the training process.
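One quick way to check is to list the checkpoint directory and see which adapter files training actually wrote. A minimal sketch (the path is taken from the report below):

```python
import os

# List the checkpoint directory produced by training; we expect to see
# adapter_config.json plus adapter_model.bin or adapter_model.safetensors.
ckpt_dir = "Reading/LLaMA2-7b/WebQSP_Freebase_NQ_lora_epoch100/checkpoint"
for name in sorted(os.listdir(ckpt_dir)):
    print(name)
```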
Hi,
I have done some debugging and found that `adapter_model.safetensors` does contain the LoRA weights; changing `WEIGHTS_NAME` to `SAFETENSORS_WEIGHTS_NAME` fixed the issue.
Do you think this will load the LoRA weights correctly? I will update with the results once the evaluation finishes.
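For reference, a minimal sketch of what the relaxed check could look like. `has_lora_weights` is a hypothetical helper, not the repo's actual code; the two file-name constants mirror PEFT's default save names:

```python
import os

WEIGHTS_NAME = "adapter_model.bin"                      # older PEFT save format
SAFETENSORS_WEIGHTS_NAME = "adapter_model.safetensors"  # newer PEFT save format

def has_lora_weights(checkpoint_dir: str) -> bool:
    """Accept a checkpoint whether the adapter was saved as .bin or .safetensors."""
    return any(
        os.path.isfile(os.path.join(checkpoint_dir, name))
        for name in (WEIGHTS_NAME, SAFETENSORS_WEIGHTS_NAME)
    )

assert has_lora_weights("Reading/LLaMA2-7b/WebQSP_Freebase_NQ_lora_epoch100/checkpoint"), \
    "Provided path does not contain a LoRA weight."
```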
The eval results are as follows; do they match the original results?
```
Total lines: 1639  Matched lines: 1225  Will Matched lines: 1500
Percentage of matched lines: 74.74%  Percentage of will matched lines: 91.52%

Loading data from: Reading/LLaMA2-7b/WebQSP_Freebase_NQ_lora_epoch100/evaluation_beam/generated_predictions.jsonl
Dataset len: 1639
Start predicting total:1639, ex_cnt:1026, ex_rate:0.6259914582062233, real_ex_rate:0.6424546023794615, contains_ex_cnt:1227, contains_ex_rate:0.74862721171446 real_contains_ex_rate:0.7683155917345021
Prediction Finished
```
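(For reference, the percentages follow directly from the counts above: 1225/1639 ≈ 74.74% matched and 1500/1639 ≈ 91.52% will-matched.)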
This is just an intermediate result; we still need to evaluate the KBQA result with Retrieval.
Hi, thanks for sharing the code. I was trying to run it and got stuck on:

```
python -u LLMs/LLaMA/src/beam_output_eva.py --model_name_or_path meta-llama/Llama-2-7b-hf --dataset_dir LLMs/data --dataset WebQSP_Freebase_NQ_test --template llama2 --finetuning_type lora --checkpoint_dir Reading/LLaMA2-7b/WebQSP_Freebase_NQ_lora_epoch100/checkpoint --num_beams 15
```
I have finished the 100-epoch training on WebQSP and get the following error:

```
AssertionError: Provided path (Reading/LLaMA2-7b/WebQSP_Freebase_NQ_lora_epoch100/checkpoint) does not contain a LoRA weight.
```
Here is a screenshot of the checkpoint directory I got after training: ![image](https://github.com/LHRLAB/ChatKBQA/assets/52389798/65ba5769-f78b-474d-9b46-75da4ca9a763)
Looking forward to your response.
Best regards, Xiaqiang