Describe the bug
The token-level probabilities consistently appear as 0.0 when fine-tuning the Llama2-7b model using "Ludwig + DeepLearning.ai: Efficient Fine-Tuning for Llama2-7b on a Single GPU.ipynb": https://colab.research.google.com/drive/1Ly01S--kUwkKQalE-75skalp-ftwl0fE?usp=sharing

Here is my notebook exhibiting the problem: https://colab.research.google.com/drive/1OmbCKlPzlxm4__iThYqB9PSLUWZZVptz?usp=sharing

To Reproduce
Steps to reproduce the behavior:
1. Fine-tune the Llama2-7b model using the provided notebook.
2. Run the model's predictions using the `predict` function with modified parameters, setting `skip_save_unprocessed_output` to `False` and providing a specific `output_directory`.
3. Observe that, despite these modifications, the token-level probabilities remain 0.0.

Expected behavior
Token-level probabilities should reflect the model's confidence in predicting each token's output.
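The reproduction in step 2 can be sketched as below. The Ludwig call is shown in a comment only, since it needs a trained checkpoint; the model path, dataset, and `output_probabilities` column name are assumptions for illustration, not taken from the notebook. The small helper makes the reported symptom (every token-level probability exactly 0.0) checkable.

```python
# Sketch of step 2 above; model path, dataset, and column name are hypothetical.
import numpy as np


def all_token_probs_zero(prob_rows):
    """True when every token-level probability in every row is 0.0,
    i.e. the buggy behavior described in this report."""
    return all(np.all(np.asarray(row) == 0.0) for row in prob_rows)


# With a fine-tuned model, the call would look roughly like this
# (not executed here, since it needs a trained checkpoint):
#
#   from ludwig.api import LudwigModel
#   model = LudwigModel.load("results/experiment_run/model")  # hypothetical path
#   predictions, output_dir = model.predict(
#       dataset=test_df,
#       skip_save_unprocessed_output=False,
#       output_directory="predict_output",  # hypothetical directory
#   )
#   print(all_token_probs_zero(predictions["output_probabilities"]))

# Stand-in data showing what the bug looks like versus a healthy run:
buggy = [np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0])]
healthy = [np.array([0.42, 0.17]), np.array([0.93])]
print(all_token_probs_zero(buggy))    # expected: True
print(all_token_probs_zero(healthy))  # expected: False
```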
Screenshots
N/A
Environment:
Additional context
The logger inside the `predict` function does not seem to emit output as expected.
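One way to rule out log messages being suppressed (rather than never emitted) is to raise the logging verbosity before calling `predict`. This is a sketch using only the standard library; the `"ludwig"` logger name assumes the library logs under its package name, which is the usual convention but not verified here.

```python
import logging

# Route all log records, including DEBUG, to stderr so that any
# messages the predict function does emit become visible.
logging.basicConfig(level=logging.DEBUG)

# Raise verbosity for the library's own logger hierarchy; the
# "ludwig" name is an assumption based on the package-name convention.
logging.getLogger("ludwig").setLevel(logging.DEBUG)

# Quick visibility check that this logger actually reaches the handler.
logging.getLogger("ludwig").debug("logging visibility check")
```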