ludwig-ai / ludwig

Low-code framework for building custom LLMs, neural networks, and other AI models
http://ludwig.ai
Apache License 2.0

Token-level Probability Always 0.0 When Fine-tuning Llama2-7b Model on Single GPU #3979

Closed MoOo2mini closed 1 month ago

MoOo2mini commented 7 months ago

Describe the bug The token-level probabilities are consistently 0.0 when fine-tuning the Llama2-7b model with the "Ludwig + DeepLearning.ai: Efficient Fine-Tuning for Llama2-7b on a Single GPU" notebook: https://colab.research.google.com/drive/1Ly01S--kUwkKQalE-75skalp-ftwl0fE?usp=sharing

My code exhibiting the problem is here: https://colab.research.google.com/drive/1OmbCKlPzlxm4__iThYqB9PSLUWZZVptz?usp=sharing

To Reproduce Steps to reproduce the behavior:

  1. Fine-tune the Llama2-7b model using the provided notebook.
  2. Execute the model's predictions using the predict function with modified parameters, including setting skip_save_unprocessed_output to False and providing a specific output_directory.
  3. Despite modifications, the token-level probabilities remain 0.0.
import pandas as pd

# Per step 2 above: skip_save_unprocessed_output is set to False so that the
# unprocessed (token-level) outputs are written to output_directory.
ludwig.predict(
    dataset=None,
    data_format=None,
    split='full',
    batch_size=128,
    skip_save_unprocessed_output=False,
    skip_save_predictions=True,
    output_directory='results',
    return_type=pd.DataFrame,
    debug=False,
)

Expected behavior Token-level probabilities should reflect the model's confidence in predicting each token's output.
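
For reference, token-level probabilities are typically obtained by applying a softmax to the model's logits at each decoding step, so a correctly wired pipeline should essentially never produce exactly 0.0 for the predicted token. A minimal pure-Python sketch (no Ludwig dependency; the logits are made-up illustrative values):

```python
import math

def softmax(logits):
    """Convert raw logits into a probability distribution over tokens."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up logits for a 4-token vocabulary at one decoding step.
logits = [2.0, 1.0, 0.1, -1.0]
probs = softmax(logits)

# Every probability is strictly positive, and they sum to 1.
assert all(p > 0.0 for p in probs)
assert abs(sum(probs) - 1.0) < 1e-9
```

If the reported probabilities are exactly 0.0 rather than merely small, that suggests the probability tensors are being dropped or zero-filled somewhere in the predict/postprocessing path rather than being genuinely computed.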

Screenshots N/A

Environment:

Additional context The logger within the predict function does not seem to function as expected.
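
If the predict-time log output is missing, it may help to raise the log level explicitly before calling predict; a minimal sketch using Python's standard logging module (the logger name "ludwig" is an assumption):

```python
import logging
import sys

# Send log records to stdout and lower the threshold to DEBUG so that
# predict-time messages are not silently filtered out.
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
logging.getLogger("ludwig").setLevel(logging.DEBUG)
```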

(Screenshot attached: 2024-04-02, 4:45 PM)

alexsherstinsky commented 4 months ago

Hello, @MoOo2mini -- thank you for using Ludwig's LLM fine-tuning capabilities and reporting your issue. We cannot reproduce your error, because we do not have access to your model:

FileNotFoundError: [Errno 2] No such file or directory: '/content/test/model_hyperparameters.json'

Could you please make your model available (e.g., on HuggingFace)? I will be happy to troubleshoot the problem.

Thank you very much.