modal-labs / llm-finetuning

Guide for fine-tuning Llama/Mistral/CodeLlama models and more
MIT License

Weights & Biases integration #5

Closed · neverSettles closed this issue 1 year ago

neverSettles commented 1 year ago

Hi, I'd like to integrate Weights & Biases into my training code. I'm a bit stuck on how to do that.

I've started by trying to set up wandb and log some default values, but after trying that it's producing an issue.

Here's the start I have so far: https://github.com/modal-labs/llama-finetuning/commit/5d1b6e29e8ad889731d9b6f6ebe80367ef8885a5
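
For context, this is roughly the pattern I'm going for: a wandb run initialized and logged to from inside a Modal function. The sketch below is illustrative, not the actual code in the commit; the app name, secret name (`my-wandb-secret`), project name, and logged values are placeholders, and on older Modal releases `modal.App` would be `modal.Stub`.

```python
import modal

# Placeholder names for illustration only; not taken from the repo.
app = modal.App("wandb-demo")  # modal.Stub(...) on older Modal releases

image = modal.Image.debian_slim().pip_install("wandb")

@app.function(
    image=image,
    secrets=[modal.Secret.from_name("my-wandb-secret")],  # secret provides WANDB_API_KEY
)
def train():
    # Import inside the function so it only runs in the container, where wandb is installed.
    import wandb

    run = wandb.init(project="llama-finetuning", config={"lr": 2e-5, "epochs": 3})
    for step in range(10):
        wandb.log({"loss": 1.0 / (step + 1)}, step=step)
    run.finish()

@app.local_entrypoint()
def main():
    train.remote()
```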

Currently that is giving me:

```
/root/train.py:45 in train

❱ 45 from torch.distributed.run import elastic_launch, parse_args, config_from_args

ModuleNotFoundError: No module named 'torch'
```

Any pointers would be super helpful!
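
My hunch is that the import fails because `train()` ends up running in (or being imported into) an environment that doesn't have torch installed, rather than in the GPU image that pip-installs it. As a sanity check I've been thinking of something like the sketch below; the image contents and GPU spec are just examples, not the repo's actual configuration.

```python
import modal

app = modal.App("torch-env-check")  # illustrative; modal.Stub on older Modal versions

# Whatever image actually runs the training function must include torch,
# otherwise `from torch.distributed.run import ...` raises ModuleNotFoundError.
train_image = modal.Image.debian_slim().pip_install("torch", "wandb")

@app.function(image=train_image, gpu="any")
def check_env():
    # Keep heavy imports inside the function body so they only execute in the container.
    import torch
    import wandb

    print("torch", torch.__version__, "cuda available:", torch.cuda.is_available())
    print("wandb", wandb.__version__)
```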

neverSettles commented 1 year ago

Fixed it with the following code inside of llama-recipes: https://github.com/Llama2D/llama-recipes/commits/add_wandb_modal
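
For anyone hitting the same thing, the linked commits are the actual change; the gist of the pattern is just to initialize a wandb run on rank 0 and log metrics from the training loop. A rough, illustrative outline (function names and config handling here are made up, not the code in the commits):

```python
# Illustrative only; see the linked commits for the real change.
import os
import wandb

def maybe_init_wandb(train_config):
    # Initialize a single run on rank 0 of a torchrun launch (RANK is set by the launcher).
    if int(os.environ.get("RANK", "0")) == 0:
        return wandb.init(project="llama-recipes", config=vars(train_config))
    return None

def log_step(run, step, loss, lr):
    # No-op on non-zero ranks, where no run was created.
    if run is not None:
        run.log({"train/loss": loss, "train/lr": lr}, step=step)
```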