-
I have successfully fine-tuned the model using QLoRA for a custom use case. Now I have the LoRA adapters; can you tell me how to use them for inference? Maybe merge the LoRA weights with the original mo…
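For reference, merging a LoRA adapter just folds the low-rank update, scaled by `alpha/r`, back into the frozen base weight; with PEFT this is typically `PeftModel.from_pretrained(base, adapter_dir).merge_and_unload()`. A minimal plain-PyTorch sketch of the arithmetic (all names and values here are illustrative, not from the original post):

```python
import torch

torch.manual_seed(0)

d, r, alpha = 16, 4, 8          # hidden size, LoRA rank, scaling numerator (illustrative)
W = torch.randn(d, d)           # frozen base weight
A = torch.randn(r, d) * 0.01    # LoRA "down" projection (trained)
B = torch.zeros(d, r)           # LoRA "up" projection (zeros at init)
B[0, 0] = 0.5                   # pretend training moved one entry

scale = alpha / r
W_merged = W + scale * (B @ A)  # the merge: fold the adapter into the base weight

x = torch.randn(3, d)
# Adapter-on-the-side forward and merged forward must agree.
y_side = x @ W.T + scale * (x @ A.T) @ B.T
y_merged = x @ W_merged.T
print(torch.allclose(y_side, y_merged, atol=1e-5))  # True
```

After merging, the model no longer needs the PEFT wrapper at inference time, and the merged weights can be saved and loaded like an ordinary checkpoint.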
-
### Describe the issue
Issue:
Hello, I'm running the finetuning job for llava-13B, but I keep getting the error message "RuntimeError: mat1 and mat2 must have the same dtype". It's possible that the…
-
Hi the team, great work!
QDoRA seems to be better than QLoRA, refer to [Efficient finetuning of Llama 3 with FSDP QDoRA](https://www.answer.ai/posts/2024-04-26-fsdp-qdora-llama3.html)
I wonder w…
-
If we follow the script settings of long-llm, where the parameter num_train_epoch is set to 1, it gives a really significant improvement over the original model. However, if we change the parameter to…
-
Any idea how to solve this:
Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit
the quantized model. If you want to dispatch the m…
-
I used the command `tune run generate --config custom_quantization.yaml prompt='Explain some topic'` to run inference from the fine-tuned phi3 model through torchtune.
Config custom_quantization.y…
-
Hi,
I ran the LoRA fine-tune for LLaMA-3.1-8B on the default alpaca-clean dataset.
Then I used `generate.py` and `generation.yaml` for the test.
I found there is garbled output after lor…
-
```shell
09/10 [07:43:21] INFO | >> [*] Loading from local path `/code/Basemodel/ML-Mamba` …
```
-
Hi!
1) Please provide an EXAMPLE of LoRA training with a dataset and an EXAMPLE of using the LoRA. Thanks a lot.
2) Will train.py train only the LoRA?
3) What's the size of the LoRA, and how many epochs should it be trained for?
4) H…
-
Hello,
Based on your code, I added Korean tokens (using a Korean emotional dataset) to the tokenizer and fine-tuned the model with the LibriTTS R dataset. The Korean dataset is slightly less than 3…