-
### Please check that this issue hasn't been reported before.
- [X] I searched previous [Bug Reports](https://github.com/OpenAccess-AI-Collective/axolotl/labels/bug) and didn't find any similar reports.
…
-
Study LLM models trained on Spanish-language corpora, giving priority to those built on the basis of:
- Llama-2
- Mistral
- Gemma
- GPT-3.5
**Expected result:**
T…
-
Hi all,
Sorry, I'm new to Hugging Face/LLaMA/Alpaca; I encounter this error when running finetune.py:
Loading checkpoint shards: 100%|██████████| 33/33 [00:10
-
Today I updated the unsloth version for the first time, to 2024.8, and found a strange phenomenon. The fine-tuning results using version 2024.4 were very good, but the fine-tuning results using…
-
Describe the bug
I am trying to fine-tune tiiuae/falcon-7b-instruct and I am getting this error:
`TypeError: where(): argument 'condition' (position 1) must be Tensor, not bool`
**To Reproduce**…
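For reference, this error message usually means a plain Python `bool` was passed to `torch.where` where a boolean tensor was expected. A minimal sketch that reproduces the same message (this is an illustration of the error class, not the Falcon training code itself):

```python
import torch

x = torch.zeros(3)
y = torch.ones(3)

# Works: the condition is a boolean Tensor
mask = torch.tensor([True, False, True])
result = torch.where(mask, x, y)  # picks x where mask is True, else y

# Fails as in the report: the condition is a plain Python bool,
# e.g. from evaluating `tensor.dtype == torch.float16` style checks
try:
    torch.where(True, x, y)
except TypeError as e:
    print(e)  # where(): argument 'condition' (position 1) must be Tensor, not bool
```

In practice the bool often comes from upstream code (e.g. an attention-mask helper) collapsing a tensor condition to a scalar, so the fix is usually to pin library versions or wrap the condition in `torch.tensor(...)`.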
-
Dear authors of VideoLLaMA2,
Thanks for the great work. We tried to reproduce your results on vllava datasets using the latest version of the code. However, we observe a large discrepancy in the thre…
-
This is amazing work. I have been working on something that would require me to evaluate the generated outputs of models like Mistral, using a prompt like:
`"Fill the [MASK] token in the sentence.…
-
I used the command `tune run generate --config custom_quantization.yaml prompt='Explain some topic'` to generate inference from a fine-tuned phi3 model through torchtune.
Config custom_quantization.y…
-
### 🐛 Describe the bug
Dear Community,
I am trying to fine-tune one of the Mistral AI models using the following code: https://github.com/mistralai/mistral-finetune.
It fails when running (University G…
-
Hi,
Thanks for such a wonderful repo. I was trying to train the model with a custom dataset using the LoRA script and got the error below:
```
[2024-10-29 17:59:25,985] [INFO] [real_accelerat…