-
I hope this message finds you well. I recently had the opportunity to experiment with the Codellama-7b-Instruct model from the GitHub repository and was pleased to observe its promising performance. Encou…
-
Hi everyone,
I'm unable to fuse the model after fine-tuning; I get the error below. Can someone please help? All paths are correct and the adapter itself works fine.
antoine@Mac-Studio lora % python fus…
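Since the fuse step depends on the base-model and adapter paths resolving correctly, a quick stdlib sanity check can rule out path problems before digging into the fuse script itself. The paths below are placeholders, not the poster's actual ones:

```python
from pathlib import Path

def check_fuse_inputs(model_dir: str, adapter_file: str) -> list[str]:
    """Return a list of problems found with the fuse inputs (empty if none)."""
    problems = []
    model = Path(model_dir)
    adapter = Path(adapter_file)
    if not model.is_dir():
        problems.append(f"model directory missing: {model}")
    if not adapter.is_file():
        problems.append(f"adapter weights missing: {adapter}")
    elif adapter.stat().st_size == 0:
        problems.append(f"adapter file is empty: {adapter}")
    return problems

# Placeholder paths -- substitute the ones passed to the fuse script:
print(check_fuse_inputs("mlx_model", "adapters.npz"))
```

If this reports no problems, the issue is more likely a version mismatch between the adapter and the fuse script than a path error.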
-
![345262460-cb6c8569-4307-4275-b536-21aa253d9eee](https://github.com/user-attachments/assets/f26f7074-315b-46e1-b712-c0eeeb98c2cf)
I have already fine-tuned videollama2 on a custom dataset using…
-
Hi, I am new to the field and am fine-tuning a model for the first time. I am working with torchtune's lora_finetune_single_device recipe. While I was able to run the fine-tuning using the built-in alpaca da…
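To point the recipe at a custom dataset instead of the built-in alpaca one, the `dataset` section of the recipe config can typically be overridden. The component path and field names below are an assumption based on recent torchtune versions and may differ in yours:

```yaml
# Sketch of a dataset override for lora_finetune_single_device
# (component path and fields are assumptions -- check your torchtune version)
dataset:
  _component_: torchtune.datasets.instruct_dataset
  source: json                  # loaded via Hugging Face load_dataset("json", ...)
  data_files: my_dataset.json   # hypothetical local file
  train_on_input: False
```

Equivalent `key=value` overrides can usually also be passed to `tune run` on the command line instead of editing the config file.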
-
Multi-image inference runs out of GPU memory (single card, 32 GB VRAM).
Increasing the number of inference GPUs (I tried both 2 and 4 cards) still runs out of memory. Why does this happen?
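One possible reason adding cards does not help: if the inference code replicates the full model on every GPU (plain data parallelism) rather than sharding the weights across them, each card still needs the entire model plus activations and per-image vision tokens. A rough back-of-envelope, with all numbers purely illustrative:

```python
def estimate_weight_vram_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Rough VRAM needed just for model weights in half precision.

    Ignores the KV cache, activations, and vision tokens, all of which
    grow with the number of images in the prompt."""
    return n_params * bytes_per_param / 1024**3

# Illustrative: a 13B-parameter model in fp16 needs ~24 GB for weights
# alone, leaving little of a 32 GB card for multi-image activations.
print(round(estimate_weight_vram_gb(13e9), 1))
```

If that is the situation here, the fix is weight sharding (e.g. tensor parallelism or `device_map`-style placement) rather than more replicated cards.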
-
### Bug description
When fine-tuning Llama3, the encoded data has:
* A duplicated BOS token at the start
* Tracked down to the template and the HF tokenizer each adding one.
* No EOS token at the end in training -> #1694
…
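The duplication symptom can be confirmed without any model download by inspecting the first two ids of an encoded sequence. The helper below is a generic sketch with made-up token ids, not torchtune's actual fix:

```python
def dedupe_leading_bos(ids: list[int], bos_id: int) -> list[int]:
    """Drop extra leading BOS tokens so exactly one remains."""
    out = list(ids)
    while len(out) >= 2 and out[0] == bos_id and out[1] == bos_id:
        out.pop(0)
    return out

# Hypothetical ids where bos_id=1 was added by both template and tokenizer:
print(dedupe_leading_bos([1, 1, 42, 7], bos_id=1))  # -> [1, 42, 7]
```

The cleaner long-term fix is to add special tokens in exactly one place, e.g. by disabling the tokenizer's automatic insertion when the template already supplies them.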
-
Hello! I used the following demo code but got a weird inference output: ['�������� |> |> |> |> |> |> |> |> |> |> |> |> |> |> |> |> |> |> |> |> |> |> |> |> |> |>+b+b+b+b+b+b+b+b+b+b+b+b+b+b+b+b+b+b/b| …
-
**Describe the bug**
Gemma-2-{size} is not loadable using from_pretrained. I checked OFFICIAL_MODEL_…
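Errors like this usually trace back to a name-to-config registry lookup. The pattern below is purely illustrative of how such a table can fail loudly with a readable message; the names and structure are assumptions, not the library's actual code:

```python
# Hypothetical registry resembling an OFFICIAL_MODEL table;
# entries are illustrative, not the library's actual contents.
OFFICIAL_MODELS = {
    "gemma-7b": {"hidden_size": 3072},
    "llama-3-8b": {"hidden_size": 4096},
}

def resolve_model(name: str) -> dict:
    """Look up a model config, listing known keys on failure."""
    try:
        return OFFICIAL_MODELS[name]
    except KeyError:
        known = ", ".join(sorted(OFFICIAL_MODELS))
        raise ValueError(f"unknown model {name!r}; known models: {known}") from None

print(resolve_model("gemma-7b"))
```

If the registry simply lacks a `gemma-2-{size}` entry, the fix is adding the new keys rather than changing the loading code.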
-
![79a0dc26d4c531eda76a39266bf684d](https://github.com/h-zhao1997/cobra/assets/20516638/7ba1cf0d-95e7-4f09-86c2-85a35803b9ec)
I was not able to download this LVIS-Instruct-4V file: llava_v1_5_lvis4v_lrv_mix1231k.json.
In h…
-
Hello,
We successfully fine-tuned the Mistral7b_v0.3 Instruct model on a single GPU, but we ran into issues when trying to use multiple GPUs.
The successful fine-tuning with one GPU (A…
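When a single-GPU run works but the multi-GPU one fails, one early sanity check is whether each rank sees a distinct, non-overlapping shard of the data. The sketch below illustrates interleaved rank-based sharding in plain Python (the rank and world-size values are made up and independent of any particular launcher):

```python
def shard_for_rank(samples: list, rank: int, world_size: int) -> list:
    """Give each rank an interleaved, non-overlapping slice of the dataset."""
    return samples[rank::world_size]

data = list(range(10))
shards = [shard_for_rank(data, r, 2) for r in range(2)]
print(shards)  # two disjoint shards that together cover all samples
```

Distributed samplers do essentially this under the hood; if every rank instead loads the full dataset and the full model, per-GPU memory and step counts will not match the single-GPU run.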