-
![79a0dc26d4c531eda76a39266bf684d](https://github.com/h-zhao1997/cobra/assets/20516638/7ba1cf0d-95e7-4f09-86c2-85a35803b9ec)
I was not able to download this file from LVIS-Instruct-4V: llava_v1_5_lvis4v_lrv_mix1231k.json.
I'm on h…
-
### Bug description
When finetuning Llama3, the encoded data has:
* A duplicate BOS token at the start
* Tracked down to the template and the HF tokenizer each adding one (see the sketch below).
* No EOS token at the end in training -> #1694
…
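The double-add is easy to reproduce in isolation. A minimal sketch, assuming the stock HF Llama 3 tokenizer (the model id below is an illustration, not taken from the report):

```python
# Sketch of the double-BOS behavior with an HF Llama 3 tokenizer.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# A chat template that already begins with the BOS special token.
prompt = "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\nHi<|eot_id|>"

# With add_special_tokens=True (the default), the tokenizer prepends its own
# BOS on top of the one in the template, so the sequence starts with two.
ids = tok(prompt).input_ids
assert ids[0] == ids[1] == tok.bos_token_id

# Encoding with add_special_tokens=False keeps only the template's BOS.
ids_single = tok(prompt, add_special_tokens=False).input_ids
assert ids_single[0] == tok.bos_token_id
assert ids_single[1] != tok.bos_token_id
```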
-
Simply adding a newline to the default template in `chat_templates.construct_chat_template` causes a RuntimeError. The template:
```
system
{SYSTEM}user
{INPUT}assistant
{OUTPUT}user
…
```
-
I am trying to finetune llama3.2 Vision Instruct, and I am using the distributed recipe and example (lora) config as a starting point. Eventually, I am looking to use a custom dataset, but first, I am…
-
For fine-tuning, we should only specify a model that already contains the "format" and "tokenizer" parameters.
-
I'd love to try to reproduce the model from pretraining through finetuning.
It's awesome that there are training and finetuning scripts.
However, the dataset has so many parts that I'm not sure where to…
-
**Issue identified:** cuDNN SDPA JIT recompiles whenever the context length changes. As a result, training runs that do not use packing keep recompiling, causing the observed 500 ms overhead.
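As a hedged sketch of the mechanism (not the fix adopted here), PyTorch's `torch.nn.attention.sdpa_kernel` context manager can pin SDPA to a non-cuDNN backend, so varying context lengths stop triggering cuDNN JIT recompilation:

```python
# Pin SDPA to the flash-attention backend so changing sequence lengths
# cannot trigger cuDNN JIT recompilation. Illustrative workaround only.
import torch
import torch.nn.functional as F
from torch.nn.attention import sdpa_kernel, SDPBackend

q = torch.randn(1, 8, 512, 64, device="cuda", dtype=torch.bfloat16)
k, v = torch.randn_like(q), torch.randn_like(q)

with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
    # Each call with a new sequence length reuses the same kernel,
    # avoiding the ~500 ms recompile overhead observed without packing.
    out = F.scaled_dot_product_attention(q, k, v)
```

Packing sidesteps the problem differently: every batch has the same fixed sequence length, so the cuDNN kernel is compiled once and reused.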
-
After finetuning llama-3-8B-instruct with the same configuration as the code from https://github.com/hiyouga/LLaMA-Factory/tree/3df986c6793a51ec2cb5f31fd1808cd3a9883bc4/examples/extras…
-
Is there any expectation of compatibility with the newly released Llama 3.2? As a developer, could I help with the project?