-
### Reminder
- [X] I have read the README and searched the existing issues.
### System Info
- `llamafactory` version: 0.9.0
- Platform: Linux-5.4.143-2-velinux1-amd64-x86_64-with-glibc2.35
- Pyth…
-
```bash
# Single GPU training
sh finetune/finetune_lora_single_gpu.sh
# Distributed training
sh finetune/finetune_lora_ds.sh
```
DATA="path_to_data"
Could you provide an example of DATA, e.g. COCO image captions?
Also, is a single training stage sufficient for finetuning? (…
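For reference, a COCO-caption style sample in the conversation JSON format that Qwen-VL's finetune scripts are typically shown with might look like the sketch below. This is an assumption based on the commonly documented schema; the field names, image-tag syntax, and file paths are placeholders, so check the repo's README for the authoritative format.

```python
import json

# Hypothetical single-sample dataset: one image-captioning turn.
# The "Picture 1: <img>...</img>" convention and the from/value keys
# are assumptions modeled on the usual Qwen-VL conversation format.
sample = [
    {
        "id": "coco_caption_0",
        "conversations": [
            {
                "from": "user",
                "value": "Picture 1: <img>coco/train2017/000000000009.jpg</img>\nDescribe the image.",
            },
            {
                "from": "assistant",
                "value": "A plate of food with broccoli and meat on a wooden table.",
            },
        ],
    }
]

# Write it out so DATA can point at this file.
with open("sample_data.json", "w", encoding="utf-8") as f:
    json.dump(sample, f, ensure_ascii=False, indent=2)
```

You would then set `DATA="sample_data.json"` (or the path you wrote the file to) before running the finetune script.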
-
### Reminder
- [X] I have read the README and searched the existing issues.
### System Info
LLaMA Factory, version 0.9.1.dev0
Ubuntu 20.04 LTS
### Reproduction
Fine tune Qwen2-VL-72B-Instruct…
-
I am attempting to run the `finetune_onevision.sh` script. I've gotten many things sorted out but I am stumped by the `--pretrain_mm_mlp_adapter` argument.
The default value as provided in the scr…
-
### Community Guidelines
- [X] I have read and agree to the [HashiCorp Community Guidelines ](https://www.hashicorp.com/community-guidelines).
- [X] Vote on this issue by adding a 👍 [reaction](https:…
-
### 🚀 The feature, motivation and pitch
I have finetuned the linear layers of Pixtral on my own dataset and would like to host the LoRA adapters, as is possible for Mistral. It would be great if this wou…
-
I recently played with the newly released model and code named PhotoMaker by TencentARC.
The repo is here:
https://github.com/TencentARC/PhotoMaker
The result is very impressive. It does an ext…
-
![image](https://github.com/QwenLM/Qwen-VL/assets/37207093/0a093984-b1b3-49fe-85f1-3d65ce95fdfd)
Following the screenshot above, I ran the LoRA merge_save step to merge the model,
and got the following error:
size mismatch for base_model.model.lm_head.modules_to_save.default…
-
### Reminder
- [X] I have read the README and searched the existing issues.
### System Info
[2024-10-19 11:31:58,444] [INFO] [real_accelerator.py:219:get_accelerator] Setting ds_accelerator to cuda…
-
What should happen when you click a number in the timeline?
Instead of the expected view, I get a blank white page.
Steps to reproduce the behavior:
1. Click on a number in the timeline
2. See the blank white error page
…