-
A sample for fine-tuning SD models will be supported in 2.16. The corresponding samples need to be added to optimum-neuron.
-
Check out these amazing resources:
[Hugging Face Diffusers Documentation](https://huggingface.co/docs/diffusers/v0.13.0/en/training/text2image)
[Hugging Face Diffusers GitHub Repository (text_to_ima…
-
I use the [tgdoc-13b-finetune-224](https://pan.baidu.com/s/1U85erwbxBD55cdMAu_9gCg) `mm_projector.bin` from llava-13b-pretrain as the `--pretrain_mm_mlp_adapter` in scripts/finetune_deep.sh, but errors happened…
-
Thanks for the solid work!
According to the paper, Stable Diffusion is fine-tuned to inpaint the masked area with the background.
In `src/datasets/finetune_dataset.py`, I found that the function "foba_datas…
-
The training does not start: my memory is completely occupied, but GPU utilization is at 0%.
Screenshot attached below. Please help.
![image](https://github.com/X-PLUG/mPLUG-DocOwl/assets/74967139/a587f57a-8694-…
-
@ellie-sleightholm
Based on this notebook:
https://github.com/marqo-ai/fine-tuning-embedding-models-course/blob/main/10_fine_tuning_CLIP_models.ipynb
I went through the code; it's a nice write-…
-
### Feature request
Training code implementation for fine-tuning Whisper using prompts.
Hi all,
I'm trying to fine-tune Whisper by resuming its pre-training task and adding initial prompts as pa…
-
### 🐛 Describe the bug
Hi, I'm trying to create a Docker container with the following (**minimal reproducible**) CUDA `12.4.1` Dockerfile (host info: Driver Version: `550.107.02` CUDA Version: …
-
ValueError: You cannot perform fine-tuning on purely quantized models. Please attach trainable adapters on top of the quantized model to correctly perform fine-tuning. Please see: https://huggingface.…
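The error says the quantized base weights themselves cannot be trained; only a small set of trainable adapter parameters placed on top of them can. In practice this is done with a LoRA-style adapter (e.g. via the `peft` library). A minimal dependency-free sketch of the underlying idea, with all names and dimensions hypothetical: the base weight `W` stays frozen (standing in for the quantized weights), and only a low-rank adapter `(A, B)` would receive gradient updates.

```python
import random

random.seed(0)
d_in, d_out, r = 6, 4, 2  # hypothetical layer sizes and adapter rank

# Frozen base weight W: a stand-in for the quantized weights, which are never updated.
W = [[random.gauss(0, 1) for _ in range(d_in)] for _ in range(d_out)]

# Trainable low-rank adapter. A is zero-initialized (as in LoRA) so that the
# adapter starts as an exact no-op and training begins from the base model.
A = [[0.0] * r for _ in range(d_out)]
B = [[random.gauss(0, 1) for _ in range(d_in)] for _ in range(r)]

def matvec(M, x):
    # Plain matrix-vector product over nested lists.
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def forward(x):
    # Effective output = W @ x + A @ (B @ x).
    # Only A and B would be updated during fine-tuning; W stays frozen.
    base = matvec(W, x)
    update = matvec(A, matvec(B, x))
    return [b + u for b, u in zip(base, update)]

x = [random.gauss(0, 1) for _ in range(d_in)]
assert forward(x) == matvec(W, x)  # zero-initialized adapter leaves outputs unchanged
```

With a real Hugging Face model, the same structure is what `peft`'s `prepare_model_for_kbit_training` plus `get_peft_model(model, LoraConfig(...))` set up before calling `Trainer`, which is the fix the error message's link describes.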