-
Can you provide an example of how to do finetuning with triplet loss?
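A minimal sketch of what such finetuning could look like in plain PyTorch, assuming a generic encoder and data that arrives as (anchor, positive, negative) batches; the encoder and shapes below are placeholders rather than this repository's actual classes:

```python
import torch
import torch.nn as nn

# Placeholder encoder -- in practice this would be the pretrained model being finetuned.
encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
optimizer = torch.optim.AdamW(encoder.parameters(), lr=1e-4)
triplet_loss = nn.TripletMarginLoss(margin=1.0, p=2)

def training_step(anchor, positive, negative):
    # All three inputs go through the same shared-weight encoder.
    emb_a = encoder(anchor)
    emb_p = encoder(positive)
    emb_n = encoder(negative)
    # Pull the anchor toward the positive and push it away from the negative.
    loss = triplet_loss(emb_a, emb_p, emb_n)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```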
-
### Feature request
Training code implementation for finetuning Whisper using prompts.
Hi All,
I’m trying to finetune Whisper by resuming its pre-training task and adding initial prompts as pa…
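In case it helps while official training code is pending, here is a rough sketch of one way to build prompted training labels with Hugging Face transformers. It assumes the `WhisperProcessor`/`WhisperTokenizer` classes and their `get_prompt_ids` helper, and uses the usual -100 masking so the prompt conditions the decoder without contributing to the loss; it is an assumption-laden sketch rather than an official recipe, and a full training loop would still need to handle the one-token shift between decoder inputs and labels:

```python
from transformers import WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-small")
tokenizer = processor.tokenizer

def build_prompted_example(prompt_text, target_text):
    # get_prompt_ids prepends the <|startofprev|> token to the prompt tokens.
    prompt_ids = tokenizer.get_prompt_ids(prompt_text).tolist()
    # Regular target encoding (special tokens such as <|startoftranscript|> included).
    target_ids = tokenizer(target_text).input_ids
    decoder_input_ids = prompt_ids + target_ids
    # Mask the prompt positions so they condition the decoder but add no loss.
    labels = [-100] * len(prompt_ids) + target_ids
    return decoder_input_ids, labels
```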
-
Hi there.
I’m testing some training/finetuning with F5 with great success, and I wonder whether StyleTTS 2 has a Gradio UI to make training easier.
Thanks!
-
I am getting a floating point exception (`[1] 1774566 floating point exception  python main_finetune.py --batch_size 16 --model vit_large_patch16 --epochs 50`) when trying to run your finetuning script. I also slightly changed…
-
When we do not finetune WavLM in the Pyannote implementation of ToTaToNet, the model does not update its weights because `automatic_optimization = False` is set in the model's constructor, whi…
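For context, here is a minimal sketch of what manual optimization looks like in PyTorch Lightning when `automatic_optimization` is disabled; the module below is a generic placeholder rather than pyannote's actual ToTaToNet code, but it illustrates that every parameter you want updated must be covered by the optimizer returned from `configure_optimizers` and stepped explicitly in `training_step`:

```python
import torch
import pytorch_lightning as pl

class ManualOptModule(pl.LightningModule):
    def __init__(self, backbone, head):
        super().__init__()
        self.backbone = backbone  # e.g. a (possibly frozen) WavLM-like encoder
        self.head = head
        # With automatic optimization off, Lightning no longer calls
        # backward() / optimizer.step() for us.
        self.automatic_optimization = False

    def configure_optimizers(self):
        # Only parameters passed here can ever be updated; if the backbone's
        # parameters are excluded (or frozen), its weights will not change.
        return torch.optim.Adam(self.parameters(), lr=1e-3)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.head(self.backbone(x)), y)
        opt = self.optimizers()
        opt.zero_grad()
        self.manual_backward(loss)  # required instead of loss.backward()
        opt.step()
        return loss
```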
-
### Model/Pipeline/Scheduler description
The authors of the paper trained a base ControlNet (with a new architecture, if I'm not mistaken) on 9 different conditions to allow finetuning on new conditions…
-
Hello,
I am trying to fine-tune a pre-trained model on the AFHQ dataset for the dog_bear task using Colab.
I have successfully saved the pre-trained model and set up the dataset.
# data
# └── afhq…
-
I tried fine-tuning the llama-2-7b model using LoRA on an RTX 3090 with 24 GB, where memory usage was only about 17 GB. However, when I used the same configuration on an A100 with 80 GB, the memory us…
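For reference, a LoRA setup along these lines might look like the sketch below using the Hugging Face peft stack; the rank, alpha, and target modules are illustrative assumptions, not necessarily the configuration used in this report:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

# Illustrative LoRA hyperparameters -- not necessarily the reporter's settings.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```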
-
# URL
- https://arxiv.org/abs/2402.17193
# Affiliations
- Biao Zhang, N/A
- Zhongtao Liu, N/A
- Colin Cherry, N/A
- Orhan Firat, N/A
# Abstract
- While large language models (LLMs) often ado…
-
Hello, I really appreciate your work, as it achieves such surprising performance in the zero-shot setting. But I've run into a problem while fine-tuning using your command 'python -m cli.finetune run_name=examp…