-
I'm facing the error **TypeError: _batch_encode_plus() got an unexpected keyword argument 'tokenize_newline_separately'** while finetuning PaliGemma using the provided notebook file.
Specificall…
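A likely cause (assuming a recent transformers release): the PaliGemma processor dropped the `tokenize_newline_separately` argument, so notebooks written against an older version raise this TypeError. The usual fix is to delete that argument from the processor call or pin the older transformers version. As a generic, hedged sketch, unsupported keyword arguments can also be stripped before the call (`encode` below is a stand-in, not the real processor):

```python
import inspect

def call_dropping_unsupported(fn, *args, **kwargs):
    """Call fn, silently dropping keyword arguments it does not accept.

    Useful when a library removed a kwarg (e.g. 'tokenize_newline_separately')
    between versions and old example code still passes it.
    """
    params = inspect.signature(fn).parameters
    accepts_var_kw = any(
        p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()
    )
    if not accepts_var_kw:
        # Keep only the kwargs the current signature actually declares.
        kwargs = {k: v for k, v in kwargs.items() if k in params}
    return fn(*args, **kwargs)

def encode(text, images=None):  # hypothetical stand-in for the processor call
    return (text, images)

print(call_dropping_unsupported(encode, "hi", images=None,
                                tokenize_newline_separately=True))
# → ('hi', None)
```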
-
Hello!
I am a beginner and I wanted to try out your finetuning script, so I decided to run it on Google Colab. Just to test it out, I prepared train, test, and eval datasets which contained only…
-
Hello,
I am trying to finetune the yolox-l weights. My dataset contains only 8 classes, while the pretrained yolox-l weights were trained on 80 classes.
How can I change the classes on loaded model we…
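One common approach (a sketch, not the repo's official recipe): set `num_classes` to 8 in the experiment config, then drop the checkpoint tensors whose shapes depend on the class count before loading with `strict=False`. In the standard YOLOX head the class-prediction convolutions carry `cls_preds` in their parameter names; the filter below works on a plain dict, with dummy entries standing in for tensors:

```python
def drop_mismatched_head(state_dict, skip_substrings=("cls_preds",)):
    """Remove class-prediction weights whose shape depends on num_classes,
    so the remaining checkpoint can be loaded with strict=False."""
    return {
        k: v for k, v in state_dict.items()
        if not any(s in k for s in skip_substrings)
    }

# Dummy checkpoint standing in for ckpt["model"]:
ckpt = {
    "head.cls_preds.0.weight": "80-class tensor",
    "backbone.stem.conv.weight": "shared tensor",
}
print(sorted(drop_mismatched_head(ckpt)))
# → ['backbone.stem.conv.weight']

# With a real checkpoint, the idea would be:
#   model.load_state_dict(drop_mismatched_head(ckpt["model"]), strict=False)
```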
-
Do we have a general sense of this? Has LoRA/QLoRA finetuning been attempted on this, and if so, is there any guidance?
-
@PointsCoder thanks for sharing the codebase! I had a few queries:
1. Can we finetune on custom data? If so, should we prepare it in the same format as train.json?
2. Can we use other LLMs like …
-
Hello, I get the error in the title when finetuning Phi3.5.
I believe I'm on the latest unsloth (installed from git with pip).
Context: finetuning Phi3.5 with code that already works with other u…
-
### Model/Pipeline/Scheduler description
The authors of the paper trained a base ControlNet (with a new architecture, if I'm not mistaken) on 9 different conditions to allow finetuning on new conditions…
-
The [original dataset](https://www.kaggle.com/datasets/thedevastator/synthetic-therapy-conversations-dataset) is in CSV and has to be cleaned to convert it to OpenAI's finetuning [format](https://platform…
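A minimal conversion sketch, assuming the chat-finetuning JSONL layout (one `{"messages": [...]}` object per line) and assuming the CSV has `prompt`/`response` columns (the column names are guesses about this dataset; adjust to the real headers):

```python
import csv
import io
import json

def csv_to_openai_jsonl(csv_text, user_col="prompt", assistant_col="response"):
    """Convert CSV rows to OpenAI's chat-finetuning JSONL format:
    one {"messages": [...]} JSON object per line.

    user_col / assistant_col are assumptions about the dataset's headers.
    """
    lines = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        record = {"messages": [
            {"role": "user", "content": row[user_col].strip()},
            {"role": "assistant", "content": row[assistant_col].strip()},
        ]}
        lines.append(json.dumps(record, ensure_ascii=False))
    return "\n".join(lines)

sample = "prompt,response\nI feel anxious.,Let's talk about what is worrying you.\n"
print(csv_to_openai_jsonl(sample))
```

Real cleaning would also need to handle empty rows and any per-conversation grouping the dataset uses.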
-
Hello,
I'm facing some difficulty in specifying gin files on the command line.
Can you provide an example of the command-line format for training and finetuning the mt3 model?
Thanks
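For reference, t5x-based projects like mt3 usually take gin configs via repeated `--gin_file` flags plus `--gin.PARAM=value` overrides (string values need escaped quotes). A sketch of the invocation shape; the specific gin file names and binding names here are assumptions, so substitute the configs shipped with the mt3 repo:

```shell
# Config-invocation sketch (file and binding names are assumptions):
python -m t5x.train \
  --gin_search_paths=mt3/gin \
  --gin_file=model.gin \
  --gin_file=train.gin \
  --gin.MODEL_DIR=\"/tmp/mt3_finetune\" \
  --gin.TRAIN_STEPS=10000
```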
-
Hi! I am wondering whether it is possible to do pretraining/finetuning for unimol with extra features (like atom attributes), and if so, how to do it?
I would appreciate it a lot if you could help answer…