-
Hi,
I recently fine-tuned the phi-3.5-moe-instruct and phi-3.5-mini-instruct models using PEFT LoRA. The MoE model seems to perform much worse than 3.5-mini. Are there any specific things …
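For context on what LoRA actually changes: it freezes the base weight W and learns a low-rank update B·A scaled by alpha/r, added to the frozen forward pass. A minimal pure-Python sketch of that adapted forward pass (toy shapes and names are illustrative, not taken from the issue above; the PEFT library applies this per targeted layer):

```python
# Pure-Python sketch of the LoRA-adapted forward pass: y = W x + (alpha/r) * B (A x).
# All names and shapes here are illustrative toys, not PEFT internals.

def matvec(M, x):
    """Multiply matrix M (list of rows) by vector x."""
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

def lora_forward(W, A, B, x, alpha=16, r=2):
    """Frozen base output plus the scaled low-rank update."""
    base = matvec(W, x)                 # frozen pretrained path
    delta = matvec(B, matvec(A, x))     # trainable rank-r path
    scale = alpha / r
    return [b + scale * d for b, d in zip(base, delta)]

# Frozen 4x4 base weight, rank-2 adapters: A is 2x4, B is 4x2.
W = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
A = [[0.1, 0.2, 0.0, 0.0], [0.0, 0.0, 0.3, 0.4]]
B = [[0.0, 0.0] for _ in range(4)]      # B starts at zero, as in LoRA
x = [1.0, 2.0, 3.0, 4.0]

# With B at its zero init, the adapted model reproduces the base model exactly.
print(lora_forward(W, A, B, x))  # → [1.0, 2.0, 3.0, 4.0]
```

The zero-initialized B is the reason a freshly attached LoRA adapter starts out behaving identically to the base model; any quality gap between two adapted models therefore comes from training, not from attaching the adapters.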
-
Hi, I have successfully run your repo.
1) Currently, I am trying to fine-tune the Zero123 part on my own dataset. Can you give some hints on how to do this? And how should I prepare the input data from 3D `ob…
-
Hi,
I fine-tuned the model by setting args.is_finetuning to 1. Then I tried to run testing only, loading the saved fine-tuned model from checkpoints/[modelstring]/checkpoint.pth. Howe…
-
I'm experiencing unusual results with the custom fine-tuned model `textattack/roberta-base-SST-2`. The clean accuracy is significantly lower than expected, at 50.69%. I've included the attack results …
-
Hey, I have been fine-tuning from a pre-trained checkpoint, but I'm having trouble getting it to converge. I have 20k mask-and-image pairs for background removal. I have run 30 epochs and the results are wo…
-
Hello, I have a problem. After fine-tuning the generator and saving the model, I obtained a .pth file. When importing the model using generator = AudioSeal.load_generator('./checkpoints/generator_mode…
-
Can you write more about training, e.g. what the dataset should look like? I see that you are from Poland; do you plan to add more Polish voices? The current model struggles with accents and s…
-
I'm not quite sure how fine-tuning works yet, and I think I remember observing some differences in how it's done in version 1 files.
-
Thanks for your good work.
I am trying to fine-tune the videollama2 model with my own data. However, after fine-tuning, the model starts to repeatedly output the same content. Could you help me solv…
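Repetitive output after fine-tuning is often mitigated at decode time with a repetition penalty. A minimal pure-Python sketch of the standard rule (penalize logits of tokens already generated: divide positive logits by the penalty, multiply negative ones), with illustrative names; real decoders apply this inside the sampling loop:

```python
# Sketch of the standard repetition-penalty rule applied to a logit vector.
# Names are illustrative; this mirrors the common "divide if positive,
# multiply if negative" convention for already-generated token ids.

def apply_repetition_penalty(logits, generated_ids, penalty=1.2):
    """Return a copy of `logits` with previously generated tokens down-weighted."""
    out = list(logits)
    for token_id in set(generated_ids):
        if out[token_id] > 0:
            out[token_id] /= penalty   # shrink positive scores
        else:
            out[token_id] *= penalty   # push negative scores further down
    return out

# Token 0 and token 2 were already generated; penalty 2.0 halves/doubles them.
print(apply_repetition_penalty([2.0, 1.0, -1.0], [0, 2], penalty=2.0))
# → [1.0, 1.0, -2.0]
```

A penalty of 1.0 is a no-op; values slightly above 1.0 discourage exact loops without distorting the distribution too much. If the repetition appears even with such penalties, the cause is more likely in the fine-tuning data or the prompt template than in decoding.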
-
!pip install transformers datasets
from transformers import GPT2Tokenizer, GPT2LMHeadModel, Trainer, TrainingArguments
from datasets import load_dataset, load_metric
tokenizer = GPT2Tokenizer.from_…