-
I'm trying to get fine-tuning working through the 3_sft.sh script but am encountering an error:
```
Traceback (most recent call last):
File "/root/VILA/llava/train/train_mem.py", line 36, in
…
lyluh updated
3 weeks ago
-
Hi, I have a corpus of about 500,000 protein sequences and would like to apply them to existing models like ESM2 or this one to predict the fitness effect of substituting one amino acid for another.
H…
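For context, a common (model-agnostic) way to score substitutions with a protein language model like ESM2 is masked-marginal scoring: mask the mutated position, then compare the model's log-probability of the mutant residue against the wild type. Below is a toy sketch of that scoring logic; the `masked_log_probs` placeholder is an assumption standing in for a real ESM2 forward pass.

```python
import math

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def masked_log_probs(sequence, position):
    # Placeholder for a real model call: mask `position`, run ESM2, and
    # return per-residue log-probabilities. Here we just slightly favor
    # the wild-type residue so the sketch runs end to end.
    logits = {aa: 0.0 for aa in AMINO_ACIDS}
    logits[sequence[position]] = 1.0
    z = math.log(sum(math.exp(v) for v in logits.values()))
    return {aa: v - z for aa, v in logits.items()}

def mutation_effect(sequence, position, mutant_aa):
    """Masked-marginal score: log p(mutant) - log p(wild type) at `position`.

    Negative values suggest the model considers the substitution
    less favorable than the wild-type residue.
    """
    lp = masked_log_probs(sequence, position)
    return lp[mutant_aa] - lp[sequence[position]]

print(mutation_effect("MKTAYIAKQR", position=3, mutant_aa="G"))  # negative: A->G disfavored by the toy scorer
```

With a real model, `masked_log_probs` would tokenize the sequence with the mask token at `position` and softmax the logits at that position; the rest of the scoring logic stays the same.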
-
### 🚀 The feature, motivation and pitch
Is it possible to adapt the fine-tuning script for DPO fine-tuning? The current version seems to only support next-token-prediction fine-tuning.
### Alternati…
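For context, DPO optimizes a pairwise preference loss rather than next-token likelihood, which is why a standard SFT script does not cover it directly. A minimal sketch of the per-pair loss, assuming summed log-probabilities of each full response under the policy and a frozen reference model have already been computed:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    Each argument is the summed log-probability of a full response under
    the policy or the frozen reference model. The loss decreases as the
    policy favors the chosen response (relative to the reference) more
    strongly than the rejected one.
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    margin = beta * (chosen_ratio - rejected_ratio)
    # -log(sigmoid(margin)), computed stably as softplus(-margin)
    return math.log1p(math.exp(-margin)) if margin > -30 else -margin

# When policy and reference agree exactly, the margin is 0 and the loss is log(2):
print(round(dpo_loss(-10.0, -12.0, -10.0, -12.0), 4))  # 0.6931
```

In a full training loop these log-probabilities come from two forward passes per pair (policy and reference), and the loss is averaged over a batch.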
-
Hi,
Is it possible to fine-tune or re-train the ESM-3 model with protein sequences and structures (or PDB files), and then apply the fine-tuned model to generate sequence or structure embeddings?
-
### Feature request
Thanks for open-sourcing such an amazing model and codebase. I would like to ask whether it is possible to open-source the pre-training or fully fine-tune configs for CogV…
-
Good work! I pre-trained on the multi-speaker dataset, then fine-tuned on a small single-speaker dataset. However, the synthesized speech sounds a bit robotic. Are there any tricks for fine-tuning it?
-
Look into:
* Access to fine-tuning: API access for closed-source models
* Code availability for OS models
* Rate limits
* API costs for closed-source models
* Cloud compute costs for OS …
-
How can I fine-tune the emotion2vec+large model on another dataset without using the process you used for IEMOCAP?
I have tried to use four features and your bash script train.sh but I …
-
This probably also depends on the ring's diameter. It might feel better to always have 6 platforms/spikes on a ring, or it might be better to dynamically increase the number of platforms/spikes on a ring ba…
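One simple way to scale the count with diameter is to keep the arc spacing between platforms roughly constant while enforcing a minimum of 6; a sketch, where `target_spacing` is an assumed tuning value:

```python
import math

def platforms_for_ring(radius, min_count=6, target_spacing=2.5):
    """Number of platforms/spikes for a ring of the given radius.

    Keeps the arc distance between adjacent platforms close to
    `target_spacing` (an assumed tuning parameter) and never drops
    below `min_count`, so small rings stay at the fixed count while
    large rings grow with circumference.
    """
    circumference = 2 * math.pi * radius
    return max(min_count, round(circumference / target_spacing))

# Small rings clamp to the minimum; larger rings grow with circumference.
for r in (1, 3, 6, 12):
    print(r, platforms_for_ring(r))
```

This gives a smooth middle ground between the two options: fixed-count behavior below a threshold radius, dynamic density above it.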