-
Hello, I saw that the code used for dense retrieval in the fine-tuning documentation is the following:
```
CUDA_VISIBLE_DEVICES=0 torchrun --nproc_per_node 1 -m FlagEmbedding.baai_general_embedd…
```
-
Great work! One quick question: in the paper you've reproduced the results from Llava. Additionally, for the Prismatic model experiments, you are fine-tuning the whole LM. I'm wondering: did you try u…
-
[AutoGluon](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-quick-start.html) is an AutoML framework that also offers support for time series forecasting. Most, if not all, traditional mod…
-
Related to #34. This would be the holy grail for this maintainer, but it implies that we would need a _trained_ local model, since local models are terrible at the tasks we need unless fine-t…
-
The pretrained models have names like `generator_v1`.
However, train.py looks for checkpoints with the following code:
```
if os.path.isdir(a.checkpoint_path):
    cp_g = scan_checkpoi…
```
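For context, here is a minimal sketch of how such a glob-based checkpoint scan typically works. The helper name `scan_checkpoint` and the `g_` prefix are assumptions based on the truncated call above, not confirmed from this repo's `train.py`:

```python
import glob
import os

def scan_checkpoint(cp_dir, prefix):
    """Return the newest checkpoint in cp_dir whose name starts with prefix.

    Assumption: checkpoints are named with zero-padded step numbers
    (e.g. g_00000100), so a plain lexicographic sort yields the latest.
    """
    pattern = os.path.join(cp_dir, prefix + "*")
    cp_list = glob.glob(pattern)
    if not cp_list:
        # A file named generator_v1 would never match a prefix like "g_",
        # which would explain why the pretrained models are not picked up.
        return None
    return sorted(cp_list)[-1]
```

If the scan really works like this, renaming the pretrained files to match the expected prefix (e.g. `generator_v1` → `g_00000000`) might be enough for `train.py` to find them, though that is a guess on my part.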
-
### Is your feature request related to a problem? Please describe.
The current IPL Prediction model in Project-Guidance/Machine Learning and Data Science/Intermediate/IPL Prediction/Regularisation - …
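Since the request concerns regularisation, here is a minimal sketch of L2 (ridge) regression via its closed form. The data and the `lam` value below are purely illustrative and not taken from the project:

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^{-1} X^T y.

    lam > 0 shrinks the weights toward zero, the usual way to curb
    overfitting in a small tabular model such as a score predictor.
    """
    n_features = X.shape[1]
    A = X.T @ X + lam * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

# Illustrative data: y = 2 * x0 exactly, so lam=0 recovers w ≈ [2, 0].
X = np.array([[1.0, 0.0], [2.0, 1.0], [3.0, 0.5], [4.0, 2.0]])
y = np.array([2.0, 4.0, 6.0, 8.0])
w_plain = ridge_fit(X, y, lam=0.0)
w_ridge = ridge_fit(X, y, lam=10.0)
```

With `lam=0` this reduces to ordinary least squares; increasing `lam` strictly shrinks the weight norm, which is the effect a regularised version of the model would rely on.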
-
I tried to use pcm for sd3, but found that the value of d_loss was basically always 2, and inference errors occurred after the saved lora was loaded. There was no problem when using the model veri…
-
Firstly, I would like to say that this repo is great: so many models, all in PyTorch, and getting them to work on my machine was very easy.
Have you tried fine-tuning the models on the temporally-s…
-
Will the code for fine-tuning the models be released?
Thank you for your excellent work.
-
As in
https://github.com/daanzu/kaldi-active-grammar/issues/33
https://github.com/gooofy/zamia-speech/issues/106