-
Is it possible to do fine-tuning by quantizing the models and using QLoRA?
-
Hi,
Is there any way to fine-tune the available .pt models on a custom dataset?
-
Experiments to quantify the improvement in accuracy when AMI-GBIF models are fine-tuned with AMI-Traps data
-
- Paper name: Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor
- ArXiv Link: https://arxiv.org/abs/2212.09689
To close this issue, open a PR with a paper report using t…
-
"Porównać różne modele z https://keras.io/api/applications/ . Może zabraknąć czasu na zrobienie wielu epok dla każdego modelu - w takiej sytuacji warto np. sprawdzić tylko jeden z całej klasy modeli, …
-
## Paper title (original)
ExVideo: Extending Video Diffusion Models via Parameter-Efficient Post-Tuning
## In one sentence
Proposes ExVideo, a method that extends a video diffusion model's ability to generate longer videos through parameter-efficient post-tuning
### Paper link
[Paper](https://arxiv.org…
-
When using float32 as the compute dtype, I get the following error after the training steps complete:
ValueError: You cannot perform fine-tuning on purely quantized models. Please attach trainable a…
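This error means the model consists only of frozen quantized weights, so the optimizer has nothing to update; in the Hugging Face stack the usual fix is to call PEFT's `prepare_model_for_kbit_training` and attach a LoRA adapter via `get_peft_model` before training. The snippet below is a library-free toy sketch of the underlying idea (all names here are illustrative, not a real API): trainable float adapter parameters sit alongside the frozen quantized base.

```python
# Minimal sketch (plain Python, no ML libraries): gradients cannot flow into
# integer-quantized weights, so fine-tuning needs separate trainable float
# parameters (e.g. a LoRA adapter) attached on top of the frozen base.

def trainable_parameters(model):
    """Collect only parameters marked as trainable (the float adapter weights)."""
    return {name: p for name, p in model.items() if p["trainable"]}

# Frozen 4-bit base weights plus a small trainable LoRA adapter.
model = {
    "base.weight_int4":  {"value": [3, -2, 1],  "trainable": False},
    "base.weight_scale": {"value": 0.05,        "trainable": False},
    "lora.A":            {"value": [0.0, 0.0],  "trainable": True},
    "lora.B":            {"value": [0.0, 0.0],  "trainable": True},
}

params = trainable_parameters(model)
# Without the adapter this dict would be empty, which is exactly the
# "cannot perform fine-tuning on purely quantized models" situation.
print(sorted(params))  # ['lora.A', 'lora.B']
```

Dropping the two `lora.*` entries leaves `trainable_parameters` empty, which is the condition the ValueError is guarding against.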
-
In axolotl, there's a config parameter you can set:
`train_on_inputs: false`
It changes the way the loss is calculated when training a LoRA -> i.e. it ignores the loss on input tokens and only tra…
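The masking described above is typically done by setting the label at each prompt position to -100, the ignore index that PyTorch's cross-entropy loss skips. A stdlib-only sketch of that labeling step (token IDs and the helper name are made up for illustration):

```python
# Sketch of what `train_on_inputs: false` amounts to: prompt (input) token
# positions get label -100 so they contribute nothing to the loss, and only
# the completion tokens are trained on.

IGNORE_INDEX = -100  # ignore index used by PyTorch cross-entropy

def mask_input_labels(token_ids, prompt_len):
    """Copy token_ids into labels, masking out the first prompt_len positions."""
    return [IGNORE_INDEX] * prompt_len + token_ids[prompt_len:]

# Example: a 4-token prompt followed by a 3-token completion.
tokens = [101, 7592, 2088, 102, 2023, 2003, 102]
labels = mask_input_labels(tokens, prompt_len=4)
print(labels)  # [-100, -100, -100, -100, 2023, 2003, 102]
```

With `train_on_inputs: true` the labels would instead be a straight copy of `tokens`, so the model is also trained to reproduce the prompt.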
-
Currently, every GitHub project, and especially the ones under the CNCF, uses independent processes for issue triage, bot replies, and so on. At a broad level, the following patterns arise where proj…
-
Hello,
I've noticed that the JSONL format sent by the ai-training module is for legacy fine-tuning models such as `babbage-002` and `davinci-002`. Will there be OOB support for the [current fine-t…