-
The challenge-generation functionality should make it clear which AI model / API is being used. Ideally, with a nicely formatted model card that could also be used as a onebox (similar to Data Package…
-
### Please check that this issue hasn't been reported before.
- [X] I searched previous [Bug Reports](https://github.com/OpenAccess-AI-Collective/axolotl/labels/bug) and didn't find any similar reports.
…
-
Add fine-tuning layers and basic learning algorithms. SGD with clipping should be enough, since we _are not_ currently planning to support training from scratch.
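A minimal sketch of what that loop could look like in PyTorch; `model`, `loader`, and `loss_fn` are placeholders, not existing identifiers in this codebase:

```python
import torch

def finetune(model, loader, loss_fn, lr=1e-3, max_norm=1.0, epochs=1):
    """Plain SGD fine-tuning with global-norm gradient clipping."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for inputs, targets in loader:
            opt.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()
            # Clip the global gradient norm before the update step.
            torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
            opt.step()
```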
-
Hi!
Recently we have been training on a large-scale dataset (>50M audio segments) with the K2 platform.
To reduce I/O load over NFS, we decided to use the Lhotse Shar format.
I followed the …
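For reference, this is the Shar workflow as I understand it; exact argument names may differ across Lhotse versions, so treat it as a sketch rather than the canonical recipe:

```python
from lhotse import CutSet

# Existing manifest (placeholder path).
cuts = CutSet.from_file("cuts.jsonl.gz")

# Pack cuts and audio into self-contained shards so training does
# large sequential reads instead of one NFS round-trip per file.
cuts.to_shar("shar_dir", fields={"recording": "flac"}, shard_size=1000)

# Lazily iterate the shards during training, shuffling at shard level.
shar_cuts = CutSet.from_shar(in_dir="shar_dir", shuffle_shards=True)
```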
-
When I click "fine tune model", where does this data come from? Is it collecting everything I type? Locally, of course, and hopefully not uploading it anywhere; that I would trust. I am personally fine with this and thin…
-
### feature
Hi, I wonder whether the current code makes it possible to fine-tune both the vision encoder and the projector. Thanks.
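In plain PyTorch terms, the request amounts to something like the sketch below; the attribute names `vision_tower` and `mm_projector` are assumptions about this codebase, not confirmed identifiers:

```python
def unfreeze_vision_and_projector(model):
    # Freeze everything first, then re-enable the two submodules.
    for p in model.parameters():
        p.requires_grad = False
    for p in model.vision_tower.parameters():   # vision encoder (assumed name)
        p.requires_grad = True
    for p in model.mm_projector.parameters():   # projector (assumed name)
        p.requires_grad = True
```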
-
Hello everyone,
When generating a bitstream for my CNN model with Vitis AI 3.5, I run into the problem that no model is generated, but I do not get any error message. Could any…
-
**Description:**
When running the start.sh and install.sh scripts, I encountered an AttributeError in xtts_demo.py. The error occurs due to an incorrect import of GradScaler from torch.amp.
**Step…
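One plausible fix is a version-guarded import, since torch >= 2.3 exposes `GradScaler` under `torch.amp` while older releases only provide `torch.cuda.amp.GradScaler`; whether this matches exactly what xtts_demo.py needs is an assumption, as the traceback is truncated above:

```python
try:
    from torch.amp import GradScaler  # torch >= 2.3
    scaler = GradScaler("cuda")
except ImportError:
    from torch.cuda.amp import GradScaler  # older torch releases
    scaler = GradScaler()
```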
-
I want to continue pre-training the `bge-en-icl` model before fine-tuning it. Could you refer me to an example of how to do that? I think the examples are no longer in your repo.
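Since the repo examples seem to be gone, here is a minimal sketch of continued pre-training with a plain causal-LM objective, assuming `bge-en-icl` loads via `AutoModelForCausalLM` (it is decoder-based); this is my assumption, not an official recipe:

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

name = "BAAI/bge-en-icl"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

# Placeholder corpus; tokenize into causal-LM inputs.
ds = load_dataset("text", data_files="corpus.txt")["train"]
ds = ds.map(lambda b: tok(b["text"], truncation=True, max_length=1024),
            batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="cpt-out", per_device_train_batch_size=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```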
-
To train MM Grounding DINO, we need to load two pre-trained models: BERT and Swin.
To fine-tune MM Grounding DINO on my dataset, I need to load a pre-trained MM_Grounding_DINO and the con…
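In MMDetection-style configs, a single `load_from` checkpoint usually restores the whole detector, including the Swin backbone and BERT weights, so they need not be initialized separately; the file names below are placeholders, not real artifacts from the repo:

```python
_base_ = ['./mm_grounding_dino_base_config.py']  # hypothetical base config

# Restores the full detector state (Swin backbone and BERT language model
# included), so no separate init_cfg entries are needed for fine-tuning.
load_from = 'checkpoints/mm_grounding_dino.pth'  # placeholder path
```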