-
Probably gonna shortlist some wonky ideas, but hey, if this tool is going to be workable anywhere it had better be feature-rich
- [ ] Fine-tuning and LoRA (or other PEFT-type) training toolkit https://github.com…
-
### Describe the bug
The SDXL model was fine-tuned using the rsLoRA method, and the training process completed normally.
After training, the LoRA model was saved and then reloaded …
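For context, the save/reload round trip being described looks roughly like the sketch below. This is a minimal sketch using peft's `use_rslora` flag on a stand-in base model; the model name and output path are placeholders, not taken from the report (the report's SDXL pipeline goes through diffusers).
```
from transformers import AutoModelForCausalLM
from peft import LoraConfig, PeftModel, get_peft_model

# Stand-in base model, just to make the round trip self-contained
base = AutoModelForCausalLM.from_pretrained("gpt2")
config = LoraConfig(r=16, lora_alpha=16, use_rslora=True)  # rsLoRA scaling
model = get_peft_model(base, config)

# ... training happens here ...

model.save_pretrained("lora_out")  # writes adapter_config.json + weights

# Reload: attach the saved adapter to a fresh copy of the base model
base2 = AutoModelForCausalLM.from_pretrained("gpt2")
reloaded = PeftModel.from_pretrained(base2, "lora_out")
```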
-
### Reference code
- Llama-recipes code
https://github.com/meta-llama/llama-recipes/tree/b7fd81c71239c67345d897c0eb6529eba076e8b8
-
### Model Series
Qwen2.5
### What are the models used?
Qwen2.5-7B
### What is the scenario where the problem happened?
transformers
### Is this a known issue?
- [X] I have followed [the GitHub …
-
Hi, thanks for the library! https://github.com/unslothai/unsloth is a library that supports fast PEFT fine-tuning, so I wonder whether this is, or will be, compatible with it?
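For reference, unsloth's fast PEFT path looks roughly like this (a minimal sketch; the checkpoint name and hyperparameters are illustrative, not taken from this thread):
```
from unsloth import FastLanguageModel

# Load a 4-bit base model through unsloth's fast path
model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/llama-3-8b-bnb-4bit",  # any unsloth-supported checkpoint
    max_seq_length = 2048,
    load_in_4bit = True,
)

# Attach LoRA adapters via unsloth's patched PEFT entry point
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    lora_alpha = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj"],
)
```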
-
Creating an issue to track ideas for how we should build criteria for a global base forecaster to be 'PeFT eligible'; relates to #6968
Some obvious criteria (criterion 1 is sketched in code after this list):
1) PyTorch-based model. `PeFT` ca…
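A minimal sketch of criterion 1 as an eligibility check; `is_peft_eligible` and the `model` attribute lookup are hypothetical illustrations, not an existing API:
```
import torch

def is_peft_eligible(forecaster) -> bool:
    # Criterion 1: the wrapped model must be a PyTorch module,
    # since peft adapters attach to torch.nn.Module instances.
    model = getattr(forecaster, "model", forecaster)
    return isinstance(model, torch.nn.Module)
```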
-
# Error Message:
> ailab_OmniGen - Failed to import OmniGen. Please check if the code was downloaded correctly.
# ComfyUI Error Report
## Error Details
- **Node ID:** 14
- **Node Type:** ailab_…
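A quick way to test the "was the code downloaded correctly" hypothesis is to check importability from the same Python environment ComfyUI runs in. A hedged diagnostic sketch; the module name "OmniGen" is an assumption based on the error text:
```
import importlib.util

# Probe the import machinery without actually importing the package
spec = importlib.util.find_spec("OmniGen")
if spec is None:
    print("OmniGen is not importable; re-download the node's code/requirements")
else:
    print("OmniGen found at:", spec.origin)
```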
-
I saw you used something like this:
```
from unsloth import FastVisionModel

# `model` is a vision-language model previously loaded via
# FastVisionModel.from_pretrained(...)
model = FastVisionModel.get_peft_model(
    model,
    finetune_vision_layers = True,   # False if not finetuning vision part
    finetune_language_layers = True, # False if not finetuning language part
)
```
-
**Is your feature request related to a problem? Please describe.**
With the increasing parameter sizes of pre-trained models, it has become necessary to adopt parameter-efficient fine-tuning methods like…
-
Hi,
I encountered an issue after updating to unsloth=="2024.11.6". When training the `Qwen2.5-0.5B-Instruct` model without PEFT, I observed that the model's gradient norm was 0, resulting in no weig…
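One way to confirm whether gradients are flowing at all is to compute the total gradient norm after a backward pass. A minimal diagnostic sketch (not unsloth code; `model` is whatever you are training):
```
import torch

def total_grad_norm(model: torch.nn.Module) -> float:
    # L2 norm over all parameter gradients that exist after backward()
    norms = [p.grad.norm() for p in model.parameters() if p.grad is not None]
    return torch.stack(norms).norm().item() if norms else 0.0

# Usage, inside the training step:
#   loss.backward()
#   print("grad norm:", total_grad_norm(model))
```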