-
I tried to train the model again in machine_learning.ipynb and run vision.py, but it still couldn't recognize kanji characters. One more thing: how and where should I use the Python code block you noted in R…
-
Hi, I'm interested in your work and am trying to reproduce it, but there are some details that need to be confirmed.
The first one is the implementation of the MVAE. The paper says,
> We copy the network archi…
-
Thank you very much for providing the code. However, step5_SPOT1DLM_run_inference.py requires model.pt, and there is no such file in the code you provided. Could you please provide the weights of the …
-
[GGUF](https://huggingface.co/docs/hub/en/gguf) is becoming a preferred means of distribution of FLUX fine-tunes.
Transformers recently added general support for GGUF and is slowly adding support …
-
I don't know why, but Weight Percentage doesn't make any difference in the image output for me.
My LoRAs merge together nicely, but I want to adjust them further with Weight Percentage.
-
Hi, after following the setup instructions, I tested the model by running the following command:
`PYTHONPATH=".":$PYTHONPATH python tools/visualize.py configs/finemogen/finemogen_t2m.py logs/fine…
-
Is there a script for this?
-
### System Info / 系統信息
CogVideoX-2B SAT LoRA finetuning
### Information / 问题信息
- [ ] The official example scripts / 官方的示例脚本
- [X] My own modified scripts / 我自己修改的脚本和任务
### Reproduction / 复现过程
Fin…
-
Hi, thank you for your code. I used PyTorch structured pruning for global pruning. Before prune.remove(module, 'weight'), I could find zeros in weight_mask.
My code is like this:
for name, module in m…
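For reference, here is a minimal self-contained sketch of the workflow the question describes (global pruning, inspecting `weight_mask`, then `prune.remove`). This is an assumption about the questioner's setup, not their actual code: it uses `prune.global_unstructured` on a toy two-layer model.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy model standing in for the questioner's network (assumption).
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

# Collect the (module, parameter-name) pairs to prune globally.
params_to_prune = [
    (module, "weight")
    for module in model.modules()
    if isinstance(module, nn.Linear)
]

# Prune the 50% smallest-magnitude weights across ALL listed layers at once.
prune.global_unstructured(
    params_to_prune,
    pruning_method=prune.L1Unstructured,
    amount=0.5,
)

# While pruning is active, each module carries a weight_mask buffer;
# globally, half of all 192 weights are masked to zero.
mask_zeros = sum(int((m.weight_mask == 0).sum()) for m, _ in params_to_prune)
print(mask_zeros)

# Make the pruning permanent: this deletes weight_orig/weight_mask and
# writes the zeroed values directly into module.weight.
for module, _ in params_to_prune:
    prune.remove(module, "weight")

# After remove(), the zeros persist in the plain weight tensors.
total_zeros = sum(int((m.weight == 0).sum()) for m, _ in params_to_prune)
print(total_zeros)
```

The key point for the question: `prune.remove` does not undo the pruning, it bakes the mask into `weight`, so the zeros remain after the mask buffer disappears.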
-
I have fine-tuned the "meta-llama-3.1-8b-bnb-4bit" model using Unsloth. I have downloaded the LoRA weights and am able to run inference with them on a Colab GPU.
But I want to use this fine-tuned model for …