Closed ChoYongchae closed 3 weeks ago
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).
View this failed invocation of the CLA check for more information.
For the most up to date status, view the checks section at the bottom of the pull request.
Thank you, that's very helpful! Could you also move the SAM model in the Composition block to the CPU? It doesn't need to stay on the CUDA device, and I'm hitting an OOM error without this change.
if low_vram:
# The sampling process uses more VRAM, so we offload everything except two modules to the CPU.
models_to(sam_model, device="cpu")
models_to(sam_model.sam, device="cpu")
models_to(models_rbm, device="cpu", excepts=["generator", "previewer"])
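For readers unfamiliar with the helper, `models_to` presumably walks the attributes of a model container and moves each module to the target device, skipping any names listed in `excepts`. A minimal sketch under that assumption; the stub class, the `Bag` container, and the `encoder` attribute are illustrative stand-ins, not the repo's actual implementation:

```python
class FakeModule:
    """Stand-in for a torch.nn.Module; tracks which device it is on."""
    def __init__(self, device="cuda"):
        self.device = device

    def to(self, device):
        self.device = device
        return self


def models_to(models, device="cpu", excepts=None):
    """Move every module attribute of `models` to `device`,
    skipping attributes whose names appear in `excepts`."""
    excepts = excepts or []
    for name, module in vars(models).items():
        if name in excepts:
            continue
        if hasattr(module, "to"):
            module.to(device)


class Bag:
    """Illustrative container mirroring the notebook's models object."""
    def __init__(self):
        self.generator = FakeModule()
        self.previewer = FakeModule()
        self.encoder = FakeModule()  # hypothetical extra module


bag = Bag()
models_to(bag, device="cpu", excepts=["generator", "previewer"])
print(bag.encoder.device)    # moved to "cpu"
print(bag.generator.device)  # still "cuda"
```

With real `torch.nn.Module` objects the same `.to(device)` call applies, since it moves parameters and buffers in place.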
Thank you, @tema7707! Your suggestion has helped confirm that an additional 3GB of VRAM used by SAM can be saved. I have included this change in the fix.
Thanks for the PR.
Thanks for the research and code sharing. I’m submitting a PR to help the notebook run on GPUs with lower VRAM.
Changes:
- Add a `low_vram` flag to rb-modulation.ipynb:
  - Offload `models.generator` to CPU during the plugin RB-modulation stage
  - Offload all modules except `generator` and `previewer` to CPU during the VRAM-intensive sampling stage
- (Minor) Fix a typo in README.md: `ffty` to `ftfy`

Benefits:
Setting `low_vram` to `True` resolves out-of-memory (OOM) issues and ensures stable operation on GPUs like the RTX 4090, which has 24GB of VRAM.

Disclaimer:

Please review the changes and let me know if any further adjustments are needed.