-
The fp8 storage type introduced in https://github.com/comfyanonymous/ComfyUI/issues/2157 significantly reduces VRAM usage, so I wonder whether the PixArt models support it yet?
-
ampler:
```
Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
  File "E:\AI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_out…
```
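This error means one tensor in the graph lives on the GPU while another is still on the CPU. A minimal sketch of the failure mode and its fix (generic PyTorch, not the actual node code):

```python
import torch

# Pick whatever device is available.
device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.randn(3, 4, device=device)
b = torch.randn(4, 2)        # created on the CPU by default
b = b.to(device)             # move it to the same device as `a`

out = a @ b                  # without the .to(device), this raises the error above
print(out.shape)             # torch.Size([3, 2])
```

In a node setup the usual culprit is a model or conditioning tensor loaded on the CPU while sampling runs on `cuda:0`.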
-
![lumina](https://github.com/kijai/ComfyUI-LuminaWrapper/assets/173285092/79f2dfad-363a-43b9-832c-2c52c2aaaa8c)
```
got prompt
[rgthree] Using rgthree's optimized recursive execution.
[rgthree] First…
```
-
```
/opt/conda/envs/pytorch/lib/python3.9/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume …
```
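This warning is harmless: per its own message, downloads always resume now, so the proper fix upstream is simply to stop passing `resume_download`. Until the calling library updates, it can be silenced with a standard warnings filter (a sketch, not tied to any particular script):

```python
import warnings

# Ignore only this specific deprecation, not all FutureWarnings.
warnings.filterwarnings("ignore", category=FutureWarning,
                        message=".*resume_download.*")

# Quick self-check that the filter swallows a matching warning:
with warnings.catch_warnings(record=True) as caught:
    warnings.warn("`resume_download` is deprecated", FutureWarning)
print(len(caught))  # 0
```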
-
Can we get node support for https://github.com/Alpha-VLLM/Lumina-T2X?
-
# Lumina-T2X: Transforming Text into Any Modality, Resolution, and Duration via Flow-based Large Diffusion Transformers
> Sora unveils the potential of scaling Diffusion Transformer (DiT) for gener…
-
I know this feels a bit _anal-retentive_; please bear with me…
It seems that if the last changed branch is not a tip, the whole line graph for that branch is thin and dashed.
![screen shot 2017-…
-
### Describe your use-case.
There are multiple simple models used in this repository: BLIP, CLIP, and the WD taggers. However, when it comes to detailed descriptions, they are all dwarfed by modern multi…
-
Hi, the model is amazing to use, but inference speed is quite slow on an A10 GPU; I saw decent performance on an A100, though.
Is there any optimisation method I can apply to speed it up?
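Not specific to this repo, but the usual PyTorch-level speedups are worth trying first. A sketch with a placeholder model (the real model and inputs would be substituted in):

```python
import torch

# Placeholder model standing in for the real one.
model = torch.nn.Linear(512, 512).eval()
x = torch.randn(8, 512)

# 1) Skip autograd bookkeeping entirely during inference.
with torch.inference_mode():
    y = model(x)

# 2) On GPU, run in half precision (often a large win on an A10):
#    model = model.half().cuda()
#    y = model(x.half().cuda())
# 3) On torch >= 2.0, try compiling the model once up front:
#    model = torch.compile(model)
print(y.shape)
```

Half precision matters more on an A10 than an A100 because the A10 has far less memory bandwidth, so fp16 roughly halves the traffic per step.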
-
I have downloaded the new VAE and STDiT models from Hugging Face and changed the config in sample.py, but when I run the code this error shows:
```
File "Open-Sora/opensora/models/vae/vae.py", line 284, in Ope…
```