Closed by blepping 3 months ago
Hi, please check the model's sha256sum. For example, if you are using RGT at x4 scale, the model used is RGT_x4.pth, and its sha256sum is 8335d4d314cf778df0fba0b1a57d757dcbc80267afd08ab45647a9997a45133c. Please make sure the model file is not damaged.
In your case, try the following command in your terminal: sha256sum /path/ComfyUI/models/RGT/RGT/RGT_x4.pth
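If you'd rather verify the checksum from Python (for example on a system without the `sha256sum` utility), here is a minimal sketch; the expected digest is the one quoted above, and the file path is just the placeholder from this thread, so adjust it for your install:

```python
# Sketch: verify a model checkpoint against an expected SHA-256 digest.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large .pth files don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Expected digest for RGT_x4.pth, as quoted in this thread.
EXPECTED = "8335d4d314cf778df0fba0b1a57d757dcbc80267afd08ab45647a9997a45133c"

# Usage (adjust the path to your install):
# print(sha256_of("/path/ComfyUI/models/RGT/RGT/RGT_x4.pth") == EXPECTED)
```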
I tested this in a brand new environment; here is the ComfyUI terminal output from the example workflow:
```
To see the GUI go to: http://127.0.0.1:8388
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
FETCH DATA from: /root/code/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json
got prompt
[rgthree] Using rgthree's optimized recursive execution.
[rgthree] First run patching recursive_output_delete_if_changed and recursive_will_execute.
[rgthree] Note: If execution seems broken due to forward ComfyUI changes, you can disable the optimization from rgthree settings in ComfyUI.
2024-06-03 09:57:40.301 | DEBUG | ComfyUI-RGT.node:load_model:34 - image: torch.Size([1, 1440, 1080, 3])
2024-06-03 09:57:40.672 | DEBUG | ComfyUI-RGT.model:load_model:867 - checkpoints path:/root/code/ComfyUI/models/RGT/RGT/RGT_x2.pth
2024-06-03 09:57:41.090 | DEBUG | ComfyUI-RGT.model:__call__:878 - image: torch.Size([1, 3, 1440, 1080])
2024-06-03 09:58:21.374 | DEBUG | ComfyUI-RGT.node:load_model:54 - output: torch.Size([1, 2880, 2160, 3])
Prompt executed in 41.83 seconds
```
i'm actually dumb and somehow only managed to fetch the LFS objects but not check them out. sorry for wasting your time!
it seems like ComfyUI just natively supports RGT now, though:
that probably changed when it switched to using spandrel
i gave it a try but unfortunately unpickling the model always seems to fail. tested with PyTorch 2.3.0 as well as the 2.4.0 nightly, with the 2x, 3x, and 4x models and also RGT-S, and it doesn't seem to matter which you choose:
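One gotcha worth checking first (it was the actual cause earlier in this thread): if the LFS objects were fetched but never checked out, the file on disk is a tiny Git LFS pointer stub rather than a real checkpoint, and unpickling it will always fail. LFS pointers are plain text files starting with a fixed version line, so they are easy to detect; this is a sketch, not part of the RGT node itself:

```python
# Sketch: detect whether a "model" file is actually a Git LFS pointer stub.
# LFS pointer files are small text files beginning with this version line;
# a real .pth checkpoint is a binary pickle/zip archive.
LFS_POINTER_PREFIX = b"version https://git-lfs.github.com/spec/v1"

def is_lfs_pointer(path: str) -> bool:
    """Return True if the file looks like an un-checked-out LFS pointer."""
    with open(path, "rb") as f:
        return f.read(len(LFS_POINTER_PREFIX)) == LFS_POINTER_PREFIX
```

If this returns True for your .pth file, running `git lfs pull` (or `git lfs checkout`) in the model repository replaces the stub with the real weights.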