Processing Samples: 0%| | 0/50 [00:00<?, ?it/s]
W1029 21:16:44.769000 20836 Lib\site-packages\torch\_dynamo\convert_frame.py:844] [1/8] torch._dynamo hit config.cache_size_limit (8)
W1029 21:16:44.769000 20836 Lib\site-packages\torch\_dynamo\convert_frame.py:844] [1/8] function: 'forward' (G:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MochiWrapper\mochi_preview\dit\joint_model\asymm_models_joint.py:320)
W1029 21:16:44.769000 20836 Lib\site-packages\torch\_dynamo\convert_frame.py:844] [1/8] last reason: 1/0: L['packed_indices']['max_seqlen_in_batch_kv'] == 31868
W1029 21:16:44.769000 20836 Lib\site-packages\torch\_dynamo\convert_frame.py:844] [1/8] To log all recompilation reasons, use TORCH_LOGS="recompiles".
W1029 21:16:44.769000 20836 Lib\site-packages\torch\_dynamo\convert_frame.py:844] [1/8] To diagnose recompilation issues, see https://pytorch.org/docs/main/torch.compiler_troubleshooting.html.
I'm not sure it would hit this if I kept generating with exactly the same settings over and over, but I tend to tweak the prompt etc. between runs, and it seems to compile something every time (I see the ptxas info printouts). So after a couple of generated videos I have to restart ComfyUI, otherwise generation becomes slow because torch.compile gets deactivated.
(The full warning output is quoted above.)
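For what it's worth, the guard in the warning (`L['packed_indices']['max_seqlen_in_batch_kv'] == 31868`) suggests every new prompt length produces a fresh compiled graph, and once the default limit of 8 cached graphs is reached Dynamo stops compiling altogether. A minimal workaround sketch, assuming you can run a small patch before the model gets compiled (the value 64 below is just an illustrative choice, not something from the Mochi wrapper):

import torch._dynamo

# Raise the recompile limit (default is 8) so that changing the prompt /
# sequence length between runs doesn't exhaust the cache and silently
# disable torch.compile.
torch._dynamo.config.cache_size_limit = 64

# If the limit is still hit mid-session, resetting Dynamo drops the cached
# graphs without restarting ComfyUI (the next run recompiles from scratch):
# torch._dynamo.reset()

Setting TORCH_LOGS="recompiles" in the environment, as the warning suggests, should also show exactly which guard is changing on each run.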