Open GoDJr opened 2 days ago
The 3060 can run it normally because NVIDIA devices support this feature:

```python
pipe.enable_sequential_cpu_offload()
```
As far as I know, AMD devices are not supported. The cause lies at a deeper level, likely in PyTorch or lower-level kernels, and is not something we can work around on our side. Without this optimization the pipeline uses about 26 GB of GPU memory instead of the current ~5 GB.
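To illustrate why sequential CPU offload cuts peak VRAM so drastically, here is a toy, framework-free sketch of the memory accounting (the layer sizes are hypothetical, not measurements of CogVideoX): with offload, each submodule is moved to the GPU only for its own forward pass, so peak usage is roughly the largest submodule rather than the whole model.

```python
# Toy model of peak "GPU" memory with and without sequential CPU offload.
# Layer sizes are illustrative placeholders, not real CogVideoX numbers.

def peak_memory(layer_sizes_gb, sequential_offload):
    """Return peak device memory (GB) for one forward pass."""
    if not sequential_offload:
        # Whole model resident on the GPU at once.
        return sum(layer_sizes_gb)
    # With sequential offload, each layer is copied to the GPU just
    # before its forward pass and moved back to CPU afterwards, so
    # only one layer is resident at a time.
    return max(layer_sizes_gb)

layers = [4.0, 6.0, 6.0, 6.0, 4.0]  # hypothetical per-layer weights, in GB
print(peak_memory(layers, sequential_offload=False))  # 26.0
print(peak_memory(layers, sequential_offload=True))   # 6.0
```

This is why the feature matters most on cards with little VRAM: the trade-off is extra host-to-device transfer time on every step.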
I can't get CogVideoX I2V to work; I keep running out of memory on my 16 GB AMD card. But people report it working on a 3060, and even on 6 GB cards. Not sure what I'm doing wrong — I'm already using the Q3 T5 quant to cut down on memory usage. Also, is there a new invite link to the Discord? The one on the main GitHub is expired.