Open leviathandragonflyer opened 1 year ago
Didn't mean to close this issue. But the issue is prevalent on WSL as well. I only forced it to work natively on Windows in the vain hope that it was a problem with WSL emulation; unfortunately, it is clearly a problem with the source code.
One possible cause is the num_workers=0 issue that keeps cropping up. Funnily enough, this doesn't even appear to come from the software itself: I replaced every single instance of num_workers=0 and num_workers=1 with 8 or 16, and the warning still pops up, with zero change in performance.
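For context, num_workers in a PyTorch-style DataLoader controls how many worker processes load samples in parallel; raising it only helps when data loading, not the GPU, is the bottleneck. Here is a minimal stdlib sketch of that idea (the load_sample function below is hypothetical, standing in for per-sample I/O; it is not from this project's code):

```python
from concurrent.futures import ThreadPoolExecutor

def load_sample(i):
    # hypothetical stand-in for per-sample work (read, decode, augment, ...)
    return i * 2

def load_batch(indices, num_workers):
    # num_workers=0 style: load samples serially in the main process
    if num_workers == 0:
        return [load_sample(i) for i in indices]
    # num_workers>0 style: overlap the per-sample loads across workers
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        return list(pool.map(load_sample, indices))

# Both paths produce identical batches; only loading throughput differs.
assert load_batch(range(8), num_workers=0) == load_batch(range(8), num_workers=4)
```

If the training step itself (GPU compute) dominates each iteration, changing num_workers will not move the numbers, which would be consistent with the behavior described above.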
Any updates on this?
Look, I love what this is, but I feel like whoever programmed this only made it run on tensor cores. I've had it running for 6 hours with VRAM nearly full, yet my CUDA core usage was completely nonexistent the entire time. It's clearly doing things, but there is no resource activity other than RAM bloating.

I have been using Windows Resource Monitor (it is extremely easy to get this running natively on Windows compared to WSL, and you get better performance than under WSL as well), so it's possible the monitor only measures CUDA usage. But why would I only be using tensor cores? Especially since that could be the source of most of the VRAM bloat these models require, despite NVIDIA's NeRF and image generation at far higher resolutions both requiring less VRAM. I feel like you could easily halve, or even quarter, the required VRAM by utilizing the full GPU instead of only the relatively small number of tensor cores that most modern GeForce cards have.