C0untFloyd / roop-unleashed

Evolved Fork of roop with Web Server and lots of additions
GNU Affero General Public License v3.0

Can the Onnxruntime and pytorch versions be safely upgraded? #994

Open codecowboy opened 2 days ago

codecowboy commented 2 days ago

Are you aware of specific reasons for pinning onnxruntime and pytorch to quite old versions? I've provisioned a server with a 3090, and its preinstalled drivers don't play nicely with CUDA 11.8, which is the latest that the pinned onnxruntime supports.

Have you tried newer versions of onnxruntime? I'm guessing the whole thing will break, so I'd rather avoid a world of pain if you've already been down this road. Thanks!
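Before committing to an upgrade, you can at least reason about version ranges offline. Here is a minimal stdlib-only sketch for checking whether an installed version falls inside a pinned range; the bounds below are illustrative assumptions, not roop-unleashed's actual constraints:

```python
# Minimal sketch: compare dotted version strings to decide whether an
# installed package falls inside a pinned range. Bounds are illustrative.

def parse_version(v: str) -> tuple:
    # "2.3.1+cu118" -> (2, 3, 1); local build suffixes and
    # non-numeric parts like ".dev240914" are ignored.
    core = v.split("+")[0]
    return tuple(int(part) for part in core.split(".") if part.isdigit())

def in_range(v: str, lo: str, hi: str) -> bool:
    # lo <= v < hi, compared numerically field by field
    return parse_version(lo) <= parse_version(v) < parse_version(hi)

print(in_range("1.15.1", "1.15.0", "1.16.0"))  # True
print(in_range("1.20.1", "1.15.0", "1.16.0"))  # False
```

For real installs, `importlib.metadata.version("onnxruntime")` gives you the string to feed into a check like this.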

Elise96nl commented 1 day ago

info python: 3.10.6 • torch: 2.3.1+cu118 • gradio: 4.44.0 • onnxruntime-directml: 1.20.1 • torch-directml: 0.2.5.dev240914 • Processing **** -_repair_03-18-35.mp4 took 111.07 secs, 4.44 frames/s

info python: 3.10.6 • torch: 2.4.1+cpu • gradio: 4.44.0 • onnxruntime-directml: 1.20.1 • torch-directml: 0.2.5.dev240914 • Processing **** -_repair_03-18-35.mp4 took 112.20 secs, 4.39 frames/s

info python: 3.10.6 • torch: 2.3.1+cu118 • gradio: 4.44.0 • onnxruntime-directml: 1.20.1 • Processing **** -_repair_03-18-35.mp4 took 112.40 secs, 4.39 frames/s

info python: 3.10.6 • torch: 2.1.2+cu118 • gradio: 4.44.0 • onnxruntime-directml: 1.15.1 • Processing **** -_repair_03-18-35.mp4 took 120.54 secs, 4.09 frames/s

Everything newer gave me memory leaks and crashes.
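The benchmark lines above can be compared directly; this is just arithmetic on the posted frames/s values (labels abbreviated), taking the oldest pin as the baseline:

```python
# Relative throughput of the benchmark runs above, with the
# torch 2.1.2 / onnxruntime-directml 1.15.1 run as the baseline.
runs = {
    "torch 2.3.1+cu118, ort-directml 1.20.1 + torch-directml": 4.44,
    "torch 2.4.1+cpu,   ort-directml 1.20.1 + torch-directml": 4.39,
    "torch 2.3.1+cu118, ort-directml 1.20.1":                  4.39,
    "torch 2.1.2+cu118, ort-directml 1.15.1":                  4.09,
}
baseline = runs["torch 2.1.2+cu118, ort-directml 1.15.1"]
for label, fps in runs.items():
    print(f"{label}: {fps / baseline - 1:+.1%} vs. oldest pin")
```

So the newer stacks are in the region of 7–9 % faster on this particular clip, at least on the hardware these logs came from.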

codecowboy commented 1 day ago

@Elise96nl Thanks, good to know. In this case I'm running it on Linux with a discrete GPU, but I also use a Mac at home. This at least tells me that the roop code isn't going to break on those later onnxruntime and pytorch versions. I'm guessing the crashes and memory leaks may well be Mac-specific, but it's difficult to say without the logs. Thanks for the info, though, much appreciated.