bukhalmae145 opened this issue 5 months ago
IIRC, softsplat, which GMFSS is based on, uses CuPy. There have been previous attempts to hack together a method that doesn't use it, but I haven't heard much about it since.
I think one could replace GMFSS with RIFE.
If you disable metricnet in the model, you can use forward_warp2 instead of softsplat. The quality may be slightly reduced, but it avoids the CuPy dependency.
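For reference, scatter-based forward warping can be written in pure PyTorch with no CuPy at all. Below is a rough nearest-neighbor sketch of the idea (my own illustration, not the repo's forward_warp2 or the softsplat kernel):

```python
import torch

def forward_warp_nearest(img, flow):
    """Nearest-neighbor forward warp in pure PyTorch.

    img:  (B, C, H, W) source frame
    flow: (B, 2, H, W) forward optical flow in pixels, (x, y) order

    Each source pixel is scattered to its flow target with scatter_add_;
    pixels landing outside the frame are dropped. A CuPy/CUDA softsplat
    kernel does the same transport with soft weighting instead.
    """
    B, C, H, W = img.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack((xs, ys)).float()              # (2, H, W), (x, y)
    tgt = (grid.unsqueeze(0) + flow).round().long()   # (B, 2, H, W)
    x, y = tgt[:, 0], tgt[:, 1]
    valid = (x >= 0) & (x < W) & (y >= 0) & (y < H)
    # Clamp indices so scatter_add_ stays in bounds; invalid sources are zeroed.
    idx = (y.clamp(0, H - 1) * W + x.clamp(0, W - 1)).view(B, 1, -1).expand(-1, C, -1)
    src = (img * valid.unsqueeze(1)).view(B, C, -1)
    out = torch.zeros_like(img).view(B, C, -1)
    out.scatter_add_(2, idx, src)
    return out.view(B, C, H, W)
```

With zero flow this is the identity; with a uniform flow of +1 pixel in x, the image shifts right and the last column falls off the frame. It runs on CPU and MPS, at the cost of hard (non-soft) splatting and slower speed than a fused kernel.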
Oh that's interesting 🤔
How do I disable it?
commit: e0ce5be updated support for a no_cupy version. The quality should not be affected much, but the speed will be much slower, about 1/3 of the original. There is also a faster and better way that uses CUDA programming to avoid the CuPy dependency, but that would make porting to Mac difficult.
I'm so sorry for asking so many things, but will this repository be able to replace CUDA with MPS?
Sorry, I'm not familiar with MPS
```
python interpolate_video_forward_anyfps.py -i /Users/workstation/Movies/AFI-ForwardDeduplicate/input.mov -o /Users/workstation/Movies/AFI-ForwardDeduplicate/frames -nf 2 -fps 60 -m gmfss -s -st 12 -scale 1.0 -stf -c

Traceback (most recent call last):
  File "/Users/workstation/Movies/AFI-ForwardDeduplicate/interpolate_video_forward_anyfps.py", line 115, in <module>
    model.load_model('weights/train_log_pg104', -1)
  File "/Users/workstation/Movies/AFI-ForwardDeduplicate/models/model_pg104/GMFSS.py", line 55, in load_model
    self.flownet.load_state_dict(torch.load('{}/flownet.pkl'.format(path)))
  File "/Users/workstation/Movies/AFI-ForwardDeduplicate/Deduplication/lib/python3.11/site-packages/torch/serialization.py", line 1025, in load
    return _load(opened_zipfile,
  File "/Users/workstation/Movies/AFI-ForwardDeduplicate/Deduplication/lib/python3.11/site-packages/torch/serialization.py", line 1446, in _load
    result = unpickler.load()
  File "/Users/workstation/Movies/AFI-ForwardDeduplicate/Deduplication/lib/python3.11/site-packages/torch/serialization.py", line 1416, in persistent_load
    typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  File "/Users/workstation/Movies/AFI-ForwardDeduplicate/Deduplication/lib/python3.11/site-packages/torch/serialization.py", line 1390, in load_tensor
    wrap_storage=restore_location(storage, location),
  File "/Users/workstation/Movies/AFI-ForwardDeduplicate/Deduplication/lib/python3.11/site-packages/torch/serialization.py", line 390, in default_restore_location
    result = fn(storage, location)
  File "/Users/workstation/Movies/AFI-ForwardDeduplicate/Deduplication/lib/python3.11/site-packages/torch/serialization.py", line 265, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "/Users/workstation/Movies/AFI-ForwardDeduplicate/Deduplication/lib/python3.11/site-packages/torch/serialization.py", line 249, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
```
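The error message itself points at the fix: the checkpoint was saved on a CUDA machine, so `torch.load` needs a `map_location` to remap the CUDA-tagged storages onto whatever device is available. A minimal sketch (the `load_checkpoint` helper and device fallback are my own, not part of the repo; the repo calls `torch.load` directly in `GMFSS.load_model`):

```python
import torch

def load_checkpoint(path):
    """Load a checkpoint saved on a CUDA machine onto the best available device.

    map_location remaps CUDA-tagged storages at deserialization time,
    which is exactly what the RuntimeError above asks for.
    """
    if torch.cuda.is_available():
        device = "cuda"
    elif torch.backends.mps.is_available():
        device = "mps"  # Apple Silicon
    else:
        device = "cpu"
    return torch.load(path, map_location=torch.device(device))
```

Applying this would mean changing each `torch.load('{}/....pkl'.format(path))` call in `GMFSS.py` to pass `map_location`; note that this only fixes deserialization, so any CuPy/softsplat code still needs the no_cupy path afterward.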
What about adapting RIFE-NCNN-Vulkan instead of GMFSS? Would that let me use MPS instead of CUDA?
If vs-mlrt integrates this method, supporting Mac should be much simpler
I thought vs-mlrt supports M1 Macs via NCNN-Vulkan.
I'm trying to adjust this code to use it on an M1 Mac. The M1 Mac doesn't support CuPy, but are there any alternatives that could replace CuPy?