[Open] mattdesl opened this issue 11 months ago
@mattdesl It's not that simple: the source code is not purely Python. It uses CUDA directly, for instance in the scene.cpp file.
[Upd] Agreed, it should be possible to speed up PyTorch via mps. I tried, but with this patch I got segfaults, bus errors, etc.:
```diff
 import torch
+MPS_OR_CPU_BACKEND = 'mps' if torch.backends.mps.is_available() else 'cpu'
+
 use_gpu = torch.cuda.is_available()
-device = torch.device('cuda') if use_gpu else torch.device('cpu')
+device = torch.device('cuda') if use_gpu else torch.device(MPS_OR_CPU_BACKEND)
+
 def set_use_gpu(v):
     global use_gpu
     global device
     use_gpu = v
     if not use_gpu:
-        device = torch.device('cpu')
+        device = torch.device(MPS_OR_CPU_BACKEND)

 def get_use_gpu():
     global use_gpu
```
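The fallback order that patch implements (CUDA first, then MPS, then CPU) can be sketched as a small pure-Python helper. `pick_device_name` is a hypothetical name for illustration, not part of the repo:

```python
def pick_device_name(cuda_available: bool, mps_available: bool) -> str:
    """Pick a torch device string: prefer CUDA, then Apple's MPS, then CPU.

    Hypothetical helper mirroring the patch above; not part of the repo.
    """
    if cuda_available:
        return 'cuda'
    if mps_available:
        return 'mps'
    return 'cpu'

# With PyTorch installed this would be driven by the real availability checks:
#   import torch
#   device = torch.device(pick_device_name(torch.cuda.is_available(),
#                                          torch.backends.mps.is_available()))
print(pick_device_name(True, True))    # → cuda
print(pick_device_name(False, True))   # → mps
print(pick_device_name(False, False))  # → cpu
```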
Has anyone managed to get this running on a Mac on any device other than `cpu`? I would like to try the `mps` device, for example using accelerate: https://github.com/huggingface/accelerate. Unfortunately, when I try to use the `mps` device and then re-run setup.py, any program using `mps` will segfault. I'm on an MBP M1 Max.
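Since the project ships a compiled C++/CUDA extension, a segfault could come from that extension or from PyTorch's MPS backend itself, so it may help to first confirm that plain PyTorch tensor ops work on `mps` at all. A minimal diagnostic sketch (`mps_smoke_test` is a made-up name; assumes PyTorch >= 1.12 if installed):

```python
import importlib.util

def mps_smoke_test() -> str:
    """Report whether a trivial tensor op runs on the 'mps' device.

    Diagnostic sketch only: separates 'PyTorch MPS is broken here' from
    'this repo's compiled extension is broken here'.
    """
    if importlib.util.find_spec("torch") is None:
        return "torch-not-installed"
    import torch
    if not torch.backends.mps.is_available():
        return "mps-unavailable"
    # A tiny op that exercises allocation, compute, and copy-back from MPS.
    x = torch.ones(4, device="mps")
    return "ok" if (x * 2).sum().item() == 8.0 else "bad-result"

print(mps_smoke_test())
```

If this already crashes or reports a problem, the issue is in PyTorch's MPS support rather than in this repository's build.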