Fodark opened 8 months ago
Slow as mentioned, but it works
```
NotImplementedError: The operator 'aten::_upsample_bilinear2d_aa.out' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable PYTORCH_ENABLE_MPS_FALLBACK=1 to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.
```
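As a hedged sketch of the suggested workaround: the fallback variable can be exported in the shell, or set from Python before `torch` is imported (setting it after import has no effect).

```python
import os

# Must run before `import torch`, or the flag is ignored.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

# import torch  # safe to import after the flag is set
```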
Once I enable that fallback, I get another error: `RuntimeError: User specified an unsupported autocast device_type 'mps'`
Edit: it works if you clear your Python env and downgrade the deps, as I just noticed in PR .
Detect the platform where the model is loaded and adjust `torch.device` and `torch.dtype` appropriately. I was able to run the model on an M1 MacBook Pro (with poor performance at the moment).