Hi @cddlyf ,

Thanks for your amazing work! I have modified your code implementation to enable inference on Apple Silicon devices (e.g., MacBook Pro M1 and later). Specifically, `.cuda()` is replaced with `.to(device)`, where I set `device = "cuda" if torch.cuda.is_available() else "mps" if torch.backends.mps.is_available() else "cpu"`. This keeps the codebase fully compatible with the original one. Hope this helps improve your work :)
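For reference, a minimal sketch of the device-selection pattern described above (the `Linear` model is just a placeholder, not part of the original codebase):

```python
import torch

# Pick the best available backend: CUDA GPU, Apple Silicon (MPS), or CPU.
device = (
    "cuda" if torch.cuda.is_available()
    else "mps" if torch.backends.mps.is_available()
    else "cpu"
)

# Instead of model.cuda() / tensor.cuda(), move objects with .to(device):
model = torch.nn.Linear(4, 2).to(device)
x = torch.randn(8, 4, device=device)
y = model(x)
print(y.shape)  # torch.Size([8, 2])
```

Because `.to("cuda")` behaves identically to `.cuda()`, this change is a strict superset of the original behavior.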