Closed: ralphrmartin closed this issue 6 months ago.
Hi @ralphrmartin,
I have tested the code snippet and am getting a NotImplementedError, as per the gist.
I'm not quite sure who needs to do what here. Is this a matter for the MPS team? I'm just an end user trying to use this stuff, and I get the error given in my initial report when running on an Apple Silicon MacBook Pro, using Python 3.12.2 with the following package versions:
absl-py 2.1.0
appnope 0.1.4
asttokens 2.4.1
comm 0.2.2
contourpy 1.2.1
cycler 0.12.1
debugpy 1.8.1
decorator 5.1.1
executing 2.0.1
filelock 3.13.3
fonttools 4.50.0
fsspec 2024.3.1
h5py 3.10.0
ipykernel 6.29.4
ipython 8.23.0
jedi 0.19.1
Jinja2 3.1.3
jupyter_client 8.6.1
jupyter_core 5.7.2
keras 3.1.1
kiwisolver 1.4.5
markdown-it-py 3.0.0
MarkupSafe 2.1.5
matplotlib 3.8.4
matplotlib-inline 0.1.6
mdurl 0.1.2
ml-dtypes 0.3.2
mpmath 1.3.0
namex 0.0.7
nest-asyncio 1.6.0
networkx 3.2.1
numpy 1.26.4
optree 0.11.0
packaging 24.0
parso 0.8.3
pexpect 4.9.0
pillow 10.3.0
pip 24.0
platformdirs 4.2.0
prompt-toolkit 3.0.43
psutil 5.9.8
ptyprocess 0.7.0
pure-eval 0.2.2
Pygments 2.17.2
pyparsing 3.1.2
python-dateutil 2.9.0.post0
pyzmq 25.1.2
rich 13.7.1
six 1.16.0
stack-data 0.6.3
sympy 1.12
torch 2.2.2
torchvision 0.17.2
tornado 6.4
traitlets 5.14.2
typing_extensions 4.10.0
wcwidth 0.2.13
Some operations, such as the 'aten::random_' operator, are currently unsupported on the MPS device in the Torch backend. You can find more information about this at https://github.com/pytorch/pytorch/issues/77764. As a temporary solution, I recommend setting the environment variable PYTORCH_ENABLE_MPS_FALLBACK=1. This lets Keras automatically utilize the GPU; you don't need to set the default device in torch.
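For example, the variable can be exported in the shell (`PYTORCH_ENABLE_MPS_FALLBACK=1 python script.py`) or set at the top of the script. This is a minimal sketch of that suggestion; it assumes the variable must be set before torch and keras are imported so both pick it up:

```python
import os

# Set before importing torch/keras so both see it
# (assumption: the flag is read at import/device-selection time).
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
os.environ["KERAS_BACKEND"] = "torch"  # select the torch backend for Keras 3

import torch
import keras

# With the flag set, Keras can place work on the Apple GPU, and operators that
# lack MPS kernels fall back to the CPU instead of raising NotImplementedError.
print(torch.backends.mps.is_available())  # expect True on Apple Silicon
```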
Hi @ralphrmartin,
Could you please refer to @M7Saad's comment above. Does this seem to be a compatibility issue with PyTorch?
Thank you.
Hi @ralphrmartin,
Could you please confirm whether this is a PyTorch compatibility issue? If so, can we mark it as resolved? Thanks!
Setting PYTORCH_ENABLE_MPS_FALLBACK to 1 prevents the issue, thanks.
@ralphrmartin ,
Thanks for the response. Can we mark this as closed now?
I guess so, but maybe the documentation needs updating to prevent other users from tripping over this.
@ralphrmartin Hi Ralph, looking into this more, it seems that PYTORCH_ENABLE_MPS_FALLBACK
might have been an experimental flag that is no longer needed. Have you run into this flag in PyTorch in general? Specifically, I see no mention of it here: https://pytorch.org/docs/stable/notes/mps.html.
If so, we can remove the flag check from https://github.com/keras-team/keras/blob/63586fa698cad7005f561fcdbb5ce590fb2484b1/keras/src/backend/torch/core.py#L24
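For context, the check in question gates device selection on that environment variable. The following is a rough, hypothetical sketch of such a guard, not the actual Keras source (the real logic lives at the linked line in keras/src/backend/torch/core.py):

```python
import os
import torch

# Hypothetical illustration of an env-var-gated device selection guard of the
# kind discussed above; not the actual contents of core.py.
if (
    torch.backends.mps.is_available()
    and os.environ.get("PYTORCH_ENABLE_MPS_FALLBACK") == "1"
):
    DEFAULT_DEVICE = "mps"
elif torch.cuda.is_available():
    DEFAULT_DEVICE = "cuda"
else:
    DEFAULT_DEVICE = "cpu"
```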
I am lost at this point. Using
Keras: 3.3.2
Torch: 2.3.0
My original comment holds: if I don't set PYTORCH_ENABLE_MPS_FALLBACK to 1 and I do torch.set_default_device('mps'), as suggested at https://pytorch.org/docs/stable/notes/mps.html, Keras falls over as described in my initial message, failing to use an mps generator in randperm.
If I set PYTORCH_ENABLE_MPS_FALLBACK to 1, then the mps device seems to be used to some extent, but I get
UserWarning: The operator 'aten::_foreach_mul_.Scalar' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications.
If I don't do torch.set_default_device('mps'), then it appears that the mps device is not used.
So, now what?
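For reference, this is the second of the configurations described above, written out as a sketch (it assumes the flag must be set before torch is imported; the behaviour noted in the comments is as reported in this thread, not re-verified):

```python
import os

os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"  # without this, randperm raises on MPS
os.environ["KERAS_BACKEND"] = "torch"

import torch
import keras  # noqa: F401

torch.set_default_device("mps")  # per https://pytorch.org/docs/stable/notes/mps.html

# Training then runs on the MPS device, but operators without MPS kernels
# (e.g. aten::_foreach_mul_.Scalar, per the warning quoted above) fall back
# to the CPU with a UserWarning about performance.
```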
Looks like mps is stable enough that we can remove the experimental flag, will submit a separate PR. Thank you for flagging this Ralph.
Keras with pytorch backend and mps set to default needs to use an mps generator in randperm
The following code
produces the following error
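The original snippet and traceback are not included in this capture. As a rough, hypothetical reconstruction of the kind of script the title describes (not necessarily the reporter's exact code): torch backend, MPS set as the default device, no fallback flag, and a fit() call that ends up in randperm.

```python
import os
os.environ["KERAS_BACKEND"] = "torch"  # use the torch backend for Keras 3

import torch
import numpy as np

torch.set_default_device("mps")  # as suggested in the PyTorch MPS notes

import keras

# Tiny placeholder model and data, purely for illustration.
model = keras.Sequential([keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(64, 4).astype("float32")
y = np.random.rand(64, 1).astype("float32")

# Without PYTORCH_ENABLE_MPS_FALLBACK=1, shuffling during fit() reaches
# torch.randperm without an MPS generator and raises NotImplementedError.
model.fit(x, y, epochs=1)
```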