jeya-maria-jose / Medical-Transformer

Official Pytorch Code for "Medical Transformer: Gated Axial-Attention for Medical Image Segmentation" - MICCAI 2021
MIT License
812 stars 176 forks

Does it support macOS with M1? #90

Open pklim101 opened 1 year ago

pklim101 commented 1 year ago

I ran it on macOS (M1) with the device set to CPU, but it errors:

    [W NNPACK.cpp:53] Could not initialize NNPACK! Reason: Unsupported hardware.
    Traceback (most recent call last):
      File "/Users/libo56/Documents/my/hub/Medical-Transformer/train.py", line 143, in <module>
        output = model(X_batch)
      File "/Users/libo56/opt/anaconda3/envs/py397/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
        return forward_call(*input, **kwargs)
      File "/Users/libo56/Documents/my/hub/Medical-Transformer/lib/models/axialnet.py", line 507, in forward
        return self._forward_impl(x)
      File "/Users/libo56/Documents/my/hub/Medical-Transformer/lib/models/axialnet.py", line 485, in _forward_impl
        x1 = self.layer1(x)
      File "/Users/libo56/opt/anaconda3/envs/py397/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
        return forward_call(*input, **kwargs)
      File "/Users/libo56/opt/anaconda3/envs/py397/lib/python3.9/site-packages/torch/nn/modules/container.py", line 204, in forward
        input = module(input)
      File "/Users/libo56/opt/anaconda3/envs/py397/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
        return forward_call(*input, **kwargs)
      File "/Users/libo56/Documents/my/hub/Medical-Transformer/lib/models/axialnet.py", line 331, in forward
        out = self.hight_block(out)
      File "/Users/libo56/opt/anaconda3/envs/py397/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
        return forward_call(*input, **kwargs)
      File "/Users/libo56/Documents/my/hub/Medical-Transformer/lib/models/axialnet.py", line 157, in forward
        qr = torch.einsum('bgci,cij->bgij', q, q_embedding)
      File "/Users/libo56/opt/anaconda3/envs/py397/lib/python3.9/site-packages/torch/functional.py", line 378, in einsum
        return _VF.einsum(equation, operands)  # type: ignore[attr-defined]
    RuntimeError: einsum(): subscript i has size 64 for operand 1 which does not broadcast with previously seen size 500
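For context, the NNPACK line is only a warning on Apple Silicon; the actual failure is the RuntimeError, which is a shape mismatch between the axial attention's relative position embedding (built for a spatial side of 64) and the feature map actually reaching that block (side 500). Below is a minimal sketch of the failing contraction, not the repository's own code: the head layout (groups, channels per head) is assumed, and only the two spatial sizes 64 and 500 come from the error message.

```python
import torch

# Sketch of the qr einsum from axialnet.py line 157 in the traceback.
# groups/dim_head are hypothetical; 64 and 500 are from the error message.
groups, dim_head = 8, 16
embed_side = 64    # side length the relative position embedding was built for
input_side = 500   # side length of the incoming feature map

q = torch.randn(1, groups, dim_head, input_side)             # 'bgci'
q_embedding = torch.randn(dim_head, embed_side, embed_side)  # 'cij'

try:
    torch.einsum('bgci,cij->bgij', q, q_embedding)
except RuntimeError as err:
    print(err)  # subscript i has size 64 ... previously seen size 500

# With the embedding built for the same spatial side, the contraction works:
q_embedding = torch.randn(dim_head, input_side, input_side)
qr = torch.einsum('bgci,cij->bgij', q, q_embedding)
print(qr.shape)  # torch.Size([1, 8, 500, 500])
```

If that is the cause, it is independent of M1/CPU: building the model with an image size that matches the data (or resizing the inputs to the size the model was constructed with) should make the einsum shapes agree.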