kyegomez / zeta

Build high-performance AI models with modular building blocks
https://zeta.apac.ai
Apache License 2.0
320 stars · 28 forks

[BUG] RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument weight in method wrapper_CUDA__native_layer_norm) #197

Open dddlli opened 2 months ago

dddlli commented 2 months ago

```
  File "/home/pete/PycharmProjects/Time-Series-Classification-master/model/mmm4tsc.py", line 224, in forward
    fused = self.visual_expert(concat)
  File "/home/pete/anaconda3/envs/mamba/lib/python3.8/site-packages/zeta/nn/modules/visual_expert.py", line 106, in __call__
    normalized = self.norm(x)
  File "/home/pete/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/pete/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/pete/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/nn/modules/normalization.py", line 196, in forward
    return F.layer_norm(
  File "/home/pete/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/nn/functional.py", line 2543, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument weight in method wrapper_CUDA__native_layer_norm)
```
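The traceback indicates that the `LayerNorm` inside `VisualExpert` still has its `weight` on CPU while the input tensor is on `cuda:0`. A minimal sketch of the mismatch and the usual fix, using a plain `torch.nn.LayerNorm` as a stand-in for the module's internal norm (the variable names here are illustrative, not from the reporter's code):

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# A module's parameters live on CPU at construction time...
norm = nn.LayerNorm(8)

# ...so feeding it a tensor on another device raises the reported RuntimeError.
x = torch.randn(2, 8, device=device)

# Fix: move the module (all of its weights and buffers) to the input's device
# before calling it. For VisualExpert this would be visual_expert.to(device).
norm = norm.to(device)
out = norm(x)
assert out.device == x.device
```

If `VisualExpert` creates submodules lazily inside `__call__`, calling `.to(device)` on the parent before the forward pass may not cover them; in that case the input can instead be moved to match, e.g. `x = x.to(next(module.parameters()).device)`.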


github-actions[bot] commented 3 weeks ago

Stale issue message

kyegomez commented 3 weeks ago

@dddlli hey can you publish the full code for me please?