xmu-xiaoma666 / External-Attention-pytorch

🍀 Pytorch implementation of various Attention Mechanisms, MLP, Re-parameter, Convolution, which is helpful to further understand papers.⭐⭐⭐

Some modules raise errors when running on GPU #94

Open ImcwjHere opened 1 year ago

ImcwjHere commented 1 year ago

The error messages look like: RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same, or: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper__index_select)

Fix: replace the plain Python list with an nn.ModuleList. For example, in model/mlp/mlp_mixer.py (line 49), change self.mlp_blocks=[] to self.mlp_blocks=nn.ModuleList([]). A sketch of why this matters is shown below.
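A minimal sketch of the underlying issue (the class and layer names here are illustrative, not the repo's exact code): nn.Module only tracks submodules stored as attributes or inside containers such as nn.ModuleList, so layers kept in a plain Python list are never registered and their weights stay on the CPU when the model is moved to GPU.

```python
import torch
import torch.nn as nn

class Broken(nn.Module):
    def __init__(self, dim=64, depth=4):
        super().__init__()
        # Plain Python list: these Linear layers are NOT registered as submodules,
        # so model.cuda() / model.to(device) never moves their weights.
        self.mlp_blocks = []
        for _ in range(depth):
            self.mlp_blocks.append(nn.Linear(dim, dim))

    def forward(self, x):
        for blk in self.mlp_blocks:
            x = blk(x)  # weight stays on CPU -> device-mismatch RuntimeError with a CUDA input
        return x

class Fixed(nn.Module):
    def __init__(self, dim=64, depth=4):
        super().__init__()
        # nn.ModuleList registers each block, so .to(device) moves their weights too.
        self.mlp_blocks = nn.ModuleList([])
        for _ in range(depth):
            self.mlp_blocks.append(nn.Linear(dim, dim))

    def forward(self, x):
        for blk in self.mlp_blocks:
            x = blk(x)
        return x

if torch.cuda.is_available():
    x = torch.randn(2, 64, device="cuda")
    # Broken().cuda()(x)  # raises: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) ...
    print(Fixed().cuda()(x).shape)  # works: torch.Size([2, 64])
```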

zeng-cy commented 1 year ago

I made that change, but it still doesn't work.

daviscsuft commented 1 year ago

Is there another way to fix it? CCNet doesn't use a plain list.
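For modules like CCNet that don't keep submodules in a plain list, the same device-mismatch error often comes from auxiliary tensors created inside forward without a device argument. A minimal, hypothetical sketch of that pattern and a general workaround (the module and shapes below are illustrative, not the repo's CrissCrossAttention code): create such tensors on the input's device.

```python
import torch
import torch.nn as nn

class Attn(nn.Module):
    def forward(self, x):
        B, C, H, W = x.shape
        # BUG (typical cause): a tensor built without a device lands on CPU
        # and clashes with a CUDA input, e.g.
        #   mask = torch.diag(torch.full((W,), float("-inf")))
        # FIX: build auxiliary tensors on the input's device (and dtype).
        mask = torch.diag(torch.full((W,), float("-inf"), device=x.device, dtype=x.dtype))
        # Toy attention-like energy over the last spatial dim, masked on the diagonal.
        energy = torch.einsum("bchw,bchv->bhwv", x, x) + mask
        return torch.softmax(energy, dim=-1)

if torch.cuda.is_available():
    x = torch.randn(2, 8, 16, 16, device="cuda")
    print(Attn()(x).shape)  # torch.Size([2, 16, 16, 16]), no device-mismatch error
```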