Closed · GreatV closed this issue 1 year ago
Hi, could you check whether the network structure is written correctly? I tested it, and running it directly raises an error:
```python
import paddle


class SparseDownSampleCloseRaw(paddle.nn.Layer):
    def __init__(self, stride):
        super().__init__()
        self.pooling = paddle.nn.MaxPool2D(stride, stride)
        self.large_number = 600

    def forward(self, d, mask):
        encode_d = -(1 - mask) * self.large_number - d
        d = -self.pooling(encode_d)
        mask_result = self.pooling(mask)
        d_result = d - (1 - mask_result) * self.large_number
        return d_result, mask_result


if __name__ == "__main__":
    model = SparseDownSampleCloseRaw(1)
    paddle.summary(model, [(3, 320, 320, 1), (3, 320, 320, 1)])
```
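For context on what the layer is computing: the encode/negate/max-pool pattern above is a standard trick for min-pooling only the valid (mask == 1) depth values. Here is a minimal pure-Python sketch of the same idea on a 1-D list, independent of paddle (the function name and the 1-D setting are my own for illustration):

```python
LARGE = 600  # same sentinel the layer uses to push invalid pixels out of range


def sparse_min_pool(d, mask, stride):
    """Min-pool valid (mask == 1) depth values over windows of `stride`,
    mirroring the encode / max-pool / decode steps of the layer above."""
    d_out, m_out = [], []
    for i in range(0, len(d), stride):
        wd, wm = d[i:i + stride], mask[i:i + stride]
        # encode: valid pixels become -d, invalid ones drop to about -LARGE
        enc = [-(1 - m) * LARGE - x for x, m in zip(wd, wm)]
        pooled = max(enc)   # max of -d over valid pixels == -min(d)
        m_res = max(wm)     # window is valid if any pixel in it is valid
        # decode: negate back, then push fully-invalid windows back down
        d_out.append(-pooled - (1 - m_res) * LARGE)
        m_out.append(m_res)
    return d_out, m_out


print(sparse_min_pool([5.0, 2.0, 9.0, 4.0], [1, 1, 0, 0], 2))
# → ([2.0, 4.0], [1, 0]): first window keeps the min valid depth 2.0,
#   second window has no valid pixels, so its mask output is 0
```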
Could it be because the layer has no trainable parameters?
```
---------------------------------------------------------------------------
 Layer (type)       Input Shape          Output Shape         Param #
===========================================================================
  MaxPool2D-1     [[3, 320, 320, 1]]    [3, 320, 320, 1]         0
===========================================================================
Total params: 0
Trainable params: 0
Non-trainable params: 0
---------------------------------------------------------------------------
Input size (MB): 2.34
Forward/backward pass size (MB): 2.34
Params size (MB): 0.00
Estimated Total Size (MB): 4.69
---------------------------------------------------------------------------
```
Hi, I just tested it: with torch the error is raised during backward. What you replied with is the paddle model; with paddle the forward pass errors out directly (the input is required to be 4- or 5-dimensional, but it is actually 3-dimensional).
Yes, I changed the shape and it still errors.
I'm not sure about torch, but paddle can run backward even without trainable parameters. The forward error here may be an OP issue; I'll help you confirm.
This error is caused by importing paddle and torch at the same time; it should count as a bug 😵
I switched to a different GPU and it ran fine. The CUDA error earlier was apparently because the card's memory was full. Could you try again?
I switched machines and changed the shape, but running the accuracy comparison at the top still raises the same error. Is this the error caused by importing paddle and torch at the same time?