The original model fed to ann2snn should use nn.ReLU modules rather than the relu function from nn.functional. There are similar problems such as reusing the same ReLU layer, and so on. At the moment it is hard for ann2snn to do a fully general conversion, so I suggest you either write a conversion routine tailored to your own model or adapt your model so that it fits SpikingJelly's conversion method.
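For reference, here is a minimal sketch of how a ResNet-style bottleneck block can be written so that the converter can find and replace its activations. It is not taken from the issue; the class name, layer names, and channel arguments are illustrative. The point is that every activation site gets its own nn.ReLU module and torch.nn.functional.relu is never called in forward.

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    def __init__(self, in_channels, mid_channels, out_channels):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, mid_channels, kernel_size=1)
        self.relu1 = nn.ReLU()   # one dedicated nn.ReLU per activation site
        self.conv2 = nn.Conv2d(mid_channels, mid_channels, kernel_size=3, padding=1)
        self.relu2 = nn.ReLU()   # never reuse the same ReLU module twice
        self.conv3 = nn.Conv2d(mid_channels, out_channels, kernel_size=1)
        self.relu3 = nn.ReLU()
        self.shortcut = (nn.Conv2d(in_channels, out_channels, kernel_size=1)
                         if in_channels != out_channels else nn.Identity())

    def forward(self, x):
        # no torch.nn.functional.relu anywhere in forward
        out = self.relu1(self.conv1(x))
        out = self.relu2(self.conv2(out))
        out = self.conv3(out)
        return self.relu3(out + self.shortcut(x))
```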
Thanks, I think I understand now. Thank you very much.
Issue type
SpikingJelly version
0.0.0.0.14
Description
I tried to convert a trained ResNet50 into an SNN using:
model_converter = ann2snn.Converter(mode='99.9%', dataloader=train_loader)
snn_model = model_converter(model.encoder)
However, the converted model does not contain IFNode-like neurons as shown in the tutorial. I also printed the output of the converted model, and it does not appear to have been converted into spikes; the values are still floating-point decimals. (See the sketch after the printed output below.) This is what I printed:
100%|██████████| 390/390 [00:33<00:00, 11.56it/s]
ResNet( (conv1): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (layer1): Module( (0): Module( (conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1)) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1)) (shortcut): Module( (0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1)) ) ) (1): Module( (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1)) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1)) ) (2): Module( (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1)) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1)) ) ) (layer2): Module( (0): Module( (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1)) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)) (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1)) (shortcut): Module( (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2)) ) ) (1): Module( (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1)) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1)) ) (2): Module( (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1)) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1)) ) (3): Module( (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1)) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1)) ) ) (layer3): Module( (0): Module( (conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1)) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)) (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1)) (shortcut): Module( (0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(2, 2)) ) ) (1): Module( (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1)) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1)) ) (2): Module( (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1)) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1)) ) (3): Module( (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1)) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1)) ) (4): Module( (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1)) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1)) ) (5): Module( (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1)) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1)) ) ) (layer4): Module( (0): Module( (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1)) (conv2):
Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)) (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1)) (shortcut): Module( (0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(2, 2)) ) ) (1): Module( (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1)) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1)) ) (2): Module( (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1)) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1)) ) ) (avgpool): AdaptiveAvgPool2d(output_size=(1, 1)) )
def forward(self, x): conv1 = self.conv1(x); x = None relu = torch.nn.functional.relu(conv1, inplace = False); conv1 = None layer1_0_conv1 = getattr(self.layer1, "0").conv1(relu) relu_1 = torch.nn.functional.relu(layer1_0_conv1, inplace = False); layer1_0_conv1 = None layer1_0_conv2 = getattr(self.layer1, "0").conv2(relu_1); relu_1 = None relu_2 = torch.nn.functional.relu(layer1_0_conv2, inplace = False); layer1_0_conv2 = None layer1_0_conv3 = getattr(self.layer1, "0").conv3(relu_2); relu_2 = None layer1_0_shortcut_0 = getattr(getattr(self.layer1, "0").shortcut, "0")(relu); relu = None add = layer1_0_conv3 + layer1_0_shortcut_0; layer1_0_conv3 = layer1_0_shortcut_0 = None relu_3 = torch.nn.functional.relu(add, inplace = False); add = None layer1_1_conv1 = getattr(self.layer1, "1").conv1(relu_3) relu_4 = torch.nn.functional.relu(layer1_1_conv1, inplace = False); layer1_1_conv1 = None layer1_1_conv2 = getattr(self.layer1, "1").conv2(relu_4); relu_4 = None relu_5 = torch.nn.functional.relu(layer1_1_conv2, inplace = False); layer1_1_conv2 = None layer1_1_conv3 = getattr(self.layer1, "1").conv3(relu_5); relu_5 = None add_1 = layer1_1_conv3 + relu_3; layer1_1_conv3 = relu_3 = None relu_6 = torch.nn.functional.relu(add_1, inplace = False); add_1 = None layer1_2_conv1 = getattr(self.layer1, "2").conv1(relu_6) relu_7 = torch.nn.functional.relu(layer1_2_conv1, inplace = False); layer1_2_conv1 = None layer1_2_conv2 = getattr(self.layer1, "2").conv2(relu_7); relu_7 = None relu_8 = torch.nn.functional.relu(layer1_2_conv2, inplace = False); layer1_2_conv2 = None layer1_2_conv3 = getattr(self.layer1, "2").conv3(relu_8); relu_8 = None add_2 = layer1_2_conv3 + relu_6; layer1_2_conv3 = relu_6 = None relu_9 = torch.nn.functional.relu(add_2, inplace = False); add_2 = None layer2_0_conv1 = getattr(self.layer2, "0").conv1(relu_9) relu_10 = torch.nn.functional.relu(layer2_0_conv1, inplace = False); layer2_0_conv1 = None layer2_0_conv2 = getattr(self.layer2, "0").conv2(relu_10); relu_10 = None relu_11 = torch.nn.functional.relu(layer2_0_conv2, inplace = False); layer2_0_conv2 = None layer2_0_conv3 = getattr(self.layer2, "0").conv3(relu_11); relu_11 = None layer2_0_shortcut_0 = getattr(getattr(self.layer2, "0").shortcut, "0")(relu_9); relu_9 = None add_3 = layer2_0_conv3 + layer2_0_shortcut_0; layer2_0_conv3 = layer2_0_shortcut_0 = None relu_12 = torch.nn.functional.relu(add_3, inplace = False); add_3 = None layer2_1_conv1 = getattr(self.layer2, "1").conv1(relu_12) relu_13 = torch.nn.functional.relu(layer2_1_conv1, inplace = False); layer2_1_conv1 = None layer2_1_conv2 = getattr(self.layer2, "1").conv2(relu_13); relu_13 = None relu_14 = torch.nn.functional.relu(layer2_1_conv2, inplace = False); layer2_1_conv2 = None layer2_1_conv3 = getattr(self.layer2, "1").conv3(relu_14); relu_14 = None add_4 = layer2_1_conv3 + relu_12; layer2_1_conv3 = relu_12 = None relu_15 = torch.nn.functional.relu(add_4, inplace = False); add_4 = None layer2_2_conv1 = getattr(self.layer2, "2").conv1(relu_15) relu_16 = torch.nn.functional.relu(layer2_2_conv1, inplace = False); layer2_2_conv1 = None layer2_2_conv2 = getattr(self.layer2, "2").conv2(relu_16); relu_16 = None relu_17 = torch.nn.functional.relu(layer2_2_conv2, inplace = False); layer2_2_conv2 = None layer2_2_conv3 = getattr(self.layer2, "2").conv3(relu_17); relu_17 = None add_5 = layer2_2_conv3 + relu_15; layer2_2_conv3 = relu_15 = None relu_18 = torch.nn.functional.relu(add_5, inplace = False); add_5 = None layer2_3_conv1 = getattr(self.layer2, "3").conv1(relu_18) relu_19 = 
torch.nn.functional.relu(layer2_3_conv1, inplace = False); layer2_3_conv1 = None layer2_3_conv2 = getattr(self.layer2, "3").conv2(relu_19); relu_19 = None relu_20 = torch.nn.functional.relu(layer2_3_conv2, inplace = False); layer2_3_conv2 = None layer2_3_conv3 = getattr(self.layer2, "3").conv3(relu_20); relu_20 = None add_6 = layer2_3_conv3 + relu_18; layer2_3_conv3 = relu_18 = None relu_21 = torch.nn.functional.relu(add_6, inplace = False); add_6 = None layer3_0_conv1 = getattr(self.layer3, "0").conv1(relu_21) relu_22 = torch.nn.functional.relu(layer3_0_conv1, inplace = False); layer3_0_conv1 = None layer3_0_conv2 = getattr(self.layer3, "0").conv2(relu_22); relu_22 = None relu_23 = torch.nn.functional.relu(layer3_0_conv2, inplace = False); layer3_0_conv2 = None layer3_0_conv3 = getattr(self.layer3, "0").conv3(relu_23); relu_23 = None layer3_0_shortcut_0 = getattr(getattr(self.layer3, "0").shortcut, "0")(relu_21); relu_21 = None add_7 = layer3_0_conv3 + layer3_0_shortcut_0; layer3_0_conv3 = layer3_0_shortcut_0 = None relu_24 = torch.nn.functional.relu(add_7, inplace = False); add_7 = None layer3_1_conv1 = getattr(self.layer3, "1").conv1(relu_24) relu_25 = torch.nn.functional.relu(layer3_1_conv1, inplace = False); layer3_1_conv1 = None layer3_1_conv2 = getattr(self.layer3, "1").conv2(relu_25); relu_25 = None relu_26 = torch.nn.functional.relu(layer3_1_conv2, inplace = False); layer3_1_conv2 = None layer3_1_conv3 = getattr(self.layer3, "1").conv3(relu_26); relu_26 = None add_8 = layer3_1_conv3 + relu_24; layer3_1_conv3 = relu_24 = None relu_27 = torch.nn.functional.relu(add_8, inplace = False); add_8 = None layer3_2_conv1 = getattr(self.layer3, "2").conv1(relu_27) relu_28 = torch.nn.functional.relu(layer3_2_conv1, inplace = False); layer3_2_conv1 = None layer3_2_conv2 = getattr(self.layer3, "2").conv2(relu_28); relu_28 = None relu_29 = torch.nn.functional.relu(layer3_2_conv2, inplace = False); layer3_2_conv2 = None layer3_2_conv3 = getattr(self.layer3, "2").conv3(relu_29); relu_29 = None add_9 = layer3_2_conv3 + relu_27; layer3_2_conv3 = relu_27 = None relu_30 = torch.nn.functional.relu(add_9, inplace = False); add_9 = None layer3_3_conv1 = getattr(self.layer3, "3").conv1(relu_30) relu_31 = torch.nn.functional.relu(layer3_3_conv1, inplace = False); layer3_3_conv1 = None layer3_3_conv2 = getattr(self.layer3, "3").conv2(relu_31); relu_31 = None relu_32 = torch.nn.functional.relu(layer3_3_conv2, inplace = False); layer3_3_conv2 = None layer3_3_conv3 = getattr(self.layer3, "3").conv3(relu_32); relu_32 = None add_10 = layer3_3_conv3 + relu_30; layer3_3_conv3 = relu_30 = None relu_33 = torch.nn.functional.relu(add_10, inplace = False); add_10 = None layer3_4_conv1 = getattr(self.layer3, "4").conv1(relu_33) relu_34 = torch.nn.functional.relu(layer3_4_conv1, inplace = False); layer3_4_conv1 = None layer3_4_conv2 = getattr(self.layer3, "4").conv2(relu_34); relu_34 = None relu_35 = torch.nn.functional.relu(layer3_4_conv2, inplace = False); layer3_4_conv2 = None layer3_4_conv3 = getattr(self.layer3, "4").conv3(relu_35); relu_35 = None add_11 = layer3_4_conv3 + relu_33; layer3_4_conv3 = relu_33 = None relu_36 = torch.nn.functional.relu(add_11, inplace = False); add_11 = None layer3_5_conv1 = getattr(self.layer3, "5").conv1(relu_36) relu_37 = torch.nn.functional.relu(layer3_5_conv1, inplace = False); layer3_5_conv1 = None layer3_5_conv2 = getattr(self.layer3, "5").conv2(relu_37); relu_37 = None relu_38 = torch.nn.functional.relu(layer3_5_conv2, inplace = False); layer3_5_conv2 = None layer3_5_conv3 = 
getattr(self.layer3, "5").conv3(relu_38); relu_38 = None add_12 = layer3_5_conv3 + relu_36; layer3_5_conv3 = relu_36 = None relu_39 = torch.nn.functional.relu(add_12, inplace = False); add_12 = None layer4_0_conv1 = getattr(self.layer4, "0").conv1(relu_39) relu_40 = torch.nn.functional.relu(layer4_0_conv1, inplace = False); layer4_0_conv1 = None layer4_0_conv2 = getattr(self.layer4, "0").conv2(relu_40); relu_40 = None relu_41 = torch.nn.functional.relu(layer4_0_conv2, inplace = False); layer4_0_conv2 = None layer4_0_conv3 = getattr(self.layer4, "0").conv3(relu_41); relu_41 = None layer4_0_shortcut_0 = getattr(getattr(self.layer4, "0").shortcut, "0")(relu_39); relu_39 = None add_13 = layer4_0_conv3 + layer4_0_shortcut_0; layer4_0_conv3 = layer4_0_shortcut_0 = None relu_42 = torch.nn.functional.relu(add_13, inplace = False); add_13 = None layer4_1_conv1 = getattr(self.layer4, "1").conv1(relu_42) relu_43 = torch.nn.functional.relu(layer4_1_conv1, inplace = False); layer4_1_conv1 = None layer4_1_conv2 = getattr(self.layer4, "1").conv2(relu_43); relu_43 = None relu_44 = torch.nn.functional.relu(layer4_1_conv2, inplace = False); layer4_1_conv2 = None layer4_1_conv3 = getattr(self.layer4, "1").conv3(relu_44); relu_44 = None add_14 = layer4_1_conv3 + relu_42; layer4_1_conv3 = relu_42 = None relu_45 = torch.nn.functional.relu(add_14, inplace = False); add_14 = None layer4_2_conv1 = getattr(self.layer4, "2").conv1(relu_45) relu_46 = torch.nn.functional.relu(layer4_2_conv1, inplace = False); layer4_2_conv1 = None layer4_2_conv2 = getattr(self.layer4, "2").conv2(relu_46); relu_46 = None relu_47 = torch.nn.functional.relu(layer4_2_conv2, inplace = False); layer4_2_conv2 = None layer4_2_conv3 = getattr(self.layer4, "2").conv3(relu_47); relu_47 = None add_15 = layer4_2_conv3 + relu_45; layer4_2_conv3 = relu_45 = None relu_48 = torch.nn.functional.relu(add_15, inplace = False); add_15 = None avgpool = self.avgpool(relu_48); relu_48 = None flatten = torch.flatten(avgpool, 1); avgpool = None return flatten
opcode name target args kwargs
placeholder x x () {}
call_module conv1 conv1 (x,) {}
call_function relu <function relu at 0x000001368BBB10D0> (conv1,) {'inplace': False}
call_module layer1_0_conv1 layer1.0.conv1 (relu,) {}
call_function relu_1 <function relu at 0x000001368BBB10D0> (layer1_0_conv1,) {'inplace': False}
call_module layer1_0_conv2 layer1.0.conv2 (relu_1,) {}
call_function relu_2 <function relu at 0x000001368BBB10D0> (layer1_0_conv2,) {'inplace': False}
call_module layer1_0_conv3 layer1.0.conv3 (relu_2,) {}
call_module layer1_0_shortcut_0 layer1.0.shortcut.0 (relu,) {}
call_function add (layer1_0_conv3, layer1_0_shortcut_0) {}
call_function relu_3 <function relu at 0x000001368BBB10D0> (add,) {'inplace': False}
call_module layer1_1_conv1 layer1.1.conv1 (relu_3,) {}
call_function relu_4 <function relu at 0x000001368BBB10D0> (layer1_1_conv1,) {'inplace': False}
call_module layer1_1_conv2 layer1.1.conv2 (relu_4,) {}
call_function relu_5 <function relu at 0x000001368BBB10D0> (layer1_1_conv2,) {'inplace': False}
call_module layer1_1_conv3 layer1.1.conv3 (relu_5,) {}
call_function add_1 (layer1_1_conv3, relu_3) {}
call_function relu_6 <function relu at 0x000001368BBB10D0> (add_1,) {'inplace': False}
call_module layer1_2_conv1 layer1.2.conv1 (relu_6,) {}
call_function relu_7 <function relu at 0x000001368BBB10D0> (layer1_2_conv1,) {'inplace': False}
call_module layer1_2_conv2 layer1.2.conv2 (relu_7,) {}
call_function relu_8 <function relu at 0x000001368BBB10D0> (layer1_2_conv2,) {'inplace': False}
call_module layer1_2_conv3 layer1.2.conv3 (relu_8,) {}
call_function add_2 (layer1_2_conv3, relu_6) {}
call_function relu_9 <function relu at 0x000001368BBB10D0> (add_2,) {'inplace': False}
call_module layer2_0_conv1 layer2.0.conv1 (relu_9,) {}
call_function relu_10 <function relu at 0x000001368BBB10D0> (layer2_0_conv1,) {'inplace': False}
call_module layer2_0_conv2 layer2.0.conv2 (relu_10,) {}
call_function relu_11 <function relu at 0x000001368BBB10D0> (layer2_0_conv2,) {'inplace': False}
call_module layer2_0_conv3 layer2.0.conv3 (relu_11,) {}
call_module layer2_0_shortcut_0 layer2.0.shortcut.0 (relu_9,) {}
call_function add_3 (layer2_0_conv3, layer2_0_shortcut_0) {}
call_function relu_12 <function relu at 0x000001368BBB10D0> (add_3,) {'inplace': False}
call_module layer2_1_conv1 layer2.1.conv1 (relu_12,) {}
call_function relu_13 <function relu at 0x000001368BBB10D0> (layer2_1_conv1,) {'inplace': False}
call_module layer2_1_conv2 layer2.1.conv2 (relu_13,) {}
call_function relu_14 <function relu at 0x000001368BBB10D0> (layer2_1_conv2,) {'inplace': False}
call_module layer2_1_conv3 layer2.1.conv3 (relu_14,) {}
call_function add_4 (layer2_1_conv3, relu_12) {}
call_function relu_15 <function relu at 0x000001368BBB10D0> (add_4,) {'inplace': False}
call_module layer2_2_conv1 layer2.2.conv1 (relu_15,) {}
call_function relu_16 <function relu at 0x000001368BBB10D0> (layer2_2_conv1,) {'inplace': False}
call_module layer2_2_conv2 layer2.2.conv2 (relu_16,) {}
call_function relu_17 <function relu at 0x000001368BBB10D0> (layer2_2_conv2,) {'inplace': False}
call_module layer2_2_conv3 layer2.2.conv3 (relu_17,) {}
call_function add_5 (layer2_2_conv3, relu_15) {}
call_function relu_18 <function relu at 0x000001368BBB10D0> (add_5,) {'inplace': False}
call_module layer2_3_conv1 layer2.3.conv1 (relu_18,) {}
call_function relu_19 <function relu at 0x000001368BBB10D0> (layer2_3_conv1,) {'inplace': False}
call_module layer2_3_conv2 layer2.3.conv2 (relu_19,) {}
call_function relu_20 <function relu at 0x000001368BBB10D0> (layer2_3_conv2,) {'inplace': False}
call_module layer2_3_conv3 layer2.3.conv3 (relu_20,) {}
call_function add_6 (layer2_3_conv3, relu_18) {}
call_function relu_21 <function relu at 0x000001368BBB10D0> (add_6,) {'inplace': False}
call_module layer3_0_conv1 layer3.0.conv1 (relu_21,) {}
call_function relu_22 <function relu at 0x000001368BBB10D0> (layer3_0_conv1,) {'inplace': False}
call_module layer3_0_conv2 layer3.0.conv2 (relu_22,) {}
call_function relu_23 <function relu at 0x000001368BBB10D0> (layer3_0_conv2,) {'inplace': False}
call_module layer3_0_conv3 layer3.0.conv3 (relu_23,) {}
call_module layer3_0_shortcut_0 layer3.0.shortcut.0 (relu_21,) {}
call_function add_7 (layer3_0_conv3, layer3_0_shortcut_0) {}
call_function relu_24 <function relu at 0x000001368BBB10D0> (add_7,) {'inplace': False}
call_module layer3_1_conv1 layer3.1.conv1 (relu_24,) {}
call_function relu_25 <function relu at 0x000001368BBB10D0> (layer3_1_conv1,) {'inplace': False}
call_module layer3_1_conv2 layer3.1.conv2 (relu_25,) {}
call_function relu_26 <function relu at 0x000001368BBB10D0> (layer3_1_conv2,) {'inplace': False}
call_module layer3_1_conv3 layer3.1.conv3 (relu_26,) {}
call_function add_8 (layer3_1_conv3, relu_24) {}
call_function relu_27 <function relu at 0x000001368BBB10D0> (add_8,) {'inplace': False}
call_module layer3_2_conv1 layer3.2.conv1 (relu_27,) {}
call_function relu_28 <function relu at 0x000001368BBB10D0> (layer3_2_conv1,) {'inplace': False}
call_module layer3_2_conv2 layer3.2.conv2 (relu_28,) {}
call_function relu_29 <function relu at 0x000001368BBB10D0> (layer3_2_conv2,) {'inplace': False}
call_module layer3_2_conv3 layer3.2.conv3 (relu_29,) {}
call_function add_9 (layer3_2_conv3, relu_27) {}
call_function relu_30 <function relu at 0x000001368BBB10D0> (add_9,) {'inplace': False}
call_module layer3_3_conv1 layer3.3.conv1 (relu_30,) {}
call_function relu_31 <function relu at 0x000001368BBB10D0> (layer3_3_conv1,) {'inplace': False}
call_module layer3_3_conv2 layer3.3.conv2 (relu_31,) {}
call_function relu_32 <function relu at 0x000001368BBB10D0> (layer3_3_conv2,) {'inplace': False}
call_module layer3_3_conv3 layer3.3.conv3 (relu_32,) {}
call_function add_10 (layer3_3_conv3, relu_30) {}
call_function relu_33 <function relu at 0x000001368BBB10D0> (add_10,) {'inplace': False}
call_module layer3_4_conv1 layer3.4.conv1 (relu_33,) {}
call_function relu_34 <function relu at 0x000001368BBB10D0> (layer3_4_conv1,) {'inplace': False}
call_module layer3_4_conv2 layer3.4.conv2 (relu_34,) {}
call_function relu_35 <function relu at 0x000001368BBB10D0> (layer3_4_conv2,) {'inplace': False}
call_module layer3_4_conv3 layer3.4.conv3 (relu_35,) {}
call_function add_11 (layer3_4_conv3, relu_33) {}
call_function relu_36 <function relu at 0x000001368BBB10D0> (add_11,) {'inplace': False}
call_module layer3_5_conv1 layer3.5.conv1 (relu_36,) {}
call_function relu_37 <function relu at 0x000001368BBB10D0> (layer3_5_conv1,) {'inplace': False}
call_module layer3_5_conv2 layer3.5.conv2 (relu_37,) {}
call_function relu_38 <function relu at 0x000001368BBB10D0> (layer3_5_conv2,) {'inplace': False}
call_module layer3_5_conv3 layer3.5.conv3 (relu_38,) {}
call_function add_12 (layer3_5_conv3, relu_36) {}
call_function relu_39 <function relu at 0x000001368BBB10D0> (add_12,) {'inplace': False}
call_module layer4_0_conv1 layer4.0.conv1 (relu_39,) {}
call_function relu_40 <function relu at 0x000001368BBB10D0> (layer4_0_conv1,) {'inplace': False}
call_module layer4_0_conv2 layer4.0.conv2 (relu_40,) {}
call_function relu_41 <function relu at 0x000001368BBB10D0> (layer4_0_conv2,) {'inplace': False}
call_module layer4_0_conv3 layer4.0.conv3 (relu_41,) {}
call_module layer4_0_shortcut_0 layer4.0.shortcut.0 (relu_39,) {}
call_function add_13 (layer4_0_conv3, layer4_0_shortcut_0) {}
call_function relu_42 <function relu at 0x000001368BBB10D0> (add_13,) {'inplace': False}
call_module layer4_1_conv1 layer4.1.conv1 (relu_42,) {}
call_function relu_43 <function relu at 0x000001368BBB10D0> (layer4_1_conv1,) {'inplace': False}
call_module layer4_1_conv2 layer4.1.conv2 (relu_43,) {}
call_function relu_44 <function relu at 0x000001368BBB10D0> (layer4_1_conv2,) {'inplace': False}
call_module layer4_1_conv3 layer4.1.conv3 (relu_44,) {}
call_function add_14 (layer4_1_conv3, relu_42) {}
call_function relu_45 <function relu at 0x000001368BBB10D0> (add_14,) {'inplace': False}
call_module layer4_2_conv1 layer4.2.conv1 (relu_45,) {}
call_function relu_46 <function relu at 0x000001368BBB10D0> (layer4_2_conv1,) {'inplace': False}
call_module layer4_2_conv2 layer4.2.conv2 (relu_46,) {}
call_function relu_47 <function relu at 0x000001368BBB10D0> (layer4_2_conv2,) {'inplace': False}
call_module layer4_2_conv3 layer4.2.conv3 (relu_47,) {}
call_function add_15 (layer4_2_conv3, relu_45) {}
call_function relu_48 <function relu at 0x000001368BBB10D0> (add_15,) {'inplace': False}
call_module avgpool avgpool (relu_48,) {}
call_function flatten <built-in method flatten of type object at 0x00007FFF4D3E95E0> (avgpool, 1) {}
output output output (flatten,) {}
out_fr: tensor([[7.7281e-08, 2.3426e-01, 1.8818e-07, ..., 0.0000e+00, 5.0723e-08,
3.2106e-08],
[7.7271e-08, 2.7506e-02, 1.8817e-07, ..., 0.0000e+00, 5.0718e-08,
3.2107e-08],
[7.7284e-08, 1.0869e-02, 1.8817e-07, ..., 0.0000e+00, 5.0717e-08,
3.2106e-08],
...,
[7.7286e-08, 0.0000e+00, 1.8817e-07, ..., 0.0000e+00, 5.0728e-08,
3.2105e-08],
[7.7281e-08, 4.5656e-02, 1.8817e-07, ..., 0.0000e+00, 5.0728e-08,
3.2106e-08],
[7.7276e-08, 1.5018e-01, 1.8816e-07, ..., 0.0000e+00, 5.0734e-08,
3.2107e-08]], device='cuda:0')
...
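For comparison, here is a minimal sketch of the conversion and inference flow, roughly following the ann2snn tutorial's usage pattern. It assumes the same train_loader and model.encoder as in the description above; everything else (the IFNode check, the value of T) is illustrative, not taken from the issue. Note that the averaged output is a firing rate, so fractional values are expected even for a correctly converted model; the more telling symptom is whether any IF neurons exist at all.

```python
import torch
from spikingjelly.activation_based import ann2snn, neuron, functional

# Convert the trained ANN (same call as in the issue).
converter = ann2snn.Converter(mode='99.9%', dataloader=train_loader)
snn_model = converter(model.encoder)

# If the ANN uses nn.ReLU modules, the converted graph should contain IF neurons;
# with torch.nn.functional.relu the list below typically comes back empty.
if_nodes = [name for name, m in snn_model.named_modules()
            if isinstance(m, neuron.IFNode)]
print(f'found {len(if_nodes)} IFNode modules')

# Run the SNN for T timesteps and average the output into a firing rate.
T = 50
x, _ = next(iter(train_loader))
with torch.no_grad():
    out_fr = 0.
    for _ in range(T):
        out_fr = out_fr + snn_model(x)
    out_fr = out_fr / T
functional.reset_net(snn_model)  # reset membrane potentials before the next sample
print('out_fr:', out_fr)
```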
Minimal code to reproduce the error/bug