linchundan88 closed this issue 4 years ago.
Keras code:

```python
x = GlobalAveragePooling3D()(x)
x = Dense(1024, activation='relu')(x)
```

Why does the PyTorch version use two dense layers?

```python
self.dense_1 = nn.Linear(512, 1024, bias=True)
self.dense_2 = nn.Linear(1024, n_class, bias=True)
```

Also, shouldn't `view(self.base_out.size( [0],-1)` be `view(self.base_out.size(0), -1)`?
The Keras code also uses two dense layers between the output of the GlobalAveragePooling3D layer and the final output:

```python
x = GlobalAveragePooling3D()(x)
x = Dense(1024, activation='relu')(x)
output = Dense(num_class, activation=activate)(x)
```
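For comparison, here is a minimal sketch of how the same head looks in PyTorch, assembled from the snippets in this thread (the `ClassificationHead` wrapper, the `in_channels=512` default, and the placeholder `n_class` value are assumptions for illustration, not the repo's exact code):

```python
import torch.nn as nn
import torch.nn.functional as F

class ClassificationHead(nn.Module):
    """Global average pooling followed by two dense layers,
    mirroring the Keras snippet above."""
    def __init__(self, in_channels=512, n_class=2):
        super().__init__()
        self.dense_1 = nn.Linear(in_channels, 1024, bias=True)  # ~ Dense(1024, activation='relu')
        self.dense_2 = nn.Linear(1024, n_class, bias=True)      # ~ Dense(num_class, activation=activate)

    def forward(self, x):
        # x: (N, C, D, H, W) feature map from the encoder
        x = F.avg_pool3d(x, kernel_size=x.size()[2:])  # ~ GlobalAveragePooling3D()
        x = x.view(x.size(0), -1)                      # flatten to (N, C)
        x = F.relu(self.dense_1(x))                    # ReLU matches the Keras activation
        return self.dense_2(x)                         # final activation left to the loss/caller
```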
> `view(self.base_out.size( [0],-1)` should be `view(self.base_out.size(0), -1)`?
It's actually missing the other half of a parenthesis. It should be `.view(self.classification_feature.size()[0], -1)`. Thank you for pointing it out; it's been fixed now.
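As a side note on the two spellings in this thread: `t.size()[0]` and `t.size(0)` return the same value, so either form works in the `view` call. A tiny sketch (shapes chosen arbitrarily):

```python
import torch

t = torch.randn(4, 512, 1, 1, 1)   # e.g. a pooled (N, C, 1, 1, 1) feature
flat_a = t.view(t.size()[0], -1)   # the fixed form from the reply above
flat_b = t.view(t.size(0), -1)     # equivalent: size(0) returns the same int
assert flat_a.shape == flat_b.shape == (4, 512)
```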
Yes, I overlooked the second dense layer in your Keras sample code.
The demo in `ModelsGenesis/tree/master/pytorch` has:

```python
self.out_glb_avg_pool = F.avg_pool3d(self.base_out, kernel_size=self.base_out.size()[2:]).view(self.base_out.size( [0],-1)
```

What is `view(self.base_out.size( [0],-1)` supposed to be?
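With the missing half of the parenthesis restored (using the `size()[0]` form from the fix above), the line would read as below; this is a sketch against the quoted line, and the repo's actual fix may use a different variable name, per the reply above:

```python
# Corrected version of the quoted line: close the size() call,
# then flatten the pooled (N, C, 1, 1, 1) tensor to (N, C).
self.out_glb_avg_pool = F.avg_pool3d(
    self.base_out, kernel_size=self.base_out.size()[2:]
).view(self.base_out.size()[0], -1)
```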