In the TensorFlow implementation, the temporal_conv_layer output is the product of the convolution output and a sigmoid gate:
return (x_conv[:, :, :, 0:c_out] + x_input) * tf.nn.sigmoid(x_conv[:, :, :, -c_out:])
In this PyTorch implementation, however, the convolution and the sigmoid gate are combined by addition; could you explain why the add operation is used here instead of the product?
temp = self.conv1(X) + torch.sigmoid(self.conv2(X))  # gate is added, not multiplied
out = F.relu(temp + self.conv3(X))  # residual via a third convolution