sshan-zhao / ACMNet

Adaptive Context-Aware Multi-Modal Network for Depth Completion

Not using BN? #1

Closed mzy97 closed 4 years ago

mzy97 commented 4 years ago

Thank you for sharing your work.

In your model, Conv + ReLU forms a conv block, but the common practice is Conv + BN + ReLU. Why don't you use BN anywhere in the network?

sshan-zhao commented 4 years ago


Hi. Actually, I tried using BatchNorm, but I got worse results. By the way, some other works also omit it.
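For reference, a minimal sketch of the two block variants being compared, in plain PyTorch. The helper name `conv_block` and its arguments are illustrative, not the actual ACMNet code:

```python
import torch.nn as nn

def conv_block(in_ch, out_ch, stride=1, use_bn=False):
    """Conv + (optional BN) + ReLU. The thread says ACMNet skips BN, i.e. use_bn=False."""
    layers = [nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride,
                        padding=1, bias=not use_bn)]
    if use_bn:
        layers.append(nn.BatchNorm2d(out_ch))   # the Conv + BN + ReLU variant the question asks about
    layers.append(nn.ReLU(inplace=True))
    return nn.Sequential(*layers)
```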

mzy97 commented 4 years ago

Another question: in your code here [image], if downsample is set to True, the output is 4x smaller spatially than the input. But in the graph attention part, the 1/2-size feature maps are enhanced, and in the end the 1/2 and 1/4 spatial size feature maps are added. Am I missing some key operation?

Also, how do you decide the number of samples used by the KNN graph attention operation? And are the samples and KNN points all observed points (i.e. valid points in the input)?

sshan-zhao commented 4 years ago


"it will get 4x smaller spatial size than the input." => how? "how to decide the number of samples used by knn graph attention operation" => pls refer to the ablation study

mzy97 commented 4 years ago


If downsample is set to True, the stride variable will be 2, and each modality branch applies two stride-2 conv layers (e.g. d_conv0 and d_conv1), so the output is 1/4 the input size. But the graph operation operates at the 1/2 spatial scale.

sshan-zhao commented 4 years ago

The two operations are applied to the input respectively.
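To make the resolution point concrete, here is a minimal sketch (not the actual ACMNet code; channel counts are made up) contrasting the two readings, with the layer names borrowed from the thread:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 16, 64, 64)                        # dummy feature map

# names borrowed from the thread; channel counts are illustrative
d_conv0 = nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1)
d_conv1 = nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1)

# Reading in the question: the two convs are chained -> 1/4 resolution
print(d_conv1(d_conv0(x)).shape)                      # torch.Size([1, 16, 16, 16])

# Reading in the reply: each conv is applied to the input respectively,
# so both outputs stay at 1/2 resolution
print(d_conv0(x).shape, d_conv1(x).shape)             # both 32x32 spatially
```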
