Closed: Cold-Winter closed this issue 5 years ago.
Training the BagNets from scratch is very simple: just use the standard PyTorch ImageNet training code (https://github.com/pytorch/examples/tree/master/imagenet) and load the BagNet as the model. My own code is adapted to the infrastructure we use, so I'd guess this is the most direct way.
Thanks for your quick reply. By the way, can you please tell me how to do inference at test time? Given an image, how can we get the logits for each class?
The models are standard PyTorch and Keras models, so they return logits by default.
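To make this concrete, here is a minimal inference sketch. The tiny `Sequential` below is only a stand-in so the snippet runs on its own; with the real models you would instead construct a BagNet from `bagnets/pytorchnet.py` in this repo.

```python
import torch
import torch.nn as nn

# Minimal inference sketch: the model maps a preprocessed image straight to
# logits. This tiny Sequential is only a stand-in; the real BagNet
# constructors live in bagnets/pytorchnet.py of this repo.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 1000),
)
model.eval()

image = torch.randn(1, 3, 224, 224)   # a preprocessed input image
with torch.no_grad():
    logits = model(image)             # (1, 1000): one logit per class
probs = torch.softmax(logits, dim=1)  # class probabilities, if needed
```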
Is it only applicable to 2D images? Is it possible to extend it to 3D voxel data? Thanks for your great work.
@wielandbrendel, The inference code creates all possible patches. Should this also be the case when training BagNet? Also, I still need a patch generation and heatmap averaging layer in addition to the above PyTorch script for training BagNet, right? The paper is deceptively simple (really cool)! So, I just want to make sure that I haven't misunderstood anything.
@xuzhang5788 The same approach is applicable to 3D voxel data. @chigur There is no patch generation. BagNets are standard ResNet-50s in which many 3x3 convolutions are replaced by 1x1 convolutions, which limits the receptive field size of the top-most convolutional layer.
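The effect of swapping 3x3 convolutions for 1x1 convolutions can be checked with the standard receptive-field recurrence. The two layer stacks below are illustrative, not the exact BagNet configuration:

```python
# Receptive field of a conv stack via the standard recurrence:
# rf grows by (k - 1) * jump per layer, where jump is the product of strides
# so far. The layer lists are illustrative, not the exact BagNet config.
def receptive_field(layers):
    """layers: list of (kernel_size, stride); returns the number of input
    pixels (per dimension) seen by one output unit."""
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

resnet_like = [(7, 2), (3, 2), (3, 1), (3, 1), (3, 2), (3, 1)]
bagnet_like = [(7, 2), (3, 2), (1, 1), (1, 1), (3, 2), (1, 1)]  # 3x3 -> 1x1
print(receptive_field(resnet_like), receptive_field(bagnet_like))  # 51 19
```

Each 3x3 conv replaced by a 1x1 conv contributes nothing to the receptive field, which is how BagNets cap the top-most layer at a small patch size.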
Aah, I see. Thanks for the prompt clarification. I was confused because the generate_heatmap_pytorch method creates patches.
The generate_heatmap_pytorch function is only for the heatmap visualisation, because there we want a denser sampling (normal BagNets have a stride of 8).
@wielandbrendel Thank you for your response. Would you mind telling us how to apply it to 3D voxel data?
The principle behind BagNets is applicable to 3D voxel data, but of course you would have to modify an architecture designed for 3D voxel data. In any case, this discussion is unrelated to this issue and so I am closing this thread for now. Feel free to open a new issue.
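As a rough illustration of that principle (this is not from the paper, just a hedged sketch of what a 3D analogue could look like): use Conv3d layers where most kernels are 1x1x1 to keep the receptive field small, then average the per-location class evidence over space.

```python
import torch
import torch.nn as nn

# Hypothetical BagNet-style 3D sketch: a single 3x3x3 conv followed by
# 1x1x1 convs keeps the receptive field small; class evidence is then
# averaged over all spatial locations.
model = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=3),
    nn.ReLU(),
    nn.Conv3d(16, 16, kernel_size=1),  # 1x1x1: does not grow the receptive field
    nn.ReLU(),
    nn.Conv3d(16, 10, kernel_size=1),  # per-location class logits
)
voxels = torch.randn(1, 1, 16, 16, 16)
local_logits = model(voxels)               # (1, 10, 14, 14, 14)
logits = local_logits.mean(dim=(2, 3, 4))  # (1, 10): averaged class evidence
```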
@Cold-Winter how did you manage to train from scratch? The code here (https://github.com/pytorch/examples/tree/master/imagenet) only offers very specific models to train (ResNet etc.); how is it possible to train a custom model? I am new to PyTorch, so I would very much appreciate your help. Thanks!
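The ImageNet example creates its model via `models.__dict__[args.arch]()`, and any `nn.Module` can be substituted at that point; the rest of the script (DataLoader, CrossEntropyLoss, SGD loop) is unchanged. With the `bagnets` package from this repo importable you would write `from bagnets.pytorchnet import bagnet33; model = bagnet33()` (import path assumed from the repo layout). To show that the training loop only needs an `(N, 3, H, W) -> (N, C)` module, here is a toy custom model with a BagNet-like spatial average:

```python
import torch
import torch.nn as nn

# Toy custom model: per-location class logits from small-receptive-field
# convs, averaged over space. Any module with this input/output contract can
# replace the `model = models.__dict__[args.arch]()` line in main.py.
class ToyBagNet(nn.Module):
    def __init__(self, num_classes=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3), nn.ReLU(),
            nn.Conv2d(16, num_classes, kernel_size=1),  # per-location logits
        )

    def forward(self, x):
        return self.features(x).mean(dim=(2, 3))  # average class evidence

model = ToyBagNet()
logits = model(torch.randn(2, 3, 64, 64))  # (2, 1000), fed to CrossEntropyLoss
```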
Hi, how are you doing the training on patches? Are you taking q x q pixel patches and then training them through the ResNet-50 model? From fig. 1 of your paper, I thought you were training patch-wise. If not, then I don't understand what the summation block is doing during inference, before the softmax. I assumed it is the summation of logits for each class over all the patches. Please help me understand.
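A sketch of the summation block asked about above: the network produces class logits at every spatial location, and these are averaged (equivalently, summed up to a constant factor) over space before the softmax. The shapes below are illustrative:

```python
import torch

# BagNet-style readout: per-location class logits are averaged over the
# spatial grid, then a softmax turns the pooled logits into probabilities.
local_logits = torch.randn(1, 24, 24, 1000)   # (batch, h, w, num_classes)
image_logits = local_logits.mean(dim=(1, 2))  # (1, 1000): pooled class evidence
probs = torch.softmax(image_logits, dim=1)
```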
@wielandbrendel Are your BagNets only for the ImageNet dataset? I want to get my own BagNet weights with my own dataset, so I need to train from scratch. Although you said that you changed several 3x3 convs to 1x1 convs in ResNet, I don't know which 3x3 convs. Could you please show your original model, i.e. the modified version of ResNet? Many thanks.
@xuzhang5788 You can find the full model description in https://github.com/wielandbrendel/bag-of-local-features-models/blob/master/bagnets/pytorchnet.py.
Great work, it gave me a lot of insight. Could you please release the training code (so I can train a BagNet myself)?