lukemelas / EfficientNet-PyTorch

A PyTorch implementation of EfficientNet
Apache License 2.0

Use it as a backbone for different purposes? #246

Open ertugrulsmz opened 4 years ago

ertugrulsmz commented 4 years ago

I would like to use EfficientNet as a backbone for object detection, but none of my attempts have worked. What do you suggest? (With the ResNet implementation, this kind of snippet works ...)

```python
import torch
import torch.nn as nn
from torch.autograd import Variable
from efficientnet_pytorch import EfficientNet

model = EfficientNet.from_pretrained('efficientnet-b6')
modules = list(model.children())[:-2]  # remove the last 2 layers; only the feature extractor is needed
modelbackbone = nn.Sequential(*modules)

img = torch.Tensor(3, 299, 299).normal_()  # random image
img = torch.unsqueeze(img, 0)              # add batch dimension
img_var = Variable(img)                    # wrap it in a Variable
features_var = modelbackbone(img_var)
```

ERROR : forward() takes 1 positional argument but 2 were given

wangdomg commented 3 years ago


I met the same problem, have you solved it?

ertugrulsmz commented 3 years ago


> I met the same problem, have you solved it?

I ended up using a different network instead of this one, but there is a way to make the model give the right output shape.

You can try something like this:

```python
import torch.nn as nn
from efficientnet_pytorch import EfficientNet

backbone = EfficientNet.from_pretrained('efficientnet-b2', include_top=False)
a = list(backbone.children())[0]
b = list(backbone.children())[1]
c = list(backbone.children())[2]
cx = nn.Sequential(*list(c.children()))

backbone_t = nn.Sequential(*[a, b, cx])
```

Here I have used b2, but the number of children can change between variants, so check it for the model you use. Also inspect a, b, and c; in a different variant the blocks you need might live one level deeper (under an extra child). A quick sanity check of the output shape is sketched below.
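For reference, a minimal shape check of the assembled backbone (a sketch; the 224×224 input and the printed shape are only illustrative and depend on the variant):

```python
import torch

# Run a dummy batch through the assembled backbone to inspect the feature map shape.
x = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    features = backbone_t(x)
print(features.shape)  # roughly torch.Size([1, 352, 7, 7]) for efficientnet-b2; channels differ per variant
```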

avishka40 commented 3 years ago

Hi, I have been trying your approach and it seems to work, but I have a few doubts I would like to clarify when porting this as a backbone. Is it possible to contact you in any way, @ertugrulsmz? I would really appreciate it, since I am a beginner in data science.

ertugrulsmz commented 3 years ago

Hey @avishka40, as I said I used different networks rather than this one as a design choice, but if you email me and show me where you're stuck, maybe I can help. Gmail: ertugrulsmz55@gmail.com

Emmunaf commented 3 years ago

This seems to be solved by this. Just answered this on SO. If you want to use the model as a feature extractor, you can just use the include_top parameter in the constructor.
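A minimal sketch of that approach (the printed shapes are illustrative and depend on the variant and input size):

```python
import torch
from efficientnet_pytorch import EfficientNet

# Build the network without applying the classification head in forward().
backbone = EfficientNet.from_pretrained('efficientnet-b0', include_top=False)
backbone.eval()

x = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    pooled = backbone(x)                        # globally pooled features
    feature_map = backbone.extract_features(x)  # spatial feature map before pooling

print(pooled.shape)       # e.g. torch.Size([1, 1280, 1, 1]) for b0
print(feature_map.shape)  # e.g. torch.Size([1, 1280, 7, 7]) for b0 with a 224x224 input
```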

veeara282 commented 3 years ago

I would use EfficientNet.extract_endpoints(). It returns the intermediate values of the neural network before each reduction in image size (224 × 224 → 112 × 112, etc.). You can use the intermediate values (either in the middle or toward the end of the network) to build a segmentation branch.

I'm trying to build a segmentation branch on top of a middle layer using extract_endpoints(), but I need to know the number of channels, width, and height of each of the endpoints so I can build convolutional layers of the right size. Is there a method I can use to get the dimensions without generating the output itself?
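For what it's worth, a small sketch of extract_endpoints (the endpoint names and count may differ between versions); one easy way to get the dimensions is simply to run a single dummy input through it and print the shapes:

```python
import torch
from efficientnet_pytorch import EfficientNet

model = EfficientNet.from_name('efficientnet-b0')
model.eval()

# Run one dummy input to inspect the shape of every endpoint.
x = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    endpoints = model.extract_endpoints(x)

for name, tensor in endpoints.items():
    print(name, tuple(tensor.shape))
# e.g. reduction_1 (1, 16, 112, 112), reduction_2 (1, 24, 56, 56), ... for b0
```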

dovietchinh commented 3 years ago

I got the same problem. Here is my code, you can try it and see the result:

```python
import torch
from efficientnet_pytorch import EfficientNet

class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.backbone = EfficientNet.from_name('efficientnet-b7')
        self.pool = torch.nn.AvgPool2d(4)
        self.dropout = torch.nn.Dropout(0.4)
        self.linear1 = torch.nn.Linear(2560, 512)
        self.linear2 = torch.nn.Linear(512, 8)

    def forward(self, x):
        out = self.backbone.extract_features(x)
        out = self.pool(out)
        out = torch.flatten(out, 1, -1)
        out = self.dropout(out)  # was self.drop, which is not defined
        out = self.linear1(out)
        out = self.linear2(out)
        return out
```
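A quick way to check the module above end to end (a sketch; 112×112 is just an input size for which the 4×4 average pool reduces the b7 feature map to 1×1):

```python
model = Model()
model.eval()

with torch.no_grad():
    out = model(torch.rand(1, 3, 112, 112))
print(out.shape)  # expected: torch.Size([1, 8])
```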

If you want to get the shape of the last convolution:

```python
import torch
from efficientnet_pytorch import EfficientNet

input_shape = (1, 3, 112, 112)
x = torch.rand(input_shape)

backbone = EfficientNet.from_name('efficientnet-b7')
y = backbone.extract_features(x)
print(y.shape)
```

I hope my solution can help you.

itzAmirali commented 3 years ago

I had the same problem, and using include_top=False was not helping me. The reason is that even with that flag, all the final layers are still created, and they show up in the model's parameters.

For anyone else who has this problem, you can use my edited fork, where setting include_top=False actually gets rid of those layers, so you can use the model as a backbone.
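If you would rather not use a fork, one possible workaround is to swap the classifier head out for a no-op after construction. This is only a sketch and assumes the upstream attribute name _fc, which could change between versions:

```python
import torch.nn as nn
from efficientnet_pytorch import EfficientNet

model = EfficientNet.from_pretrained('efficientnet-b0', include_top=False)

# Assumes the upstream model stores its classifier head in `_fc`;
# replacing it with Identity drops its weights from the parameter list.
model._fc = nn.Identity()

print(sum(p.numel() for p in model.parameters()))  # head weights no longer counted
```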