onnx / tutorials

Tutorials for creating and using ONNX models
Apache License 2.0

I think it's a general problem when the input to the functional layer is dynamic. #248

Open khadijabef opened 3 years ago

khadijabef commented 3 years ago

I think it's a general problem when the input to a functional layer is dynamic. I had a situation where a functional avg_pool3d kernel depended on the shape of the previous layer's outputs. One has to either make the kernel size constant or switch to PyTorch's non-functional (module) API.

Does anybody know how I can make the kernel size static here?

class GeM(nn.Module):
    def __init__(self, p=3, eps=1e-6):
        super(GeM, self).__init__()
        self.p = nn.Parameter(torch.ones(1) * p)
        self.eps = eps

    def forward(self, x):
        return self.gem(x, p=self.p, eps=self.eps)

    def gem(self, x, p=3, eps=1e-6):
        return F.avg_pool2d(x.clamp(min=eps).pow(p), (x.size(-2), x.size(-1))).pow(1. / p)

    def __repr__(self):
        return self.__class__.__name__ + '(' + 'p=' + '{:.4f}'.format(self.p.data.tolist()[0]) + ', ' + 'eps=' + str(self.eps) + ')'
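One possible workaround (a sketch, not from the original poster) is to replace the dynamic `F.avg_pool2d` call with `nn.AdaptiveAvgPool2d(1)`. Averaging over the whole spatial extent with a kernel of `(x.size(-2), x.size(-1))` is exactly global average pooling, and `AdaptiveAvgPool2d(1)` is typically exported to ONNX as a `GlobalAveragePool` node, so no concrete spatial size gets baked into the graph. The class name `GeMStatic` below is just for illustration:

```python
# Sketch of a GeM variant that avoids a dynamic kernel size by using
# adaptive average pooling, which is shape-agnostic at export time.
import torch
import torch.nn as nn


class GeMStatic(nn.Module):
    def __init__(self, p=3, eps=1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.ones(1) * p)
        self.eps = eps
        # AdaptiveAvgPool2d(1) averages over the full H x W extent,
        # equivalent to avg_pool2d with kernel (H, W) for any input size.
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        # Same math as the original gem(): clamp, raise to p, average, root.
        return self.pool(x.clamp(min=self.eps).pow(self.p)).pow(1.0 / self.p)


x = torch.randn(2, 8, 7, 5)
y = GeMStatic()(x)
print(y.shape)  # one value per channel: torch.Size([2, 8, 1, 1])
```

Numerically this should match the original `gem()` on any input, since both reduce the full spatial window; only the way the window size is specified changes, which is what the tracer cares about.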