I think it's a general problem whenever the input to a functional layer is dynamic.
I had a situation where a functional avg_pool3d call used a kernel size that depended on the shape of the previous layer's output. One has to either make the kernel size constant or switch to PyTorch's non-functional (module) API.
Does anybody know how I can make the kernel size static here?
import torch
import torch.nn as nn

class GeM(nn.Module):
    def __init__(self, p=3, eps=1e-6):
        super(GeM, self).__init__()
        # p is a learnable pooling exponent; eps keeps pow() away from zeros
        self.p = nn.Parameter(torch.ones(1) * p)
        self.eps = eps
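For context, the usual GeM forward pass (not shown in the snippet above) typically calls F.avg_pool2d with kernel_size=(x.size(-2), x.size(-1)), i.e. exactly the kind of shape-dependent kernel described above. Below is a minimal sketch (my own illustration, not from the original post) of the "make the kernel constant" option, using the 2D case to match the snippet and assuming the backbone's feature-map size is known ahead of time; the name GeMStatic and the (7, 7) default are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GeMStatic(nn.Module):
    # Hypothetical GeM variant: the kernel size is fixed at construction time
    # instead of being derived from the input tensor's shape in forward().
    def __init__(self, kernel_size=(7, 7), p=3, eps=1e-6):
        super(GeMStatic, self).__init__()
        self.kernel_size = kernel_size  # assumed known feature-map size
        self.p = nn.Parameter(torch.ones(1) * p)
        self.eps = eps

    def forward(self, x):
        # Generalized-mean pooling with a constant kernel, so the pooling call
        # no longer depends on x.size(-2) / x.size(-1) at runtime.
        return F.avg_pool2d(x.clamp(min=self.eps).pow(self.p),
                            self.kernel_size).pow(1.0 / self.p)

For a 7x7 feature map, GeMStatic()(torch.randn(2, 512, 7, 7)) returns a (2, 512, 1, 1) tensor. If the feature-map size can't be fixed, the other route mentioned above is the module API, e.g. keeping an nn.AdaptiveAvgPool2d(1) submodule instead of a functional call in forward().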