Hi! Nice paper! I ran into an issue while trying to run the `train_supernet_lite` code on my setup. `SuperConv2d` outputs a variable number of channels (it does not depend on the `out_channels` argument), while `norm_layer`'s channel count is always constant. So when I run `SuperMobileResnetGenerator`, I always get a `running_mean` error because the number of channels in BatchNorm and `SuperConv2d` differ. How is this supposed to work?
```python
from models.modules.resnet_architecture.super_mobile_resnet_generator import SuperMobileResnetGenerator
from configs.resnet_configs import get_configs as get_super_configs
import torch

net = SuperMobileResnetGenerator(4, 3, ngf=64, n_blocks=9)
x = torch.rand(1, 4, 256, 256)
super_config = get_super_configs("channels-64-pix2pix")
net.configs = super_config.sample()
y = net(x)
```
This raises an error because the BatchNorm layer has 64 channels while the output of `SuperConv2d` has 48: `RuntimeError: running_mean should contain 48 elements not 64`.
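For what it's worth, I'd expect the norm layer to slice its parameters and running statistics down to whatever width the conv actually emitted, the way slimmable networks do. Below is a minimal self-contained sketch of that idea; the class names, the `out_channels` forward argument, and the slicing logic are my assumptions for illustration, not the repo's actual implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SuperConv2d(nn.Conv2d):
    """Sketch: a conv built at full width that slices its weight
    to a sampled output width at forward time (hypothetical API)."""
    def forward(self, x, out_channels):
        weight = self.weight[:out_channels, :x.size(1)]
        bias = self.bias[:out_channels] if self.bias is not None else None
        return F.conv2d(x, weight, bias, self.stride, self.padding)

class SuperBatchNorm2d(nn.BatchNorm2d):
    """Sketch: a batch norm built at full width that slices its
    affine parameters and running stats to match the input width."""
    def forward(self, x):
        c = x.size(1)
        return F.batch_norm(
            x,
            self.running_mean[:c], self.running_var[:c],
            self.weight[:c], self.bias[:c],
            self.training, self.momentum, self.eps)

conv = SuperConv2d(16, 64, kernel_size=3, padding=1)
bn = SuperBatchNorm2d(64)  # allocated at full width, shrinks per forward
x = torch.rand(1, 16, 32, 32)
y = bn(conv(x, out_channels=48))  # 48 <= 64: no running_mean size error
print(y.shape)  # torch.Size([1, 48, 32, 32])
```

In the real code the sampled config (`net.configs`) would presumably supply each layer's width; the point is only that the norm layer has to shrink together with the conv instead of staying at `ngf`.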