DubiousCactus · closed 2 years ago
For MiniImageNet, the CNN baseline should use 32 convolution filters per block, but the current implementation uses 64. This is even noted in the code:
```python
class ConvBase(torch.nn.Sequential):

    # NOTE:
    #     Omniglot: hidden=64, channels=1, no max_pool
    #     MiniImagenet: hidden=32, channels=3, max_pool

    def __init__(self, hidden=64, channels=1, max_pool=False, layers=4, max_pool_factor=1.0):
        ...
```
but it is not actually taken into account:
```python
class CNN4(torch.nn.Module):
    def __init__(
        self,
        output_size,
        hidden_size=64,
        layers=4,
        channels=3,
        max_pool=True,
        embedding_size=None,
    ):
        ...

MiniImagenetCNN = CNN4
```
ProtoNet uses 64 channels while MAML uses 32. From experience, MAML also works with 64, but ProtoNet doesn't work well with 32, which is why 64 is the default.
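For anyone wanting to reproduce the original MAML MiniImageNet setup, you can pass `hidden_size=32` explicitly rather than relying on the default. Below is a minimal, self-contained sketch of the standard 4-layer few-shot backbone (`CNN4Sketch` and `ConvBlock` are hypothetical names, not learn2learn's actual classes) that shows how the hidden size controls the filter count per block:

```python
import torch


class ConvBlock(torch.nn.Module):
    # One block of the standard few-shot CNN backbone:
    # 3x3 conv -> batch norm -> ReLU -> 2x2 max-pool.
    def __init__(self, in_channels, out_channels, max_pool=True):
        super().__init__()
        layers = [
            torch.nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
            torch.nn.BatchNorm2d(out_channels),
            torch.nn.ReLU(),
        ]
        if max_pool:
            layers.append(torch.nn.MaxPool2d(2))
        self.block = torch.nn.Sequential(*layers)

    def forward(self, x):
        return self.block(x)


class CNN4Sketch(torch.nn.Module):
    # Hypothetical stand-in for CNN4 with hidden_size exposed, so the
    # MAML MiniImageNet value (32) can be chosen instead of the default 64.
    def __init__(self, output_size, hidden_size=32, layers=4, channels=3):
        super().__init__()
        blocks = [ConvBlock(channels, hidden_size)]
        blocks += [ConvBlock(hidden_size, hidden_size) for _ in range(layers - 1)]
        self.features = torch.nn.Sequential(*blocks)
        # 84x84 MiniImageNet input, pooled 4 times -> 5x5 spatial map.
        self.classifier = torch.nn.Linear(hidden_size * 5 * 5, output_size)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))


# MAML-style 5-way MiniImageNet configuration with 32 filters per block.
model = CNN4Sketch(output_size=5, hidden_size=32)
```

Since `MiniImagenetCNN = CNN4` inherits `hidden_size=64`, the same explicit-argument approach applies there as well.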