Open duggalrahul opened 7 years ago
raise ValueError(err.message)
ValueError: Negative dimension size caused by subtracting 11 from 3 for 'conv_1/convolution' (op: 'Conv2D') with input shapes: [?,3,227,227], [11,11,227,96].
Do you know how to debug this? It occurred when I tried to use AlexNet.
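The error message itself can be reproduced with the standard "valid" convolution output-size formula. With a channels-first tensor `(3, 227, 227)` interpreted by TensorFlow as height-width-channels, the "height" becomes 3, which is smaller than the 11x11 kernel. A minimal sketch (the helper function below is illustrative, not part of any library):

```python
def conv_output_size(input_size, kernel_size, stride=1, padding=0):
    # Standard "valid"-style convolution output formula:
    # out = (in + 2*pad - kernel) // stride + 1
    return (input_size + 2 * padding - kernel_size) // stride + 1

# TF reads [?, 3, 227, 227] as NHWC, so the spatial height is 3.
# An 11x11 kernel with stride 4 cannot fit, hence the negative dimension:
print(conv_output_size(3, 11, stride=4))    # negative -> the ValueError above

# With the correct channels-last shape, height is 227 and the layer works:
print(conv_output_size(227, 11, stride=4))
```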
Hi @yueseW. I'm not sure which code you are referring to. If you're interested in performing transfer learning using AlexNet, you can have a look at my project. If this does not help, then please post the code that you are trying to run.
Hi @yueseW
I guess it's because you are using Keras with the TensorFlow backend, so you have to change the input shape to channels-last (w, h, ch) instead of the default channels-first (ch, w, h). For example, in AlexNet here:
def AlexNet(weights_path=None, heatmap=False):
    if heatmap:
        inputs = Input(shape=(3, None, None))
    else:
        inputs = Input(shape=(3, 227, 227))
You have to change this to:
def AlexNet(weights_path=None, heatmap=False):
    if heatmap:
        inputs = Input(shape=(None, None, 3))
    else:
        inputs = Input(shape=(227, 227, 3))
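The two variants above can also be unified so the same code works under either backend convention. The helper below is a hypothetical sketch (not from this repository); in real Keras code the `data_format` string would come from `keras.backend.image_data_format()`:

```python
def alexnet_input_shape(data_format, heatmap=False):
    """Pick the AlexNet Input shape for the given Keras image data format.

    data_format: "channels_first" (Theano-style, ch first) or
                 "channels_last" (TensorFlow-style, ch last).
    """
    if data_format == "channels_first":
        return (3, None, None) if heatmap else (3, 227, 227)
    # channels_last: spatial dims first, channels last
    return (None, None, 3) if heatmap else (227, 227, 3)
```

Then `inputs = Input(shape=alexnet_input_shape(K.image_data_format(), heatmap))` would select the right ordering automatically.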
Hi,
First of all, many thanks for creating this library!
When training the AlexNet architecture from scratch on a medical imaging dataset, I get ~90% accuracy. Now I want to use the pre-trained weights and do fine-tuning. The problem I am facing is explained below:
While training AlexNet from scratch, the only pre-processing I did was to divide the pixel values by 255, so they lay in [0, 1]. The model converged beautifully while training.
While using the pre-trained weights, I've performed channel-wise mean subtraction as specified in the code. However, the model fails to converge. My question is: do I need to scale the pixels (by 255) after performing the mean subtraction?
It would be helpful if someone could explain the exact pre-processing steps that were carried out while training on the original images from ImageNet.
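For reference, Caffe-ported ImageNet weights (which AlexNet ports of this kind typically are) usually expect pixels left in the [0, 255] range with per-channel means subtracted and the channels in BGR order, with no further /255 scaling. A minimal sketch of that convention, assuming the widely used Caffe BGR mean values (an assumption about this port, not confirmed in this thread):

```python
import numpy as np

# Commonly used Caffe ImageNet per-channel means, in BGR order.
# ASSUMPTION: these are the means this particular port was trained with.
IMAGENET_MEANS_BGR = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def preprocess(img_rgb):
    """img_rgb: float32 array of shape (H, W, 3) with values in [0, 255]."""
    img = img_rgb[..., ::-1].astype(np.float32)  # RGB -> BGR channel order
    return img - IMAGENET_MEANS_BGR              # channel-wise mean subtraction
```

Under this convention the answer to the question above would be "no": the mean-subtracted values are fed to the network directly, without dividing by 255. Mixing the two schemes (training with [0, 1] inputs, fine-tuning with mean-subtracted [0, 255] inputs) is a plausible reason for the convergence gap described.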