doaaobeidat opened this issue 3 months ago
@ZFTurbo
No, the weights were just converted from the 2D variant. You can train without freezing, I guess.
@ZFTurbo Yes, you're right. I tried both options and found I have to unfreeze. Thank you for your reply.
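For reference, in Keras-style models (which this library appears to use, given the `Classifiers.get` API below), freezing and unfreezing is controlled per layer via the `trainable` attribute rather than PyTorch's `requires_grad`. A minimal sketch of the fine-tuning pattern, using stand-in layer objects (the `SimpleNamespace` layers and the helper name are illustrative, not the real API):

```python
from types import SimpleNamespace

# Stand-in for a list of Keras layers; real Keras layers expose the
# same boolean `trainable` attribute used here.
layers = [SimpleNamespace(name=f"layer_{i}", trainable=True) for i in range(5)]

def freeze_all_but_last(layers, n_unfrozen=1):
    """Freeze every layer except the last `n_unfrozen` (common fine-tuning pattern)."""
    for layer in layers[:-n_unfrozen]:
        layer.trainable = False
    for layer in layers[-n_unfrozen:]:
        layer.trainable = True
    return layers

freeze_all_but_last(layers, n_unfrozen=2)
```

On a real model you would iterate over `model.layers` the same way, then recompile before training.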
Thank you in advance. I have a related question: when loading models, do I need to specify particular input shape values for each model to get the ImageNet weights? If so, could you please list the required input image shape for each model? For example: Model1 -> image shape [x, y, z, 3]
while doing:

```python
model3d, preprocess_input = Classifiers.get('efficientnetv2-b0')
model = model3d(input_shape=(x, y, z, 3), include_top=False, weights='imagenet', pooling='max')
```
Like in your paper, where you used the dense model with [96, 128, 128, 3].
If you don't include the top, you can use any input shape (with some limitations, e.g. each dimension divisible by 32).
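A small helper to sanity-check the divisibility constraint mentioned above (the function name and the default multiple of 32 are illustrative; the exact constraint can vary per architecture):

```python
def valid_input_shape(shape, multiple=32):
    """Return True if every spatial dimension is divisible by `multiple`.

    The last entry of `shape` is the channel count and is ignored.
    """
    return all(d % multiple == 0 for d in shape[:-1])

valid_input_shape((128, 128, 128, 3))  # True: 128 is divisible by 32
valid_input_shape((96, 128, 100, 3))   # False: 100 is not divisible by 32
```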
Yes, I understand that, but I want to confirm whether the ImageNet weights are loaded correctly for my custom input shape. This way, I can fine-tune the model afterward instead of training it from scratch.
The weights do not depend on the input shape. They were just converted from the 2D variant, mostly from the 224x224 version. So I suppose something like 224x224xN would be best, but that's usually too much for the 3D variant. I'd propose using something like 128x128x128 at most.
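A quick back-of-the-envelope calculation illustrates why a 224-sized cube is usually too heavy for the 3D variant compared with 128x128x128: activation and memory cost scale with the number of input voxels.

```python
# Compare input voxel counts for the two candidate 3D shapes
voxels_224 = 224 ** 3  # 11,239,424 voxels
voxels_128 = 128 ** 3  # 2,097,152 voxels
ratio = voxels_224 / voxels_128  # (224 / 128) ** 3 = 1.75 ** 3
print(round(ratio, 3))  # roughly 5.4x more voxels at 224^3
```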
Thank you so much! That's exactly what I wanted to know. Thanks again!
First of all, thank you for your effort on this work. I would like to ask about the nature of these models: are they pretrained like the 2D models trained on ImageNet? In other words, when I want to use them, can I freeze layers like this:

```python
for params in self.model.parameters():
    params.requires_grad = False
```