YingqianWang opened this issue 5 years ago
In Notebook 5, the model is loaded with:
`model.load(r"C:\Users\Mathias Felix Gruber\Documents\GitHub\PConv-Keras\data\logs\imagenet_phase2\weights.26-1.07.h5", train_bn=False)`
But where can I download the trained model? I only want to run the test and do not want to perform training.
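For readers who only want to run inference, here is a minimal sketch of that load call, assuming the ImageNet weights linked in the readme have been downloaded and that the import path `libs.pconv_model` matches the repo layout (the local directory below is only an example):

```python
from libs.pconv_model import PConvUnet  # import path assumed from the repo layout

# Build the network and load the downloaded ImageNet weights for inference only.
# load() parses the epoch out of the file name, so the "weights.<epoch>-<loss>.h5"
# naming pattern from the notebook is kept here (see the discussion further down).
model = PConvUnet()
model.load(r"./data/logs/imagenet_phase2/weights.26-1.07.h5", train_bn=False)
```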
Have you got the pre-trained model or weights? I used the pconv_imagenet model to test, but I got a bad prediction.
You can find the link to download the weights in the readme:

> **Pre-trained weights**
> I've ported the VGG16 weights from PyTorch to Keras; this means the 1/255 pixel scaling can be used for the VGG16 network similarly to PyTorch.
> * Ported VGG 16 weights
> * PConv on Imagenet
> * PConv on Places2 [needs training]
> * PConv on CelebaHQ [needs training]
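For anyone wiring this up, a minimal sketch of how the ported VGG16 weights are passed to the model, assuming the file was saved as `./data/logs/pytorch_vgg16.h5` (the same path used in a later comment in this thread):

```python
from libs.pconv_model import PConvUnet  # import path assumed from the repo layout

# The ported VGG16 weights are used for the perceptual/style losses during training;
# with these weights the VGG network expects inputs scaled by 1/255, matching the
# PyTorch convention mentioned above.
model = PConvUnet(vgg_weights='./data/logs/pytorch_vgg16.h5')
```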
Thanks, I have tried to test with the pre-trained weights, but I got a bad prediction. Is there any specific requirement for the input?
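The input question never gets a direct answer in this thread; as a rough sketch only, here is how the test notebooks appear to feed the model. Everything in it (shapes, value range, mask convention, hole fill value) is an assumption to verify against Notebook 5 and the repo's data generator, not something confirmed below:

```python
import numpy as np

# Assumed conventions (verify against the notebooks / data generator):
# - RGB image resized to 512x512 and scaled to [0, 1]
# - mask with the same shape, 1.0 where the pixel is known, 0.0 inside the hole
# - the hole region of the masked image should be filled the same way the
#   training generator fills it, otherwise predictions can look poor
img = np.random.rand(1, 512, 512, 3).astype(np.float32)    # placeholder image batch
mask = np.ones((1, 512, 512, 3), dtype=np.float32)
mask[:, 200:300, 200:300, :] = 0.0                          # example rectangular hole

masked_img = img * mask                                     # naive zero fill; check the generator's convention

pred = model.predict([masked_img, mask])                    # model loaded as in the earlier sketch
```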
Hello, when I tried to test with the pre-trained weights (pconv_imagenet.h5), I got an error from this call:

```python
model = PConvUnet()
model.load('./data/model/pconv_imagenet.h5')
```

```
ValueError: Layer #0 (named "p_conv2d_17" in the current model) was found to correspond to layer p_conv2d_49 in the save file. However the new layer p_conv2d_17 expects 3 weights, but the saved weights have 2 elements.
```

It seems like the pre-trained model and `PConvUnet()` have different structures, but I am not sure. Can you help me figure it out? Thanks.
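If it helps to narrow this down, here is a small sketch for inspecting what the weight file actually contains, so the saved layer names (e.g. p_conv2d_49) can be compared with the layers a freshly built PConvUnet expects; it assumes a standard Keras HDF5 weight file and uses the path from the comment above:

```python
import h5py

# Keras HDF5 weight files store the saved layer names in a top-level attribute,
# and each layer group lists its weight tensors, so this shows what the
# pre-trained file contains and how many weights each saved layer has.
with h5py.File('./data/model/pconv_imagenet.h5', 'r') as f:
    layer_names = [name.decode('utf8') for name in f.attrs['layer_names']]
    for name in layer_names:
        weight_names = [w.decode('utf8') for w in f[name].attrs['weight_names']]
        print(name, '->', len(weight_names), 'weight tensors')
```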
The possible cause could be the TF version. What version do you currently use?
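If it helps to confirm the environment, a quick way to print the installed versions and compare them against the repo's requirements.txt:

```python
import tensorflow as tf
import keras

# Print the framework versions so they can be compared with the pinned
# versions in the repo's requirements.txt.
print("TensorFlow:", tf.__version__)
print("Keras:", keras.__version__)
```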
Based on @mrkeremyilmaz's reply, I ran `pip install -r requirements.txt` with all the versions specified by the author, and this problem no longer exists.
Hello, I have tried to train the model with pconv_imagenet.h5 but got this error:

```
ValueError                                Traceback (most recent call last)
in ()
      1 # Instantiate the model
      2 model = PConvUnet(vgg_weights='./data/logs/pytorch_vgg16.h5')
----> 3 model.load("/content/gdrive/MyDrive/Partial_Conv/pconv_imagenet.h5", train_bn=False)

in load(self, filepath, train_bn, lr)
    238
    239         # Load weights into model
--> 240         epoch = int(os.path.basename(filepath).split('.')[1].split('-')[0])
    241         assert epoch > 0, "Could not parse weight file. Should include the epoch"
    242         self.current_epoch = epoch

ValueError: invalid literal for int() with base 10: 'h5'
```

Is there any way to correct this error?
It seems that the file name of the saved weights must be of the form `weights.<epoch>-<loss>.h5` (e.g. `weights.26-1.07.h5`), since `load()` parses the epoch out of the file name.
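One possible workaround, sketched under the assumption that the downloaded weights are otherwise compatible with the current code: copy the file to a name that `load()` can parse an epoch from (the epoch and loss numbers below are arbitrary placeholders, chosen only to satisfy the parser):

```python
import shutil

# load() runs int(os.path.basename(filepath).split('.')[1].split('-')[0]),
# so "pconv_imagenet.h5" yields 'h5' and fails, while a name following the
# "weights.<epoch>-<loss>.h5" pattern parses cleanly.
src = "/content/gdrive/MyDrive/Partial_Conv/pconv_imagenet.h5"
dst = "/content/gdrive/MyDrive/Partial_Conv/weights.50-1.00.h5"  # 50 is an arbitrary placeholder epoch
shutil.copy(src, dst)

model.load(dst, train_bn=False)  # model instantiated as in the comment above
```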
> Thanks, I have tried to test with the pre-trained weights, but I got a bad prediction. Is there any specific requirement for the input?
My situation is the same as yours. Have you found a solution?
Hello, I have the same problem. Did you find a solution? Thanks.