smichalowski / google_inception_v3_for_caffe

Google Inception (deepdream) v3 for Caffe

Cannot run google inception v3 on any caffe #4

Closed engharat closed 8 years ago

engharat commented 8 years ago

Hi, when I try to run this Google Inception v3 model I get the following error: ERROR: == bottom[i]->shape(j) (13 vs. 12) All inputs must have the same shape, except at concat_axis. The last Caffe log lines are:

Setting up mixed_3_tower_conv_2_relu
Top shape: 24 96 12 12 (331776)
Memory required for data: 1402748352
Creating layer max_pool_mixed_3_pool
Creating Layer max_pool_mixed_3_pool
max_pool_mixed_3_pool <- ch_concat_mixed_2_chconcat_ch_concat_mixed_2_chconcat_0_split_2
max_pool_mixed_3_pool -> max_pool_mixed_3_pool
Setting up max_pool_mixed_3_pool
Top shape: 24 288 13 13 (1168128)
Memory required for data: 1407420864
Creating layer ch_concat_mixed_3_chconcat
Creating Layer ch_concat_mixed_3_chconcat
ch_concat_mixed_3_chconcat <- max_pool_mixed_3_pool
ch_concat_mixed_3_chconcat <- mixed_3_conv_conv2d_relu
ch_concat_mixed_3_chconcat <- mixed_3_tower_conv_2_conv2d_relu
ch_concat_mixed_3_chconcat -> ch_concat_mixed_3_chconcat
== bottom[i]->shape(j) (13 vs. 12) All inputs must have the same shape, except at concat_axis.

I've tried it on both nvcaffe 0.14 and nvcaffe 0.15. Do you have any idea?

ducha-aiki commented 8 years ago

Have you changed the input layer size? This problem is usually caused by the conv and pool layers having different formulas for their output size, so when you concat a conv branch with a pool branch you need to find a padding/input size for which they agree.
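
To see the mismatch concretely, here is a minimal sketch of the two output-size formulas used by stock Caffe (Convolution rounds down, Pooling rounds up); the 26x26 input below is only an illustrative assumption, not a value taken from your log:

import math

def conv_out(size, kernel, stride, pad=0):
    # Caffe Convolution layer rounds down (floor)
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel, stride, pad=0):
    # Caffe Pooling layer rounds up (ceil)
    return int(math.ceil(float(size + 2 * pad - kernel) / stride)) + 1

# Illustrative only: a 26x26 feature map entering a 3x3/stride-2 reduction block
print(conv_out(26, 3, 2))  # 12 -> conv branch
print(pool_out(26, 3, 2))  # 13 -> pool branch, so the Concat check fails (13 vs. 12)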

engharat commented 8 years ago

Yes, I am using 256x256 data! So that's why it says something like "meant to be used on 299x299 data": the network is built specifically for that image size. Do you think it could be adapted to other resolutions by working only on the padding/input sizes feeding the concat layers?

smichalowski commented 8 years ago

@engharat I've just tested on nvcaffe-0.14 and had no shape error. Please check that your input data is correct: 299x299 is required (as in the paper).
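
If it helps, a quick sanity check on the images before building the LMDB can be done with a few lines of Python (a rough sketch; the images/ folder name and the Pillow dependency are assumptions, adjust to your pipeline):

import os
from PIL import Image

bad = []
for name in os.listdir('images'):  # hypothetical folder holding the training images
    with Image.open(os.path.join('images', name)) as im:
        if im.size != (299, 299):
            bad.append((name, im.size))

print('images that are not 299x299:', bad[:10])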

engharat commented 8 years ago

Yes, the problem is that I tried to use it with data of a different size than the required 299x299!

ducha-aiki commented 8 years ago

@engharat the easiest way will be to adjust the crop size. If you are using the standard 256x256 -> crop 224x224 pipeline, try playing with the 224: change it in the range 219..231 and some value should fit.
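
To make that scan concrete, a rough sketch like the one below can try every crop in that range and report which ones keep the conv and pool branches of a reduction block at the same size. The stem listed here is only an approximation of the Inception v3 downsampling path, so treat it as a starting point and adjust it to the actual train_val.prototxt:

import math

def conv_out(s, k, st, p=0):
    return (s + 2 * p - k) // st + 1  # Convolution: floor

def pool_out(s, k, st, p=0):
    return int(math.ceil(float(s + 2 * p - k) / st)) + 1  # Pooling: ceil

def stem(size):
    # Assumed stem: conv 3x3/2, conv 3x3/1, conv 3x3/1 pad 1, pool 3x3/2, conv 3x3/1, pool 3x3/2
    size = conv_out(size, 3, 2)
    size = conv_out(size, 3, 1)
    size = conv_out(size, 3, 1, 1)
    size = pool_out(size, 3, 2)
    size = conv_out(size, 3, 1)
    size = pool_out(size, 3, 2)
    return size

for crop in range(219, 232):
    grid = stem(crop)
    # mixed_3-style reduction: conv 3x3/2 branch vs max-pool 3x3/2 branch must agree
    if conv_out(grid, 3, 2) == pool_out(grid, 3, 2):
        print('crop %d -> grid %d, branches agree' % (crop, grid))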

engharat commented 8 years ago

Thanks a lot, I will try that soon!

rajiv235 commented 8 years ago

@engharat I am having a similar problem with a concat layer. Were you able to fix it? Thanks

engharat commented 8 years ago

Yes, I was able to solve my problem by choosing the image crop size and the convolution parameters. What problem do you have? Post your Caffe log ;)


rajiv235 commented 8 years ago

@engharat Good to know that you solved the issue. I am trying to train Faster R-CNN with PVANet (https://github.com/sanghoon/pva-faster-rcnn). Below is a snippet of the Caffe log from trying to train.

I0923 18:58:11.487720 26638 net.cpp:157] Top shape: 1 48 38 63 (114912)
I0923 18:58:11.487735 26638 net.cpp:165] Memory required for data: 579115860
I0923 18:58:11.487751 26638 layer_factory.hpp:77] Creating layer conv4_1/incep/2_0/relu
I0923 18:58:11.487768 26638 net.cpp:100] Creating Layer conv4_1/incep/2_0/relu
I0923 18:58:11.487782 26638 net.cpp:434] conv4_1/incep/2_0/relu <- conv4_1/incep/2_0
I0923 18:58:11.487794 26638 net.cpp:395] conv4_1/incep/2_0/relu -> conv4_1/incep/2_0 (in-place)
I0923 18:58:11.487808 26638 net.cpp:150] Setting up conv4_1/incep/2_0/relu
I0923 18:58:11.487826 26638 net.cpp:157] Top shape: 1 48 38 63 (114912)
I0923 18:58:11.487836 26638 net.cpp:165] Memory required for data: 579575508
I0923 18:58:11.487851 26638 layer_factory.hpp:77] Creating layer conv4_1/incep/2_1/conv
I0923 18:58:11.487869 26638 net.cpp:100] Creating Layer conv4_1/incep/2_1/conv
I0923 18:58:11.487886 26638 net.cpp:434] conv4_1/incep/2_1/conv <- conv4_1/incep/2_0
I0923 18:58:11.487902 26638 net.cpp:408] conv4_1/incep/2_1/conv -> conv4_1/incep/2_1
I0923 18:58:11.488270 26638 net.cpp:150] Setting up conv4_1/incep/2_1/conv
I0923 18:58:11.488294 26638 net.cpp:157] Top shape: 1 48 38 63 (114912)
I0923 18:58:11.488306 26638 net.cpp:165] Memory required for data: 580035156
I0923 18:58:11.488320 26638 layer_factory.hpp:77] Creating layer conv4_1/incep/2_1/relu
I0923 18:58:11.488342 26638 net.cpp:100] Creating Layer conv4_1/incep/2_1/relu
I0923 18:58:11.488358 26638 net.cpp:434] conv4_1/incep/2_1/relu <- conv4_1/incep/2_1
I0923 18:58:11.488371 26638 net.cpp:395] conv4_1/incep/2_1/relu -> conv4_1/incep/2_1 (in-place)
I0923 18:58:11.488390 26638 net.cpp:150] Setting up conv4_1/incep/2_1/relu
I0923 18:58:11.488404 26638 net.cpp:157] Top shape: 1 48 38 63 (114912)
I0923 18:58:11.488415 26638 net.cpp:165] Memory required for data: 580494804
I0923 18:58:11.488425 26638 layer_factory.hpp:77] Creating layer conv4_1/incep/pool
I0923 18:58:11.488440 26638 net.cpp:100] Creating Layer conv4_1/incep/pool
I0923 18:58:11.488456 26638 net.cpp:434] conv4_1/incep/pool <- conv3_4_conv3_4_0_split_3
I0923 18:58:11.488471 26638 net.cpp:408] conv4_1/incep/pool -> conv4_1/incep/pool
I0923 18:58:11.488524 26638 net.cpp:150] Setting up conv4_1/incep/pool
I0923 18:58:11.488546 26638 net.cpp:157] Top shape: 1 128 37 62 (293632)
I0923 18:58:11.488559 26638 net.cpp:165] Memory required for data: 581669332
I0923 18:58:11.488574 26638 layer_factory.hpp:77] Creating layer conv4_1/incep/poolproj/conv
I0923 18:58:11.488592 26638 net.cpp:100] Creating Layer conv4_1/incep/poolproj/conv
I0923 18:58:11.488610 26638 net.cpp:434] conv4_1/incep/poolproj/conv <- conv4_1/incep/pool
I0923 18:58:11.488623 26638 net.cpp:408] conv4_1/incep/poolproj/conv -> conv4_1/incep/poolproj
I0923 18:58:11.488951 26638 net.cpp:150] Setting up conv4_1/incep/poolproj/conv
I0923 18:58:11.488976 26638 net.cpp:157] Top shape: 1 128 37 62 (293632)
I0923 18:58:11.488986 26638 net.cpp:165] Memory required for data: 582843860
I0923 18:58:11.489001 26638 layer_factory.hpp:77] Creating layer conv4_1/incep/poolproj/relu
I0923 18:58:11.489019 26638 net.cpp:100] Creating Layer conv4_1/incep/poolproj/relu
I0923 18:58:11.489032 26638 net.cpp:434] conv4_1/incep/poolproj/relu <- conv4_1/incep/poolproj
I0923 18:58:11.489045 26638 net.cpp:395] conv4_1/incep/poolproj/relu -> conv4_1/incep/poolproj (in-place)
I0923 18:58:11.489065 26638 net.cpp:150] Setting up conv4_1/incep/poolproj/relu
I0923 18:58:11.489079 26638 net.cpp:157] Top shape: 1 128 37 62 (293632)
I0923 18:58:11.489089 26638 net.cpp:165] Memory required for data: 584018388
I0923 18:58:11.489100 26638 layer_factory.hpp:77] Creating layer conv4_1/incep
I0923 18:58:11.489112 26638 net.cpp:100] Creating Layer conv4_1/incep
I0923 18:58:11.489140 26638 net.cpp:434] conv4_1/incep <- conv4_1/incep/0
I0923 18:58:11.489151 26638 net.cpp:434] conv4_1/incep <- conv4_1/incep/1_0
I0923 18:58:11.489168 26638 net.cpp:434] conv4_1/incep <- conv4_1/incep/2_1
I0923 18:58:11.489179 26638 net.cpp:434] conv4_1/incep <- conv4_1/incep/poolproj
I0923 18:58:11.489197 26638 net.cpp:408] conv4_1/incep -> conv4_1/incep
F0923 18:58:11.489228 26638 concat_layer.cpp:42] Check failed: top_shape[j] == bottom[i]->shape(j) (38 vs. 37) All inputs must have the same shape, except at concat_axis.

Apparently the pooling layer produces an output size of 37x62 instead of 38x63 (the last two dimensions). The relevant part of train.prototxt is below:

layer { name: "conv4_1/incep/0/conv" type: "Convolution" bottom: "conv3_4" top: "conv4_1/incep/0" param { lr_mult: 0.1 decay_mult: 0.1 } convolution_param { num_output: 64 bias_term: true weight_filler { type: "xavier" } pad_h: 0 pad_w: 0 kernel_h: 1 kernel_w: 1 stride_h: 2 stride_w: 2 } }
layer { name: "conv4_1/incep/0/relu" type: "ReLU" bottom: "conv4_1/incep/0" top: "conv4_1/incep/0" }
layer { name: "conv4_1/incep/1_reduce/conv" type: "Convolution" bottom: "conv3_4" top: "conv4_1/incep/1_reduce" param { lr_mult: 0.1 decay_mult: 0.1 } convolution_param { num_output: 48 bias_term: true weight_filler { type: "xavier" } pad_h: 0 pad_w: 0 kernel_h: 1 kernel_w: 1 stride_h: 2 stride_w: 2 } }
layer { name: "conv4_1/incep/1_reduce/relu" type: "ReLU" bottom: "conv4_1/incep/1_reduce" top: "conv4_1/incep/1_reduce" }
layer { name: "conv4_1/incep/1_0/conv" type: "Convolution" bottom: "conv4_1/incep/1_reduce" top: "conv4_1/incep/1_0" param { lr_mult: 0.1 decay_mult: 0.1 } convolution_param { num_output: 128 bias_term: true weight_filler { type: "xavier" } pad_h: 1 pad_w: 1 kernel_h: 3 kernel_w: 3 stride_h: 1 stride_w: 1 } }
layer { name: "conv4_1/incep/1_0/relu" type: "ReLU" bottom: "conv4_1/incep/1_0" top: "conv4_1/incep/1_0" }
layer { name: "conv4_1/incep/2_reduce/conv" type: "Convolution" bottom: "conv3_4" top: "conv4_1/incep/2_reduce" param { lr_mult: 0.1 decay_mult: 0.1 } convolution_param { num_output: 24 bias_term: true weight_filler { type: "xavier" } pad_h: 0 pad_w: 0 kernel_h: 1 kernel_w: 1 stride_h: 2 stride_w: 2 } }
layer { name: "conv4_1/incep/2_reduce/relu" type: "ReLU" bottom: "conv4_1/incep/2_reduce" top: "conv4_1/incep/2_reduce" }
layer { name: "conv4_1/incep/2_0/conv" type: "Convolution" bottom: "conv4_1/incep/2_reduce" top: "conv4_1/incep/2_0" param { lr_mult: 0.1 decay_mult: 0.1 } convolution_param { num_output: 48 bias_term: true weight_filler { type: "xavier" } pad_h: 1 pad_w: 1 kernel_h: 3 kernel_w: 3 stride_h: 1 stride_w: 1 } }
layer { name: "conv4_1/incep/2_0/relu" type: "ReLU" bottom: "conv4_1/incep/2_0" top: "conv4_1/incep/2_0" }
layer { name: "conv4_1/incep/2_1/conv" type: "Convolution" bottom: "conv4_1/incep/2_0" top: "conv4_1/incep/2_1" param { lr_mult: 0.1 decay_mult: 0.1 } convolution_param { num_output: 48 bias_term: true weight_filler { type: "xavier" } pad_h: 1 pad_w: 1 kernel_h: 3 kernel_w: 3 stride_h: 1 stride_w: 1 } }
layer { name: "conv4_1/incep/2_1/relu" type: "ReLU" bottom: "conv4_1/incep/2_1" top: "conv4_1/incep/2_1" }
layer { name: "conv4_1/incep/pool" type: "Pooling" bottom: "conv3_4" top: "conv4_1/incep/pool" pooling_param { pool: MAX kernel_size: 3 stride: 2 pad: 0 } }
layer { name: "conv4_1/incep/poolproj/conv" type: "Convolution" bottom: "conv4_1/incep/pool" top: "conv4_1/incep/poolproj" param { lr_mult: 0.1 decay_mult: 0.1 } convolution_param { num_output: 128 bias_term: true weight_filler { type: "xavier" } pad_h: 0 pad_w: 0 kernel_h: 1 kernel_w: 1 stride_h: 1 stride_w: 1 } }
layer { name: "conv4_1/incep/poolproj/relu" type: "ReLU" bottom: "conv4_1/incep/poolproj" top: "conv4_1/incep/poolproj" }
layer { name: "conv4_1/incep" type: "Concat" bottom: "conv4_1/incep/0" bottom: "conv4_1/incep/1_0" bottom: "conv4_1/incep/2_1" bottom: "conv4_1/incep/poolproj" top: "conv4_1/incep" }

The problem is with conv4_1/incep. Any help would be appreciated.

Thanks

engharat commented 8 years ago

Yes, the problem is in the pooling layer. Have you tried changing the pooling layer stride to other values?
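
For what it's worth, the same floor/ceil arithmetic reproduces your 38 vs. 37: on an odd-sized input the 1x1/stride-2 conv branches and the 3x3/stride-2 pool branch land one pixel apart, and giving the pooling layer pad: 1 is one way to line them up. A quick sketch, assuming conv3_4 is 75x125 here (an assumption that matches the 38x63 / 37x62 shapes in your log):

import math

def conv_out(s, k, st, p=0):
    return (s + 2 * p - k) // st + 1  # Convolution: floor

def pool_out(s, k, st, p=0):
    return int(math.ceil(float(s + 2 * p - k) / st)) + 1  # Pooling: ceil

h, w = 75, 125  # assumed conv3_4 size, consistent with the logged shapes
print(conv_out(h, 1, 2), conv_out(w, 1, 2))        # 38 63  (1x1/stride-2 conv branches)
print(pool_out(h, 3, 2), pool_out(w, 3, 2))        # 37 62  (3x3/stride-2 pool branch, mismatch)
print(pool_out(h, 3, 2, 1), pool_out(w, 3, 2, 1))  # 38 63  (pool with pad: 1)

In your prototxt that would mean adding pad: 1 to the pooling_param of conv4_1/incep/pool, but whether that is acceptable for the pretrained pva-net weights is something I cannot say.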


rajiv235 commented 8 years ago

I tried changing the value of stride but unfortunately no luck.

michaelholm-ce commented 7 years ago

@engharat -- Will you please share what crop size and convolution parameter changes you made to work with 256x256 images?

kmonachopoulos commented 7 years ago

Any solutions for this? My database is already resized to 256x256, so I have a dimension problem in the CNN!

engharat commented 7 years ago

Hi guys, the Inception v3 model should accept both 299x299 and 231x231 (or 235x235, I don't remember which) resolutions, so the best thing you could do with your 256x256 dataset is to perform a random crop to 231x231 / 235x235 :)
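
A minimal random-crop sketch in plain numpy, assuming your images are already loaded as HxWxC arrays (the 235 size is just the one that worked for me, adjust as needed):

import random
import numpy as np

def random_crop(img, size=235):
    # img: HxWxC array, e.g. a 256x256x3 image
    h, w = img.shape[:2]
    y = random.randint(0, h - size)
    x = random.randint(0, w - size)
    return img[y:y + size, x:x + size, :]

img = np.zeros((256, 256, 3), dtype=np.uint8)  # stand-in for a real image
print(random_crop(img).shape)  # (235, 235, 3)

If you feed the network through a Data layer instead, setting crop_size: 235 in transform_param should give you the same random crop during training.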


kmonachopoulos commented 7 years ago

It depends on the kernel sizes stored in the .caffemodel of the pretrained model. The sizes that you gave me did not work. Any other suggestions?

Thanks

engharat commented 7 years ago

Yes, if you play with the first convolution you can use a lot of different resolutions. Anyway, for anyone who wants to use pretrained weights with the kernel sizes of this network (https://github.com/smichalowski/google_inception_v3_for_caffe/blob/master/train_val.prototxt), I managed to use a 235x235 resolution together with the pretrained weights.
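
In case it is useful, this is roughly how you can check a different resolution against the prototxt in pycaffe before training. It is only a sketch: the deploy.prototxt file with an Input layer whose blob is named "data" is an assumption, so adjust it to your setup:

import caffe

# assumes a deploy-style prototxt whose input blob is called "data"
net = caffe.Net('deploy.prototxt', caffe.TEST)
net.blobs['data'].reshape(1, 3, 235, 235)
net.reshape()  # re-runs shape inference; an incompatible resolution fails at the first Concat

for name, blob in net.blobs.items():
    print(name, blob.data.shape)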


kmonachopoulos commented 7 years ago

Hello,

The trick did the job. Thanks!