MaybeShewill-CV / lanenet-lane-detection

Unofficial implementation of the LaneNet model for real-time lane detection
Apache License 2.0

If I want to use mobilenet as an encoder, which layers would you suggest? #181

Closed. zacario-li closed this issue 5 years ago.

zacario-li commented 5 years ago

If I want to use MobileNet as the encoder, which layers would you suggest? In VGG16 you use the 3rd, 4th, and 5th max-pooling layers; if I want to use MobileNet, which layers should I choose?

MaybeShewill-CV commented 5 years ago

@zacario-li I have not tested that. You may test it yourself:)
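For reference, the VGG encoder hands the decoder feature maps at 1/8, 1/16, and 1/32 of the input resolution (the outputs of pool3, pool4, and pool5). Below is a rough, untested sketch of picking MobileNetV2 tensors with the same strides; the tensor names are assumptions based on the usual slim-style `MobilenetV2/expanded_conv_N` scoping and would need to be adjusted to the actual encoder graph.

```python
# Untested sketch: grab MobileNetV2 feature maps whose strides match
# VGG's pool3 / pool4 / pool5 (1/8, 1/16, 1/32 of the input resolution).
# The tensor names are ASSUMED slim-style names, not names from this repo;
# adjust the scope and suffixes to match your own encoder.
import tensorflow as tf

def pick_mobilenetv2_skips(graph,
                           scope='lanenet_model/inference/encode/MobilenetV2'):
    assumed_names = {
        'pool3_like': scope + '/expanded_conv_5/output:0',   # stride 8
        'pool4_like': scope + '/expanded_conv_12/output:0',  # stride 16
        'pool5_like': scope + '/Conv_1/Relu6:0',             # stride 32
    }
    return {key: graph.get_tensor_by_name(name)
            for key, name in assumed_names.items()}

# e.g. skips = pick_mobilenetv2_skips(tf.get_default_graph())
```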

zacario-li commented 5 years ago

@MaybeShewill-CV I noticed that in "train_lanenet.py" you load the VGG16 pre-trained weights, and there is one line, `weights_key = vv.name.split('/')[-3]`. What does this mean?

MaybeShewill-CV commented 5 years ago

@zacario-li It finds the pre-trained weight entry that corresponds to each parameter:)
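To make that concrete: the commonly used vgg16.npy is a Python dict keyed by layer name, where each entry is a `[kernel, bias]` pair, and `split('/')[-3]` pulls the layer name out of the variable's full scope path so it can be used as that key. The variable name in the sketch below is a hypothetical example, not one taken from the repo:

```python
# Illustration only; the variable scope below is hypothetical.
import numpy as np

# vgg16.npy maps layer names (e.g. 'conv1_1', ..., 'fc8') to [kernel, bias].
pretrained_weights = np.load('vgg16.npy', encoding='latin1',
                             allow_pickle=True).item()

name = 'lanenet_model/inference/encode/conv1_1/conv/W:0'  # hypothetical
weights_key = name.split('/')[-3]            # -> 'conv1_1'
kernel = pretrained_weights[weights_key][0]  # what the loader assigns to vv
bias = pretrained_weights[weights_key][1]
```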

zacario-li commented 5 years ago

@MaybeShewill-CV I have added a MobileNetV2 encoder to this model; `tf.trainable_variables()` returns all the names listed below.

When I try to load the MobileNetV2 pre-trained weights, it always falls into the exception branch:

```python
try:
    weights = pretrained_weights[weights_key][0]
    _op = tf.assign(vv, weights)
    sess.run(_op)
except Exception as e:
    continue
```

I don't know why.

And when I switch the encoder network back to VGG, I find that it falls into the exception branch too. I downloaded the vgg16.npy from here.


```
lanenet_model/inference/encode/MobilenetV2/Conv/weights:0
lanenet_model/inference/encode/MobilenetV2/Conv/BatchNorm/gamma:0
lanenet_model/inference/encode/MobilenetV2/Conv/BatchNorm/beta:0
lanenet_model/inference/encode/MobilenetV2/expanded_conv/depthwise/depthwise_weights:0
lanenet_model/inference/encode/MobilenetV2/expanded_conv/depthwise/BatchNorm/gamma:0
lanenet_model/inference/encode/MobilenetV2/expanded_conv/depthwise/BatchNorm/beta:0
lanenet_model/inference/encode/MobilenetV2/expanded_conv/project/weights:0
...
lanenet_model/inference/encode/MobilenetV2/Conv_1/BatchNorm/beta:0
lanenet_model/inference/decode/score_origin/W:0
lanenet_model/inference/decode/deconv_1/deconv_1/kernel:0
lanenet_model/inference/decode/score_1/W:0
lanenet_model/inference/decode/deconv_2/deconv_2/kernel:0
lanenet_model/inference/decode/score_2/W:0
lanenet_model/inference/decode/deconv_final/deconv_final/kernel:0
lanenet_model/inference/decode/score_final/W:0
lanenet_model/pix_embedding_conv/W:0
```
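For the names listed above, `vv.name.split('/')[-3]` yields strings like `MobilenetV2`, `Conv`, `expanded_conv`, or `decode`, none of which are keys in a VGG16 npy dict, so every lookup most likely raises a `KeyError` that the bare `continue` swallows; some misses are expected even with the VGG encoder, since the decoder and embedding variables have no entry in vgg16.npy either. A sketch of the same loop (reusing `sess` and `pretrained_weights` from the snippet above) with the error reported instead of silently skipped:

```python
# Sketch, not code from the repo: same loading loop, but log what fails
# instead of silently continuing, to see whether the key is missing or
# the assignment itself breaks (e.g. a shape mismatch).
for vv in tf.trainable_variables():
    weights_key = vv.name.split('/')[-3]
    try:
        weights = pretrained_weights[weights_key][0]
        sess.run(tf.assign(vv, weights))
    except KeyError:
        print('no pretrained entry {:s} for variable {:s}'.format(
            weights_key, vv.name))
    except Exception as e:
        print('failed to assign {:s}: {:s}'.format(vv.name, str(e)))
```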

MaybeShewill-CV commented 5 years ago

@zacario-li You may check whether the variable names actually match the keys of the pre-trained weights:)
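If the MobileNetV2 pre-trained weights come as a TensorFlow checkpoint (e.g. one from the slim model zoo) rather than an npy dict, one way to line the names up is to remap the checkpoint's `MobilenetV2/` scope onto the encoder's scope, so everything under the prefix matches by name. This is an untested sketch under that assumption, not code from this repo; the checkpoint path is hypothetical.

```python
# Untested sketch: restore slim-style MobileNetV2 checkpoint variables into
# the encoder scope seen in the variable names above, by remapping the
# scope prefix. Call this before running the variable initializer.
import tensorflow as tf

ckpt_path = 'mobilenet_v2_1.0_224.ckpt'  # hypothetical checkpoint file
tf.train.init_from_checkpoint(
    ckpt_path,
    {'MobilenetV2/': 'lanenet_model/inference/encode/MobilenetV2/'})
```

With the scope mapping, remaining naming mismatches should surface as errors here rather than being silently skipped.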

yinhai86924 commented 5 years ago

Hello! Which layers are you suggesting for feature extraction with MobileNetV2? The VGG in LaneNet uses the 3rd, 4th, and 5th pooling layers; which ones are recommended with MobileNet? Thank you!

yinhai86924 commented 5 years ago

> @MaybeShewill-CV I have added a MobileNetV2 encoder to this model; `tf.trainable_variables()` returns all the names listed below.
>
> When I try to load the MobileNetV2 pre-trained weights, it always falls into the exception branch:
>
> ```python
> try:
>     weights = pretrained_weights[weights_key][0]
>     _op = tf.assign(vv, weights)
>     sess.run(_op)
> except Exception as e:
>     continue
> ```
>
> I don't know why.
>
> And when I switch the encoder network back to VGG, I find that it falls into the exception branch too. I downloaded the vgg16.npy from here.
>
> ```
> lanenet_model/inference/encode/MobilenetV2/Conv/weights:0
> lanenet_model/inference/encode/MobilenetV2/Conv/BatchNorm/gamma:0
> lanenet_model/inference/encode/MobilenetV2/Conv/BatchNorm/beta:0
> lanenet_model/inference/encode/MobilenetV2/expanded_conv/depthwise/depthwise_weights:0
> lanenet_model/inference/encode/MobilenetV2/expanded_conv/depthwise/BatchNorm/gamma:0
> lanenet_model/inference/encode/MobilenetV2/expanded_conv/depthwise/BatchNorm/beta:0
> lanenet_model/inference/encode/MobilenetV2/expanded_conv/project/weights:0
> ...
> lanenet_model/inference/encode/MobilenetV2/Conv_1/BatchNorm/beta:0
> lanenet_model/inference/decode/score_origin/W:0
> lanenet_model/inference/decode/deconv_1/deconv_1/kernel:0
> lanenet_model/inference/decode/score_1/W:0
> lanenet_model/inference/decode/deconv_2/deconv_2/kernel:0
> lanenet_model/inference/decode/score_2/W:0
> lanenet_model/inference/decode/deconv_final/deconv_final/kernel:0
> lanenet_model/inference/decode/score_final/W:0
> lanenet_model/pix_embedding_conv/W:0
> ```

Hello! Which layers are you suggesting for the up-sampling path with MobileNetV2? The VGG in LaneNet uses the 3rd, 4th, and 5th pooling layers; which ones are recommended with MobileNet? Thank you!