keras-team / keras


Using Middle Layer of Keras Application MobileNetv2 #10113

Closed tanakataiki closed 6 years ago

tanakataiki commented 6 years ago

I am trying to use a MobileNetV2 middle layer for MobileNetV2-SSDLite, i.e. use the middle layers as a feature extractor and for localization.

The purpose:

1. Set the input layer to bn13_conv_bn_expand and the output layer to Conv_1.
2. Apply the width/height multiplier for different resolutions such as 224x224, 300x300, and so on.
3. Understand the get_3rd_layer_output([x])[0] function.

    # I defined the model
    mobilenetv2 = MobileNetV2(input_shape=mobilenetv2_input_shape,
                              include_top=False, weights='imagenet')

    # This works in my environment
    FeatureExtractor = Model(inputs=mobilenetv2.input,
                             outputs=mobilenetv2.get_layer('mobl13_conv_expand').output)

    # I set a mid model (I am not sure whether this is OK or not; maybe not)
    mobilenetv2_mid13 = mobilenetv2_mid.get_layer('bn13_conv_bn_expand')

    # K.function to get the mid layers: 114 = bn13_conv_bn_expand and 147 = Conv_1
    get_3rd_layer_output = K.function([mobilenetv2.layers[114].input, K.learning_phase()],
                                      [mobilenetv2.layers[147].output])

    # This produces the error below
    conv_1 = get_3rd_layer_output(net[['mbl13_conv'],1])[0]
    # ValueError: In order to feed symbolic tensors to a Keras model in TensorFlow,
    # you need tensorflow 1.8 or higher.

Can anybody help me figure out how to solve this?

Thanks

tanakataiki commented 6 years ago

@taehoonlee @JonathanCMitchell Part of this question is not related to MobileNetV2, but I would be really glad if you could help with some of it, given your knowledge.

taehoonlee commented 6 years ago

@tanakataiki, I think that the official docs will help you.
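
For reference, that FAQ boils down to two patterns. Here is a minimal sketch under the Keras 2.x / TF 1.x API used in this thread; the layer name 'Conv_1' and the random batch are only illustrative:

    import numpy as np
    from keras import backend as K
    from keras.models import Model
    from keras.applications.mobilenet_v2 import MobileNetV2

    mobilenetv2 = MobileNetV2(input_shape=(224, 224, 3), include_top=False, weights='imagenet')
    images = np.random.rand(1, 224, 224, 3)  # placeholder batch

    # Pattern 1: a new Model that reuses the original input and stops at an intermediate layer.
    feature_extractor = Model(inputs=mobilenetv2.input,
                              outputs=mobilenetv2.get_layer('Conv_1').output)
    features = feature_extractor.predict(images)

    # Pattern 2: a backend function; it must be fed concrete numpy arrays, not symbolic tensors.
    get_output = K.function([mobilenetv2.input, K.learning_phase()],
                            [mobilenetv2.get_layer('Conv_1').output])
    features = get_output([images, 0])[0]  # 0 = test mode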

tanakataiki commented 6 years ago

@taehoonlee I referred to https://keras.io/getting-started/faq/#how-can-i-obtain-the-output-of-an-intermediate-layer, but I couldn't build a model whose input and output are both middle layers, so I opened this issue.

taehoonlee commented 6 years ago

@tanakataiki, I cannot figure out what the problem and the purpose are.

taehoonlee commented 6 years ago

Please share your whole code and describe your problem in detail.

tanakataiki commented 6 years ago

@taehoonlee Sorry for the late reply. The code is here; I want to add a middle-layer wrapper that carries the ImageNet weights, instead of redefining the inverted residual blocks.

https://github.com/tanakataiki/ssd_kerasV2/blob/master/model/ssd300MobileNetV2Lite.py

    x = FeatureExtractor(Input0)
    x, pwconv3 = _isb4conv13(x, filters=160, alpha=alpha, stride=1, expansion=6, block_id=13)
    x = _inverted_res_block(x, filters=160, alpha=alpha, stride=1, expansion=6, block_id=14)
    x = _inverted_res_block(x, filters=160, alpha=alpha, stride=1, expansion=6, block_id=15)
    x = _inverted_res_block(x, filters=320, alpha=alpha, stride=1, expansion=6, block_id=16)

What I want to do is define a K.function and make this smarter, like below:

    block_id14_block_id16 = K.function([mobilenetv2.layers[114].input, K.learning_phase()],
                                       [mobilenetv2.layers[147].output])
    x = FeatureExtractor(Input0)
    x = block_id14_block_id16(x)

Thanks

taehoonlee commented 6 years ago

@tanakataiki, you cannot feed a Keras tensor into block_id14_block_id16. It is also impossible to make a keras.models.Model with inputs=[mobilenetv2.layers[114].input], because mobilenetv2.layers[114].input is not a placeholder. You'd better use the first form.

tanakataiki commented 6 years ago

@taehoonlee

  1. Could you explain what a placeholder is? Do I have to define an input shape for mobilenetv2.layers[114].input first? And what do you mean by the first form?

  2. By the way, I tried middle_layer = Model(inputs=mobilenetv2.get_layer('mobl13_conv_expand').input, outputs=mobilenetv2.get_layer('Conv_1').output), but this didn't work either. Is there any way to use this?

Thanks

taehoonlee commented 6 years ago

@tanakataiki,

  1. An Input layer creates a placeholder. The first form means:

    x= FeatureExtractor(Input0)
    x,pwconv3 = _isb4conv13(x, filters=160, alpha=alpha, stride=1,expansion=6, block_id=13)
    x = _inverted_res_block(x, filters=160, alpha=alpha, stride=1,expansion=6, block_id=14)
    x = _inverted_res_block(x, filters=160, alpha=alpha, stride=1,expansion=6, block_id=15)
    x = _inverted_res_block(x, filters=320, alpha=alpha, stride=1,expansion=6, block_id=16)
  2. You cannot make a keras.models.Model with inputs=[mobilenetv2.get_layer('mobl13_conv_expand').input], because mobilenetv2.get_layer('mobl13_conv_expand').input is not a placeholder.
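
If you still want the K.function route, the key constraint is that it has to be fed concrete numpy arrays rather than Keras tensors. A minimal sketch, assuming the TF 1.x backend (where intermediate graph tensors can be used as feed keys) and the objects and layer indices quoted earlier in the thread (mobilenetv2, FeatureExtractor, images; 114 = bn13_conv_bn_expand, 147 = Conv_1):

    import numpy as np
    from keras import backend as K

    block_id14_block_id16 = K.function(
        [mobilenetv2.layers[114].input, K.learning_phase()],
        [mobilenetv2.layers[147].output])

    # Feed the *numeric* output of the feature extractor, not a symbolic tensor.
    features = FeatureExtractor.predict(images)           # numpy array whose shape matches layers[114].input
    conv_1_out = block_id14_block_id16([features, 0])[0]  # 0 = test mode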

tanakataiki commented 6 years ago

@taehoonlee
I would like a middle-layer cutter like block_id14_block_id16 if possible in the future. Anyway, I am all right now, thanks.

JonathanCMitchell commented 6 years ago

@tanakataiki you can create a new Input tensor, and you may select the specific layer by name from the MobileNetV2 model you created, and build that into a new model with the code below:

    from keras.models import Model
    from keras.layers import Input
    from keras.applications.mobilenet_v2 import MobileNetV2

    input_t = Input(shape=(224, 224, 3))
    temp = MobileNetV2(weights='imagenet', input_tensor=input_t)

    desired_layer = temp.get_layer('layer_name')  # replace with the name of the layer you want
    newmodel = Model(inputs=input_t, outputs=desired_layer.output)

Now you can add that model to whatever network graph you are using.

Note that your input_t should be one of the input resolutions for which pre-trained ImageNet weights are available.

If it isn't, then you have to modify the MobileNetV2 code to take in your tensor explicitly, and still use one of the pre-trained weight sets.
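
To make this concrete, usage might look like the sketch below; the 'Conv_1' layer name comes from earlier in the thread, and the random batch is only a placeholder:

    import numpy as np
    from keras.applications.mobilenet_v2 import preprocess_input

    desired_layer = temp.get_layer('Conv_1')        # example layer name from this thread
    newmodel = Model(inputs=input_t, outputs=desired_layer.output)

    batch = np.random.rand(1, 224, 224, 3) * 255.0  # placeholder images
    features = newmodel.predict(preprocess_input(batch))
    print(features.shape)                           # e.g. (1, 7, 7, 1280) for a 224x224 input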

tanakataiki commented 6 years ago

@JonathanCMitchell Thanks. If I set "input_t" at block 14, there are several layers with 14 in their names, so I thought I needed the exact input layer name to specify the weights for a particular layer. The new model you suggested runs from the top of the network down; it isn't something I can insert in the middle, so I want to make a model like the one below.

    newmodel = Model(input=input_t, output=middle_layer.input, output=desired_layer.output)

Modifying the original code would be a hard job for me... but thanks anyway.

MegaCreater commented 1 year ago

@tanakataiki


    import tensorflow as tf

    input_shape = (256, 256, 3)  # chosen so the custom stem below outputs (None, 128, 128, 32)

    # Load the pre-trained MobileNetV2 base
    mobilev2_core = tf.keras.applications.MobileNetV2(input_shape=(256, 256, 3), include_top=False)
    # inputs: <KerasTensor: shape=(None, 128, 128, 32) dtype=float32 (created by layer 'Conv1_relu')>
    # Set up the model so that a custom layer provides its inputs
    mobilev2_core = tf.keras.Model(inputs=[mobilev2_core.layers[4].input], outputs=mobilev2_core.outputs)
    # Inputs must have the shape required by layer 4, i.e. `expanded_conv_depthwise`: (None, 128, 128, 32)

    # Build a custom model using the customized MobileNetV2
    # Define the extractor model layers (inputs)
    input_A = tf.keras.Input(shape=input_shape, batch_size=None, name='input_A', dtype=None)  # cover image input
    input_B = tf.keras.Input(shape=input_shape, batch_size=None, name='input_B', dtype=None)  # watermarked image input
    model_x = tf.keras.layers.Concatenate(axis=-1, name='concatenate_features_01')([input_A, input_B])  # concatenate inputs
    model_x = tf.keras.layers.Conv2D(32, kernel_size=(3, 3), strides=(2, 2), padding='same')(model_x)  # 'same' padding keeps the 128x128 spatial size
    model_x = tf.keras.layers.BatchNormalization()(model_x)
    model_x = tf.keras.layers.ReLU()(model_x)  # activation; output shape (None, 128, 128, 32), as required by the MobileNetV2 core
    model_x = mobilev2_core(model_x)
    model_x = tf.keras...  # whatever you want ...
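
For completeness, one hypothetical way to finish the "whatever you want" line above and close the graph into a model; the pooling/dense head here is only a placeholder, not part of the original snippet:

    # Hypothetical head; replace with whatever your task needs.
    model_x = tf.keras.layers.GlobalAveragePooling2D()(model_x)
    outputs = tf.keras.layers.Dense(1, activation='sigmoid', name='score')(model_x)

    model = tf.keras.Model(inputs=[input_A, input_B], outputs=outputs)
    model.summary()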