Closed tanakataiki closed 6 years ago
@ahrnbom I knew that, but the standard ones in Keras are trained on 224x224 images, and that resolution is different from 300, so I made my own. But if it's possible and it works, that would be ideal. I will try it.
@tanakataiki It's possible, with just a few lines of code, to take the weights trained for 224x224 and use them in a new network with an input size of 300x300. It might not be ideal, but since the weights were trained for classification tasks, where objects are typically larger than in object detection, it should probably work OK.
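A minimal sketch of that idea: because all of MobileNet's weights (with `include_top=False`) are convolutional, their shapes don't depend on the spatial input size, so the weights of a 224x224 model transfer one-to-one into a 300x300 model. (`weights=None` is used here only so the sketch runs offline; in practice the 224 model would be built with `weights='imagenet'`.)

```python
from keras.applications import MobileNet

# Build the same architecture at two input resolutions.
# weights=None keeps this self-contained; use weights='imagenet' in practice.
net224 = MobileNet(input_shape=(224, 224, 3), include_top=False, weights=None)
net300 = MobileNet(input_shape=(300, 300, 3), include_top=False, weights=None)

# Conv kernels are independent of spatial resolution, so the weight lists
# match shape-for-shape and can be copied directly.
net300.set_weights(net224.get_weights())
```

Only the classification top (which is shape-dependent) would need to be dropped, which `include_top=False` already does.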
@ahrnbom Thanks for telling me, that would be like YOLO training. But what I also found is that the original Caffe implementation uses layer 11, which is in the middle of MobileNet, so it's necessary to pick out that layer by name from the Keras application to extract features for classification and localization, right?
@tanakataiki Well, you can do it like this:
from keras.layers import Input
from keras.models import Model
from keras.applications import MobileNet

width = 300
height = 300
input_shape = (height, width, 3)
mobilenet_input_shape = (224, 224, 3)

mobilenet = MobileNet(input_shape=mobilenet_input_shape, include_top=True)

net = {}
net['input'] = Input(input_shape)
prev = net['input']
for layer in mobilenet.layers[1:]:  # skip the InputLayer, which cannot be called
    net_key = 'mobilenet_{}'.format(layer.name)
    net[net_key] = layer(prev)
    prev = net[net_key]
    if layer.name == "some layer you are looking for":
        pass  # do something with the layer, save it for later use, etc.
This way, you get access to the layers you want, without defining the whole network yourself.
@ahrnbom Cheers for the transfer learning tip! Thanks a lot.
@ahrnbom Some weights are available here if you want, and I'm going to add some more 😄 https://github.com/tanakataiki/ssd_kerasV2
Just wondering, why are you reimplementing MobileNet and VGG19 instead of using the standard ones in Keras? You could greatly reduce the number of lines (and the risk of making mistakes) by using
from keras.applications import MobileNet
and using MobileNet from there, with include_top=False. That way, you get weights pretrained on ImageNet without any effort (in your code, it will be somebody else's responsibility to find good weights).
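For instance, a hypothetical sketch of using the stock MobileNet as a backbone and attaching a small detection head on top. The head's numbers (6 anchors per cell, 4 box offsets + 21 class scores) are made up for illustration, and `weights=None` is only so the sketch runs offline; `weights='imagenet'` would give the pretrained filters.

```python
from keras.applications import MobileNet
from keras.layers import Conv2D
from keras.models import Model

# Stock Keras MobileNet as a feature extractor, no custom reimplementation.
# weights=None keeps the sketch offline; use weights='imagenet' in practice.
backbone = MobileNet(input_shape=(300, 300, 3), include_top=False, weights=None)

# Illustrative detection head: 6 anchors per cell, each predicting
# 4 box offsets + 21 class scores (made-up numbers).
head = Conv2D(6 * (4 + 21), 3, padding='same', name='ssd_head')(backbone.output)
model = Model(backbone.input, head)
```

A real SSD would attach heads at several feature maps of different resolutions, but the pattern is the same: reuse the library backbone, add only the task-specific layers.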