Open SaschaHornauer opened 7 years ago
Thanks for letting me know. I will fix this once I figure out what the API changes are in the new version.
Why is this marked closed? I ran into the same problem. Since there is no error, I spent quite some time trying to find out what was going on. Everything seems to be OK, but there are no boxes; with a small threshold there are boxes, but only wrong ones. I did look into the open issues early on, but it took me a long time to find @SaschaHornauer's solution (thanks a lot!). Besides reopening the issue, I would suggest adding a heads-up comment in the script (e.g. `import keras  # broken for keras >= 2.0, use 1.2.2`).
Otherwise: great ipynb! I learned a lot from this. Very compact and clear. Thanks.
@im2ex I'll add the comment, thanks for the reminder.
I had the same problem, so I downgraded to keras==1.2.2 and saved the entire model with model.save(). I then upgraded to keras==2.0.4, loaded the model, and it works fine, with some warnings about migrating to the new API.
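A sketch of that two-step workaround, for reference. The filename `yolo_v1.h5` is hypothetical, and each function must be run under the matching Keras version (the import is done lazily so the sketch loads even without Keras installed):

```python
def save_full_model(model, path='yolo_v1.h5'):
    # Run under Keras 1.2.2: persists architecture + weights in one HDF5 file.
    model.save(path)

def reload_full_model(path='yolo_v1.h5'):
    # Run under Keras >= 2.0: rebuilds the model from the file.
    # Expect deprecation warnings suggesting migration to the new API.
    from keras.models import load_model
    return load_model(path)
```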
This issue is due to a difference in how conv layer weights are stored in Keras 1 and Keras 2.
In Keras 2, conv weights are stored the same way regardless of the image dimension ordering ('th' or 'tf'), always channels-last: e.g. (3, 3, 3, 16) for a conv layer with a 3x3 kernel, 3 input channels, and 16 output channels.
In Keras 1, by contrast, the conv weight dimensions depend on the ordering: for the example above, (3, 3, 3, 16) for 'tf' but (16, 3, 3, 3) for 'th'.
A bit ugly, but the code below should work for Keras 2.
P.S. Remember to set keras.backend.set_image_dim_ordering('th') for Keras 2 as well.
import numpy as np

def load_weights(model, yolo_weight_file):
    # Skip the header ints, then read the raw float32 weights.
    tiny_data = np.fromfile(yolo_weight_file, np.float32)[4:]
    index = 0
    for layer in model.layers:
        weights = layer.get_weights()
        if len(weights) > 0:
            filter_shape, bias_shape = [w.shape for w in weights]
            if len(filter_shape) > 2:  # convolutional layers
                # The file stores conv weights as (out, in, h, w); Keras 2
                # expects (h, w, in, out), hence the reversed reshape + transpose.
                filter_shape_i = filter_shape[::-1]
                bias_weight = tiny_data[index:index + np.prod(bias_shape)].reshape(bias_shape)
                index += np.prod(bias_shape)
                filter_weight = tiny_data[index:index + np.prod(filter_shape_i)].reshape(filter_shape_i)
                filter_weight = np.transpose(filter_weight, (2, 3, 1, 0))
                index += np.prod(filter_shape)
                layer.set_weights([filter_weight, bias_weight])
            else:  # dense layers
                bias_weight = tiny_data[index:index + np.prod(bias_shape)].reshape(bias_shape)
                index += np.prod(bias_shape)
                filter_weight = tiny_data[index:index + np.prod(filter_shape)].reshape(filter_shape)
                index += np.prod(filter_shape)
                layer.set_weights([filter_weight, bias_weight])
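To see what the reversed reshape and transpose accomplish, here is a numpy-only sketch (the shapes are the 3x3x3x16 example from the explanation above; no Keras required):

```python
import numpy as np

# Keras 2 kernel shape: (h, w, in_channels, out_channels)
filter_shape = (3, 3, 3, 16)
raw = np.arange(np.prod(filter_shape), dtype=np.float32)

# The weight file stores filters as (out, in, h, w) = filter_shape reversed.
file_order = raw.reshape(filter_shape[::-1])          # (16, 3, 3, 3)

# Permute axes (out, in, h, w) -> (h, w, in, out) for Keras 2.
keras2_order = np.transpose(file_order, (2, 3, 1, 0))  # (3, 3, 3, 16)

# Each output filter survives the permutation intact.
assert np.array_equal(keras2_order[..., 0], file_order[0].transpose(1, 2, 0))
```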
@marc-chan Thanks for your help!
Thanks for your solution
@marc-chan Hello, sorry to resurrect this old post, but your help would be very much appreciated :-) I've tried to use your alternative function for loading the weights, but the problems remain the same: no bounding boxes unless one lowers the threshold almost to zero (and even those are in the wrong position). Obviously I did not forget to use
keras.backend.set_image_dim_ordering('th')
I'm using Keras version 2.2.4.
Thank you!
Now this is a true expert.
@Alteregoxxx Hi, I had the same problem. Did you find a solution?
Did you find a solution?
Why does this method not work for me? I added it, but it has no effect.
Hi all, I have the same problem. Is there a solution for Keras 2.2.4?
With any Keras version newer than 2.0, the new API seems to produce an unconnected network. No error of any kind is raised; the only symptoms are that model.summary() shows no "Connected to" entries and the network does not find any vehicles.
Downgrading to version 1.2.2 resolves the issue.
Thanks for the great work creating this. I would fix the issue myself, but I am not yet familiar enough with Keras to know how.