Closed ZhuweiQin closed 6 years ago
It's caused by a change in the Keras API. This code was written at the time of Keras 1.x, when convolutional layers were named convolution2d_xxx; they are now named conv2d_xxx.
At some point in KerasDeconv.py there is a check on the layer name which does not recognize conv2d. So the simple fix is to change the check to something like if "conv2d" in layer_name instead of if "convolution2d" in layer_name.
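A minimal sketch of that check, covering both naming schemes at once (the function name and exact dispatch are assumptions; the real check lives in KerasDeconv.py's backward pass):

```python
# Sketch of a version-agnostic layer-name check.
# Keras 1.x named layers "convolution2d_1", "maxpooling2d_1", ...;
# Keras 2.x names them "conv2d_1", "max_pooling2d_1".
# A substring test covers both naming schemes.
def classify_layer(layer_name):
    """Map a Keras layer name to the coarse type the deconv pass handles."""
    if "conv" in layer_name:    # matches convolution2d_* and conv2d_*
        return "conv"
    if "pool" in layer_name:    # matches maxpooling2d_* and max_pooling2d_*
        return "maxpool"
    raise ValueError(
        "Invalid layer name: %s \n Can only handle maxpool and conv" % layer_name)
```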
That solved the problem. I replaced all occurrences of convolution2d with conv2d and maxpooling2d with max_pooling2d. Thanks a lot! Still, there were some other issues caused by the API version mismatch, so I finally installed Keras 1.2 and was able to get the results.
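The bulk rename described above can also be done programmatically rather than by hand; a hedged sketch (the function name is invented, and the mapping covers only the two layer types mentioned in this thread):

```python
# Sketch: translate Keras 2.x layer names back to the Keras 1.x names
# this codebase was written against. Only the two renames from the
# thread are covered; other layer names pass through unchanged.
KERAS2_TO_KERAS1 = {
    "conv2d": "convolution2d",
    "max_pooling2d": "maxpooling2d",
}

def to_keras1_name(name):
    """Return the Keras 1.x-style name for a Keras 2.x layer name."""
    for new, old in KERAS2_TO_KERAS1.items():
        if name.startswith(new + "_"):
            return old + name[len(new):]   # keep the numeric suffix, e.g. "_13"
    return name
```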
Hi, thanks for your good work. I am trying to run VGG_deconv.py following the README. When loading the weights, I encountered this issue:
File "VGG_deconv.py", line 173, in <module>
model = load_model('./Data/vgg16_weights.h5')
File "VGG_deconv.py", line 86, in load_model
model = VGG_16(weights_path)
File "VGG_deconv.py", line 72, in VGG_16
model.load_weights(weights_path)
File "/home/zhuwei/anaconda3/envs/theano2/lib/python2.7/site-packages/keras/models.py", line 719, in load_weights
topology.load_weights_from_hdf5_group(f, layers)
File "/home/zhuwei/anaconda3/envs/theano2/lib/python2.7/site-packages/keras/engine/topology.py", line 3056, in load_weights_from_hdf5_group
layer_names = [n.decode('utf8') for n in f.attrs['layer_names']]
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "/home/zhuwei/anaconda3/envs/theano2/lib/python2.7/site-packages/h5py/_hl/attrs.py", line 60, in __getitem__
attr = h5a.open(self._id, self._e(name))
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5a.pyx", line 77, in h5py.h5a.open
KeyError: "Can't open attribute (can't locate attribute: 'layer_names')"
So I replaced vgg16_weights.h5 with vgg16_weights_th_dim_ordering_th_kernels.h5 from https://github.com/fchollet/deep-learning-models/releases, which eliminated the error, and the forward pass now runs. But then I encountered another issue:
Traceback (most recent call last):
File "VGG_deconv.py", line 184, in <module>
plot_deconv(img_index, data, Dec, target_layer, feat_map)
File "/home/zhuwei/visualization/DeepLearningImplementations/DeconvNet/utils.py", line 404, in plot_deconv
X_deconv = Dec.get_deconv(data[img_index], target_layer, feat_map=feat_map)
File "/home/zhuwei/visualization/DeepLearningImplementations/DeconvNet/KerasDeconv.py", line 187, in get_deconv
X_out = self._backward_pass(X, target_layer, d_switch, feat_map)
File "/home/zhuwei/visualization/DeepLearningImplementations/DeconvNet/KerasDeconv.py", line 135, in _backward_pass
"Invalid layer name: %s \n Can only handle maxpool and conv" % target_layer)
ValueError: Invalid layer name: conv2d_13
Can only handle maxpool and conv
The layer names changed because they are defined that way in the new weight file. Can you give me a hint about what's going on?