When inspecting the code, I found a copy-paste error.
The implementation for pruning dense layers has the L1 norm
avg_neuron_w.append(np.average(np.abs(new_layer_param[0][:, i])))
and the L2 norm
avg_neuron_w.append(np.linalg.norm(new_layer_param[0][:, i]))
implemented correctly.
However, the implementation for conv layers currently reads as follows:
L1 norm:
avg_filter_w.append(np.average(np.abs(filters[0][:, :, :, i])))
L2 norm:
avg_filter_w.append(np.average(np.abs(filters[0][:, :, :, i])))
This looks like a copy-paste error: the L2 branch repeats the L1 code. Following the dense layer implementation, the L2 norm for conv layers should be
avg_filter_w.append(np.linalg.norm(filters[0][:, :, :, i]))
I have opened a pull request with a fix.
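To illustrate that the two expressions really compute different scores, here is a minimal sketch. The filter tensor shape (height, width, in_channels, out_channels) and the random values are assumptions for demonstration only; the point is that the mean absolute weight (the L1-style score) and np.linalg.norm (the Euclidean norm of the flattened filter) generally disagree, so the duplicated line changes pruning behavior.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical conv kernel: (height, width, in_channels, out_channels)
filters = rng.standard_normal((3, 3, 8, 4))
i = 0

# Current (duplicated) code path: mean absolute weight of filter i
l1_score = np.average(np.abs(filters[:, :, :, i]))

# Intended L2 path, mirroring the dense layer code: Euclidean norm
# of the flattened filter (np.linalg.norm with ord=None ravels the array)
l2_score = np.linalg.norm(filters[:, :, :, i])

# Reference computations to make the difference explicit
flat = filters[:, :, :, i].ravel()
assert np.isclose(l1_score, np.sum(np.abs(flat)) / flat.size)
assert np.isclose(l2_score, np.sqrt(np.sum(flat ** 2)))
assert not np.isclose(l1_score, l2_score)
```

With the duplicated line, both "norms" rank filters identically, so selecting the L2 criterion silently prunes by mean absolute weight instead.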