The original repository has a redundancy in the weights: both the weights and the compatibility matrix apply a linear transformation to the output of the filter. This is not the correct formulation of the fully connected CRF as defined by Krähenbühl in his paper. This shouldn't cause any issue, but still... Since our formulations are different, even if you could load the weights into my model (by removing some of them), the results would be completely different because the models were trained differently.
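To see the redundancy concretely, here is a minimal NumPy sketch (the variable names are mine, not from either repo): applying per-class filter weights and then a compatibility transform is the composition of two linear maps, which a single matrix can already express.

import numpy as np

num_classes = 21
x = np.random.rand(num_classes)                # filter output for one pixel
W = np.diag(np.random.rand(num_classes))       # per-class filter weights
M = np.random.rand(num_classes, num_classes)   # compatibility matrix

two_step = M @ (W @ x)   # weight transform followed by compatibility transform
one_step = (M @ W) @ x   # the same map collapsed into a single matrix
assert np.allclose(two_step, one_step)

Since M @ W is itself just one num_classes x num_classes matrix, one of the two parameter sets adds no expressive power.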
Regards,
Miguel
Thanks for your answer, @MiguelMonteiro!
Hi Miguel,
Could you please give an example on how to use the Keras layer?
I tried replacing the original CRF layer of the original repo, defined on this line:
output = CrfRnnLayer(image_dims=(height, weight),
                     num_classes=21,
                     theta_alpha=160.,
                     theta_beta=3.,
                     theta_gamma=3.,
                     num_iterations=10,
                     name='crfrnn')([upscore, img_input])
with your CRF_RNN_Layer:
output = CRF_RNN_Layer(image_dims=(height, weight),
                       num_classes=21,
                       theta_alpha=160.,
                       theta_beta=3.,
                       theta_gamma=3.,
                       num_iterations=10,
                       name='crfrnn')([upscore, img_input])
but then I obtained an error:
ValueError: Tried to convert 'shape' to a tensor and failed. Error: Cannot convert a partially known TensorShape to a Tensor: (?, ?, ?, 21)
Thank you so much in advance for your help!
First, the way it is implemented, the batch dimension must be known (see issue 2). Second, that is a TensorFlow and/or Keras error, not related to the implementation, I think.
Hi Miguel. Thank you very much for your answer!
You were right, that was a syntax error. I replaced unaries.get_shape() with tf.shape(unaries) and it seems to work now.
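For anyone hitting the same error, here is a minimal TF 1.x-style sketch of the difference (unaries is just the tensor name from this thread): get_shape() returns the static, possibly partially known shape, while tf.shape() returns a concrete int32 tensor at run time.

import tensorflow as tf

unaries = tf.placeholder(tf.float32, shape=(None, 500, 500, 21))

static_shape = unaries.get_shape()   # TensorShape([None, 500, 500, 21]), partially known
dynamic_shape = tf.shape(unaries)    # int32 Tensor, fully known when the graph runs

# tf.reshape needs a concrete shape tensor; passing the partially known
# static shape raises "Cannot convert a partially known TensorShape to a
# Tensor", while the dynamic shape works:
flattened = tf.reshape(unaries, [dynamic_shape[0], -1])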
I still have a question, please. I observe that in the original repo, the kernels are normalized at each iteration:
spatial_out = spatial_out / spatial_norm_vals
bilateral_out = bilateral_out / bilateral_norm_vals
while in your code there is no such step. Could you please clarify a little why this is not necessary?
Thank you again!
This is because the authors of the original repository fail to realise that the permutohedral lattice already outputs a normalized filter; in fact, by design, it cannot output the unnormalized filter. Because of this, when you feed all ones into the lattice you get back all ones, so their normalization does nothing but waste compute. However, there is a bug with their filter, which I think they haven't fixed yet, that makes it output the filtered image multiplied by some constants (locally speaking), so their normalization might be mitigating this bug.
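A small NumPy sketch of the argument (the names are illustrative, not from the code): if the filter is normalized so that an all-ones input comes back as all ones, then dividing the output by the filter's response to ones divides by one everywhere.

import numpy as np

K = np.random.rand(5, 5)
K = K / K.sum(axis=1, keepdims=True)   # row-normalized filter, as the lattice outputs by design

signal = np.random.rand(5)
norm_vals = K @ np.ones(5)             # response to all ones: exactly ones
filtered = K @ signal

assert np.allclose(norm_vals, 1.0)
assert np.allclose(filtered / norm_vals, filtered)   # the division is a no-op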
Best,
Miguel
@MiguelMonteiro Awesome! Thanks a lot!
Hi Miguel,
Thanks a lot for the great work!
I would like to ask some questions regarding the differences from the original repo.
Could you please give more details on this?
Thank you in advance for your answers! Best regards.