Thanks for catching these differences; I remembered these but forgot about them in the latest converter migration. I added a review to the pull request.
In the meantime I have realized that this is not the right approach to fix the problem of transposing. The NNEF version of LRN is independent of the dimension order: it can perform the normalization over any (number of) dimensions, so there is no need to force the channel to be the second dimension. Instead, the converter needs to check which dimension is the channel dim and generate the `size` array accordingly (see the sketch below).
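To make that concrete, here is a minimal sketch of such a size computation. The helper name and signature are my own illustration, not actual converter code; it assumes TF's `depth_radius` half-window parameter:

```python
# Hypothetical helper, not the actual NNEF-Tools converter code.
def lrn_size_array(rank, channel_dim, depth_radius):
    # NNEF's LRN takes a per-dimension window size, so normalization can
    # target any dimension; every non-channel entry stays 1.
    size = [1] * rank
    size[channel_dim] = 2 * depth_radius + 1  # TF radius -> full window extent
    return size

assert lrn_size_array(4, 3, 2) == [1, 1, 1, 5]  # NHWC, channels last
assert lrn_size_array(4, 1, 2) == [1, 5, 1, 1]  # NCHW, channels second
```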
Furthermore, the NNEF -> TF direction has to be fixed as well, along with the TFLite versions for consistency.
It's easier for me to fix these myself than to explain the changes for you to make in the pull request. Can you please first just file the bugs and let me fix them? That is quicker when they are small things like these.
I have added a fix that should handle the `size` properly; can you check now?
I checked the new fix. Now the conversion result is:
```
version 1.0;

graph G(external1) -> (copy1)
{
    external1 = external<scalar>(shape = [3, 4, 5, 6]);
    local_response_normalization1 = local_response_normalization(external1, size = [1, 1, 1, 5], alpha = 1.0000000149011612, beta = 0.5, bias = 1.0);
    copy1 = copy(local_response_normalization1);
}
```
This now agrees with the tf documentation. Thank you!
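For anyone comparing the two definitions, here is a small numpy sketch of why the parameters line up. It rests on my reading of the two specs, so treat it as an assumption: TF takes a raw sum of squares over a `2 * depth_radius + 1` channel window, while NNEF averages the squared sum over the window, so a TF `alpha` corresponds to an NNEF `alpha` scaled by the window size:

```python
import numpy as np

def tf_style_lrn(x, depth_radius, bias, alpha, beta):
    # TF semantics: raw (truncated) sum of squares over the channel
    # window, channels on the last axis.
    out = np.empty_like(x)
    c = x.shape[-1]
    for i in range(c):
        lo, hi = max(0, i - depth_radius), min(c, i + depth_radius + 1)
        s = np.sum(np.square(x[..., lo:hi]), axis=-1)
        out[..., i] = x[..., i] / (bias + alpha * s) ** beta
    return out

def nnef_style_lrn(x, size, bias, alpha, beta):
    # Assumed NNEF semantics: zero-padded sum of squares averaged over
    # the (constant) window volume; only the last entry of 'size' is > 1
    # here, so normalization runs along the channel dimension.
    n = size[-1]
    r = n // 2
    out = np.empty_like(x)
    c = x.shape[-1]
    for i in range(c):
        lo, hi = max(0, i - r), min(c, i + r + 1)
        s = np.sum(np.square(x[..., lo:hi]), axis=-1) / n
        out[..., i] = x[..., i] / (bias + alpha * s) ** beta
    return out

x = np.random.rand(3, 4, 5, 6).astype(np.float32)
a = tf_style_lrn(x, depth_radius=2, bias=1.0, alpha=0.2, beta=0.5)
b = nnef_style_lrn(x, size=[1, 1, 1, 5], bias=1.0, alpha=0.2 * 5, beta=0.5)
print(np.allclose(a, b, atol=1e-5))  # True under the stated assumptions
```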
Great! Also, if you turn on `--optimize`, the `copy` should disappear from the end.
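If I read that right, the optimized output would look something like this (my guess at the exact result; the optimizer may name the graph output differently):

```
version 1.0;

graph G(external1) -> (local_response_normalization1)
{
    external1 = external<scalar>(shape = [3, 4, 5, 6]);
    local_response_normalization1 = local_response_normalization(external1, size = [1, 1, 1, 5], alpha = 1.0000000149011612, beta = 0.5, bias = 1.0);
}
```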
I am trying to save and convert a simple Python network:
using commands:
the conversion result is:
But the documentation of the `tf.nn.local_response_normalization` function has some differences from the NNEF documentation of the `local_response_normalization` operation. According to these differences, the conversion result should be:
I have created a pull request that fixes these differences, but I am not sure about the correctness of the tensor transposing.
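For reference, the transposing in question looks roughly like this (an illustrative numpy sketch, not the actual pull request code); the review above explains why this detour is unnecessary:

```python
import numpy as np

def lrn_via_transpose(x_nhwc, lrn_channels_second):
    # Force channels onto dim 1 (NCHW), normalize there, then undo.
    # Unnecessary per the review: NNEF's LRN can target any dimension
    # directly through the 'size' array.
    x_nchw = np.transpose(x_nhwc, (0, 3, 1, 2))  # NHWC -> NCHW
    y_nchw = lrn_channels_second(x_nchw)         # LRN with size on dim 1
    return np.transpose(y_nchw, (0, 2, 3, 1))    # NCHW -> NHWC
```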