KhronosGroup / NNEF-Tools

The NNEF Tools repository contains tools to generate and consume NNEF documents.
https://www.khronos.org/nnef

Conversion results for tf.nn.batch_normalization and tf.clip_by_value differ from nnef_tools documentation #141

Closed: dvorotnev closed this issue 3 years ago

dvorotnev commented 3 years ago

I am trying to save a neural network from TF and convert it to NNEF using the nnef-tools converter, but the tf.nn.batch_normalization and tf.clip_by_value layers are not converted according to operation_mapping.md.

I am using these commands to convert the network:

python ./test.py
python -m nnef_tools.convert --input-format=tf --output-format=nnef --input-model=./model.pb --output-model=model.nnef

I also tried adding the --optimize flag, but the result didn't change. Below are two simple examples that reproduce these conversion bugs:

tf.nn.batch_normalization

A simple Python example:
import tensorflow as tf
import nnef_tools.io.tf.graphdef as graphdef

def testnet_batch_normalization():
    x = tf.placeholder(tf.float32, shape=[6, 32, 32, 3], name='input')
    with tf.variable_scope('batch_normalization'):
        mean = tf.get_variable('mean', shape=[3], initializer=tf.constant_initializer(4))
        variance = tf.get_variable('variance', shape=[3], initializer=tf.constant_initializer(8))
        offset = tf.get_variable('offset', shape=[3], initializer=tf.constant_initializer(15))
        scale = tf.get_variable('scale', shape=[3], initializer=tf.constant_initializer(16))
        variance_epsilon = 0.1
        return tf.nn.batch_normalization(x, mean, variance, offset, scale, variance_epsilon)

tf.reset_default_graph()
with tf.Session() as sess:
    result = testnet_batch_normalization()
    sess.run(tf.global_variables_initializer())
    graphdef.save_default_graph("model.pb", session=sess, outputs={result: "output"})
The conversion result:
version 1.0;

graph G(external1) -> (copy1)
{
    external1 = external(shape = [6, 32, 32, 3]);
    variable1 = variable(shape = [3], label = 'batch_normalization/batchnorm/mul');
    unsqueeze1 = unsqueeze(variable1, axes = [0, 1, 2]);
    mul1 = mul(external1, unsqueeze1);
    variable2 = variable(shape = [3], label = 'batch_normalization/batchnorm/sub');
    unsqueeze2 = unsqueeze(variable2, axes = [0, 1, 2]);
    add1 = add(mul1, unsqueeze2);
    copy1 = copy(add1);
}
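
For comparison, here is roughly what a 1-1 mapping per operation_mapping.md might look like, using NNEF's batch_normalization fragment. This is a sketch of the expected output only: the variable names, labels, and the [1, 1, 1, 3] parameter shapes (chosen so the per-channel parameters broadcast against the NHWC input) are illustrative, not actual converter output.

graph G(external1) -> (output1)
{
    external1 = external(shape = [6, 32, 32, 3]);
    mean = variable(shape = [1, 1, 1, 3], label = 'batch_normalization/mean');
    variance = variable(shape = [1, 1, 1, 3], label = 'batch_normalization/variance');
    offset = variable(shape = [1, 1, 1, 3], label = 'batch_normalization/offset');
    scale = variable(shape = [1, 1, 1, 3], label = 'batch_normalization/scale');
    output1 = batch_normalization(external1, mean, variance, offset, scale, epsilon = 0.1);
}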
tf.clip_by_value

A simple Python example:
import tensorflow as tf
import nnef_tools.io.tf.graphdef as graphdef

def testnet_clamp():
    x = tf.placeholder(tf.float32, shape=[6, 32, 32, 3], name='input')
    with tf.variable_scope('clamp'):
        min = tf.get_variable('min', shape=[6, 32, 32, 3], initializer=tf.constant_initializer(0.0))
        max = tf.get_variable('max', shape=[6, 32, 32, 3], initializer=tf.constant_initializer(0.5))
        return tf.clip_by_value(x, min, max)

tf.reset_default_graph()
with tf.Session() as sess:
    result = testnet_clamp()
    sess.run(tf.global_variables_initializer())
    graphdef.save_default_graph("model.pb", session=sess, outputs={result: "output"})
The conversion result:
version 1.0;

graph G(external1) -> (copy1)
{
    external1 = external(shape = [6, 32, 32, 3]);
    variable1 = variable(shape = [6, 32, 32, 3], label = 'clamp/min');
    variable2 = variable(shape = [6, 32, 32, 3], label = 'clamp/max');
    min1 = min(external1, variable2);
    max1 = max(min1, variable1);
    copy1 = copy(max1);
}
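
For comparison, a 1-1 mapping per operation_mapping.md would presumably use NNEF's clamp operation, roughly as below (a sketch; names are illustrative, not actual converter output):

graph G(external1) -> (output1)
{
    external1 = external(shape = [6, 32, 32, 3]);
    variable1 = variable(shape = [6, 32, 32, 3], label = 'clamp/min');
    variable2 = variable(shape = [6, 32, 32, 3], label = 'clamp/max');
    output1 = clamp(external1, variable1, variable2);
}

Note that NNEF defines clamp(x, a, b) as max(min(x, b), a), which is exactly the min/max pair the converter emits above.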
gyenesvi commented 3 years ago

These seem to be correct conversions to me. I guess you expected those ops to be mapped 1-1, but the problem is that by the time they are saved to the TF protobuf, they have already been split into parts, so the converter converts the parts. Furthermore, in the case of batch_norm, constant folding merges the constants.
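
Concretely, TF lowers tf.nn.batch_normalization to y = x * mul + sub at graph-construction time, which is why the NNEF output above contains only a mul and an add over the folded constants labeled 'batchnorm/mul' and 'batchnorm/sub'. A minimal numpy sketch of that folding, using the constants from the reproducer (an illustration, not code from the tools):

import numpy as np

# Constants from the reproducer above.
mean, variance, offset, scale, eps = 4.0, 8.0, 15.0, 16.0, 0.1

# TF folds the four batch-norm parameters into two per-channel constants:
mul = scale / np.sqrt(variance + eps)   # saved as 'batch_normalization/batchnorm/mul'
sub = offset - mean * mul               # saved as 'batch_normalization/batchnorm/sub'

x = np.random.rand(6, 32, 32, 3).astype(np.float32)
folded = x * mul + sub                                             # what the converter sees
reference = offset + scale * (x - mean) / np.sqrt(variance + eps)  # textbook batch norm
assert np.allclose(folded, reference, atol=1e-5)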

dvorotnev commented 3 years ago

Okay, thanks for the answer!