KhronosGroup / NNEF-Tools

The NNEF-Tools repository contains tools to generate and consume NNEF documents.
https://www.khronos.org/nnef

defining custom fragment, shape inference #81

Closed: syedaffanhamdani closed this issue 5 years ago

syedaffanhamdani commented 5 years ago

Do we need to define a shape inference function as well if we define a custom fragment in the graph.nnef file? I am getting the following error when trying to define a custom quantization fragment.

code fragment


version 1.0;
extension KHR_enable_fragment_definitions;
extension KHR_enable_operator_expressions;

fragment custom_quantize(x: tensor<scalar>, min: tensor<scalar>, max: tensor<scalar>, bits: integer) -> (y: tensor<scalar>)
{
    r = scalar(2 ^ bits - 1);
    z = clamp(x, min, max);
    q = round((z - min) / (max - min) * r);
    y = q / r * (max - min) + min;
}

graph network(input_tensor) -> (output0, output1)
{ // and further graph definition

stack trace

Traceback (most recent call last):
  File "/home/ubuntu/NNEF-Tools/nnef_tools/convert.py", line 554, in <module>
    convert_using_argv(sys.argv)
  File "/home/ubuntu/NNEF-Tools/nnef_tools/convert.py", line 542, in convert_using_argv
    conversion_info=args.conversion_info)
  File "/home/ubuntu/NNEF-Tools/nnef_tools/convert.py", line 394, in convert
    custom_converters=custom_converters))
  File "/home/ubuntu/NNEF-Tools/nnef_tools/convert.py", line 310, in convert_using_premade_objects
    source_graph = reader(in_filename)
  File "/home/ubuntu/NNEF-Tools/nnef_tools/io/nnef/nnef_io.py", line 432, in __call__
    return read(filename, parser_configs=self._parser_configs)
  File "/home/ubuntu/NNEF-Tools/nnef_tools/io/nnef/nnef_io.py", line 77, in read
    return _read(parser_graph=parser_config.infer_shapes(parser_config.load_graph(path_to_load)),
  File "/home/ubuntu/NNEF-Tools/nnef_tools/io/nnef/parser_config.py", line 54, in infer_shapes
    nnef.infer_shapes(graph=graph, custom_shapes=self._shapes)
  File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/nnef-0.2-py3.6-linux-x86_64.egg/nnef/shapes.py", line 483, in infer_shapes
    raise nnef.Error("shape inference function is not defined for operation '{}'".format(op.name))
_nnef.Error: shape inference function is not defined for operation 'custom_quantize'

Many thanks in advance!

tdanyluk commented 5 years ago

I'm afraid that in this version, if you actually use the fragment in the NNEF file, then yes, you need to define a shape inference function.

The conversion of custom operations is not yet documented, but let me write you an example to get this working. I'm not sure what you would like to achieve, but here is the example:

I assume you have this custom_quantize.nnef file:

version 1.0;
extension KHR_enable_fragment_definitions;
extension KHR_enable_operator_expressions;

fragment custom_quantize(x: tensor<scalar>, min: tensor<scalar>, max: tensor<scalar>, bits: integer) -> (y: tensor<scalar>)
{
    r = scalar(2 ^ bits - 1);
    z = clamp(x, min, max);
    q = round((z - min) / (max - min) * r);
    y = q / r * (max - min) + min;
}

graph network(i) -> (o)
{
  i = external(shape=[1, 2, 3]);
  o = custom_quantize(i, min=0.0, max=1.0, bits=3);
}
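
By the way, the fragment itself is just fake-quantization, so here is a rough numpy equivalent in case you want to check the math (only an illustration, not something the converter uses):

import numpy as np

# Fake-quantization to 2^bits - 1 levels, mirroring the fragment above.
def custom_quantize(x, min, max, bits):
    r = float(2 ** bits - 1)                    # number of quantization steps
    z = np.clip(x, min, max)                    # clamp to [min, max]
    q = np.round((z - min) / (max - min) * r)   # nearest integer level in [0, r]
    return q / r * (max - min) + min            # map back to the original range

# e.g. bits=3 gives 7 steps: 0.4 -> round(2.8)/7 = 3/7 ~ 0.4286
print(custom_quantize(np.array([0.4]), 0.0, 1.0, 3))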

To convert this to, for example, TensorFlow Python code, you have to create a custom module (for example, custom_nnef_ops.py):

# No need to define the fragment here if it is already present in the NNEF file.
NNEF_OP_DEFINITIONS = ""

# We have to lower the operation if we don't want to write a custom converter:
# lowered fragments are expanded into primitive ops, as you can see in the output below.
NNEF_LOWERED_OPS = ["custom_quantize"]

# Shape propagator: the output of custom_quantize has the same shape as its input.
def custom_quantize_prop(x, min, max, bits):
    return x

NNEF_SHAPE_PROPAGATORS = {
    "custom_quantize": custom_quantize_prop,
}

And you have to use this command to convert:

 ./nnef_tools/convert.py \
    --input-format nnef \
    --output-format tensorflow-py \
    --input-model custom_quantize.nnef \
    --custom-converters custom_nnef_ops

This is the output of the converter:

from __future__ import division, print_function, absolute_import
from collections import OrderedDict
import tensorflow as tf

def network():
    t_Sub = tf.subtract(x=1.0, y=0.0)
    t_Sub_1 = tf.subtract(x=1.0, y=0.0)
    t_i = tf.placeholder(shape=[1, 2, 3], dtype=tf.float32, name='i')
    t_clip_by_value = tf.clip_by_value(t=t_i, clip_value_min=0.0, clip_value_max=1.0)
    t_Sub_2 = tf.subtract(x=t_clip_by_value, y=0.0)
    t_truediv = tf.divide(x=t_Sub_2, y=t_Sub_1)
    t_Mul = tf.multiply(x=t_truediv, y=7.0)
    t_Round = tf.round(x=t_Mul)
    t_truediv_1 = tf.divide(x=t_Round, y=7.0)
    t_Mul_1 = tf.multiply(x=t_truediv_1, y=t_Sub)
    t_Add = tf.add(x=t_Mul_1, y=0.0)

    return OrderedDict([
        ("o", t_Add)
    ])
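
Note that because custom_quantize is lowered, the generated code contains only primitive TensorFlow ops. In case it helps, here is a minimal sketch of how you could run the generated function (assuming TensorFlow 1.x, with the code above saved as a module):

import numpy as np
import tensorflow as tf

outputs = network()  # builds the graph; returns an OrderedDict of output tensors
with tf.Session() as sess:
    # feed the placeholder named 'i' and fetch the output 'o'
    x = np.random.rand(1, 2, 3).astype(np.float32)
    o = sess.run(outputs["o"], feed_dict={"i:0": x})
    print(o.shape)  # (1, 2, 3)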
syedaffanhamdani commented 5 years ago

Many thanks, but I only want to quantize the weights using different algorithms (not with a static scale). My understanding was that NNEF quantizes the binary weights stored in the .dat files and then just exports to TensorFlow. Having the quantization algorithm in the exported TensorFlow model would make it clumsier.

I wish to read the min and max of each tensor and choose the scale accordingly.

syedaffanhamdani commented 5 years ago

Is there a way I can just quantize the weights stored in the .dat files using the NNEF tools? Many thanks in advance.
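
Something like this is what I have in mind; a rough sketch, assuming the nnef Python package's read_tensor/write_tensor helpers for the binary tensor format (the file path is hypothetical, and the exact signatures may differ between versions):

import numpy as np
import nnef

# Hypothetical example: fake-quantize one weight tensor in place,
# choosing the scale from the tensor's own min/max.
with open('model.nnef/variable_1.dat', 'rb') as f:   # hypothetical path
    w = nnef.read_tensor(f)

lo, hi, bits = w.min(), w.max(), 8
r = float(2 ** bits - 1)
q = np.round((np.clip(w, lo, hi) - lo) / (hi - lo) * r)
w = (q / r * (hi - lo) + lo).astype(np.float32)

with open('model.nnef/variable_1.dat', 'wb') as f:
    nnef.write_tensor(f, w)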

gyenesvi commented 5 years ago

NNEF itself is a storage format, not a toolset for manipulating models. The tools only convert models to store them in NNEF format; they do not manipulate models or do training for you. If you want quantized networks, you first have to train them that way, or apply post-training quantization, and then convert them to NNEF format.

Possibly in the future, we will introduce tools for manipulating models, such as post-training quantization, but that work is not yet done.

It is not clear to us what exactly you want to achieve; can you elaborate on the process you have in mind?

gyenesvi commented 5 years ago

Any notes on this? Can it be closed?