tensorflow / tfjs

A WebGL accelerated JavaScript library for training and deploying ML models.
https://js.tensorflow.org
Apache License 2.0

Incorrect handling of quantized efficientdet models #4408

Closed — rafallukasik123 closed this issue 3 years ago

rafallukasik123 commented 3 years ago

System information

Describe the current behavior

At the beginning I downloaded EfficientDet from https://tfhub.dev/tensorflow/efficientdet/d1/1?tf-hub-format=compressed. After that I converted it with the following command: `tensorflowjs_converter --input_format=tf_saved_model --signature_name=serving_default --quantize_float16 --output_format=tfjs_graph_model ./efficiendet_from_tfhub ./webformat`. Next I ran prediction on an image in my script and received the following results:

[screenshot: prediction results]

The result contains three copies of the same bounding box. Depending on the prediction results, there may be even more such duplicates.

Additionally, when I set the flag tf.ENV.set('WEBGL_CHECK_NUMERICAL_PROBLEMS', true), I get the error: "Error: The value 65504 cannot be represented on this device." (65504 is the largest finite float16 value, so this points at a float16 overflow.)

Importantly, my script sets the following tf flags: tf.ENV.set('WEBGL_RENDER_FLOAT32_CAPABLE', false); tf.ENV.set('WEBGL_RENDER_FLOAT32_ENABLED', false);
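For context, here is a minimal sketch of how the flags and the model fit together in a script like mine; the model path, input shape, and dtype are placeholder assumptions, not copied from my actual code:

```js
// Minimal repro sketch (paths/shapes are assumptions).
// The flags must be set before the WebGL backend initializes.
import * as tf from '@tensorflow/tfjs';

tf.ENV.set('WEBGL_RENDER_FLOAT32_CAPABLE', false);
tf.ENV.set('WEBGL_RENDER_FLOAT32_ENABLED', false);

async function run() {
  // './webformat' is the output directory from the converter command above.
  const model = await tf.loadGraphModel('./webformat/model.json');
  // EfficientDet-D1 expects a batch of 640x640 uint8 images (assumed here).
  const input = tf.zeros([1, 640, 640, 3], 'int32');
  // Graph models with control-flow ops need executeAsync() rather than predict().
  const result = await model.executeAsync(input);
  console.log(result);
}
run();
```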

Describe the expected behavior

If I set the tf flags as follows, so that float32 is used, it works fine: tf.ENV.set('WEBGL_RENDER_FLOAT32_CAPABLE', true); tf.ENV.set('WEBGL_RENDER_FLOAT32_ENABLED', true);

[screenshot: correct prediction results]

It looks like the quantization is handled incorrectly.

I have to set the tf flags the first way because I am trying to run the EfficientDet model on my iPad mini 5, and WebKit there doesn't support 32-bit float textures.

rthadur commented 3 years ago

It depends on the device you are running on. Please see here for more info on how to use the flags: https://www.tensorflow.org/js/guide/platform_environment#precision
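A quick way to see what the environment actually decided on a given device is to read the flags back (a sketch; the flag names are the ones from the linked guide):

```js
// Inspect the precision-related flags the environment resolved for this device.
console.log('float32 capable:', tf.env().getBool('WEBGL_RENDER_FLOAT32_CAPABLE'));
console.log('float32 enabled:', tf.env().getBool('WEBGL_RENDER_FLOAT32_ENABLED'));
console.log('forced f16 textures:', tf.env().getBool('WEBGL_FORCE_F16_TEXTURES'));
```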

rafallukasik123 commented 3 years ago

I understand, but my device doesn't support float32, so my WEBGL_RENDER_FLOAT32_CAPABLE and WEBGL_RENDER_FLOAT32_ENABLED flags are set to false. I wanted to run the model anyway, so I quantized it with the --quantize_float16 option during conversion. If that doesn't work, is it not a bug? I think quantization should solve this problem. Or is there some other way to run a model on a device that doesn't support float32?

pyu10055 commented 3 years ago

@rafallukasik123 I am curious what type of device you have that does not support float32? On a side note, for a device that does not support float32 textures, quantizing the model will not help address the precision problem: quantization only shrinks how the weights are stored, while computation still runs at the texture precision of the device. Since the model was not trained with float16 weights, you will still see accuracy loss. One possible workaround is to use the WebAssembly backend instead.
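A minimal sketch of that workaround (this is the standard WASM backend setup; the model path is assumed from the converter command above):

```js
// Run the model on the WASM backend, which computes in full float32 on the CPU
// and does not depend on WebGL texture support.
import * as tf from '@tensorflow/tfjs';
import '@tensorflow/tfjs-backend-wasm'; // registers the 'wasm' backend

async function main() {
  await tf.setBackend('wasm');
  await tf.ready();
  const model = await tf.loadGraphModel('./webformat/model.json');
  // ...run model.executeAsync(...) as before
}
main();
```

Depending on your bundler, you may also need setWasmPaths() from @tensorflow/tfjs-backend-wasm to point at the .wasm binary.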

rafallukasik123 commented 3 years ago

As I wrote above, I am trying to run the model on an Apple iPad mini 5. Unfortunately, WebKit on mobile devices doesn't support float32 textures.

rafallukasik123 commented 3 years ago

@pyu10055 I trained the model with the parameter use_bfloat16: true (saved_model folder). Unfortunately, it still didn't help.

google-ml-butler[bot] commented 3 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you.

google-ml-butler[bot] commented 3 years ago

Closing as stale. Please @mention us if this needs more attention.
