annxingyuan opened this issue 4 years ago
@dsmilkov Reading over https://github.com/tensorflow/tensorflow/blob/r2.0/tensorflow/python/ops/image_ops_impl.py#L1477-L1519, it looks like the TensorFlow op per_image_standardization properly normalizes each image by subtracting its mean and dividing by its (adjusted) standard deviation, whereas in our case we just want to rescale the input to [-1, 1].
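To make the difference concrete, here is a minimal sketch of the two schemes on a plain array of pixel values (deliberately not using tfjs; the function names are made up for illustration). Per-image standardization depends on the image's own statistics, while the [-1, 1] rescale is a fixed elementwise map:

```javascript
// Per-image standardization: (x - mean) / adjustedStddev, where
// adjustedStddev = max(stddev, 1 / sqrt(numElements)), matching the
// formula documented for tf.image.per_image_standardization.
function perImageStandardization(pixels) {
  const n = pixels.length;
  const mean = pixels.reduce((a, b) => a + b, 0) / n;
  const variance =
    pixels.reduce((a, b) => a + (b - mean) ** 2, 0) / n;
  const adjustedStddev = Math.max(Math.sqrt(variance), 1 / Math.sqrt(n));
  return pixels.map((x) => (x - mean) / adjustedStddev);
}

// Fixed rescale of uint8 pixels into [-1, 1]: no per-image statistics.
function rescaleToMinusOneOne(pixels) {
  return pixels.map((x) => (x / 255 - 0.5) * 2);
}
```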
It would still be great to combine these ops into a single kernel for models like BlazeFace. What do you think about adding this as a custom op on a per-model basis rather than to the library?
Good point. In that case, a custom op on a per-model basis SGTM (we can't add it to the official API until TF Python has it).
Perhaps this should be part of the future models utilities library.
Hi, @annxingyuan
Thank you for opening this issue for tracking purposes. Since this issue has been open for a long time, the code/debug information in it may no longer be relevant to the current state of the code base.
The TF.js team is constantly improving the framework by fixing bugs and adding new features. We suggest you try the latest TF.js version with the latest compatible hardware configuration, which could potentially resolve the issue. We can keep the issue open if it is still relevant. Please confirm whether we need to keep the issue open.
Thank you for your support and cooperation.
We often process images in the following way before feeding them to models:
`tf.mul(tf.sub(inputImage.toFloat().div(255), 0.5), 2)`
This could be achieved with a single kernel, following TensorFlow's example: https://www.tensorflow.org/api_docs/python/tf/image/per_image_standardization
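Since the chain above is a fixed affine transform, it collapses algebraically to one multiply-add per element: (x / 255 - 0.5) * 2 = x / 127.5 - 1, which is what a fused kernel would compute in a single pass. A sketch of the equivalence on plain arrays (not an actual tfjs kernel):

```javascript
// Unfused: three elementwise passes over the data, mirroring
// tf.mul(tf.sub(input.div(255), 0.5), 2).
function unfused(pixels) {
  return pixels
    .map((x) => x / 255)
    .map((x) => x - 0.5)
    .map((x) => x * 2);
}

// Fused: one pass computing the algebraically simplified form.
function fused(pixels) {
  return pixels.map((x) => x / 127.5 - 1);
}
```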