I tried to run a batch of data with "get_image_embedding", but I found that different batch sizes produce slightly different outputs.
For example,
when I run self.get_image_embedding(tf.zeros((1,127,127,3)))[0] and self.get_image_embedding(tf.zeros((30,127,127,3)))[0] in inference_wrapper.py, the results differ slightly.
I guess the batch_norm layers cause this problem, but I don't know how to fix it.
Is there any solution? Thanks!
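For context, this behavior is what you would expect if batch_norm is running in training mode: it then normalizes each sample with the statistics of the current batch, so a sample's output depends on its batch-mates, while in inference mode it uses the fixed moving averages and the output is batch-size-independent. A minimal NumPy sketch of the two modes (illustrative only, not the repo's actual code):

```python
import numpy as np

def batch_norm(x, moving_mean, moving_var, training, eps=1e-5):
    """Simplified batch norm (no scale/shift) over axis 0."""
    if training:
        # Training mode: statistics come from the current batch,
        # so the result for one sample depends on the whole batch.
        mean, var = x.mean(axis=0), x.var(axis=0)
    else:
        # Inference mode: fixed moving averages, batch-independent.
        mean, var = moving_mean, moving_var
    return (x - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
x = rng.normal(size=(30, 4))
mm, mv = np.zeros(4), np.ones(4)

# Inference mode: sample 0 gets the same output for batch size 1 and 30.
a = batch_norm(x[:1], mm, mv, training=False)[0]
b = batch_norm(x, mm, mv, training=False)[0]
print(np.allclose(a, b))   # identical

# Training mode: batch statistics differ, so the outputs differ.
c = batch_norm(x[:1], mm, mv, training=True)[0]
d = batch_norm(x, mm, mv, training=True)[0]
print(np.allclose(c, d))   # different
```

If that is the cause here, the usual fix is to build the inference graph with the batch-norm layers in inference mode (e.g. `is_training=False` for tf.contrib.slim's `batch_norm`, or `training=False` for `tf.layers.batch_normalization`), so the stored moving averages are used instead of per-batch statistics.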