This PR contains my implementation of issue #5 -- reading and writing SavedModel files.
There are limitations on the use of variables when writing SavedModel files, as described in #5. I added tests covering the new functionality.
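For reference, the round-trip being added looks roughly like the following, sketched here with stock TensorFlow 1.x SavedModel APIs rather than the new wrapper functions; the export path and tensor names are placeholders, and the graph is kept variable-free in line with the variable limitations noted above:

```python
import tensorflow as tf  # TensorFlow 1.x

export_dir = "/tmp/saved_model_demo"  # placeholder path

# Write: build a trivial, variable-free graph and export it as a SavedModel.
with tf.Graph().as_default():
    inp = tf.placeholder(tf.float32, shape=(None, 4), name="input_tensor")
    out = tf.identity(inp * 2.0, name="output_tensor")
    with tf.Session() as sess:
        builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
        builder.add_meta_graph_and_variables(
            sess, tags=[tf.saved_model.tag_constants.SERVING])
        builder.save()

# Read: load the SavedModel back into a fresh graph and run it.
with tf.Graph().as_default():
    with tf.Session() as sess:
        tf.saved_model.loader.load(
            sess, tags=[tf.saved_model.tag_constants.SERVING],
            export_dir=export_dir)
        result = sess.run("output_tensor:0",
                          feed_dict={"input_tensor:0": [[1., 2., 3., 4.]]})
```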
I modified the example `batch_size_example.py` to read and write SavedModel files. After the modification, I noticed that, although the example script correctly returns a single row in its result, it infers a shape of (64, 1001) for the `softmax_tensor` output, i.e.:

AFTER:
```
Input tensor is Tensor("input_tensor:0", shape=(?, 224, 224, 3), dtype=float16)
Softmax tensor is Tensor("softmax_tensor:0", shape=(64, 1001), dtype=float32)
```
This result appears to be technically correct -- the graph contains a `batch_normalization` meta-operator that hard-codes a batch size of 64. Some follow-on work will be needed to make this example propagate the modified batch size all the way through to the output.
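As a starting point for that follow-on work, a scan like the one below could locate where the batch size gets frozen; the export path is a placeholder, and this inspects the raw GraphDef rather than anything added in this PR:

```python
import tensorflow as tf  # TensorFlow 1.x

# Load the exported model and pull out its GraphDef.
with tf.Graph().as_default(), tf.Session() as sess:
    tf.saved_model.loader.load(
        sess, tags=[tf.saved_model.tag_constants.SERVING],
        export_dir="/tmp/resnet_saved_model")  # placeholder path
    graph_def = sess.graph.as_graph_def()

# Report every node with a shape attribute whose leading (batch) dimension
# is hard-coded to 64.
for node in graph_def.node:
    for attr_name, attr_value in node.attr.items():
        if attr_value.HasField("shape") and attr_value.shape.dim:
            if attr_value.shape.dim[0].size == 64:
                print("%s (%s): attr %s hard-codes batch size 64"
                      % (node.name, node.op, attr_name))
```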