[Closed] mixaz closed this issue 4 years ago
Currently, the tensorflow-serving-arm images contain no shell and use the exec form of the Docker ENTRYPOINT, so I believe that variable expansion is not possible here. The upstream version uses a full distro base image, whereas this project uses "distroless" base images. I'll see what I can do to include a shell (probably dash) in the next release. In the meantime, you can still use a custom model path with version 1.14.0 if you wish:
git clone https://github.com/tensorflow/serving
TESTDATA="$(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata"
docker run -t --rm --init -p 8501:8501 \
-v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" \
emacski/tensorflow-serving:1.14.0-linux_arm_armv7-a_neon_vfpv3 \
--model_name=half_plus_two \
--model_base_path=/models/half_plus_two
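The exec-form limitation described above can be sketched outside Docker (the variable names here are just illustrative): with the exec form, arguments are passed through verbatim and `${MODEL_NAME}` is never expanded, while running the command line through a shell expands it first.

```shell
# Exec form passes argv through untouched, so the literal text survives
# (this is what happens inside a shell-less distroless image):
literal='--model_name=${MODEL_NAME}'
printf '%s\n' "$literal"

# Shell form runs the command line through a shell, which expands the
# variable before the server ever sees it:
expanded=$(MODEL_NAME=half_plus_two sh -c 'printf -- "--model_name=%s" "$MODEL_NAME"')
printf '%s\n' "$expanded"
```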
I see the point. I think it will be enough to just mention that in the README.
Thank you for the great project and support.
Just FYI, in the 1.15.0 update (6170dc57db2ab11f3b762b61e2c43eb77d8d4ca1) I have added a POSIX shell to the docker images for variable expansion in the startup command, so they should now be functionally equivalent to the upstream.
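With a shell in the image, an upstream-style invocation that relies on variable expansion should become possible. A sketch, assuming the 1.15.0 tag follows the same naming pattern as the 1.14.0 one above, TESTDATA is set as in the earlier example, and the new entrypoint honors MODEL_NAME like the upstream image does:

```shell
# Assumed tag name and MODEL_NAME handling; adjust to the actual release
docker run -t --rm -p 8501:8501 \
  -v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" \
  -e MODEL_NAME=half_plus_two \
  emacski/tensorflow-serving:1.15.0-linux_arm_armv7-a_neon_vfpv3
```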
Running the TF Serving container with the test data from the https://github.com/tensorflow/serving README shows the following error:
And the last error repeats.
It happens because the container doesn't handle the MODEL_NAME environment variable. Mapping the model data to /models/model solves the issue. I think it would be right to support it, so that newbies like me could launch the TF Serving example using the sample from the mainline README. We love to cut-and-paste )