emacski / tensorflow-serving-arm

TensorFlow Serving ARM - A project for cross-compiling TensorFlow Serving targeting popular ARM cores
Apache License 2.0

TF Serving container doesn't handle MODEL_NAME environment variable #4

Closed mixaz closed 4 years ago

mixaz commented 4 years ago

Running TF Serving container with test data from https://github.com/tensorflow/serving README:

git clone https://github.com/tensorflow/serving
TESTDATA="$(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata"
docker run -t --rm -p 8501:8501 \
    -v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" \
    -e MODEL_NAME=half_plus_two \
    emacski/tensorflow-serving:1.14.0-linux_arm_armv7-a_neon_vfpv3

shows the following error:

2019-11-30 19:00:29.050035: I external/tf_serving/tensorflow_serving/model_servers/server.cc:82] Building single TensorFlow model file config:  model_name: model model_base_path: /models/model
2019-11-30 19:00:29.055819: I external/tf_serving/tensorflow_serving/model_servers/server_core.cc:462] Adding/updating models.
2019-11-30 19:00:29.059314: I external/tf_serving/tensorflow_serving/model_servers/server_core.cc:561]  (Re-)adding model: model
2019-11-30 19:00:29.066273: E external/tf_serving/tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:362] FileSystemStoragePathSource encountered a filesystem access error: Could not find base path /models/model for servable model
2019-11-30 19:00:30.066100: E external/tf_serving/tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:362] FileSystemStoragePathSource encountered a filesystem access error: Could not find base path /models/model for servable model

The last error then repeats indefinitely.

This happens because the container doesn't handle the MODEL_NAME environment variable. Mounting the model data at /models/model instead works around the issue.
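For reference, the workaround looks like this (same test data as above, mounted at the default base path /models/model so the default model name "model" matches and MODEL_NAME isn't needed; requires a running Docker daemon):

```shell
# Workaround: mount the model at the default base path /models/model,
# so no MODEL_NAME variable is required.
docker run -t --rm -p 8501:8501 \
    -v "$TESTDATA/saved_model_half_plus_two_cpu:/models/model" \
    emacski/tensorflow-serving:1.14.0-linux_arm_armv7-a_neon_vfpv3
```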

I think it would be right to support it, so that newbies like me could launch the TF Serving example using the sample from the mainline README. We love to cut-and-paste )

emacski commented 4 years ago

Currently, the tensorflow-serving-arm images contain no shell and use the exec form of the Docker ENTRYPOINT, so I believe that variable expansion is not possible here. The upstream version uses a full distro base image, where this project uses "distroless" base images. I'll see what I can do to include a shell (probably dash) in the next release. In the meantime, you can still use a custom model path with version 1.14.0 if you wish:
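The distinction can be demonstrated outside of Docker. This is a small shell sketch (not taken from the images themselves) of why exec-form arguments are passed verbatim while shell-form arguments get variable expansion:

```shell
# Simulate Docker's two ENTRYPOINT forms with respect to variable expansion.
MODEL_NAME=half_plus_two
export MODEL_NAME

# Exec form: no shell is involved, so the argument string is passed verbatim.
exec_form_arg='--model_name=${MODEL_NAME}'

# Shell form: a shell (here: sh) expands the variable before running the command.
shell_form_arg="$(sh -c 'echo "--model_name=${MODEL_NAME}"')"

echo "exec form:  $exec_form_arg"
echo "shell form: $shell_form_arg"
```

The exec form prints the literal string `--model_name=${MODEL_NAME}`, which is exactly what the model server would receive in a shell-less image; the shell form prints `--model_name=half_plus_two`.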

git clone https://github.com/tensorflow/serving
TESTDATA="$(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata"
docker run -t --rm --init -p 8501:8501 \
    -v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" \
    emacski/tensorflow-serving:1.14.0-linux_arm_armv7-a_neon_vfpv3 \
        --model_name=half_plus_two \
        --model_base_path=/models/half_plus_two

mixaz commented 4 years ago

I see the point. I think it would be enough to just mention that in the README.

Thank you for the great project and support.

emacski commented 4 years ago

Just FYI, in the 1.15.0 update (6170dc57db2ab11f3b762b61e2c43eb77d8d4ca1) I have added a POSIX shell to the docker images for variable expansion in the startup command, so they should now be functionally equivalent to the upstream images.
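For illustration, once a shell is present in the image, an entrypoint along these lines gives the upstream-style behavior (a hypothetical sketch, not the project's actual Dockerfile; the real entrypoint, flags, and defaults may differ):

```dockerfile
# Hypothetical sketch: the shell form of ENTRYPOINT runs via /bin/sh,
# so MODEL_NAME and MODEL_BASE_PATH are expanded at container start.
ENTRYPOINT tensorflow_model_server --port=8500 --rest_api_port=8501 \
    --model_name=${MODEL_NAME:-model} \
    --model_base_path=${MODEL_BASE_PATH:-/models}/${MODEL_NAME:-model}
```

With such an entrypoint, the original `docker run ... -e MODEL_NAME=half_plus_two` invocation from the upstream README works unmodified.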