I am trying to build TF Serving with the TFRA custom ops, following the instructions in the README:
```shell
## If enabling GPU ops
export SERVING_WITH_GPU=1
## Specify the branch of TFRA
export TFRA_BRANCH="master" # The `master` and `r0.6` branches are available.
## Create the workspace; modify the directory as you prefer.
export TFRA_SERVING_WORKSPACE=~/tfra_serving_workspace/
mkdir -p $TFRA_SERVING_WORKSPACE && cd $TFRA_SERVING_WORKSPACE
## Clone the release branches of serving and TFRA according to the `Compatibility Matrix`.
git clone -b r2.8 https://github.com/tensorflow/serving.git
git clone -b $TFRA_BRANCH https://github.com/tensorflow/recommenders-addons.git
## Run the config shell script
cd $TFRA_SERVING_WORKSPACE/recommenders-addons/tools
bash config_tfserving.sh $TFRA_BRANCH $TFRA_SERVING_WORKSPACE/serving $SERVING_WITH_GPU
## Build serving with the TFRA ops.
cd $TFRA_SERVING_WORKSPACE/serving
./tools/run_in_docker.sh bazel build tensorflow_serving/model_servers:tensorflow_model_server
```
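As a side note, `run_in_docker.sh` appears to accept a `-d` flag to override the devel image it runs in (that is my reading of the script's usage string, so treat the flag as an assumption); this is how I later point the build at a newer image:

```shell
# Override the default devel image with -d (assumed run_in_docker.sh flag);
# the image tag is the newer one I switched to.
./tools/run_in_docker.sh -d tensorflow/serving:2.15.1-devel \
  bazel build tensorflow_serving/model_servers:tensorflow_model_server
```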
I noticed this uses the `tfra/serving:2.8.3-devel` image, which ships a very outdated Bazel; TF Serving 2.15 requires Bazel 6.4.0. I switched the image to `tensorflow/serving:2.15.1-devel`, but now I get this error:
```
ERROR: Traceback (most recent call last):
	File "/home/alykhantejani/workspace/tfra_serving_workspace/serving/tensorflow_recommenders_addons/tensorflow_recommenders_addons.bzl", line 10, column 6, in <toplevel>
		"cuda_is_configured",
Error: file '@local_config_cuda//cuda:build_defs.bzl' does not contain symbol 'cuda_is_configured' (did you mean 'if_cuda_is_configured'?)
```
If I look at `serving/build_deps/toolchains/gpu/cuda/build_defs.bzl.tpl`, it does define `cuda_is_configured()`, so I'm not sure why this fails. Perhaps Bazel is resolving `@local_config_cuda//cuda:build_defs.bzl` from somewhere other than that template?
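Given the error's "did you mean" hint, one workaround I'm considering (an assumption on my part, not a confirmed fix) is renaming the loaded symbol in TFRA's `tensorflow_recommenders_addons.bzl` to `if_cuda_is_configured`. A sed sketch, demonstrated on a throwaway copy of the load line rather than the real file, since call sites would presumably need adapting too (`if_cuda_is_configured` wraps its argument in a `select()` rather than returning a bool):

```shell
# Demonstrate the rename on a temp file standing in for
# tensorflow_recommenders_addons/tensorflow_recommenders_addons.bzl.
tmp=$(mktemp)
printf 'load("@local_config_cuda//cuda:build_defs.bzl", "cuda_is_configured")\n' > "$tmp"
# Rewrite the old symbol name to the one the 2.15 toolchain exports.
sed -i 's/"cuda_is_configured"/"if_cuda_is_configured"/' "$tmp"
cat "$tmp"   # the load line now references if_cuda_is_configured
rm -f "$tmp"
```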