I'm currently working with an existing TensorFlow 2 (TF2) model and SparseOperationKit (SOK). This setup lets me use the sparse embedding layer from the SOK toolkit. However, I've found that I have to define the sok_model and the tf_model separately for training.
sok_model: This results in a collection of files named `EmbeddingVariable_keys.file` and `EmbeddingVariable_values.file`.
tf_model: This exports `saved_model.pb` and the `variables` files.
When I need to run a local test prediction, I have to load both models independently and then call inference_step as follows:
```python
# Load both models
sok_model.load_pretrained_embedding_table()
tf_model = tf.saved_model.load(save_dir)

# Inference step: sparse lookup through SOK, then the dense TF2 model.
# Note: experimental_relax_shapes is the deprecated alias of reduce_retracing,
# so only one of the two should be passed.
@tf.function(reduce_retracing=True)
def inference_step(inputs):
    return tf_model(sok_model(inputs, training=False), training=False)

# Call inference
res = inference_step(inputs)
```
Questions
Model Serving: My goal is to deploy this model on the Triton Inference Server, and I'm looking for guidance or examples that could streamline the process. What would be the ideal structure for this deployment? For instance, would treating it as an ensemble model that combines the SOK and TensorFlow 2 parts be beneficial? Which backend would be the best choice: HugeCTR, TensorFlow 2, or another option? Any resources or guides for this situation would be appreciated. HugeCTR appears to require exporting the model graph; how can I accomplish the same with a TensorFlow 2 model that uses the SOK toolkit?
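If the ensemble route turns out to be viable, the usual Triton layout would chain a sparse-lookup model into the dense TF2 SavedModel. Below is a minimal `config.pbtxt` sketch of such an ensemble; all model names (`sok_sparse`, `tf_dense`), tensor names, data types, and dims here are hypothetical placeholders, not values taken from this model:

```
name: "sok_tf_ensemble"
platform: "ensemble"
max_batch_size: 64
input [
  { name: "INPUT_IDS" data_type: TYPE_INT64 dims: [ -1 ] }
]
output [
  { name: "OUTPUT" data_type: TYPE_FP32 dims: [ 1 ] }
]
ensemble_scheduling {
  step [
    {
      # hypothetical model serving the SOK embedding lookup
      model_name: "sok_sparse"
      model_version: -1
      input_map { key: "INPUT_IDS" value: "INPUT_IDS" }
      output_map { key: "EMBEDDINGS" value: "embeddings" }
    },
    {
      # hypothetical TF2 SavedModel served by the tensorflow backend
      model_name: "tf_dense"
      model_version: -1
      input_map { key: "DENSE_INPUT" value: "embeddings" }
      output_map { key: "OUTPUT" value: "OUTPUT" }
    }
  ]
}
```

The `input_map`/`output_map` entries wire the first step's output tensor into the second step's input; each referenced model still needs its own model repository entry and config.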
Model Conversion to ONNX: According to the Hierarchical Parameter Server demo, HugeCTR can load both the sparse and dense models and convert them into a single ONNX model. How can I perform a similar conversion for this merlin-tensorflow model, which uses the SOK toolkit and exports the sparse and dense models separately?
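As a starting point for a manual conversion, the SOK dump files can be read back as raw binary arrays. This sketch assumes the keys are stored as raw int64 and the values as raw float32 (which matches HugeCTR's sparse model file layout, but should be verified against the SOK version in use); the recovered table could then be attached as an initializer to an ONNX Gather node, analogous to what HugeCTR's converter does for its sparse models:

```python
import numpy as np

def load_sok_table(keys_path, values_path, embedding_dim):
    """Load a SOK-dumped embedding table as (keys, values) arrays.

    Assumes keys are raw int64 and values are raw float32, stored as
    one embedding_dim-sized vector per key, in the same order as the keys.
    """
    keys = np.fromfile(keys_path, dtype=np.int64)
    values = np.fromfile(values_path, dtype=np.float32)
    # Sanity check: the value buffer must factor into (num_keys, embedding_dim)
    assert values.size == keys.size * embedding_dim, "embedding_dim mismatch"
    return keys, values.reshape(keys.size, embedding_dim)
```

With the table in NumPy form, the dense part could be converted separately (e.g. with tf2onnx from the SavedModel) and the two stitched together, but that stitching step is not something SOK provides out of the box as far as I can tell.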
Environment details
`nvcr.io/nvidia/merlin/merlin-tensorflow:23.02`