In the task of transforming text to vectors (as in this official GCP example), the idea is to pass text into a TF Hub layer. However, this example fails in two ways. Here is a Colab Notebook reproducing the issues described below.

First: TF Hub Incompatibilities
Only the old TF Hub models in the TF1 format succeed (e.g. v2 of the Universal Sentence Encoder). Any newer TF2-compatible model fails with tensor shape errors, graph errors, or errors about missing placeholder tensors.
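As a minimal sketch of the failure mode (the feature name `text` and the commented-out TF2 alternative are assumptions for illustration): a `preprocessing_fn` like the following runs with the TF1-format module, while swapping in a TF2 SavedModel produces the errors above.

```python
import tensorflow as tf
import tensorflow_hub as hub

def preprocessing_fn(inputs):
    # Works: old TF1-format module (Universal Sentence Encoder v2).
    embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder/2")
    # Fails from inside the Transform graph: a TF2 SavedModel such as
    # USE v4 raises the shape/graph/placeholder errors described above.
    # embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")
    text = tf.reshape(inputs["text"], [-1])  # hub.Module expects a 1-D string tensor
    return {"text_embedding": embed(text)}
```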
Second: TF2.x Incompatibilities
Even when using an old TF1-compatible Hub model, an error arises when using any kind of split function (e.g. tf.strings.split or tf.compat.v1.string_split): instead of producing a list of lists or a 2D tensor, the split creates a new entry in the output dictionary (i.e. a new row) for each element it produces. This is a huge problem: if we want to split a sentence into words, turn the words into vectors, and concatenate those vectors in some way, we cannot, because the split just creates new tensors that get added to the output dictionary and cannot be passed on to any further TF function (see the sketch below).
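For concreteness, here is a sketch of the pipeline we would like to write; the feature name `text` and the mean-over-tokens reduction are assumptions (USE v2 emits 512-dimensional vectors). The split is where things break down:

```python
import tensorflow as tf
import tensorflow_hub as hub

def preprocessing_fn(inputs):
    embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder/2")
    text = tf.reshape(inputs["text"], [-1])
    # tf.compat.v1.string_split returns a SparseTensor of tokens,
    # one row per input string.
    tokens = tf.compat.v1.string_split(text)
    # What we would like: embed every token, then average the token
    # vectors back into one vector per document ...
    token_vectors = embed(tokens.values)               # [num_tokens, 512]
    doc_vectors = tf.math.segment_mean(
        token_vectors, tokens.indices[:, 0])           # [batch_size, 512]
    # ... but in practice TFT emits each split element as its own output
    # row, so the result of the split cannot be consumed like this.
    return {"doc_embedding": doc_vectors}
```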
Both of these issues trace back to TF2.x incompatibilities. This raises the question: is TFT going to evolve to become more compatible with TF2.x? And if so, are there plans to let users put arbitrary TF functions in the preprocessing step without needing to worry about TF1 sessions, graphs, and functions?
Automatically closing due to lack of recent activity. Please update the issue when new information becomes available, and we will reopen the issue. Thanks!