ydennisy opened 2 years ago
This concept is discussed in https://github.com/tensorflow/recommenders/issues/388#issuecomment-941254103 and the comments following.
To make this work you must re-construct the index before each call to model.evaluate()
to update the candidate embeddings.
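To see why the rebuild matters, here is a minimal numpy sketch (not the TFRS API, just the underlying idea): a retrieval index is a snapshot of the candidate embeddings, so after training moves those embeddings, top-k results computed from the old snapshot no longer reflect the model being evaluated.

```python
import numpy as np

rng = np.random.default_rng(0)
num_candidates, dim = 100, 8

# Candidate embeddings as they were when the index was built.
stale_embeddings = rng.normal(size=(num_candidates, dim))

# "Training" moves the embeddings; the old index no longer reflects them.
fresh_embeddings = stale_embeddings + rng.normal(scale=0.5, size=(num_candidates, dim))

def top_k(query, candidates, k=10):
    """Brute-force retrieval: highest dot-product scores win."""
    scores = candidates @ query
    return np.argsort(-scores)[:k]

query = rng.normal(size=dim)
stale_hits = top_k(query, stale_embeddings)   # what a stale index would return
fresh_hits = top_k(query, fresh_embeddings)   # what the current model actually retrieves
```

Because the two rankings generally disagree, evaluation metrics computed against a stale index do not measure the current model, which is why the index must be rebuilt before every `model.evaluate()`.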
Sorry, just to confirm. Does this mean you can't use Scann when you are fitting your model for the first time? (Trying to speed up my implemention)
ScaNN is only used for efficient retrieval. It can only help with speeding up evaluation and has no impact on training step speed.
Thanks for the reply. Are there any suggestions on how to speed up the retrieval task (bar using GPUs)?
There's not much specific advice I can give if you are running on CPU only, other than the best practices around the tf.data.Dataset
API: use parallelism, move your lookups into the data pipeline, and use the TensorBoard Profiler to identify bottlenecks.
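A minimal sketch of those best practices, assuming the lookup is a `StringLookup` layer with a hypothetical three-item vocabulary: the lookup runs inside the `tf.data` pipeline with `AUTOTUNE` parallelism, and `prefetch` overlaps preprocessing with model execution.

```python
import tensorflow as tf

# Hypothetical vocabulary lookup; in a real pipeline this would be your
# StringLookup / embedding preprocessing.
table = tf.keras.layers.StringLookup(vocabulary=["a", "b", "c"])

ds = tf.data.Dataset.from_tensor_slices(["a", "b", "c", "a"])

ds = (
    ds
    # Lookups happen in the input pipeline, parallelised across elements.
    .map(lambda s: table(s), num_parallel_calls=tf.data.AUTOTUNE)
    .batch(2)
    # Overlap preprocessing with the training/evaluation step.
    .prefetch(tf.data.AUTOTUNE)
)
```

With the lookup in the pipeline, the CPU preprocessing runs ahead of the model step instead of blocking it; the TensorBoard Profiler's input-pipeline analysis will show whether the step is still input-bound.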
@ydennisy to add to Patrick's answer: Keras caches compiled TensorFlow functions, so remember to call model.compile()
before every evaluation, as per https://github.com/tensorflow/recommenders/issues/388#issuecomment-941254103.
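A minimal plain-Keras illustration of the caching point (the tiny Dense model is only a stand-in for a TFRS retrieval model; the ordering of the calls is what matters):

```python
import numpy as np
import tensorflow as tf

# Toy stand-in model: the point is the compile/evaluate ordering,
# not the architecture.
model = tf.keras.Sequential([tf.keras.Input(shape=(4,)), tf.keras.layers.Dense(1)])
model.compile(loss="mse")

x = np.zeros((8, 4), dtype=np.float32)
y = np.zeros((8, 1), dtype=np.float32)

# The first evaluate() traces and caches a compiled test function.
model.evaluate(x, y, verbose=0)

# ... train, then rebuild the retrieval index from fresh embeddings ...

# Re-compiling discards the cached function, so the next evaluate()
# re-traces against the updated model/index.
model.compile(loss="mse")
loss = model.evaluate(x, y, verbose=0)
```

Without the second `compile()`, Keras may reuse the cached evaluation function and silently ignore structural changes such as a rebuilt index.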
I have the following model using sequences to predict the next item:
This runs fine and much quicker! The evaluation step sees 100x speed ups!
However, this model does not improve on the metric. My first thought is that the model is not updating the embeddings each epoch, as they are run only once. However, in the normal approach we also pass a dataset of already-mapped candidate embeddings... At which point in training does the model update the embeddings it is learning to use for new evaluation runs?