youssefavx opened this issue 4 years ago
Since it's a model with attention, it's not straightforward: equations are represented by sequences of embeddings, not a single embedding. What you could do, though, is modify the model to work without attention. For instance, if the encoder output has shape `X.shape = (batch_size, seq_len, dim)`, you can simply do `X = X[:, :1]` to keep only a single hidden state. Then `X` will provide fixed-size representation embeddings.
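A minimal sketch of that slicing in PyTorch (the shapes and names here are illustrative, not the repo's actual API):

```python
import torch

# Illustrative shapes; in the real model, X would be the encoder output.
batch_size, seq_len, dim = 2, 10, 512
X = torch.randn(batch_size, seq_len, dim)  # (batch_size, seq_len, dim)

# Keep only the first hidden state so every equation maps to one vector.
X = X[:, :1]               # (batch_size, 1, dim)
embedding = X.squeeze(1)   # (batch_size, dim) -- fixed-size per equation
```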
Thank you so much! Would I be correct in thinking this means I could also do this with pre-trained models (as opposed to training a new model from scratch), or do I have to train my own custom model?
I'd like to be able to compare equation similarity. I wonder if I can get this model to extract embeddings for 2 particular equation inputs.
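For the similarity comparison itself, one common approach is cosine similarity between the two fixed-size embeddings; here is a sketch assuming PyTorch tensors and a hypothetical embedding dimension of 512:

```python
import torch
import torch.nn.functional as F

# Hypothetical fixed-size embeddings for two equations, e.g. the first
# encoder hidden state as described above (dim = 512 is an assumption).
emb_a = torch.randn(1, 512)
emb_b = torch.randn(1, 512)

# Cosine similarity in [-1, 1]; closer to 1 means more similar equations.
similarity = F.cosine_similarity(emb_a, emb_b, dim=1).item()
print(f"cosine similarity: {similarity:.3f}")
```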