Hi. The TF Hub ELMo module (https://tfhub.dev/google/elmo/2) has an output like the one you provide:
elmo: the weighted sum of the 3 layers, where the weights are trainable. This tensor has shape [batch_size, max_length, 1024].
I believe this is the equivalent of -1, i.e. an average of the 3 layers (the default).
I want to take the output you provide (elmo) and turn it into ELMo's sentence embedding:
default: a fixed mean-pooling of all contextualized word representations with shape [batch_size, 1024].
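For reference, this is roughly how I get those outputs when using the hub module directly (a minimal sketch, assuming TF 1.x and the hub.Module API; the two sentences are just placeholders):

```python
import tensorflow as tf
import tensorflow_hub as hub

# Load the ELMo module from TF Hub (TF 1.x style API).
elmo = hub.Module("https://tfhub.dev/google/elmo/2", trainable=True)

sentences = ["the cat is on the mat", "dogs are in the fog"]  # placeholder input

outputs = elmo(sentences, signature="default", as_dict=True)
elmo_output = outputs["elmo"]        # [batch_size, max_length, 1024]
default_output = outputs["default"]  # [batch_size, 1024], the fixed mean-pooling
```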
How do I do this fixed mean pooling?
How do I get a sentence embedding from your output?
It produces different outputs, but none of them has the shape (2, 1024) that I want (embeddings for 2 sentences).
How can I do this mean pooling in order to get an output of shape (2, 1024)?
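To be concrete, this is the kind of fixed mean pooling I have in mind (a sketch, assuming a TF 1.x graph where elmo_output is the [batch_size, max_length, 1024] tensor and seq_lengths holds the true token count per sentence; both names are placeholders):

```python
import tensorflow as tf

# Placeholders standing in for the module's "elmo" output and the true lengths.
elmo_output = tf.placeholder(tf.float32, shape=[None, None, 1024])  # [batch, max_len, 1024]
seq_lengths = tf.placeholder(tf.int32, shape=[None])                # tokens per sentence

# Build a mask so padded positions do not contribute to the mean.
mask = tf.sequence_mask(seq_lengths, maxlen=tf.shape(elmo_output)[1], dtype=tf.float32)
mask = tf.expand_dims(mask, axis=-1)                  # [batch, max_len, 1]

# Sum the word vectors over the time axis and divide by the real sentence length.
summed = tf.reduce_sum(elmo_output * mask, axis=1)    # [batch, 1024]
counts = tf.reduce_sum(mask, axis=1)                  # [batch, 1]
sentence_embeddings = summed / counts                 # [batch, 1024], e.g. (2, 1024)
```

For 2 input sentences this would give a (2, 1024) tensor, which I believe is what the module's default output already computes.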
Thanks!