Hi,
I have observed some strange behavior with "encoded_text = self.text_field_embedder(tokens)" in the forward function of seq2labels_model.py (https://github.com/grammarly/gector/blob/master/gector/seq2labels_model.py#L132).
During training, I repeated the call "encoded_text = self.text_field_embedder(tokens)" three times in a row and printed the value of encoded_text after each call.
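A minimal sketch of what this looked like inside forward() (the print statements are paraphrased, not my exact code):

```python
# Inside Seq2Labels.forward(): call the embedder three times on the same
# input and print each result. With identical inputs and no optimizer
# update in between, I expected identical tensors.
encoded_text = self.text_field_embedder(tokens)
print("call 1:", encoded_text)
encoded_text = self.text_field_embedder(tokens)
print("call 2:", encoded_text)
encoded_text = self.text_field_embedder(tokens)
print("call 3:", encoded_text)
```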
I expected three identical outputs, since the input is the same and the model has not been updated in between (optimizer.step() has not been called).
However, I get three different values for encoded_text.
This happens during training both when the encoder (BERT) is frozen and when it is not frozen (see the sketch below for what I mean by frozen).
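By frozen I mean disabling gradient updates on the encoder, roughly like this (the attribute path is my paraphrase for illustration, not the exact GECToR training code):

```python
# "Frozen" = the encoder's parameters receive no gradient updates.
# text_field_embedder is an AllenNLP TextFieldEmbedder, i.e. a
# torch.nn.Module, so .parameters() is available.
for param in self.text_field_embedder.parameters():
    param.requires_grad = False
```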
However, I do get three identical outputs during the testing phase.
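By testing phase I mean the usual PyTorch evaluation setup, something like the following sketch (the name model for the loaded Seq2Labels instance is just for illustration):

```python
import torch

# Switch to evaluation mode and disable gradient tracking, then repeat
# the embedder call. In this setup the three prints are identical.
model.eval()
with torch.no_grad():
    for i in range(3):
        encoded_text = model.text_field_embedder(tokens)
        print("call", i + 1, ":", encoded_text)
```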
Can anyone explain why this happens? Is this the expected behavior, or did something go wrong?
Thanks a lot:)