Closed caochengchen closed 3 weeks ago
It is a pre-trained model + transformer layers architecture. Please refer to the technical details in the appendix.
I didn't see which text encoder TMR uses in the appendix; I only saw sBERT mentioned in the paper. Is the text encoder in TMR fine-tuned from a BERT model, or is it a transformer trained from scratch?
@caochengchen I stated that it is extended from the text encoder of TEMOS. TEMOS uses DistilBERT + transformer layers as the text encoder.
Thank you very much
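For anyone else landing here: the "pre-trained model + transformer layers" design above can be sketched roughly as below. This is a hypothetical illustration, not the actual TMR/TEMOS code — the class name, dimensions, and the embedding table standing in for a frozen DistilBERT are all assumptions made for the sketch.

```python
import torch
import torch.nn as nn

class TMRStyleTextEncoder(nn.Module):
    """Sketch of a frozen pre-trained backbone + trainable transformer layers.

    In the real TEMOS/TMR encoder the backbone is DistilBERT; here a frozen
    embedding table stands in so the example is self-contained.
    """

    def __init__(self, vocab_size=100, hidden=64, num_layers=2, nhead=4, latent=32):
        super().__init__()
        # Stand-in for the frozen pre-trained model (DistilBERT in the paper).
        self.backbone = nn.Embedding(vocab_size, hidden)
        for p in self.backbone.parameters():
            p.requires_grad = False
        # The additional trainable transformer layers on top.
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=nhead, batch_first=True)
        self.tf_layers = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.proj = nn.Linear(hidden, latent)

    def forward(self, token_ids):
        feats = self.backbone(token_ids)    # (B, T, hidden), frozen features
        feats = self.tf_layers(feats)       # trainable transformer layers
        return self.proj(feats.mean(dim=1)) # pooled sentence embedding (B, latent)

enc = TMRStyleTextEncoder()
emb = enc(torch.randint(0, 100, (2, 8)))
print(emb.shape)  # torch.Size([2, 32])
```

To fine-tune rather than freeze the backbone, one would simply skip the `requires_grad = False` loop (or unfreeze selected layers).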