Closed kli017 closed 2 years ago
Hello, in transformer.py I found that pos_enc is initialized in the encoder but never used in forward?

They did not use positional encoding, as described in the paper:
[2] Yusuke Fujita, Naoyuki Kanda, Shota Horiguchi, Yawen Xue, Kenji Nagamatsu, Shinji Watanabe, "End-to-End Neural Speaker Diarization with Self-attention," Proc. ASRU, pp. 296-303, 2019.
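For illustration, here is a minimal sketch (hypothetical names, not the actual transformer.py) of the situation described: a sinusoidal positional-encoding table is built in `__init__` but never added to the input in `forward`, so the self-attention stack sees no positional information:

```python
import numpy as np

def sinusoidal_pos_enc(seq_len, d_model):
    # Standard sinusoidal positional encoding table of shape (seq_len, d_model)
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

class Encoder:
    # Hypothetical sketch of an encoder that initializes but ignores pos_enc
    def __init__(self, d_model=64, max_len=500):
        self.pos_enc = sinusoidal_pos_enc(max_len, d_model)  # created here...

    def forward(self, x):
        # ...but never added to x here: the input goes to the (omitted)
        # self-attention layers position-free, which is what the reply
        # says the SA-EEND paper intends
        return x
```

So the unused attribute is consistent with the paper rather than a bug: the encoder simply never applies the table it allocated.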