harvardnlp / encoder-agnostic-adaptation

Encoder-Agnostic Adaptation for Conditional Language Generation
https://arxiv.org/abs/1908.06938

Selection of the special tokens #4


ShuyangCao commented 4 years ago

Hi, according to preprocess.py, you choose the special tokens as follows:

```python
tgt_bos = '<|endoftext|>'
tgt_eos = '\u0120GDDR'
tgt_pad = '\u0120SHALL'
tgt_unk = '\u0120RELE'
src_pad = '\u0120SHALL'
src_unk = '\u0120RELE'
```
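(For context: `'\u0120'` renders as `Ġ`, the marker GPT-2's byte-level BPE uses for a leading space, so these are the single vocabulary entries for ` GDDR`, ` SHALL`, and ` RELE`. A minimal sketch to inspect them, assuming the huggingface `GPT2Tokenizer` with the pretrained GPT-2 vocabulary; the repo may load its BPE differently:)

```python
# Inspect the repurposed special tokens under GPT-2's byte-level BPE.
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

# '\u0120' is the 'G-with-breve' (Ġ) leading-space marker, so each of these
# strings names one existing entry in the GPT-2 vocabulary.
for tok in ['<|endoftext|>', '\u0120GDDR', '\u0120SHALL', '\u0120RELE']:
    tok_id = tokenizer.convert_tokens_to_ids(tok)
    print(repr(tok), '->', tok_id)
```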

In the huggingface tokenizer implementation, `'<|endoftext|>'` is used for all of these special tokens. Is there any reason to repurpose other tokens from the vocab as special tokens? What happens if these tokens appear in the dataset after BPE encoding?
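(To make the collision concern concrete, here is a sketch of how one could count occurrences of those tokens in a BPE-encoded corpus. It assumes the huggingface `GPT2Tokenizer`; `corpus.txt` is a hypothetical placeholder path, not a file from this repo:)

```python
# Count how often the repurposed special tokens occur in real text
# after BPE encoding. Nonzero counts would mean ordinary data
# collides with the tokens chosen as eos/pad/unk.
from collections import Counter
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
special = {'\u0120GDDR', '\u0120SHALL', '\u0120RELE'}

counts = Counter()
with open("corpus.txt", encoding="utf-8") as f:  # hypothetical corpus file
    for line in f:
        for tok in tokenizer.tokenize(line):
            if tok in special:
                counts[tok] += 1

print(counts)
```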

Thanks