Hi,
The discrete BFN presented in the paper demonstrates competitive performance on the text8 dataset. However, text8's vocabulary size of just 27 is considerably smaller than that of most NLP tasks. Have you experimented with training discrete BFN models on datasets with a larger vocabulary? If so, could you share some insights into the model's architecture, hyperparameter settings, and the performance achieved?
Thanks!