Hi, thanks for the great work on text games! I have one question about the RL agent. In this paper, your agent is the Deep Reinforcement Relevance Network (DRRN) from the ACL 2016 paper. I am wondering whether you ever ran preliminary experiments with a more powerful encoder, such as BERT, to get better contextualized word embeddings. Do you have any intuition about using a Transformer as the Q-network in deep RL? Many thanks!
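
For concreteness, here is a rough sketch of what I have in mind (hypothetical, not your code): a DRRN-style Q-network that swaps the original encoders for BERT, assuming the HuggingFace `transformers` API and `bert-base-uncased`. The dot-product relevance between state and action embeddings follows the original DRRN formulation.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertDRRN(nn.Module):
    """Hypothetical DRRN variant: BERT encodes state and action text,
    and Q(s, a) is the inner product of the two embeddings."""

    def __init__(self, model_name="bert-base-uncased", hidden=768):
        super().__init__()
        # One shared BERT encoder for both state and action text
        # (separate encoders would also be an option)
        self.encoder = BertModel.from_pretrained(model_name)
        self.state_proj = nn.Linear(hidden, hidden)
        self.action_proj = nn.Linear(hidden, hidden)

    def encode(self, input_ids, attention_mask):
        # Use the [CLS] token embedding as a fixed-size text representation
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        return out.last_hidden_state[:, 0]

    def forward(self, state_ids, state_mask, action_ids, action_mask):
        s = self.state_proj(self.encode(state_ids, state_mask))     # (B, H)
        a = self.action_proj(self.encode(action_ids, action_mask))  # (B, H)
        # DRRN-style relevance score: Q(s, a) = <s, a>
        return (s * a).sum(dim=-1)                                  # (B,)

# Example usage with a single state/action pair:
tok = BertTokenizer.from_pretrained("bert-base-uncased")
state = tok(["You are in a dark room."], return_tensors="pt", padding=True)
action = tok(["open door"], return_tensors="pt", padding=True)
q = BertDRRN()(state["input_ids"], state["attention_mask"],
               action["input_ids"], action["attention_mask"])
```

Is something along these lines what you tried (or considered), and if so, did the extra encoder capacity help, or did the cost of fine-tuning BERT inside the RL loop outweigh it?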