Hi @mohanr, thank you for your feedback. We are going to better describe what's going on in the agent example code. Here's a high-level description in case you need it now.
The agent receives as input the concatenation of the description of the room (output of the `look` command), its inventory contents (output of the `inventory` command), and the game's narrative (the previous command's feedback). A GRU encoder (`encoder_gru`) is used to parse the input text and extract features at the current game step. Those features are then provided as input to another GRU (`state_gru`) that serves as a state history and spans the whole episode.
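For concreteness, here is a minimal PyTorch sketch of that two-level encoding. The module names mirror the ones above, but the dimensions, batch handling, and `vocab_size` are assumptions rather than the exact example code:

```python
import torch
import torch.nn as nn

class StateEncoder(nn.Module):
    def __init__(self, vocab_size, hidden_size=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, hidden_size)
        self.encoder_gru = nn.GRU(hidden_size, hidden_size)  # encodes the text of the current step
        self.state_gru = nn.GRU(hidden_size, hidden_size)    # accumulates state across the episode
        self.state_hidden = torch.zeros(1, 1, hidden_size)   # reset this at the start of each episode

    def forward(self, obs_tokens):
        # obs_tokens: LongTensor of shape (seq_len, batch=1) holding the
        # concatenated look/inventory/narrative text as word indices.
        embedded = self.embedding(obs_tokens)
        _, obs_features = self.encoder_gru(embedded)          # features for the current game step
        state_output, self.state_hidden = self.state_gru(obs_features, self.state_hidden)
        return state_output                                   # current hidden state of state_gru
```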
To select which commands to send to the game, the agent gets the list of all admissible commands (provided by TextWorld) and scores each of them conditioned on the current hidden state of the `state_gru`. To do that, another GRU encoder (`cmd_encoder_gru`) is used to encode each text command separately. Then, a simple linear layer (`att_cmd`) is used to output a score given the concatenation of an encoded command and the current hidden state of the `state_gru`.
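Roughly, the scoring step could look like this sketch; `CommandScorer` and the softmax over the scores are illustrative choices, not necessarily what the example does:

```python
import torch
import torch.nn as nn

class CommandScorer(nn.Module):
    def __init__(self, vocab_size, hidden_size=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, hidden_size)
        self.cmd_encoder_gru = nn.GRU(hidden_size, hidden_size)
        self.att_cmd = nn.Linear(2 * hidden_size, 1)  # scores a (state, command) pair

    def forward(self, state_hidden, cmd_tokens_list):
        # state_hidden: (1, 1, hidden_size) from state_gru.
        # cmd_tokens_list: one LongTensor of shape (seq_len, 1) per admissible command.
        scores = []
        for cmd_tokens in cmd_tokens_list:
            _, cmd_encoding = self.cmd_encoder_gru(self.embedding(cmd_tokens))
            combined = torch.cat([state_hidden, cmd_encoding], dim=-1)
            scores.append(self.att_cmd(combined).squeeze())
        scores = torch.stack(scores)             # one score per admissible command
        return torch.softmax(scores, dim=0)      # probability of picking each command
```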
Note that the word embedding (`embedding`) is learned from scratch, but one could use a pre-trained one like word2vec, GloVe, ELMo, or BERT.
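If you wanted to try a pre-trained embedding, a sketch like the following would work; `load_pretrained_embedding`, `vocab`, and `glove_vectors` are hypothetical names, with the vectors assumed to come from a file such as glove.6B.100d.txt:

```python
import numpy as np
import torch
import torch.nn as nn

def load_pretrained_embedding(vocab, glove_vectors, dim=100):
    # vocab: list of words; glove_vectors: dict mapping word -> numpy array of size dim.
    weights = np.random.normal(0, 0.1, (len(vocab), dim)).astype("float32")
    for i, word in enumerate(vocab):
        if word in glove_vectors:
            weights[i] = glove_vectors[word]  # keep random init for out-of-vocabulary words
    return nn.Embedding.from_pretrained(torch.from_numpy(weights), freeze=False)
```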
The agent is trained using A2C (with a batch size of 1). The critic shares the same bottom layers as the agent and uses a simple linear layer (`critic`) to output a single scalar value given the current hidden state of the `state_gru`.
The model also uses entropy regularization to promote exploration.
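Put together, the A2C update with an entropy bonus looks roughly like the sketch below; the discount factor, loss coefficients, and argument layout are assumptions rather than the example's exact values:

```python
import torch

def a2c_loss(log_probs, values, rewards, entropies, gamma=0.9, ent_coef=0.01):
    # log_probs/values/entropies: lists of scalar tensors, one per game step;
    # rewards: list of floats collected over the episode.
    returns, R = [], 0.0
    for r in reversed(rewards):
        R = r + gamma * R
        returns.insert(0, R)
    returns = torch.tensor(returns)
    values = torch.stack(values).squeeze()
    advantages = returns - values
    policy_loss = -(torch.stack(log_probs) * advantages.detach()).sum()  # actor term
    value_loss = advantages.pow(2).sum()                                 # critic term
    entropy_bonus = torch.stack(entropies).sum()                         # promotes exploration
    return policy_loss + 0.5 * value_loss - ent_coef * entropy_bonus
```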
Hey, I am new to RL and got interested in its applications in NLP. I have a few queries about this line:

```python
self.transitions.append([None, indexes, outputs, values])  # Reward will be set on the next call
```
Hi @Acejoy, here is a blog post mentioning how LSTM-DQN can be adapted to text-based games. Code can be found here: https://github.com/microsoft/tdqn

Regarding having access to the possible actions, you can request the `admissible_commands` when building the `EnvInfos` object (see documentation). Also, you can check Building a simple agent.ipynb, where this is used in `RandomAgent`.
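For reference, requesting the admissible commands looks like this (the game-file path is a placeholder; `EnvInfos` and `register_game` come from TextWorld's gym interface):

```python
import random
import gym
import textworld.gym
from textworld import EnvInfos

request_infos = EnvInfos(admissible_commands=True)
env_id = textworld.gym.register_game("tw_games/game.ulx", request_infos)  # placeholder path
env = gym.make(env_id)

obs, infos = env.reset()
command = random.choice(infos["admissible_commands"])  # what RandomAgent does
obs, score, done, infos = env.step(command)
```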
Thank you for the reply.
I believe the agent in the notebook doesn't have any description. Is it possible to include any other material to support the code? I am not sure if a simpler agent would help to get started. I can try writing a simpler version if I manage to understand enough.