Closed hhexiy closed 7 years ago
On a separate note, here's the plan for implementing mini-batching:
Instead of processing a whole dialogue, we will input a pair of (partner utterance, agent utterance). The model encodes the partner utterance, updates the graph embedding, decodes the agent utterance, and updates the graph embedding again.
Within a dialogue, we will initialize the hidden state, utterance, and graph from the previous example, but won't backpropagate (BP) through them.
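The per-pair training step above could be sketched roughly as follows in PyTorch. This is a minimal illustration, not the repo's actual code: `DialogueStep`, the additive graph-embedding update, and all dimensions are hypothetical stand-ins; the key point is carrying state across examples via `detach()` so no gradient flows through it.

```python
# Hypothetical sketch of the (partner utterance, agent utterance) training step.
# Module names and the graph-embedding update are placeholders, not from the repo.
import torch
import torch.nn as nn

class DialogueStep(nn.Module):
    """One (partner utterance, agent utterance) training example."""
    def __init__(self, vocab_size=50, embed_dim=16, hidden_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, partner, agent, h, graph_emb):
        # Encode the partner utterance, then update the graph embedding.
        _, h = self.encoder(self.embed(partner), h)
        graph_emb = graph_emb + h.squeeze(0).mean(0)  # stand-in for the real update
        # Decode the agent utterance, then update the graph embedding again.
        dec_out, h = self.decoder(self.embed(agent), h)
        graph_emb = graph_emb + h.squeeze(0).mean(0)
        return self.out(dec_out), h, graph_emb

model = DialogueStep()
h = torch.zeros(1, 1, 32)       # initial hidden state
graph_emb = torch.zeros(32)     # initial graph embedding
loss_fn = nn.CrossEntropyLoss()

# Two consecutive examples from the same dialogue (random token ids here).
pairs = [(torch.randint(50, (1, 5)), torch.randint(50, (1, 4))),
         (torch.randint(50, (1, 3)), torch.randint(50, (1, 6)))]
for partner, agent in pairs:
    logits, h, graph_emb = model(partner, agent, h, graph_emb)
    loss = loss_fn(logits.view(-1, 50), agent.view(-1))
    loss.backward()
    # Carry state into the next example, but cut the graph so we don't BP through it.
    h, graph_emb = h.detach(), graph_emb.detach()
```

The `detach()` calls are what implement "initialize from the previous example but don't BP through it": each example's backward pass stops at the state handed over from the previous pair.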
Sounds good.
Some updates:
- graph.py
- graph_embedder.py
- learn.py
- encdec.py
Put all test code under model/test.
Still some TODOs: