Closed Dr-Corgi closed 7 years ago
init_hidden initialises the hidden state of the recurrent network to zeros. See line 23 of discriminator.py, in init_hidden(self, batch_size):
`h = autograd.Variable(torch.zeros(2*2*1, batch_size, self.hidden_dim))`
This specifies the hidden state of the recurrent network before any input has been passed through it. It is not a learned parameter of the model, so re-creating it for each batch does not reset any trained weights; it only resets the network's "memory" to zeros at the start of each sequence batch.
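A minimal sketch of the idea (the layer sizes here are hypothetical, not taken from discriminator.py; recent PyTorch also no longer needs `autograd.Variable`, a plain tensor works): the zero hidden state is just an input to the GRU, while the learned weights live in `gru.parameters()` and are untouched when a fresh hidden state is created per batch.

```python
import torch
import torch.nn as nn

hidden_dim = 8  # hypothetical size for illustration

# 2 layers x 2 directions -> the first dim of the hidden state is 2*2*1 = 4,
# matching the shape used in init_hidden in the snippet above.
gru = nn.GRU(input_size=4, hidden_size=hidden_dim,
             num_layers=2, bidirectional=True)

def init_hidden(batch_size):
    # Fresh all-zeros hidden state; NOT a learnable parameter.
    return torch.zeros(2 * 2 * 1, batch_size, hidden_dim)

inp = torch.randn(5, 3, 4)       # (seq_len, batch, input_size)
h0 = init_hidden(batch_size=3)   # re-created every batch, weights unaffected

out, hn = gru(inp, h0)
print(out.shape)   # (seq_len, batch, 2 * hidden_dim) for a bidirectional GRU
print(hn.shape)    # (num_layers * num_directions, batch, hidden_dim)
```

Calling `init_hidden` per batch is the standard pattern when consecutive batches are independent sequences: each batch should start with no memory of the previous one.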
Ohhh, I understand. Thank you very much!!
I just read the source file discriminator.py, and I couldn't understand why we should call init_hidden() every time in batchClassify(self, inp) and batchBCELoss(self, inp, target):

```python
def batchClassify(self, inp):
    h = self.init_hidden(inp.size()[0])
    out = self.forward(inp, h)
    return out.view(-1)

def batchBCELoss(self, inp, target):
    loss_fn = nn.BCELoss()
    h = self.init_hidden(inp.size()[0])
    out = self.forward(inp, h)
    return loss_fn(out, target)
```
In my opinion, calling init_hidden() results in a totally new parameter setting each time.
However, I'm new to PyTorch, so I'm not sure whether I've misunderstood this part of the code. Thank you.