ixaxaar / pytorch-dnc

Differentiable Neural Computers, Sparse Access Memory and Sparse Differentiable Neural Computers, for Pytorch
MIT License

A question about memory initialization. #53

Open LiUzHiAn opened 4 years ago

LiUzHiAn commented 4 years ago

Hi,

I am a bit confused about how memory states are saved in the DNC. To be more specific, at the start of training the memory obviously has to be initialized (it is filled with all 0s in the code). Once training is finished, I would expect the memory values to be saved for use at test time. But it turns out that you reset the memory hidden states to 0s AGAIN (see the erase part of dnc/memory.py, lines 69-75).

Could you please explain this? Thank you in advance! I really need your help.

ixaxaar commented 4 years ago

Hey there. I think the reason for erasing the memory after each training sequence is that the point is for the network to learn to use the memory. The consequences of leaving the memory uninitialised are not explored in the paper. I remember commenting that line out and trying DNCs for translation: I thought that after training I might take the memory cells and check whether they had developed some sort of a map (like a dictionary?) by looking at their matrix of dot products, but I got nothing.

But you're right: not re-initialising the memory has not been tried in the paper, and hence not in this implementation either, but it can most certainly be tried.
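
For anyone who wants to experiment with this, here is a minimal sketch of carrying memory across calls, based on the forward signature shown in this repo's README: the hidden state is the tuple `(controller_hidden, memory, read_vectors)`, and the `reset_experience` flag controls whether the memory is erased. The layer sizes below are illustrative, not prescribed.

```python
import torch
from dnc import DNC

rnn = DNC(
    input_size=64,
    hidden_size=128,
    rnn_type='lstm',
    num_layers=4,
    nr_cells=100,
    cell_size=32,
    read_heads=4,
    batch_first=True,
    gpu_id=-1  # CPU; set to a device id to run on GPU
)

# Start with an empty hidden state; the library allocates and zeroes it.
(controller_hidden, memory, read_vectors) = (None, None, None)

for batch in [torch.randn(10, 4, 64) for _ in range(3)]:
    # reset_experience=False keeps the memory contents from the previous
    # call instead of zeroing them, so state persists across sequences.
    output, (controller_hidden, memory, read_vectors) = rnn(
        batch,
        (controller_hidden, memory, read_vectors),
        reset_experience=False
    )
```

The same pattern would let you freeze a trained model, run it once over some data, and then inspect (or reuse) the returned `memory` at test time instead of starting from zeros.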