8/29 - 8/30
Weights have been exposed, iterated through, and summed for each layer. Running into issues with getting individual neuron outputs from layer l into layer l+1.
So far, it looks like a forward hook is needed -- a sketch is below
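A minimal sketch of the forward-hook idea, assuming a toy `nn.Sequential` stand-in (the layer sizes and names here are illustrative, not the actual network):

```python
import torch
import torch.nn as nn

# Illustrative model; sizes are placeholders for the real architecture
model = nn.Sequential(
    nn.Linear(4, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 2),
)

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()  # save this layer's output
    return hook

# Register a hook on every Linear layer so each output is captured
for name, module in model.named_modules():
    if isinstance(module, nn.Linear):
        module.register_forward_hook(make_hook(name))

x = torch.randn(32, 4)  # (mini_batch_size, input_dim)
_ = model(x)
print({name: out.shape for name, out in activations.items()})
```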
9/3 - 9/6
Dropped the RLlib modules and created the layers from scratch with torch -- this allows easy access to both the layer outputs and the weights. The utility function is currently calculated from a layer output of shape (mini_batch_size, layer_size); a sketch is below
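A sketch of what this could look like: a hand-built torch MLP that returns its hidden-layer output directly, plus one possible per-neuron utility (a contribution-style measure: mean |activation| over the mini-batch, weighted by outgoing-weight magnitude). The class, function names, and the exact utility formula are assumptions, not the actual implementation:

```python
import torch
import torch.nn as nn

class ExposedMLP(nn.Module):
    def __init__(self, in_dim=4, hidden=128, out_dim=2):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)

    def forward(self, x):
        h = torch.relu(self.fc1(x))  # (mini_batch_size, hidden)
        return self.fc2(h), h        # prediction plus the exposed layer output

def layer_utility(hidden_out, next_layer):
    # hidden_out: (mini_batch_size, hidden); next_layer.weight: (out_dim, hidden)
    out_w = next_layer.weight.abs().sum(dim=0)   # (hidden,)
    return hidden_out.abs().mean(dim=0) * out_w  # (hidden,)

model = ExposedMLP()
y, h = model(torch.randn(32, 4))  # h: (mini_batch_size, 128)
u = layer_utility(h, model.fc2)   # per-neuron utility, shape (128,)
```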
Not sure how to reinitialize the biases for a neuron -- attempted to do it as torch does here, but ran into dimension issues
The above was due to only having a 1-D tensor (size 128): calculating fan in / fan out needs a 2-D weight tensor (input and output dimensions), which fails for a 1-D bias. Changed to a uniform init, which only requires a 1-D tensor
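For reference, a sketch of the failure and the fix, following what `nn.Linear.reset_parameters()` itself does (note `_calculate_fan_in_and_fan_out` is a private torch helper):

```python
import math
import torch.nn as nn
from torch.nn import init

layer = nn.Linear(128, 128)

# Fails: kaiming needs fan in / fan out, which requires a >= 2-D tensor
# init.kaiming_uniform_(layer.bias)  # ValueError on the 1-D (128,) bias

# What torch does instead: take fan_in from the 2-D weight, then draw
# the 1-D bias from a matching uniform distribution
fan_in, _ = init._calculate_fan_in_and_fan_out(layer.weight)
bound = 1 / math.sqrt(fan_in) if fan_in > 0 else 0
init.uniform_(layer.bias, -bound, bound)
```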
Use of continual backpropagation to maintain the plasticity of neural networks
Roughly, this is done by (see the sketch after this list):
- tracking a utility measure for each hidden unit during training
- periodically reinitializing the input weights and bias of the lowest-utility units, at a small replacement rate
- zeroing the reinitialized units' outgoing weights so they start with no influence on the rest of the network
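A minimal sketch of the selective-reinit step, reusing the hypothetical `ExposedMLP` / `layer_utility` sketch above; `n_replace` (the replacement count) is illustrative:

```python
import math
import torch
import torch.nn as nn

@torch.no_grad()
def reinit_lowest_utility(fc_in, fc_out, utility, n_replace=1):
    idx = torch.argsort(utility)[:n_replace]  # lowest-utility hidden units

    # Draw a fresh weight matrix and copy over only the selected units' rows,
    # since kaiming init cannot be applied to a single 1-D row
    fresh = torch.empty_like(fc_in.weight)
    nn.init.kaiming_uniform_(fresh, a=math.sqrt(5))
    fc_in.weight[idx] = fresh[idx]
    fc_in.bias[idx] = 0.0

    # Zero the outgoing weights so reset units don't disturb the network
    fc_out.weight[:, idx] = 0.0

model = ExposedMLP()
y, h = model(torch.randn(32, 4))
u = layer_utility(h, model.fc2)
reinit_lowest_utility(model.fc1, model.fc2, u, n_replace=2)
```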