Open hallvardnmbu opened 1 month ago
Initial sketch of feedback logic.
Should outputs after looping be summed up or concatenated? Or should the looped results be used directly?
First experiment: iris dataset, commit c6b735c4a09f2b958141427779890f0d52d0bafa. Faster convergence with single-layer feedback. Looks promising.
Allow feedback connections from layer `i` to layer `j` if `i`'s number of neurons (shape) is greater than or equal to `j`'s.
Currently this only works when `layer[i].outputs == layer[j].inputs`.
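A minimal sketch of that compatibility check, assuming layers expose `inputs` and `outputs` sizes (the `Layer` class and `can_feed_back` helper here are hypothetical, not the repo's actual API):

```python
class Layer:
    """Stand-in for a layer that records its input and output sizes."""
    def __init__(self, inputs, outputs):
        self.inputs = inputs
        self.outputs = outputs

def can_feed_back(layers, i, j):
    # Feedback from layer i's output to layer j's input is currently
    # only allowed when the shapes line up exactly.
    return layers[i].outputs == layers[j].inputs

layers = [Layer(4, 8), Layer(8, 8), Layer(8, 3)]
print(can_feed_back(layers, 1, 1))  # True: 8 == 8
print(can_feed_back(layers, 2, 1))  # False: 3 != 8
```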
Include a learnable weight for the feedback connection? I.e.:

```
A -> B -> C
     ^    |
     |_fw_|
```

where `fw` is a separate "layer".
Implementing this could prove useful, because then the shapes of `i` and `j` (previous comment) would no longer matter, as the feedback layer projects to the correct shape.
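A sketch of such a projecting feedback "layer" as a plain linear map (sizes and names are assumptions for illustration; in practice `fw` would be a learnable parameter updated by backprop):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: C outputs 3 values, B expects 8 inputs.
fw = rng.standard_normal((3, 8)) * 0.1  # the learnable feedback "layer"

c_out = rng.standard_normal(3)   # output of C
feedback = c_out @ fw            # projected to B's input shape
print(feedback.shape)            # (8,)
```

Because `fw` handles the projection, the shape-compatibility restriction between `i` and `j` disappears.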
Try combining the "original" input with the fed-back input. See the article on loopy neural nets.
Outputs of layer `i` are fed back to a prior layer `j` as "new" inputs.
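The loop above, with the fed-back output summed with the original input, could look like this minimal two-layer sketch (weights, sizes, and the number of loops are illustrative assumptions; concatenation would be the alternative to summing):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 8)) * 0.1  # layer j: 4 -> 8
W2 = rng.standard_normal((8, 4)) * 0.1  # layer i: 8 -> 4 (matches j's inputs)

def forward(x, loops=2):
    out = np.tanh(np.tanh(x @ W1) @ W2)
    for _ in range(loops):
        # Layer i's output fed back to layer j as "new" input,
        # combined (summed here) with the original input x.
        out = np.tanh(np.tanh((x + out) @ W1) @ W2)
    return out

x = rng.standard_normal(4)
print(forward(x).shape)  # (4,)
```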