caten2 / Tripods2021UA

Is the old feed-forward method redundant? #21

Open caten2 opened 10 months ago

caten2 commented 10 months ago

I implemented generalized composition as a method of Operation in order to help clean up the code and debug more effectively. Since feeding forward is nothing but evaluating the tuple of operations represented by the neural net on given inputs, I'd like to hear some input on whether we should reimplement feeding forward as an application of this generalized composition method.
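
For concreteness, here is a minimal sketch of what I mean by generalized composition, assuming an Operation just wraps a function of fixed arity. The names and signatures below are illustrative, not our actual API.

```python
class Operation:
    def __init__(self, arity, func):
        self.arity = arity  # number of arguments the operation expects
        self.func = func    # the underlying Python callable

    def __call__(self, *args):
        return self.func(*args)

    def compose(self, inner_ops):
        # Generalized composition: plug one inner operation into each
        # argument slot of this operation. All inner operations read the
        # same inputs, so their shared arity becomes the composite's arity.
        assert len(inner_ops) == self.arity
        composite_arity = inner_ops[0].arity

        def composite(*xs):
            return self.func(*(g(*xs) for g in inner_ops))

        return Operation(composite_arity, composite)


add = Operation(2, lambda x, y: x + y)
double = Operation(1, lambda x: 2 * x)
identity = Operation(1, lambda x: x)

# Evaluating the composite at an input is exactly "feeding that input forward".
f = add.compose([double, identity])  # f(x) = 2*x + x
print(f(3))  # 9
```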

caten2 commented 10 months ago

I've thought about this a bit more. I think it would make sense to make instances of NeuralNet callable and to define a method which both changes the activation function at a node in a neural net and updates the stored sequence of Operation objects that gives the function represented by the neural net. These composites are built using generalized composition rather than by feeding individual values forward, so a new sequence of Operation objects is created whenever the neural net gets a new activation function somewhere. We can make these composite objects non-memoized so that no additional memory is used beyond that needed to create the operations.
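
Roughly like this, reusing the Operation sketch above and assuming a net organized into fully connected layers (set_activation and _rebuild_composites are hypothetical names, and our actual graph structure is more general than this):

```python
class NeuralNet:
    def __init__(self, layers):
        self.layers = layers  # one list of Operation objects per layer
        self._composites = self._rebuild_composites()

    def _rebuild_composites(self):
        # Compose layer by layer with generalized composition; the result is
        # one composite Operation per output node. The composites are not
        # memoized, so nothing is cached beyond the Operation objects.
        current = self.layers[0]
        for layer in self.layers[1:]:
            current = [op.compose(current) for op in layer]
        return current

    def set_activation(self, layer_idx, node_idx, new_op):
        # Change the activation at one node, then refresh the stored
        # composites so the net again represents the right tuple of operations.
        self.layers[layer_idx][node_idx] = new_op
        self._composites = self._rebuild_composites()

    def __call__(self, *inputs):
        # Feeding forward is just evaluating the stored composites.
        return tuple(f(*inputs) for f in self._composites)


# Using add, double, and identity from the sketch in the previous comment.
net = NeuralNet([[double, identity], [add]])
print(net(3))  # (9,)
```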

caten2 commented 8 months ago

I tried to implement the feed-forward method in the manner described above, and it turned out to be awkward: one has to keep track of which input nodes need to be fed in as arguments to each of the final composite operations at the output nodes. I think it may still be nice to make the neural net objects callable, though, so I'll leave this open until that's done.
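
For the record, the bookkeeping looks roughly like this: in a net that is not uniformly layered, each output composite consumes only some of the input nodes, so the caller has to slice the full input tuple differently for every composite. Here input_indices is a hypothetical bookkeeping table, not something we currently maintain.

```python
def evaluate_outputs(composites, input_indices, inputs):
    """Evaluate each output composite on just the input-node values it
    consumes. input_indices[k] lists, in argument order, which positions
    of the full input tuple feed composite k."""
    return tuple(
        f(*(inputs[i] for i in indices))
        for f, indices in zip(composites, input_indices)
    )
```

Keeping those index lists in sync with the net as activations change is the part that got awkward.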