Open Rohan2821999 opened 7 years ago
Sorry for the typo: I meant that at cost 0.2 and 7000 epochs it becomes asymptotic.
I ran the neural net on addition only and it works perfectly (100% accuracy) after 1000 runs. However, I noticed that the output values are still not 1s or 0s; instead they are <= 0.39 for all the 0s and around 0.41 for all the 1s. This means the values are correct but just haven't been scaled well, so the neural net is working fine.
I re-tested the neural net on all the data (addition and subtraction) and accuracy dropped to 70%. Again, the values are not 0s or 1s, but they do follow a desirable and more or less correct trend.
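Since sigmoid outputs are continuous, one way to turn the clustered values described above into discrete 0/1 predictions is to threshold them. A minimal sketch (the function name and the 0.40 cutoff are illustrative assumptions, chosen to sit between the observed <= 0.39 and ~0.41 clusters):

```python
import numpy as np

def threshold_accuracy(raw_outputs, targets, cutoff=0.40):
    """Discretize continuous sigmoid outputs at `cutoff` and score them.

    raw_outputs, targets: 1-D numpy arrays of the same length.
    The 0.40 cutoff is only illustrative -- it sits between the
    observed <= 0.39 (for 0s) and ~0.41 (for 1s) output clusters.
    """
    preds = (raw_outputs >= cutoff).astype(int)
    accuracy = (preds == targets).mean()
    return preds, accuracy

# Example with the kinds of values reported above:
outs = np.array([0.38, 0.39, 0.41, 0.42])
targs = np.array([0, 0, 1, 1])
preds, acc = threshold_accuracy(outs, targs)
```

This doesn't rescale the outputs, of course; it only makes the implicit decision rule explicit.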
OK, so at least we can teach it addition! But yeah, the cost/output function needs to be updated. I spoke with John, and he recommends a cross-entropy function for this, so try that -- basically it does something similar to logistic regression.
-C
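For reference, the cross-entropy cost being recommended can be sketched as follows (a minimal illustration, not the project's actual implementation; the clipping epsilon is an assumption to keep the logs finite):

```python
import numpy as np

def cross_entropy_cost(a, y, eps=1e-12):
    """Mean binary cross-entropy between sigmoid activations a and labels y.

    Activations are clipped away from exactly 0 and 1 so np.log stays finite.
    """
    a = np.clip(a, eps, 1 - eps)
    return -np.mean(y * np.log(a) + (1 - y) * np.log(1 - a))

# A confident wrong answer is penalized far more heavily than a confident
# right one, which is what drives faster learning when the net is badly wrong.
y = np.array([1.0, 0.0])
good = cross_entropy_cost(np.array([0.9, 0.1]), y)  # confident and correct
bad = cross_entropy_cost(np.array([0.1, 0.9]), y)   # confident and wrong
```

This is the same per-example loss used in logistic regression, which is why the two are analogous.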
So, I didn't use the cross-entropy function, because its learning graph is super weird. Also, the values after that are still not discretized as 0 or 1 (same as before):
Graph of the cross-entropy cost vs. number of runs:
I am not clear on how the learning is taking place here!
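One thing worth noting about the outputs never discretizing, regardless of the cost function: a sigmoid unit only approaches 0 and 1 asymptotically, so training can push activations toward the extremes but never reach them exactly. A quick illustration:

```python
import math

def sigmoid(z):
    """Standard logistic sigmoid."""
    return 1.0 / (1.0 + math.exp(-z))

# Even for large weighted inputs the sigmoid only approaches 1,
# so exact 0/1 outputs require thresholding after training.
vals = [sigmoid(z) for z in (1, 5, 10)]
```

So "not discretized" is expected behavior for sigmoid outputs; the question is whether the two classes are well separated, not whether they hit 0 and 1 exactly.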
Update on NN stuff:
Implemented the neural net on addition values only for the type3 data. It doesn't work at all: no learning takes place even after 500 runs, and the accuracy at that point is 0.
Awesome.
@cbattista Sir, I was trying to examine the relationship between the cost function (error) and the number of runs. The graph is given below; the learning rate of the gradient descent used is 0.1:
I think this cost function kind of matches human learning, because initially the artificial neuron learns faster (when it is badly wrong) and learns more slowly as the error decreases. However, at cost 0.2 and 2000 epochs it becomes asymptotic and is no longer learning.
Do you think this would be the ideal cost function for our neural net?
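The asymptotic shape described above can be reproduced with a toy gradient-descent run. This is only a sketch, not the project's code: a single sigmoid neuron with quadratic cost and learning rate 0.1, all names hypothetical:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(epochs, lr=0.1):
    """One sigmoid neuron learning to map input 1.0 -> target 1.0
    under quadratic cost; returns the cost recorded at each epoch."""
    w, x, y = 0.0, 1.0, 1.0
    costs = []
    for _ in range(epochs):
        a = sigmoid(w * x)
        costs.append(0.5 * (a - y) ** 2)
        # Gradient of quadratic cost through a sigmoid:
        # dC/dw = (a - y) * a * (1 - a) * x
        w -= lr * (a - y) * a * (1 - a) * x
    return costs

costs = train(2000)
# Early epochs reduce the cost much faster than later ones,
# so the curve flattens out (becomes asymptotic) as training proceeds.
early_drop = costs[0] - costs[100]
late_drop = costs[1000] - costs[1100]
```

The flattening itself is normal for gradient descent near a minimum; whether the plateau at cost 0.2 is acceptable depends on whether the thresholded outputs are accurate there.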