bnord opened 5 years ago
Plot of sigma vs. the constant multiplying the number of neurons, with 3 iterations, without changing the seed.
But, we didn't plot all three iterations at each constant value, right?
I looped the AE with Architecture 0 over 5 alp values (the constants multiplying the # of neurons), and for each alp did 10 iterations with 10 different seeds. This is a plot of sigma vs. alp value, with error bars on sigma reflecting the spread of results across the iterations (alp = 5, # of iterations = 10, epochs = 500). A sketch of the loop is below.
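For reference, a minimal sketch of how such a scan could be structured. `run_ae` is a hypothetical stand-in for the actual build/train/evaluate call used here, not the repo's real function:

```python
import numpy as np

def run_ae(alp, seed, epochs):
    """Hypothetical stand-in: build Architecture 0 with each layer's neuron
    count multiplied by `alp`, train it with the given seed for `epochs`
    epochs, and return sigma. Replace with the actual training call."""
    rng = np.random.default_rng(seed)
    return 0.01 / alp + 0.001 * rng.standard_normal()  # dummy sigma value

alps = [1, 2, 3, 4, 5]     # constants multiplying the # of neurons
seeds = range(10)          # 10 iterations, one seed each
epochs = 500

# alp -> list of sigma values, one per seed
results = {alp: [run_ae(alp, s, epochs) for s in seeds] for alp in alps}
```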
I'll loop over 100 iterations instead of 10 and get back with the results.
Plot showing sigma versus the constant multiplying the # of neurons of each layer. I did 50 iterations (for 50 different seeds), with alp = 5 and epochs = 500:
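A minimal sketch of how the error-bar plot could be produced from per-seed results like those collected in the sketch above; the shape of `results` (alp mapped to a list of sigma values) is an assumption carried over from that sketch:

```python
import numpy as np
import matplotlib.pyplot as plt

# `results` maps alp -> list of sigma values (one per seed), as built above.
alps = sorted(results)
means = [np.mean(results[a]) for a in alps]
stds = [np.std(results[a]) for a in alps]

plt.errorbar(alps, means, yerr=stds, fmt='o-', capsize=3)
plt.xlabel('alp (constant multiplying the # of neurons per layer)')
plt.ylabel('sigma')
plt.title('sigma vs. alp (mean +/- std over seeds, epochs = 500)')
plt.show()
```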
So, to me it looks like this flattens out with increasing neurons. I could imagine that it would get better, but not by much.
What do you think?
I think we can benefit from increasing the number of neurons, but I don't see much difference between the x2 and x5 implementations. If we double the size we can probably gain in precision without losing much in speed.
I agree with that.
Do you mean double from x5 or double from x2?
Also, are you recording the run times for each of the trainings?
I mean to double the # of neurons in Arch0, which means to go with the x2 implementation.
I haven't been recording the run times... I'll start doing that from now on.
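A minimal way to record the wall time of each training, reusing the hypothetical `run_ae` call and the `alps`/`seeds`/`epochs` names from the sketches above (all assumptions, not the actual code):

```python
import time

runtimes = {}  # (alp, seed) -> training wall time in seconds
for alp in alps:
    for seed in seeds:
        start = time.perf_counter()
        sigma = run_ae(alp, seed, epochs)   # hypothetical training call
        runtimes[(alp, seed)] = time.perf_counter() - start
```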