@kyleskom A couple of things worth noting as I continue my personal script from last year. Your random state should be varied as well. When training, the initial set of weights can bias the results, so you should train with several different seeds and see which ones perform well. I also did not see great results when testing LR; there are too many interactions in the data that LR models don't handle well.
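For reference, this is roughly what I mean by varying the seed. It's just a sketch: `build_and_train` is a placeholder for whatever your actual training call is, and I'm assuming an sklearn-style split.

```python
import random

import numpy as np
from sklearn.model_selection import train_test_split


def try_seeds(x, y, build_and_train, n_seeds=20):
    """Train the same model with several random seeds and report each accuracy.

    build_and_train(x_train, y_train, x_test, y_test) -> accuracy  (hypothetical helper)
    """
    results = []
    for _ in range(n_seeds):
        seed = random.randint(0, 2**31 - 1)
        np.random.seed(seed)  # seeds weight init / shuffling that uses numpy
        x_train, x_test, y_train, y_test = train_test_split(
            x, y, test_size=0.2, random_state=seed
        )
        acc = build_and_train(x_train, y_train, x_test, y_test)
        results.append((seed, acc))
        print(f"seed={seed} accuracy={acc:.4f}")
    # best-performing seeds first
    return sorted(results, key=lambda r: r[1], reverse=True)
```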
I played around with the random state a bunch as well. I couldn't find any seed that would come close to the other model, so I'm just going to leave it as is for now.
I would suggest saving to a file that logs the output and accuracy of each model. Some models are trained huge numbers of times before a great one appears, and recording those values by hand isn't feasible. The same goes for your NN models: you have a fixed number of neurons per layer, but what if 4 layers is better, or 3, or 512 neurons, or a higher or lower learning rate, etc.? All of those variations should be defined, tested, and recorded (and each should be run 3-4 times to make sure a good result wasn't just a fluke). A sketch of what I mean is below.
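Something along these lines: sweep layer counts, neuron counts, and learning rates, repeat each configuration a few times, and append every result to a CSV so nothing has to be recorded by hand. `train_model` is a placeholder for your actual training function, and the option lists are just examples.

```python
import csv
import itertools
from pathlib import Path


def sweep_and_log(train_model, log_path="model_results.csv", repeats=3):
    """Try every layers/neurons/learning-rate combination and log each run's accuracy.

    train_model(layers, neurons, lr) -> accuracy  (hypothetical helper)
    """
    layers_options = [2, 3, 4]
    neurons_options = [128, 256, 512]
    lr_options = [1e-2, 1e-3, 1e-4]

    new_file = not Path(log_path).exists()
    with open(log_path, "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["layers", "neurons", "learning_rate", "run", "accuracy"])
        for layers, neurons, lr in itertools.product(
            layers_options, neurons_options, lr_options
        ):
            # repeat each config so one lucky run doesn't look like a great model
            for run in range(1, repeats + 1):
                acc = train_model(layers, neurons, lr)
                writer.writerow([layers, neurons, lr, run, acc])
                f.flush()  # keep the log current during long sweeps
```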
I have played around with all of that, but you're right, I do need a better way to test and document models.
Not seeing great results with these