Closed xiewenqing closed 3 years ago
Hi @xiewenqing. I'm not sure I understood your question. This genetic search finds the model with the best cross-validation accuracy (you can modify or write your own metric in ./gentun/models/keras_models.py). The first generation, even though it is created from random seeds, should already have a decent cross-validation accuracy after training. The more generations you run, the more likely you are to find models with higher cross-validation accuracy. Hope this helps!
Thank you very much for your answer, but it did not solve my problem. I have already tried modifying ./gentun/models/keras_models.py myself, but the problem persists. As shown in the figure, the accuracy reaches 0.9 at the very start of training. How can I modify this so that it starts from 0.0? Thank you!
From the training screenshot you sent, it looks like the training loss (binary cross-entropy) is improving while accuracy stays at 0.9. This could be due to class imbalance (e.g., in a binary classification problem where 90% of your training data are 0s and 10% are 1s, a model that predicts all zeros is already 90% accurate). You could try using AUC instead of accuracy in the metrics (line 138). Note that for AUC, a model that classifies at random, i.e. a bad one, has AUC ≈ 0.5. Similarly, a bad binary classifier will still have accuracy well above 0 just from lucky guesses.
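A minimal sketch of that point, using hypothetical data unrelated to the gentun codebase: on a 90/10 imbalanced dataset, a degenerate model that always predicts the majority class scores 0.9 accuracy, while a rank-based AUC exposes it as uninformative.

```python
def accuracy(preds, labels):
    # Fraction of predictions matching the labels.
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def auc(scores, labels):
    # Probability that a random positive outranks a random negative,
    # with ties counting as half a win (a rank-based AUC estimate).
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [0] * 90 + [1] * 10   # 90/10 class imbalance (hypothetical data)
preds = [0] * 100              # degenerate model: always predicts 0

print(accuracy(preds, labels))  # 0.9 -- looks good, but is a trivial baseline
print(auc(preds, labels))       # 0.5 -- no better than random ranking
```

This is why an accuracy of 0.9 from the first epoch is not a bug to "reset to 0": it is the baseline the class distribution hands to any constant predictor, and AUC is the more honest metric here.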
Why is my recognition rate 90% at the very start, and where can I change it so that it starts from 0?