**Open** — donaldknoller opened 2 weeks ago
Additional suggestion from @torquedrop :
I recommend using a truncated mean scoring system. In this system, the highest and lowest scores are removed, and the average of the remaining scores is used as the final result. To reduce evaluation time, consider lowering the sample size to 1024 or 512 per scoring instance and capping the number of score calculations at 5.
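The suggestion above could be sketched roughly as follows. This is a minimal illustration, not the project's actual evaluation code; `score_fn` and `dataset` are hypothetical placeholders for whatever scoring function and evaluation set the validator uses.

```python
import random


def truncated_mean(scores):
    """Drop the single highest and lowest score, then average the rest."""
    if len(scores) < 3:
        raise ValueError("need at least 3 scores to truncate both extremes")
    trimmed = sorted(scores)[1:-1]  # remove min and max
    return sum(trimmed) / len(trimmed)


def evaluate(score_fn, dataset, sample_size=1024, n_runs=5, rng=None):
    """Score several random subsets and combine them with a truncated mean.

    score_fn and dataset are hypothetical stand-ins: score_fn takes a list of
    samples and returns a single score; sample_size and n_runs mirror the
    suggested 1024-sample, 5-run configuration.
    """
    rng = rng or random.Random()
    runs = [
        score_fn(rng.sample(dataset, min(sample_size, len(dataset))))
        for _ in range(n_runs)
    ]
    return truncated_mean(runs)
```

Dropping the extremes makes a single outlier run (e.g. an unlucky subset draw) unable to move the final score, which is the variance-reduction property being proposed.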
Implementations to add:
The current dataset being used for evaluation has the following properties:
While this setup has been effective at preventing blatant overfitting, it has a few limitations. Mainly, the changing nature of the dataset introduces a higher degree of variance in scoring.
This issue aims to discuss potential solutions and the details of their implementation. Some suggestions from miners:
More recently, the sample size for evaluation has been adjusted as seen here to help reduce variance, but other approaches may be effective as well.