Open iFe1er opened 5 years ago
@shenweichen
@iFe1er That's a good idea, but the hyperparameters of each model are hard to pin down because it is difficult to choose a setting that is fair to all of them, and tuning every model for its optimal parameters would be too cumbersome. Do you have a good solution?
@shenweichen I would recommend simply testing them with default hyperparameters. Control variables such as the learning rate and batch size, then compare the algorithms on the same test set. That should be informative enough and is easy to do.
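The comparison described above could be sketched roughly as follows. This is a hypothetical harness, not DeepCTR code: scikit-learn classifiers stand in for the CTR models, and synthetic data stands in for Criteo. A real benchmark would plug in the library's models and dataset, but the structure — default hyperparameters, one shared train/test split, AUC and logloss on the same test set — is the same.

```python
# Hypothetical benchmark harness: evaluate several models with their default
# hyperparameters on one shared held-out test set, reporting AUC and logloss.
# sklearn classifiers and synthetic data are stand-ins for CTR models / Criteo.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss, roc_auc_score
from sklearn.model_selection import train_test_split


def benchmark(models, X, y, test_size=0.2, seed=42):
    """Fit each model on the same split and score it on the same test set."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_size, random_state=seed
    )
    results = {}
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        proba = model.predict_proba(X_te)[:, 1]
        results[name] = {
            "auc": roc_auc_score(y_te, proba),
            "logloss": log_loss(y_te, proba),
        }
    return results


# Stand-in data and models; swap in Criteo and the library's models for real use.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
results = benchmark(
    {
        "lr": LogisticRegression(max_iter=1000),
        "gbdt": GradientBoostingClassifier(),
    },
    X,
    y,
)
for name, scores in results.items():
    print(f"{name}: AUC={scores['auc']:.4f} logloss={scores['logloss']:.4f}")
```

Because every model sees the identical split and seed, differences in AUC/logloss reflect the models rather than the data, which is the "control the variables" point above.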
What about adding a performance benchmark (by AUC and logloss) over datasets such as Criteo?