Open danyaljj opened 8 years ago
The NerBenchmark thingy I wrote was actually intended for such a purpose. Doing parameter sweeps of this nature is something I have seen people do before to improve results. However, for the NER benchmark, you had to create a configuration file for each experiment you wanted to run, then go back and compare the results once all of them completed. I think it would be really cool to be able to specify a parameter, a range of values, and an increment, and have the system just go and run them all.
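For concreteness, a minimal sketch of what such a sweep might look like. This is not existing NerBenchmark/LBJava code; the evaluateF1 helper is a hypothetical stand-in for running one configuration (e.g. at a given output threshold) and scoring it:

```java
// Hypothetical sketch of a parameter sweep, assuming an evaluateF1(threshold)
// helper that runs one configuration and returns its F1. Not an existing
// NerBenchmark or LBJava API.
public class ThresholdSweep {

    /** Stand-in: run the benchmark at the given threshold and return its F1. */
    static double evaluateF1(double threshold) {
        // ... run the classifier with this threshold and score it ...
        return 0.0; // stub
    }

    public static void main(String[] args) {
        double start = 0.0, end = 1.0, increment = 0.05;

        double bestThreshold = start;
        double bestF1 = Double.NEGATIVE_INFINITY;

        // Run every setting in the range and keep the best-scoring one.
        for (double t = start; t <= end + 1e-9; t += increment) {
            double f1 = evaluateF1(t);
            System.out.printf("threshold=%.2f  F1=%.4f%n", t, f1);
            if (f1 > bestF1) {
                bestF1 = f1;
                bestThreshold = t;
            }
        }

        System.out.printf("Best: threshold=%.2f  F1=%.4f%n", bestThreshold, bestF1);
    }
}
```

The system could take the parameter name, range, and increment from a single config entry instead of requiring one configuration file per run.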
On May 27, 2016, at 1:42 PM, Daniel Khashabi notifications@github.com wrote:
Is there a systematic way to tune a classifier's parameters (say output threshold etc) to maximize its F1?