adiwajshing opened 4 years ago
This seems like it would be an extremely significant refactoring. I don't have a problem with it (it sounds like a cool idea to me!); I just want to point out that it seems likely to take many weeks of hard work to make it work correctly.
Agreed, nice idea. I think some parts are already there, but they have to be put together in the right way.
@rcurtin @zoq I'm willing to put the work in; it should actually be quite exciting! I had trouble benchmarking everything when I was working on this, so I figured a uniform & simple way to benchmark ML algorithms would go a long way in helping everybody who wants to test out new changes, etc.
Also, I had proposed this in my GSoC application; I hope it's okay if I start this now?
Sure, feel free, just be aware that it might be a while until any of us are able to review it. :)
No worries, it'll take a while to finish anyway!
Hi,
I was thinking of adding some features and simplifying the benchmarking tool into a command-line interface that can be driven with a single command.
For example, the parameters to this system could be which algorithm to benchmark, which datasets to run it on, and which commit of the library to build.
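Concretely, I imagine the front end looking something like the sketch below (written in Python; every flag name is hypothetical, just to illustrate the idea):

```python
# Minimal sketch of the proposed interface; every flag name here is
# hypothetical and just illustrates the idea, not an existing API.
import argparse

parser = argparse.ArgumentParser(
    description="Benchmark an ML algorithm with a single command.")
parser.add_argument("--algorithm", required=True,
                    help="name of the algorithm to benchmark, e.g. 'knn'")
parser.add_argument("--dataset", action="append", required=True,
                    help="dataset to run on; repeat the flag for several datasets")
parser.add_argument("--commit", default="HEAD",
                    help="library commit to build and benchmark against")
args = parser.parse_args()
print(args.algorithm, args.dataset, args.commit)
```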
The output should be the time it took to train on each dataset, the error rate, and more specific output on the algorithm itself (MSE, average time per epoch, etc.).
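As a rough idea, a per-dataset report could carry fields like these (names are placeholders; no real numbers filled in):

```python
# Hypothetical shape of a per-dataset report; the field names are
# placeholders, and values are left empty rather than inventing numbers.
report = {
    "dataset": "example.csv",
    "train_time_seconds": None,   # wall-clock time to train
    "error_rate": None,           # e.g. classification error
    "mse": None,                  # for regression algorithms
    "avg_time_per_epoch": None,   # for iterative/epoch-based methods
}
```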
This enhancement should also allow users to specify which commit of the library they want to run the benchmark on. Alternatively, they could point it at a local checkout to benchmark, to easily test uncommitted changes. Moreover, this program will automatically download & build the source of the library if required.
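For the download-and-build part, here is a rough sketch of what I have in mind; the CMake/make steps are an assumption about the library's build system:

```python
# Rough sketch of the automatic download-and-build step. The repository URL
# is a parameter, and the CMake/make build commands are an assumption about
# the target library's build system.
import os
import subprocess

def build_commit(repo_url, commit, workdir="bench_cache"):
    """Clone repo_url at the given commit (if not cached) and build it."""
    src = os.path.join(workdir, commit)
    if not os.path.isdir(src):
        subprocess.run(["git", "clone", repo_url, src], check=True)
        subprocess.run(["git", "checkout", commit], cwd=src, check=True)
    build_dir = os.path.join(src, "build")
    os.makedirs(build_dir, exist_ok=True)
    subprocess.run(["cmake", ".."], cwd=build_dir, check=True)
    subprocess.run(["make", "-j4"], cwd=build_dir, check=True)
    return build_dir
```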