Closed. MrinalJain17 closed this issue 2 years ago.
Thanks for the suggestion! We added this to our roadmap several months ago and started working on it in a branch that we recently merged into main. The new code supports GPU batched inference and parallelization of GriTS on CPU. It should provide a pretty significant speed-up over the first version of the evaluation code.
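In case it helps to see the idea concretely: since each table is scored independently, the CPU side amounts to distributing the per-table GriTS computation across worker processes and averaging the results. Below is a minimal sketch of that approach; the function name `grits_for_table`, its signature, and the metric keys are placeholders for illustration, not the repo's actual API.

```python
# Minimal sketch: parallelize per-table GriTS scoring across CPU cores.
# `grits_for_table` is a hypothetical stand-in for the real per-table
# GriTS computation; replace its body with the actual metric call.
from multiprocessing import Pool

def grits_for_table(args):
    gt_cells, pred_cells = args
    # Stub: compute GriTS for a single (ground truth, prediction) pair here.
    return {"grits_top": 0.0, "grits_con": 0.0, "grits_loc": 0.0}

def evaluate_parallel(samples, num_workers=8):
    """samples: list of (gt_cells, pred_cells) pairs, one entry per table."""
    with Pool(processes=num_workers) as pool:
        # chunksize > 1 amortizes inter-process overhead when there are many tables
        per_table = pool.map(grits_for_table, samples, chunksize=16)
    # Aggregate by averaging each metric over all tables
    keys = per_table[0].keys()
    return {k: sum(r[k] for r in per_table) / len(per_table) for k in keys}
```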
Hi @bsmock @rohithpv,
I was wondering if you have explored implementing a batched/parallel version of GriTS. It could be quite helpful when tuning a bunch of model hyperparameters on a (relatively) large set of document images.
Or, if not, could you provide any suggestions on how to approach this? I'd be happy to try and work on a PR.