Closed — guojian0614 closed this issue 1 year ago
Couldn't this be made faster? Right now Python does the same thing in much less time.
@guojian0614
In theory, OpenCV should not be slower than PIL (https://towardsdatascience.com/image-processing-opencv-vs-pil-a26e9923cdf3) if both are performing identical operations. I suspect your Python image-processing pipeline may be skipping some operations.
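To check whether the two pipelines really do the same work, a small timing harness can compare the same resize in OpenCV and PIL on the same image. This is a minimal sketch: the image size, target size, and interpolation mode are assumptions, and results depend heavily on the build (SIMD, thread count) and the machine.

```python
import time
import numpy as np

def bench(fn, n=50):
    """Return the average wall-clock time per call, in milliseconds."""
    t0 = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - t0) * 1000 / n

if __name__ == "__main__":
    import cv2                # pip install opencv-python
    from PIL import Image     # pip install Pillow

    # Synthetic 1080p RGB image (assumed size -- use your real input instead).
    img = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
    pil_img = Image.fromarray(img)

    # Same operation on both sides: bilinear resize to 224x224.
    cv_ms = bench(lambda: cv2.resize(img, (224, 224),
                                     interpolation=cv2.INTER_LINEAR))
    pil_ms = bench(lambda: pil_img.resize((224, 224), Image.BILINEAR))
    print(f"OpenCV resize: {cv_ms:.2f} ms, PIL resize: {pil_ms:.2f} ms")
```

If one side is doing extra work (color conversion, normalization, a different interpolation filter), the gap will show up here rather than in the model itself.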
@frankfliu
OK, thanks, I will keep looking into the inference step. Maybe the official TensorFlow Serving Docker image is just very fast, but the gap in the end-to-end load-test results is still far too large.
The Python project reaches 22-23 TPS, while DJL reaches only 11-12 TPS.
The DJL inference step alone, without pre/post-processing and other operations, takes 50-70 ms, but I can't measure how long the Python inference step takes.
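To get a comparable number for the Python side, the inference call can be wrapped in a small timing context manager. This is a sketch; the commented usage below assumes a TF Serving REST client, and `SERVING_URL` and the request body are hypothetical names to adapt to your own client code.

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    """Print the wall-clock time of the wrapped block, in milliseconds."""
    t0 = time.perf_counter()
    try:
        yield
    finally:
        print(f"{label}: {(time.perf_counter() - t0) * 1000:.1f} ms")

# Hypothetical usage -- adapt names to your client code:
# with timed("tf-serving inference"):
#     resp = requests.post(SERVING_URL, json={"instances": batch})
```

Timing only the request/response round trip this way gives a number directly comparable to DJL's 50-70 ms inference-only figure.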
I am still not sure whether my custom DJL project is actually using the GPU for inference.
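One way to verify GPU use is to watch utilization with `nvidia-smi` while the load test runs; on the Java side, DJL's `Engine.getInstance().getGpuCount()` can also confirm whether a GPU is visible to the engine. Below is a small Python sketch that queries per-GPU utilization through `nvidia-smi`'s CSV output; it assumes `nvidia-smi` is on the PATH.

```python
import csv
import io
import subprocess

def parse_utilization(csv_text):
    """Parse nvidia-smi CSV output ('87\n12\n') into a list of ints."""
    return [int(row[0]) for row in csv.reader(io.StringIO(csv_text)) if row]

def gpu_utilization():
    """Return current utilization (%) for each GPU, one entry per device."""
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_utilization(out)
```

If utilization stays near 0% during the pressure test, inference is running on the CPU, which would account for a large part of the TPS gap.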