Closed · Linkersem closed this issue 3 months ago
Hi, could you please give more details? I have tried both V1 and V2, and for me V2 is faster than V1, using both the OpenVINO and PyTorch models.
Hi, I tested the same data on an RTX 4090 with the Small model, and V2 took almost twice as long as V1. I found that most of the extra time was spent copying the inferred depth values from the GPU to the CPU and returning them from the function. After modifying the code, the two now take the same time, but V2 is not faster than V1.
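A minimal sketch of how such a measurement can separate model inference from the device-to-host copy. This is not the repository's actual benchmarking code; it uses a tiny stand-in model, assumes PyTorch, and falls back to CPU when no GPU is present. The key points are warming up first and calling `torch.cuda.synchronize()` before reading the clock, since CUDA kernels launch asynchronously:

```python
import time
import torch
import torch.nn as nn

# Hypothetical stand-in for the depth network; the real Depth Anything
# checkpoint would be loaded here instead.
model = nn.Conv2d(3, 1, kernel_size=3, padding=1)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device).eval()
x = torch.randn(1, 3, 518, 518, device=device)

with torch.no_grad():
    # Warm up so one-time kernel/initialization costs are not counted.
    for _ in range(3):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()

    t0 = time.perf_counter()
    depth = model(x)
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the GPU before stopping the clock
    infer_s = time.perf_counter() - t0

    t1 = time.perf_counter()
    # The device-to-host copy measured separately; this was the
    # dominant cost in the discussion above.
    depth_cpu = depth.cpu().numpy()
    copy_s = time.perf_counter() - t1

print(f"inference: {infer_s * 1000:.1f} ms, GPU->CPU copy: {copy_s * 1000:.1f} ms")
```

If the copy dominates, keeping the depth tensor on the GPU for downstream processing (or copying it asynchronously) removes that overhead without changing the model itself.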
Thank you for the discussion. Our V2 shares the same model architecture as V1, so the inference time should be the same.
Hi, thanks for your great work! I tested the inference time of Depth Anything V1 versus V2 and found that V2 is about twice as slow as V1. Is this expected?