PaulaScharf opened 3 years ago
I added an option to skip the depth image. This cuts the computation time from ~0.33 seconds to ~0.14 seconds per frame. However, without the depth image there is no visualization or logging of hand height, so this approach should remain optional, and further ways to improve performance still need to be investigated.
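A minimal sketch of how such a toggle could look; the actual function and flag names (`process_frame`, `use_depth`) are assumptions, and the detection/height logic is a placeholder:

```python
import numpy as np

def process_frame(color_frame, depth_frame=None, use_depth=True):
    """Process one frame; depth-based hand-height logging is skipped
    when use_depth is False (hypothetical sketch of the toggle)."""
    result = {"hand_detected": bool(color_frame.mean() > 0)}  # placeholder detection
    if use_depth and depth_frame is not None:
        # Hand height would be derived from the depth image here.
        result["hand_height"] = float(np.median(depth_frame))
    return result

color = np.zeros((480, 640, 3), dtype=np.uint8)
depth = np.full((480, 640), 500, dtype=np.uint16)

fast = process_frame(color, use_depth=False)       # no depth: faster, no height
full = process_frame(color, depth, use_depth=True) # depth: includes hand height
```

Keeping the depth path behind a flag preserves the full behavior while letting performance-critical deployments opt out.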
When I display the frame directly (`cv2.imshow(...)`) while also sending it to the client, there is no temporal difference between the client and the direct visualization. This leads me to believe that the network introduces no delay, which implies that all latency comes from reading the frame from the camera.
Edit: Now that we are using USB 3.0 again, reading a frame takes under 0.01 seconds. It also became apparent that there are other sources of latency as well; these will be investigated.
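Per-stage timing like this can be checked with a small helper; `timed_read` and the dummy reader below are illustrative (with a real camera, `read_fn` would wrap something like `cap.read()` from `cv2.VideoCapture`):

```python
import time
import numpy as np

def timed_read(read_fn):
    """Time a single frame read. read_fn stands in for the real camera
    call, e.g. `lambda: cap.read()[1]` with cv2.VideoCapture (assumption)."""
    t0 = time.perf_counter()
    frame = read_fn()
    elapsed = time.perf_counter() - t0
    return frame, elapsed

# Dummy reader simulating a camera that returns a 480x640 BGR frame.
frame, elapsed = timed_read(lambda: np.zeros((480, 640, 3), dtype=np.uint8))
print(f"read took {elapsed * 1000:.2f} ms")
```

Wrapping each pipeline stage the same way makes it easy to see which stage contributes the remaining latency.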
A major performance drain was numpy's `where` function. I replaced every occurrence, which doubled the framerate.
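The original's exact uses of `np.where` aren't shown, but a common replacement is swapping index-array lookups for direct boolean-mask indexing, which avoids materializing the index tuples. A sketch with made-up depth values:

```python
import numpy as np

depth = np.array([[0, 300, 900], [1200, 0, 450]], dtype=np.uint16)

# np.where-based masking (the slower pattern being replaced):
idx = np.where((depth > 0) & (depth < 1000))
vals_where = depth[idx]

# Equivalent boolean-mask indexing; no intermediate index arrays are built:
mask = (depth > 0) & (depth < 1000)
vals_mask = depth[mask]

assert np.array_equal(vals_where, vals_mask)
```

Similarly, the ternary form `np.where(cond, a, b)` can often be replaced by masked assignment (`out[mask] = a`) when only one branch actually changes values.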
Currently the visualization is delayed and runs at only about 3 frames per second. It should be investigated whether this can be improved; one possible approach is adjusting the networking.