A new example, "live_scene_and_gaze_nocalib_using_queues.py", has been added to the apps folder. It is based on "live_scene_and_gaze_nocalib.py" and includes two modifications:
It imports queue and threading and adds a frame_grabber() function that runs in a separate thread, so that only the most recently received frame is processed.
It imports YOLO from ultralytics. In addition to drawing a circle at the gaze position, each received frame is sent to YOLOv8 to detect objects in the scene. Each detected object's bounding box is drawn in green or red depending on whether the gaze falls inside it.
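The frame_grabber() pattern can be sketched as below. This is a minimal illustration, not the example's actual code: the FakeCapture class is a hypothetical stand-in for the real video source, and the key idea is a queue of size 1 whose stale entry is dropped before each put, so the consumer always sees the newest frame.

```python
import queue
import threading

def frame_grabber(capture, frame_q):
    """Continuously read frames, keeping only the newest one in the queue."""
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        # Discard the stale frame (if any) so the consumer never
        # falls behind processing old frames.
        try:
            frame_q.get_nowait()
        except queue.Empty:
            pass
        frame_q.put(frame)

# Hypothetical frame source standing in for the real video capture:
class FakeCapture:
    def __init__(self, n):
        self.i, self.n = 0, n
    def read(self):
        if self.i >= self.n:
            return False, None
        self.i += 1
        return True, self.i  # frame payload is just its index here

frame_q = queue.Queue(maxsize=1)
t = threading.Thread(target=frame_grabber,
                     args=(FakeCapture(100), frame_q), daemon=True)
t.start()
t.join()
latest = frame_q.get()  # only the most recent frame (100) survives
```

The consumer loop then calls frame_q.get() and does its heavy post-processing on that single frame, while the grabber thread keeps discarding everything that arrives in the meantime.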
The first modification allows faster processing. The improvement may go unnoticed if the machine's CPU is sufficiently powerful, but it becomes noticeable when heavy post-processing is performed on each frame, which is what the second modification demonstrates: with YOLOv8 running on the CPU, the observed delay between gaze and camera image is about 3 seconds without frame_grabber() and about 0.4 seconds with it.
Nonetheless, this does NOT guarantee synchronization between gaze and camera image.
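The red/green colouring described above reduces to a point-in-rectangle test. The helper below is an illustrative sketch, not the example's actual code; the ultralytics result format (one (x1, y1, x2, y2) box per detection) is assumed, and colours are BGR tuples as used by OpenCV drawing functions.

```python
def box_color(box, gaze_xy):
    """Return green (BGR) if the gaze point lies inside the box, else red."""
    x1, y1, x2, y2 = box
    gx, gy = gaze_xy
    inside = x1 <= gx <= x2 and y1 <= gy <= y2
    return (0, 255, 0) if inside else (0, 0, 255)

# Gaze inside the box -> green; outside -> red.
color_hit = box_color((100, 100, 300, 250), (200, 180))
color_miss = box_color((100, 100, 300, 250), (50, 50))
```

In the real loop, each box returned by the YOLOv8 model would be drawn with something like cv2.rectangle(frame, (x1, y1), (x2, y2), box_color(box, gaze_xy), 2).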