Closed · kazmifactor closed this issue 9 months ago
Hi @kazmifactor
Please spend some time looking through the code. You'll notice that all the API methods return a std::vector<Object>,
which contains the information you are after: https://github.com/cyrusbehr/YOLOv8-TensorRT-CPP/blob/afe5a445a64869fb0c5f690ce0d58dc7ea625a41/src/yolov8.h#L75-L81
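For illustration, here is a minimal sketch of how that result vector could be consumed. The field names `rect`, `label`, `probability`, and `kps` are assumptions based on the struct linked above, so verify them against `yolov8.h` in your checkout before using this:

```cpp
#include <cstdio>
#include <vector>
#include <opencv2/core.hpp>
#include "yolov8.h"  // defines the Object struct (see the linked lines)

// Hypothetical helper: print the detection data contained in each Object.
// The member names (rect, label, probability, kps) are assumptions --
// check the Object struct in yolov8.h for the exact names in your version.
void printDetections(const std::vector<Object>& objects) {
    for (const Object& obj : objects) {
        const cv::Rect_<float>& box = obj.rect;
        std::printf("label=%d conf=%.2f bbox=[x=%.1f y=%.1f w=%.1f h=%.1f]\n",
                    obj.label, obj.probability,
                    box.x, box.y, box.width, box.height);

        // Pose models typically also fill a flat keypoint array,
        // assumed here to be (x, y, score) triplets.
        for (size_t i = 0; i + 2 < obj.kps.size(); i += 3) {
            std::printf("  kp %zu: x=%.1f y=%.1f score=%.2f\n",
                        i / 3, obj.kps[i], obj.kps[i + 1], obj.kps[i + 2]);
        }
    }
}
```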
Thank you for your reply.
I am not able to decode it and get the data that my application requires. I am very new to Linux and the C++ framework.
If you could help me by making separate code that runs a video file, sends each frame separately, and returns results like the bbox or keypoints through either Python or C++, that would help me and anyone else who uses this in the future.
Thank you in advance.
My code already works with a video input, decodes the outputs, and annotates the video. It already does everything you are asking for. I can't help you any more than that. The project is already fairly simple. You need to spend some time learning C++ and try running the project. I cannot help you with that.
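For anyone who finds this later, here is a rough per-frame sketch of what the maintainer is describing. The class name `YoloV8`, the `YoloV8Config` struct, and the `detectObjects` / `drawObjectLabels` method names are assumptions inferred from the repository layout; compare against `src/yolov8.h` and the project's own video entry point before copying anything.

```cpp
#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>
#include "yolov8.h"

int main() {
    // Assumed constructor: model path plus a config struct. Check the
    // project's own main() for the real arguments and config options.
    YoloV8 detector("yolov8n.onnx", YoloV8Config{});

    cv::VideoCapture cap("input.mp4");  // or 0 for a live camera feed
    cv::Mat frame;
    while (cap.read(frame)) {
        // Assumed API: run inference on one frame, get back the detections.
        std::vector<Object> objects = detector.detectObjects(frame);

        // The raw numbers (bbox coordinates, confidence, keypoints) are
        // available right here, before any drawing happens.
        for (const Object& obj : objects) {
            std::cout << "bbox: [" << obj.rect.x << ", " << obj.rect.y << ", "
                      << obj.rect.width << ", " << obj.rect.height
                      << "] conf: " << obj.probability << "\n";
        }

        // Optional: use the annotation the project already provides.
        detector.drawObjectLabels(frame, objects);
        cv::imshow("detections", frame);
        if (cv::waitKey(1) == 27) break;  // Esc to quit
    }
    return 0;
}
```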
Is there any way to get the inference data from these results, like the coordinates of the bbox, or the keypoints in pose, in a live video?