romaincosson opened this issue 1 year ago
Hi! Unfortunately, without any specifics, it's hard to say what the issue is. Perhaps you could first try to get face mesh working, like in this example, and then gradually adapt the code to pose tracking?
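For reference, the core of that adaptation is small: request the pose graph's landmark output stream instead of the face mesh one, and parse a single NormalizedLandmarkList per frame rather than one list per detected face. A minimal sketch of the setup, assuming the same LibMP API as the face mesh example and a `graph` string holding the pose_tracking graph config (the stream and header names are assumptions based on that example and graph):

```cpp
#include <memory>
#include "libmp.h" // LibMP wrapper header, included the same way as in the face mesh example

// Sketch: set up a LibMP graph for pose tracking instead of face mesh.
// Face mesh example: AddOutputStream("multi_face_landmarks") -> vector of NormalizedLandmarkList (one per face)
// Pose tracking:     AddOutputStream("pose_landmarks")       -> a single NormalizedLandmarkList per frame
std::shared_ptr<mediapipe::LibMP> create_pose_graph(const char* graph) {
    std::shared_ptr<mediapipe::LibMP> pose_tracking(mediapipe::LibMP::Create(graph, "input_video"));
    pose_tracking->AddOutputStream("pose_landmarks");
    pose_tracking->Start();
    return pose_tracking;
}
```

The per-frame Process() / WaitUntilIdle() / packet-parsing loop can then stay close to the face mesh example, just without the per-face loop.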
I already adapted the code to pose tracking.
Here is my get_landmarks function:
```cpp
std::vector<std::array<float, 3>> get_landmarks(const std::shared_ptr<mediapipe::LibMP>& pose_tracking) {
    std::vector<std::array<float, 3>> landmarks;

    // Drain the pose_landmarks output stream queue, keeping only the most recent packet
    std::shared_ptr<const void> packet;
    while (pose_tracking->GetOutputQueueSize("pose_landmarks") > 0) {
        packet.reset(pose_tracking->GetOutputPacket("pose_landmarks"), mediapipe::LibMP::DeletePacket);
    }

    // Check whether a non-empty pose_landmarks packet was retrieved
    if (packet.get() != nullptr && !mediapipe::LibMP::PacketIsEmpty(packet.get())) {
        // Get the underlying protobuf message from the packet
        const void* lm_list_proto = mediapipe::LibMP::GetPacketProtoMsg(packet.get());
        if (lm_list_proto != nullptr) {
            // Get the size of the protobuf message
            size_t lm_list_proto_size = mediapipe::LibMP::GetProtoMsgByteSize(lm_list_proto);

            // Create buffer to hold protobuf message data; copy data to buffer
            std::shared_ptr<uint8_t[]> proto_data(new uint8_t[lm_list_proto_size]);
            mediapipe::LibMP::WriteProtoMsgData(proto_data.get(), lm_list_proto, static_cast<int>(lm_list_proto_size));

            // Initialize a mediapipe::NormalizedLandmarkList object from the buffer
            mediapipe::NormalizedLandmarkList pose_landmarks;
            pose_landmarks.ParseFromArray(proto_data.get(), static_cast<int>(lm_list_proto_size));

            // Copy the landmark data to our custom data structure
            for (const mediapipe::NormalizedLandmark& lm : pose_landmarks.landmark()) {
                landmarks.push_back({ lm.x(), lm.y(), lm.z() });
            }
        }
    }
    return landmarks;
}
```
Then I use the pose_tracking.cpu graph, and here is my OpenCV reading code:
```cpp
// Create MP pose tracking graph
// ("graph" holds the pose_tracking graph config text, defined elsewhere; "input_video" is its input stream)
std::shared_ptr<mediapipe::LibMP> pose_tracking(mediapipe::LibMP::Create(graph, "input_video"));

// Landmark XYZ data output stream
pose_tracking->AddOutputStream("pose_landmarks");

// Start MP graph
pose_tracking->Start();

// Open the capture device
cv::VideoCapture cap(0);
if (!cap.isOpened()) {
    std::cerr << "Could not open device #0. Is a camera/webcam attached?" << std::endl;
    return EXIT_FAILURE;
}

cv::Mat frame_bgr;
cap >> frame_bgr;
while (cap.read(frame_bgr)) {
    // Convert frame from BGR to RGB
    cv::Mat frame_rgb;
    cv::cvtColor(frame_bgr, frame_rgb, cv::COLOR_BGR2RGB);

    // Start inference clock
    auto t0 = std::chrono::high_resolution_clock::now();

    // Feed RGB frame into MP pose_tracking graph (image data is COPIED internally by LibMP)
    if (!pose_tracking->Process(frame_rgb.data, frame_rgb.cols, frame_rgb.rows, mediapipe::ImageFormat::SRGB)) {
        std::cerr << "Process() failed!" << std::endl;
        break;
    }
    pose_tracking->WaitUntilIdle();

    // Stop inference clock
    auto t1 = std::chrono::high_resolution_clock::now();
    int inference_time_ms = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();

    // Get landmark coordinates in custom data structure using helper function (see above)
    std::vector<std::array<float, 3>> mediapipe_landmarks = get_landmarks(pose_tracking);

    // Draw a circle at each landmark's (normalized) position
    for (const std::array<float, 3>& norm_xyz : mediapipe_landmarks) {
        int x = static_cast<int>(norm_xyz[0] * frame_bgr.cols);
        int y = static_cast<int>(norm_xyz[1] * frame_bgr.rows);
        cv::circle(frame_bgr, cv::Point(x, y), 1, cv::Scalar(0, 255, 0), -1);
    }

    // Display frame
    cv::imshow("LibMP Example", frame_bgr);

    // Close on any keypress
    if (cv::waitKey(1) >= 0) {
        break;
    }
}
cv::destroyAllWindows();
return EXIT_SUCCESS;
}
```
But the pose tracking stays stuck on the first frame.
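For what it's worth, a quick way to narrow this down is to print the output queue size once per frame, right before calling get_landmarks(): if it stays at 0 from the second frame on, the graph itself has stopped producing pose_landmarks packets, rather than the drawing/display code being at fault. A minimal sketch using only the LibMP calls already shown above:

```cpp
#include <iostream>
#include <memory>

// Debugging aid (sketch): call once per frame, just before get_landmarks(),
// to see whether the graph keeps producing pose_landmarks packets after frame 1.
void log_pose_queue_size(const std::shared_ptr<mediapipe::LibMP>& pose_tracking) {
    auto queued = pose_tracking->GetOutputQueueSize("pose_landmarks");
    std::cout << "pose_landmarks packets queued: " << queued << std::endl;
}
```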
Hello,
I am trying to adapt your main.cpp program to pose_tracking; the first frame works, but then it gets stuck on that frame.
I saw that other people ran into the same issue and fixed it, but they did not explain how. Do you have any solutions? I already clear the queue before the while loop.
Thank you