Nicholasli1995 / EvoSkeleton

Official project website for the CVPR 2020 paper (Oral Presentation) "Cascaded deep monocular 3D human pose estimation with evolutionary training data"
https://arxiv.org/abs/2006.07778
MIT License

Regarding h36m image #81

Open jenowary opened 1 year ago

jenowary commented 1 year ago

Thank you for kindly providing the 2D HRNet model. Please let me raise an issue about reproducing the 2D detection results.

I could not reproduce the reported Average Joint Localization Error of 4.4 pixels, and I found that the example image you provided differs from mine. Red regions indicate inconsistent pixel values (see the attached comparison image).

I extract images from the original video with ffmpeg, using the arguments -hide_banner -loglevel error -nostats -i SrcVideoPath -q:v 1, and I don't know why the image inconsistency occurs.
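For reference, the full command looks like the following; the output filename pattern is only an assumed example:

ffmpeg -hide_banner -loglevel error -nostats -i SrcVideoPath -q:v 1 frame_%06d.jpg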

Can you share how you extracted yours? It would mean a lot to me; thanks in advance.

Nicholasli1995 commented 1 year ago

Hi, I did not use the ffmpeg commands. I used the video functionality in OpenCV. Specifically, I used cv2.VideoCapture to initialize a video stream and read the frames:

import cv2

cap = cv2.VideoCapture(video_path)
while cap.isOpened():
    ret, frame = cap.read()

jenowary commented 1 year ago

Thank you for providing the details. But I tried it with the code snippet below and still see a difference:

import os
import cv2

def readFrames(videoFile, destination_dir, sequence_name):
    # frame_step: keep every frame_step-th frame
    # destination_format: output image extension, e.g. 'jpg'
    global frame_step, destination_format
    if not os.path.exists(destination_dir):
        os.makedirs(destination_dir)
    image_counter = 1
    read_counter = 0
    cap = cv2.VideoCapture(videoFile)
    while cap.isOpened():
        ret, cv2_im = cap.read()
        if not ret:
            break
        if read_counter % frame_step == 0:
            # e.g. <sequence_name>_000001.jpg
            out_name = sequence_name + '_%06d' % image_counter + '.' + destination_format
            cv2.imwrite(os.path.join(destination_dir, out_name), cv2_im)
            image_counter += 1
        read_counter += 1
    cap.release()
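A call would look like the following; the global values and the paths here are only example assumptions:

frame_step = 1             # keep every frame
destination_format = 'jpg'
readFrames('/path/to/video.mp4', './frames', 'S9_Walking.54138969')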

(image: pixel-difference comparison between my extracted frame and the provided one)

Can you give more details, e.g. whether you saved with cv2.imwrite or PIL.Image.save?

Thanks again for your patient reply.

Nicholasli1995 commented 1 year ago

Hi, the method used to save the image should not cause the problem. Is it possible that the timestamp of the frame you used differs from that of the example image? In addition, how large is the error quantitatively? Is it large enough to affect the produced 2D keypoint predictions?
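For a quantitative check, here is a minimal sketch of the average joint localization error, assuming pred and gt are NumPy arrays of shape (num_samples, num_joints, 2) holding 2D keypoints in pixels:

import numpy as np

def avg_joint_localization_error(pred, gt):
    # Mean Euclidean distance over all joints and samples, in pixels
    return np.linalg.norm(pred - gt, axis=-1).mean()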

jenowary commented 1 year ago

Thank you for your analysis. I think the timestamps are the same. For example, 1002.jpg provided in this repo corresponds to either the 1002nd or the 1003rd frame; I tried both, and it appears to be the 1003rd. The evaluated 2D error on S9 & S11 is 5.76 pixels, a significant deterioration compared to the reported 4.4 pixels.
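For checking a specific frame, one option is to seek directly; note that cv2.CAP_PROP_POS_FRAMES is 0-based, so index 1002 returns the 1003rd frame (seeking accuracy can depend on the codec):

import cv2

cap = cv2.VideoCapture(video_path)
cap.set(cv2.CAP_PROP_POS_FRAMES, 1002)  # 0-based: the 1003rd frame
ret, frame = cap.read()
cap.release()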

But here are some updates. I noticed that 12 video sequences of subject S9 have drifting joint annotations (ref: Human3.6M erroneous annotations). After removing these samples from the test set (a sketch of the filtering is below), the evaluation comes down to 4.6 pixels, close to the reported 4.4.
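Concretely, the removal is a simple filter; the identities of the 12 sequences are not listed here, and the sample's 'sequence' field is an assumed data layout:

# Placeholder: fill in the 12 S9 sequences with drifting annotations
DRIFTING_SEQUENCES = set()

def filter_test_set(samples):
    # Keep only samples whose sequence is not in the drifting list
    return [s for s in samples if s['sequence'] not in DRIFTING_SEQUENCES]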

I wonder if this drift-removal evaluation is the same as yours?

Nicholasli1995 commented 1 year ago

It was similar. The wrong ground-truth annotations indeed need some extra processing. Instead of removing them, the ground-truth keypoints were shifted so that their centroid coincides with the centroid of the predicted keypoints before computing the error.
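A minimal sketch of this centroid alignment, assuming gt and pred are NumPy arrays of shape (num_joints, 2) for one affected sample:

import numpy as np

def align_gt_to_pred_centroid(gt, pred):
    # Translate the ground-truth keypoints so their centroid coincides
    # with the centroid of the predictions, removing the global drift
    # before the pixel error is computed
    return gt + (pred.mean(axis=0) - gt.mean(axis=0))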