dragonfly90 / mxnet_Realtime_Multi-Person_Pose_Estimation

This is an MXNet version of Realtime_Multi-Person_Pose_Estimation; the original code is here: https://github.com/ZheC/Realtime_Multi-Person_Pose_Estimation

Reproduce evaluation issues #3

Status: Open. dragonfly90 opened this issue 6 years ago

dragonfly90 commented 6 years ago

It is hard to reproduce the results reported in Cao's original paper; this is discussed in the original implementation's issues:
https://github.com/ZheC/Realtime_Multi-Person_Pose_Estimation/issues/35
https://github.com/ZheC/Realtime_Multi-Person_Pose_Estimation/issues/68
https://github.com/ZheC/Realtime_Multi-Person_Pose_Estimation/issues/72
https://github.com/ZheC/Realtime_Multi-Person_Pose_Estimation/issues/74

The MATLAB version should reach 0.577 AP.

With the Python version I obtained an average precision of 0.43.

Matlab (1 scale): 48.2
Matlab (4 scales): 57.7
caffe_rtpose (1 scale): 44.9

My current implementation:

starting_range = 0.85; %0.25 0.7
ending_range = 1.5; %1.2 1.8
octave = 6;
starting_scale = boxsize/(size(oriImg,1)*ending_range);
ending_scale = boxsize/(size(oriImg,1)*starting_range);
multiplier = 2.^(log2(starting_scale):(1/octave):log2(ending_scale));

Accumulating evaluation results... DONE (t=0.06s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.400
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.648
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.423
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.377
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.443
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.436
Average Recall (AR) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.663
Average Recall (AR) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.455
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.387
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.502
None 119.816125202
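For readers comparing scale strategies, here is a small Python sketch of what this octave-based sweep produces for a given image height. It is only a transcription of the MATLAB lines above with boxsize = 368; the helper name `octave_scales` is mine, not from this repo.

```python
import numpy as np

def octave_scales(img_height, boxsize=368, starting_range=0.85,
                  ending_range=1.5, octave=6):
    """Transcription of the MATLAB octave-based scale sweep (illustrative only)."""
    starting_scale = boxsize / (img_height * ending_range)
    ending_scale = boxsize / (img_height * starting_range)
    # MATLAB: multiplier = 2.^(log2(starting_scale):(1/octave):log2(ending_scale));
    exponents = np.arange(np.log2(starting_scale),
                          np.log2(ending_scale) + 1e-9, 1.0 / octave)
    return 2.0 ** exponents

# The list of resize factors the MATLAB evaluation would use for a 480-pixel-tall image.
print(octave_scales(480))
```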

dragonfly90 commented 6 years ago

python evaluation

boxsize = 368
scale_search = 0.5, 1, 1.5, 2
multiplier = [x * boxsize / oriImg.shape[0] for x in scale_search]
padRightCorner
if subset[i][-1] < 4 or subset[i][-2]/subset[i][-1] < 0.4: deleteIdx.append(i)
interpolation = cv.INTER_CUBIC
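For anyone trying to reproduce this, a minimal sketch of how such a fixed multi-scale search is typically applied at inference, averaging the heat maps and PAFs over scales. `predict_heatmap_paf` is a hypothetical wrapper around the network forward pass (not a function from this repo), and the padding/stride-upsampling steps of the real demo are omitted.

```python
import cv2 as cv
import numpy as np

boxsize = 368
scale_search = [0.5, 1, 1.5, 2]

def multiscale_forward(oriImg, predict_heatmap_paf):
    """Run the network at several scales and average the resized outputs."""
    multiplier = [x * boxsize / oriImg.shape[0] for x in scale_search]
    heatmap_avg = np.zeros(oriImg.shape[:2] + (19,))  # 18 parts + background (COCO model)
    paf_avg = np.zeros(oriImg.shape[:2] + (38,))      # 19 limbs * 2 channels
    for scale in multiplier:
        img = cv.resize(oriImg, (0, 0), fx=scale, fy=scale,
                        interpolation=cv.INTER_CUBIC)
        heatmap, paf = predict_heatmap_paf(img)       # stage-6 network output (hypothetical)
        heatmap = cv.resize(heatmap, (oriImg.shape[1], oriImg.shape[0]),
                            interpolation=cv.INTER_CUBIC)
        paf = cv.resize(paf, (oriImg.shape[1], oriImg.shape[0]),
                        interpolation=cv.INTER_CUBIC)
        heatmap_avg += heatmap / len(multiplier)
        paf_avg += paf / len(multiplier)
    return heatmap_avg, paf_avg
```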

matlab evaluation

starting_range = 0.85; %0.25 0.7
ending_range = 1.5; %1.2 1.8
octave = 6;
starting_scale = boxsize/(size(oriImg,1)*ending_range);
ending_scale = boxsize/(size(oriImg,1)*starting_range);
multiplier = 2.^(log2(starting_scale):(1/octave):log2(ending_scale));

Scale: default 'bicubic'

Inference if score > -100

padHeight
if (subset(i,end)<3) || (subset(i,end-1)/subset(i,end)<0.2)
    deleIdx = [deleIdx;i];
end
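Note that the subset-pruning thresholds differ between the two versions (the Python demo drops a person candidate with fewer than 4 parts or a mean score below 0.4, the MATLAB evaluation with fewer than 3 parts or below 0.2), which by itself changes AP. A small sketch of the rule, assuming each `subset` row ends with [..., total_score, part_count] as in the demo code:

```python
import numpy as np

def prune_subsets(subset, min_parts, min_mean_score):
    """Drop person candidates with too few detected parts or too low a mean score."""
    keep = []
    for i in range(len(subset)):
        n_parts = subset[i][-1]
        mean_score = subset[i][-2] / n_parts if n_parts > 0 else 0.0
        if n_parts >= min_parts and mean_score >= min_mean_score:
            keep.append(subset[i])
    return np.asarray(keep)

# Python demo thresholds: prune_subsets(subset, min_parts=4, min_mean_score=0.4)
# MATLAB eval thresholds: prune_subsets(subset, min_parts=3, min_mean_score=0.2)
```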

dragonfly90 commented 6 years ago

Not working: 8 images.

boxsize = 368
scale_search = [0.5, 1, 1.5, 2]
multiplier = [x * boxsize * 1.0 / oriImg.shape[0] for x in scale_search]

DONE (t=0.06s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.474
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.701
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.527
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.508
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.455
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.509
Average Recall (AR) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.718
Average Recall (AR) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.552
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.514
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.507
None 123.940952067

dragonfly90 commented 6 years ago

Python accuracy:
DONE (t=0.06s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.550
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.800
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.610
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.541
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.576
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.591
Average Recall (AR) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.812
Average Recall (AR) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.644
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.549
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.651
None 114.388944797

dragonfly90 commented 6 years ago

Below is the MATLAB detailed accuracy:
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.577
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.797
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.627
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.545
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.629
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.621
Average Recall (AR) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.814
Average Recall (AR) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.662
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.555
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.706

Evaluation:
GT bbox + CPM: 63
SSD + CPM: 53
Method of the Realtime Pose paper: 58.5

Mask R-CNN: around 63 on the test dataset
G-RMI: 64.9
Associative Embedding: 65.5

dragonfly90 commented 6 years ago

The 8 images that do not work: [492395, 132791, 100896, 559665, 134206, 560349, 429633, 451095]

dragonfly90 commented 6 years ago

Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.551
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.801
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.611
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.545
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.579
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.595
Average Recall (AR) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.818
Average Recall (AR) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.649
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.554
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.654

lukemurray commented 6 years ago

We've been playing with this network in PyTorch.

I'm interested in what your loss looks like. Have you graphed it, or do you have a sample of the numbers?

Also, how long does it train for? I'm running it, but there is not much output info.

Thanks,

dragonfly90 commented 6 years ago

@lukemurray I am not sure about it. I only calculate the loss of the last layer. The sum of the heat map and part affinity field mean losses is about 200~300. How about yours?
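For what it's worth, a minimal Gluon sketch of how I read that setup (an L2 loss on the last stage's heat maps and PAFs only). The tensor names and shapes are placeholders, not this repo's code, and the absolute value (~200~300) will depend on how the labels are scaled and how the loss is reduced.

```python
from mxnet import nd
from mxnet.gluon.loss import L2Loss

l2 = L2Loss()

def last_stage_loss(pred_heatmap, pred_paf, gt_heatmap, gt_paf):
    """Sum of the final-stage heat-map and PAF L2 losses (per sample, then batch mean)."""
    loss_heat = l2(pred_heatmap, gt_heatmap)  # shape (batch,)
    loss_paf = l2(pred_paf, gt_paf)           # shape (batch,)
    return (loss_heat + loss_paf).mean()

# Dummy shapes: batch of 2, 46x46 output maps, 19 heat-map / 38 PAF channels (COCO model).
loss = last_stage_loss(nd.zeros((2, 19, 46, 46)), nd.zeros((2, 38, 46, 46)),
                       nd.zeros((2, 19, 46, 46)), nd.zeros((2, 38, 46, 46)))
```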

lukemurray commented 6 years ago

Similar range to what we get as well.

dragonfly90 commented 6 years ago

@kohillyang How is the training going now? Thanks.

Ai-is-light commented 6 years ago

@dragonfly90 @lukemurray I got a lower loss on this, but I changed the network to get a smaller model and a faster network. In stage 6 the L1 loss is about 50 and the L2 loss is about 15; the total loss is about 422.

Ai-is-light commented 6 years ago

Hi @dragonfly90, I'm confused by the author's code in the Python demo. I would like to compute the mAP of a model I trained myself, but I don't clearly understand how to get the output into the format the COCO keypoint evaluation expects. Did you compute the mAP of your model with python-caffe? I don't have matlab-caffe, only python-caffe. Any answer is appreciated; looking forward to your reply.

dragonfly90 commented 6 years ago

@Ai-is-light You mean the Caffe model? I only used the MXNet model. What mAP did you get? Evaluation on the COCO validation dataset with the transferred MXNet model: evaluation_coco.py
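For others hitting the same question: the COCO keypoint evaluation only needs a list of per-person result dicts and pycocotools. Below is a minimal sketch of the expected result format and the evaluation call, assuming predictions have already been decoded into the 17 COCO keypoints per person; the image id, file names, and keypoint values here are placeholders.

```python
import json
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# One dict per detected person; `keypoints` is a flat list of 17 * (x, y, v)
# in COCO keypoint order, `score` is the person-level confidence.
results = [
    {
        "image_id": 262148,                     # placeholder COCO image id
        "category_id": 1,                       # person
        "keypoints": [100.0, 200.0, 1.0] * 17,  # placeholder coordinates
        "score": 0.9,
    },
]

with open("my_keypoint_results.json", "w") as f:      # placeholder file name
    json.dump(results, f)

coco_gt = COCO("person_keypoints_val2014.json")       # placeholder path to GT annotations
coco_dt = coco_gt.loadRes("my_keypoint_results.json")
coco_eval = COCOeval(coco_gt, coco_dt, iouType="keypoints")
coco_eval.params.imgIds = sorted({r["image_id"] for r in results})  # only the images evaluated
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()   # prints an AP/AR table like the ones pasted above
```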

FoolishMao commented 6 years ago

@dragonfly90 'Average Precision (AP) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.801'. I have a TensorFlow model of OpenPose, but when evaluating it I can't reproduce the results in the paper. Your results seem similar, so could you tell me which scales you used? Thanks!

soans1994 commented 3 years ago

python evaluation

boxsize = 368
scale_search = 0.5, 1, 1.5, 2
multiplier = [x * boxsize / oriImg.shape[0] for x in scale_search]
padRightCorner
if subset[i][-1] < 4 or subset[i][-2]/subset[i][-1] < 0.4: deleteIdx.append(i)
interpolation = cv.INTER_CUBIC

matlab evaluation

starting_range = 0.85; %0.25 0.7
ending_range = 1.5; %1.2 1.8
octave = 6;
starting_scale = boxsize/(size(oriImg,1)*ending_range);
ending_scale = boxsize/(size(oriImg,1)*starting_range);
multiplier = 2.^(log2(starting_scale):(1/octave):log2(ending_scale));

Scale: default 'bicubic'

Inference if score > -100

padHeight
if (subset(i,end)<3) || (subset(i,end-1)/subset(i,end)<0.2)
    deleIdx = [deleIdx;i];
end

Can you please tell me the config data to test the MPI model in Python? The config file only has COCO evaluation parameters. Thank you.
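I don't have this repo's MPI settings either, but for anyone experimenting, here is a hypothetical sketch of what an MPI section would likely contain, based on the defaults in the original Caffe release (the MPI model uses 15 body parts plus background rather than COCO's 18+1). Treat every value below as an assumption to verify against the original prototxt, not as this repo's actual config.

```python
# Hypothetical MPI model settings (verify against the original OpenPose/Caffe release);
# this repo's config only ships the COCO section.
mpi_model = {
    "boxsize": 368,      # assumed: same input box size as the COCO model
    "stride": 8,         # assumed: same output stride as the COCO model
    "padValue": 128,     # assumed: same padding value as the COCO model
    "part_str": [        # MPI 15-part naming used by the original demo
        "Head", "Neck", "RShoulder", "RElbow", "RWrist",
        "LShoulder", "LElbow", "LWrist", "RHip", "RKnee",
        "RAnkle", "LHip", "LKnee", "LAnkle", "Chest",
    ],
}
```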