Open chaiban opened 5 years ago
I am not sure how exactly they used DP++ for centre extraction. In general, the com refinement takes an approximate centre and predicts an update towards the true centre. The MCP joint of the middle finger is used as the centre. If you want to apply the com refinement to the FPHA benchmark, you should also train the com refinement on a similar dataset. The MSRA15 dataset is not well suited, since it has a third-person viewpoint, whereas FPHA has a first-person viewpoint. So you should add another Dataset class, similar to MSRA15Dataset, and train the com refinement on this dataset. Then you can use this to refine the com, i.e. the location where the bounding volume is cropped, before predicting the pose. Since the center-of-mass is rather inaccurate for egocentric viewpoints, I would suggest using temporal tracking of the com.
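The temporal tracking suggested above could be as simple as smoothing the per-frame com estimates over time. A minimal sketch (not part of the deep-prior-pp repo; `ComTracker` and its parameters are hypothetical) using an exponential moving average over the 3D centre:

```python
import numpy as np

class ComTracker:
    """Temporally smooth per-frame center-of-mass (com) estimates.

    Hypothetical helper: blends each new, possibly noisy com measurement
    with the previously tracked com via an exponential moving average.
    On the first frame there is no history, so the measurement is used
    directly.
    """

    def __init__(self, alpha=0.5):
        self.alpha = alpha  # weight given to the new measurement (0..1)
        self.com = None     # last tracked 3D centre, shape (3,)

    def update(self, com_measured):
        com_measured = np.asarray(com_measured, dtype=np.float64)
        if self.com is None:
            self.com = com_measured
        else:
            # EMA: new = alpha * measurement + (1 - alpha) * previous
            self.com = self.alpha * com_measured + (1.0 - self.alpha) * self.com
        return self.com
```

Per frame, one would feed the com produced by the refinement network into `update()` and crop the bounding volume at the returned, smoothed centre instead of the raw estimate.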
Ok. Thank you for the information. I will try that.
@chaiban
Could you extract the center points?
If so, could you help me with that issue?
Hi @moberweger ,
Thank you for the code. I am trying to implement the V2V-PoseNet algorithm on the First-Person Hand Action Benchmark, but they mention DeepPrior++ for center extraction.
I am trying to calculate centers for the First-Person Hand Action Benchmark. I ran the MSRA15_com_refine.py file, but I am not sure at which point it outputs the center points. Could you point me to where I can get the centers?
Thank you for your help!