microsoft / multiview-human-pose-estimation-pytorch

This is an official PyTorch implementation of "Cross View Fusion for 3D Human Pose Estimation, ICCV 2019".

A question in 3.1 Implementation #36

Open lsvery666 opened 3 years ago

lsvery666 commented 3 years ago

Hi, thanks for your great work, but I have a question about the cross-view fusion. In 3.1 Implementation, you note that "Different channels of the feature maps share the same weights". As I understand it, these "weights" are what ensure that "non-corresponding locations on the epipolar line will contribute no or little to the fusion". However, the corresponding location on the epipolar line is determined by the depth of the corresponding 3D joint, so in my opinion the depth should influence these weights. Since different joints (channels) have different depths, I'm wondering why you say that different channels share the same weights. Could you give a more specific explanation? Thank you very much.

haibo-qiu commented 3 years ago

Yes, different channels represent different joints that have different depths.

However, our fusion is achieved by a specifically trained fc layer (i.e., a matrix multiplication), which means that for each source location there is a unique set of weights (a column of the matrix) indicating how that location contributes to the target map. In this way, all pixels (locations) of the feature map, which correspond to different depths, contribute accordingly; no single depth has to be selected, so the same weight matrix can be shared across channels. Figure 3 in our paper illustrates the process.
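
To make this concrete, here is a minimal PyTorch sketch of that kind of per-location fully connected fusion. The class and variable names (`CrossViewFusion`, `feat_a`, `feat_b`) are hypothetical, not taken from this repo, and the sketch is simplified: the real layer is trained end-to-end so that locations along the epipolar line receive large weights, while this toy version just shows the shared matrix multiplication.

```python
import torch
import torch.nn as nn

class CrossViewFusion(nn.Module):
    """Sketch of fusing view B's heatmaps into view A's.

    Each target location is a learned linear combination of ALL source
    locations. The (H*W, H*W) weight matrix is shared across channels
    (joints): it encodes per-location geometry, so every pixel on the
    epipolar line (i.e., every depth hypothesis) can contribute.
    """
    def __init__(self, h, w):
        super().__init__()
        n = h * w
        # One scalar weight per (source location, target location) pair.
        # Zero init makes the fusion a no-op before training; the paper
        # learns these weights end-to-end.
        self.weight = nn.Parameter(torch.zeros(n, n))
        self.h, self.w = h, w

    def forward(self, feat_a, feat_b):
        # feat_a, feat_b: (N, C, H, W) heatmaps from two camera views.
        n, c = feat_a.shape[:2]
        fb = feat_b.flatten(2)            # (N, C, H*W)
        fused = fb @ self.weight          # same matrix for every channel
        return feat_a + fused.view(n, c, self.h, self.w)
```

For example, with 17 joint heatmaps at 64x64 resolution:

```python
fusion = CrossViewFusion(64, 64)
a = torch.randn(2, 17, 64, 64)
b = torch.randn(2, 17, 64, 64)
out = fusion(a, b)   # (2, 17, 64, 64)
```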