bmartacho / UniPose

We propose UniPose, a unified framework for human pose estimation based on our “Waterfall” Atrous Spatial Pooling architecture, that achieves state-of-the-art results on several pose estimation metrics. Current pose estimation methods utilizing standard CNN architectures rely heavily on statistical postprocessing or predefined anchor poses for joint localization. UniPose incorporates contextual segmentation and joint localization to estimate the human pose in a single stage, with high accuracy, without relying on statistical postprocessing methods. The Waterfall module in UniPose leverages the efficiency of progressive filtering in the cascade architecture, while maintaining multi-scale fields-of-view comparable to spatial pyramid configurations. Additionally, our method is extended to UniPose-LSTM for multi-frame processing and achieves state-of-the-art results for temporal pose estimation in video. Our results on multiple datasets demonstrate that UniPose, with a ResNet backbone and Waterfall module, is a robust and efficient architecture for pose estimation, obtaining state-of-the-art results in single-person pose detection for both single images and videos.
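
The waterfall idea described above can be summarized in a few lines of PyTorch. The sketch below is illustrative only, not the authors' implementation: the 3×3 atrous convolutions, the dilation rates (1, 6, 12, 18), and the 1×1 fusion convolution are assumptions. It shows the key point of the abstract: each branch filters the previous branch's output (the cascade) while all branch outputs are kept and fused (the pyramid-like multi-scale fields-of-view).

```python
import torch
import torch.nn as nn

class WaterfallModule(nn.Module):
    """Illustrative waterfall-style atrous module: each branch filters the
    previous branch's output (cascade), while per-branch dilation rates
    preserve multi-scale fields-of-view, as in a spatial pyramid."""

    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, kernel_size=3,
                      padding=r, dilation=r)
            for i, r in enumerate(rates)
        ])
        # A 1x1 convolution fuses the concatenated multi-scale outputs.
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        outs = []
        for branch in self.branches:
            x = torch.relu(branch(x))  # cascade: feed the previous branch's output
            outs.append(x)             # keep every scale for the final fusion
        return self.fuse(torch.cat(outs, dim=1))
```

In this sketch, something like `WaterfallModule(2048, 256)` would sit on top of a ResNet backbone's final feature map (2048 channels for ResNet-50/101); the actual module in this repository may differ in rates, normalization, and pooling.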

Wrong code uploaded #5

Closed NargessYarahmadiGharaei closed 3 years ago

NargessYarahmadiGharaei commented 3 years ago

Hi, I think there are some mistakes in the code. I read your paper and the code and fixed some problems, but I'm still facing this error: it seems the model output and the heatmap_var dimensions are not compatible. The error is below:

```
Epoch 0:   0% 0/195 [00:00<?, ?it/s]
torch.Size([8, 4, 46, 46]) heatmap_var shape
torch.Size([8, 15, 46, 46]) heat shape
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py:446: UserWarning: Using a target size (torch.Size([8, 4, 46, 46])) that is different to the input size (torch.Size([8, 15, 46, 46])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
  return F.mse_loss(input, target, reduction=self.reduction)
Traceback (most recent call last):
  File "unipose.py", line 280, in <module>
    trainer.training(epoch)
  File "unipose.py", line 122, in training
    loss_heat = self.criterion(heat, heatmap_var)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py", line 446, in forward
    return F.mse_loss(input, target, reduction=self.reduction)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 2659, in mse_loss
    expanded_input, expanded_target = torch.broadcast_tensors(input, target)
  File "/usr/local/lib/python3.6/dist-packages/torch/functional.py", line 71, in broadcast_tensors
    return _VF.broadcast_tensors(tensors)  # type: ignore
RuntimeError: The size of tensor a (15) must match the size of tensor b (4) at non-singleton dimension 1
  0% 0/195 [00:02<?, ?it/s]
```

This error happens when training unipose.py on the LSP dataset. As you can see in the second and third lines of the output, the dimensions are not the same. I even tried to use interpolation to resize them to one size, but I can't make it work.
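
For anyone hitting the same error: the RuntimeError comes from `MSELoss` receiving a prediction with 15 heatmap channels and a GT tensor with only 4, so interpolating the spatial size cannot help; the channel counts themselves disagree. Below is a generic, hypothetical sanity check (not code from this repository; `check_heatmap_shapes` is made up for illustration) that fails fast with a clearer message than the broadcasting warning:

```python
import torch

def check_heatmap_shapes(pred: torch.Tensor, target: torch.Tensor) -> None:
    """Fail fast instead of letting MSELoss silently broadcast mismatched
    heatmap tensors (hypothetical helper, for illustration only)."""
    if pred.shape != target.shape:
        raise RuntimeError(
            f"model predicts {pred.shape[1]} heatmap channels but the GT has "
            f"{target.shape[1]}; fix the number-of-joints setting in the "
            "dataset's heatmap generator instead of resizing the tensors."
        )

# Reproducing the reported mismatch with dummy tensors:
heat = torch.zeros(8, 15, 46, 46)         # model output: 15 channels
heatmap_var = torch.zeros(8, 4, 46, 46)   # GT as loaded: only 4 channels
check_heatmap_shapes(heat, heatmap_var)   # raises with an explicit message
```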

Thanks and regards. I'm waiting for your answer. Good luck.

bmartacho commented 3 years ago

Dear,

Thank you for your interest in our paper.

We do not comment on individual users' implementation errors. The joint outputs from the model should be compared to the joint ground truth (GT) from the dataset; the same applies to bounding boxes.
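
For reference, one common way to make the prediction and GT channel counts agree by construction is to generate the GT heatmaps from the annotated joints, one Gaussian per joint. The sketch below is a hypothetical generator, not this repository's code; the joint count (14 for LSP), heatmap size (46) and sigma are assumptions:

```python
import torch

def make_gt_heatmaps(joints_xy, num_joints=14, size=46, sigma=1.0):
    """Hypothetical GT generator: one Gaussian heatmap per annotated joint,
    so the target channel count matches the model's joint count by design."""
    ys, xs = torch.meshgrid(torch.arange(size, dtype=torch.float32),
                            torch.arange(size, dtype=torch.float32),
                            indexing="ij")
    maps = torch.zeros(num_joints, size, size)
    for j, (x, y) in enumerate(joints_xy):
        # Peak of 1.0 at the joint location, falling off with distance.
        maps[j] = torch.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return maps

# 14 LSP joints at arbitrary coordinates already scaled to the 46x46 heatmap:
gt = make_gt_heatmaps([(23.0, 10.0)] * 14)
print(gt.shape)  # torch.Size([14, 46, 46]) -- matches a 14-joint model head
```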