Gengzigang / PCT

This is an official implementation of our CVPR 2023 paper "Human Pose as Compositional Tokens" (https://arxiv.org/pdf/2303.11638.pdf)
MIT License

Curiosity about Model Choice: Swin-based vs. ViTPose with PCT #19

Open Janus-Shiau opened 1 year ago

Janus-Shiau commented 1 year ago

Hello @Gengzigang and team,

The idea of representing human pose as compositional tokens (PCT) is both unique and compelling; modeling the relationships between keypoints in such a structured manner is quite inspiring.

However, I have a question about your choice of model. I noticed that you opted for a Swin-based backbone in the implementation. Given the current success and traction of ViTPose, I'm curious why you didn't integrate PCT directly with ViTPose. Was there a specific reason or advantage to preferring the Swin-based model over ViTPose when incorporating PCT?

Thank you for taking the time to answer. I'm eager to delve deeper into your work and truly appreciate the effort you've put into this research. Looking forward to your insights!

Warm regards, Jia-Yau

Gengzigang commented 1 year ago

Hi Jia-Yau, thank you for your interest in our work. PCT and ViTPose were developed concurrently: when we started working on PCT, ViTPose had not yet been released. At that time, the Swin Transformer performed well across other computer vision tasks, so it was a natural choice for the backbone.