ChaoFan96 / GaitPart

Temporal part-based model for gait recognition.
62 stars 6 forks

Some silent hyperparameters in GaitPart have been mentioned here! #3

Open OliverHxh opened 4 years ago

OliverHxh commented 4 years ago

Hi, thanks for your great work! I have read the paper through, and I have two questions:

  1. In the paper you mention that the HP module horizontally splits the feature map into n parts, but I could not find the exact value of n. Could you help me?

  2. The paper says you use the Adam optimizer with a momentum value of 0.9, but I couldn't find an Adam optimizer with momentum in the PyTorch documentation. Could you help me with that?

Anyway, thank you very much! Waiting for your reply!

ChaoFan96 commented 4 years ago

Thanks for your attention.

  1. n=16 in GaitPart.
  2. torch.optim.Adam(..., betas=(0.9, 0.99)) by default. I hope this helps.
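To make the n = 16 answer concrete, here is a minimal numpy sketch of horizontal part pooling: the feature map is sliced into n horizontal strips and each strip is reduced to one vector per part. The max-plus-mean pooling and the (C, H, W) shapes are illustrative assumptions for this sketch, not GaitPart's exact code.

```python
import numpy as np

def horizontal_pooling(fmap: np.ndarray, n: int = 16) -> np.ndarray:
    """Split a (C, H, W) feature map into n horizontal strips and pool each."""
    c, h, w = fmap.shape
    assert h % n == 0, "feature-map height must be divisible by n"
    strips = fmap.reshape(c, n, h // n, w)           # (C, n, H/n, W)
    # max + mean pooling per strip is an assumption borrowed from
    # GaitSet-style part pooling, used here only for illustration.
    pooled = strips.max(axis=(2, 3)) + strips.mean(axis=(2, 3))
    return pooled.T                                  # (n, C): one vector per part

parts = horizontal_pooling(np.random.rand(128, 64, 44), n=16)
print(parts.shape)  # (16, 128)
```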
OliverHxh commented 4 years ago

@ChaoFan96 Thanks for your reply! That really helps. I have a few more questions:

  1. What is the exact value of s in ConvNet1d?
  2. What is the exact architecture of ConvNet1d? I guess it is conv1d-relu-conv1d-sigmoid; am I wrong?
  3. At the end of the network you use an FC layer to map features into another space. Does it map features from 128 to 256 dimensions as GaitSet does, and do you apply a nonlinear activation to it?
ChaoFan96 commented 4 years ago

Yes, a few exact hyperparameter values were omitted from GaitPart due to my carelessness. I'm sorry for the trouble, and thank you for your carefulness. The following answers should help:

  1. s = 4
  2. No, you're right.
  3. Just a linear mapping, without any nonlinear activation.

Also, there is a clerical error in Sec. 4.1 -> Training Details -> 3): on OU-MVLP, the value of p in each block was set to 2, 2, 8, 8 in practice, not 1, 1, 3, 3 (note that 2 = 2^1 and 8 = 2^3). If you find other silent hyperparameters in GaitPart, feel free to contact me. Thank you so much!
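Putting the clarified pieces together, the conv1d-relu-conv1d-sigmoid attention branch could be sketched in plain numpy as below, producing a per-frame gate in (0, 1). The channel counts, kernel width, and 'same' padding are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def conv1d(x, w, b):
    """Naive 'same'-padded 1D convolution. x: (C_in, T), w: (C_out, C_in, K)."""
    c_out, _, k = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad)))
    t = x.shape[1]
    out = np.empty((c_out, t))
    for o in range(c_out):
        for i in range(t):
            out[o, i] = np.sum(w[o] * xp[:, i:i + k]) + b[o]
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convnet1d(x, w1, b1, w2, b2):
    """conv1d -> relu -> conv1d -> sigmoid, as described in the thread."""
    h = np.maximum(conv1d(x, w1, b1), 0.0)   # conv1d + ReLU
    return sigmoid(conv1d(h, w2, b2))        # conv1d + sigmoid gate in (0, 1)

rng = np.random.default_rng(0)
c, t = 8, 30                                 # channels, frames (assumed sizes)
x = rng.standard_normal((c, t))
w1, b1 = rng.standard_normal((c, c, 3)) * 0.1, np.zeros(c)
w2, b2 = rng.standard_normal((c, c, 3)) * 0.1, np.zeros(c)
scores = convnet1d(x, w1, b1, w2, b2)
print(scores.shape)  # (8, 30)
```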

OliverHxh commented 4 years ago

@ChaoFan96 thank you very much! You really help me a lot! If I have other questions, I will contact you. Best wishes!

barbecacov commented 4 years ago


I have one question. You said "due to it contains almost 20 times more sequences than CASIA-B, an additional block composed of two FConv Layers is stacked into the FPFE (the output channel is set to 256)". Is this additional block followed by max-pooling, or is the third block followed by max-pooling while the last block is not? I lean toward the latter. How about you? Thank you very much!

ChaoFan96 commented 4 years ago

@barbecacov Thanks for your attention! For the OU-MVLP database, neither block3 nor block4 is followed by a max-pooling layer; only block1 and block2 are. Hope this helps.
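The clarified OU-MVLP layout can be traced with a small shape sketch: four blocks, max-pooling only after block1 and block2, with the corrected p = 2, 2, 8, 8. The channel widths of the first three blocks and the 64x44 input size are assumptions for illustration; only the last block's 256 channels is stated in this thread.

```python
# Hypothetical FPFE layout for OU-MVLP based on the clarification above.
blocks = [
    {"name": "block1", "out_channels": 32,  "p": 2, "maxpool": True},   # assumed width
    {"name": "block2", "out_channels": 64,  "p": 2, "maxpool": True},   # assumed width
    {"name": "block3", "out_channels": 128, "p": 8, "maxpool": False},  # assumed width
    {"name": "block4", "out_channels": 256, "p": 8, "maxpool": False},  # stated above
]

h, w = 64, 44                        # assumed silhouette input size
for blk in blocks:
    if blk["maxpool"]:               # each max-pool halves the spatial size
        h, w = h // 2, w // 2
    print(blk["name"], (blk["out_channels"], h, w))
# final spatial size stays 16 x 11 after block2's pooling
```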

logic03 commented 3 years ago

Hello, I checked and found that the default betas parameter of the Adam optimizer is (0.9, 0.999). Did you change it to betas=(0.9, 0.99) during training?

logic03 commented 3 years ago

Hello, I checked and found that the default betas parameter of the Adam optimizer is (0.9, 0.999). Did you change it to betas=(0.9, 0.99) or betas=(0.9, 0.9) during training?

ChaoFan96 commented 3 years ago

@logic03 Hello, thanks for your attention and the correction. In practice, I used the default betas parameter of Adam.
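For reference, a single-parameter Adam step with PyTorch's default hyperparameters (lr=1e-3, betas=(0.9, 0.999), eps=1e-8), matching the clarification that the defaults were used. This is a didactic numpy sketch of the update rule, not the training code.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, betas=(0.9, 0.999), eps=1e-8):
    """One Adam update for a scalar parameter (PyTorch-default hyperparameters)."""
    b1, b2 = betas
    m = b1 * m + (1 - b1) * grad             # first-moment (momentum-like) EMA
    v = b2 * v + (1 - b2) * grad ** 2        # second-moment EMA
    m_hat = m / (1 - b1 ** t)                # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta, m, v = 1.0, 0.0, 0.0
theta, m, v = adam_step(theta, grad=2.0, m=m, v=v, t=1)
print(round(theta, 6))  # 0.999 -- the first step has magnitude ~lr
```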

ChaoFan96 commented 2 years ago

Hi, OpenGait has been released! (https://github.com/ShiqiYu/OpenGait) It not only contains the full code of GaitPart but also reproduces several SOTA gait recognition models. Enjoy it, and any questions or suggestions are welcome!