Cysu / open-reid

Open source person re-identification library in python
https://cysu.github.io/open-reid/
MIT License

Batch size vs rank 1 #28

Closed cptay closed 6 years ago

cptay commented 7 years ago

Hi,

I have 3 questions here

1) Using the code provided here, I tested the Market-1501 dataset with the following batch sizes (my 8GB GPU card does not allow a higher batch size):

| Batch size | Rank-1 |
| --- | --- |
| 64 | 77.4% |
| 96 | 79.5% |
| 108 | 79.8% |

Are these the expected results? Would running a batch size of 256 get me ~84% rank 1?

2) Also, there seems to be no difference in the results whether image normalization is used or not. Is that true?

3) I have a memory leak problem. The maximum number of epochs I can run before the OS terminates the program is about 18-20, depending on the batch size. I am not sure whether this is a problem with running PyTorch under Windows 10 or an issue in the Python code.
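
For reference (not specific to this repository), one common cause of memory that grows every epoch in PyTorch training loops is accumulating loss tensors that still carry the autograd graph. A minimal sketch with a hypothetical model and data loader:

```python
import torch
import torch.nn as nn

# Hypothetical model, optimizer, and loader, purely for illustration.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
loader = [(torch.randn(8, 10), torch.randint(0, 2, (8,))) for _ in range(100)]

running_loss = 0.0
for inputs, targets in loader:
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()
    # Pitfall: `running_loss += loss` keeps every iteration's graph alive,
    # so memory keeps growing. Using .item() (or .detach()) avoids this.
    running_loss += loss.item()
```

Whether this is actually what happens in your setup is only a guess; checking `torch.cuda.memory_allocated()` across epochs can confirm whether GPU memory really grows.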

Thanks

zydou commented 7 years ago

Hi, @cptay

  1. In my experiments, running a batch size of 256 gives me ~67% mAP and ~84% rank 1. A larger batch size gives better results.

  2. I'm not sure about it.

  3. PyTorch is not officially supported on Windows. The code runs perfectly on Linux.

cptay commented 7 years ago

Hi Zhiyong, thanks for the prompt reply.


Cysu commented 7 years ago

@zydou Thanks very much for your answer!

@cptay

  1. A larger batch size works better for the triplet loss, since you can find harder negative samples (see the sketch after this list).
  2. The effect of using normalization could be very minor.
  3. Sorry but I have no experience with PyTorch on Windows.
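
To illustrate point 1 with a generic sketch (a batch-hard formulation in the style of Hermans et al., not necessarily the exact loss in this repo): each anchor searches its own batch for the hardest positive and hardest negative, so a larger batch offers more candidates and therefore harder triplets.

```python
import torch

def batch_hard_triplet_loss(features, labels, margin=0.3):
    """Generic batch-hard triplet loss sketch; hypothetical, for illustration only."""
    # Pairwise Euclidean distances between all embeddings in the batch.
    sq = (features ** 2).sum(dim=1)
    dist = sq.unsqueeze(1) + sq.unsqueeze(0) - 2.0 * features @ features.t()
    dist = dist.clamp(min=1e-12).sqrt()                     # (B, B)

    same_id = labels.unsqueeze(0) == labels.unsqueeze(1)    # (B, B) bool mask

    # Hardest positive: farthest sample sharing the anchor's identity.
    hardest_pos = dist.masked_fill(~same_id, float('-inf')).max(dim=1).values
    # Hardest negative: closest sample with a different identity.
    hardest_neg = dist.masked_fill(same_id, float('inf')).min(dim=1).values

    # A bigger batch -> more candidates per anchor -> harder positives/negatives.
    return torch.clamp(hardest_pos - hardest_neg + margin, min=0).mean()

# Toy usage: 8 embeddings of 4 identities, 2 images each.
feats = torch.randn(8, 128, requires_grad=True)
ids = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
batch_hard_triplet_loss(feats, ids).backward()
```

Everything above (function name, margin value, toy batch) is an assumption for the sketch, not a description of open-reid's internals.
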
cptay commented 7 years ago

Hi,

May I know if you used test-time augmentation during evaluation?

Thanks


Cysu commented 7 years ago

@cptay No, we don't use data augmentation for test evaluation, just resizing and normalizing the whole image, as done here.
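
For reference, a test-time preprocessing pipeline of that kind typically looks like the torchvision sketch below; the crop size and normalization statistics here are illustrative assumptions, not necessarily the exact values used in this repo.

```python
from torchvision import transforms

# Illustrative test-time preprocessing: resize the whole person crop and
# normalize, with no flipping, cropping, or other augmentation.
# The 256x128 size and ImageNet statistics are assumptions for this sketch.
test_transform = transforms.Compose([
    transforms.Resize((256, 128)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```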