tr3e / InterGen

[IJCV 2024] InterGen: Diffusion-based Multi-human Motion Generation under Complex Interactions
https://tr3e.github.io/intergen-page/

Large differences in experimental results when BATCH_SIZE=16 and EPOCH=500 #9

[Open] Xiyan-Xu opened this issue 8 months ago

Xiyan-Xu commented 8 months ago

Thanks for sharing your great work! I trained the model myself following your README instructions, but set BATCH_SIZE=16 and EPOCH=500 due to limited computing resources. In this setting, my trained model performs much worse than the evaluation results presented in the paper. I am wondering whether it is essential to use exactly the same training settings to reach the paper's performance. Also, could you kindly release the checkpoint trained exclusively on the training set? That would be really helpful for me! Thanks for your time and patience!
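For what it's worth, one common heuristic when shrinking the batch size (not something the authors prescribe) is to scale the learning rate linearly with it. A minimal sketch, assuming a hypothetical reference batch of 64 at LR 1e-4:

```python
# Linear LR-scaling heuristic, purely illustrative: the reference batch of 64
# is an assumption (the maintainer later mentions 32 per GPU on 2 GPUs), not a
# documented setting of this repo.
ref_batch, ref_lr = 64, 1e-4           # assumed reference configuration
my_batch = 16
my_lr = ref_lr * my_batch / ref_batch  # 2.5e-05
print(my_lr)
```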

tr3e commented 8 months ago

Sorry about that: there were some typos in evaluator.py. We have already fixed them, so please make sure your code is up to date.

Xiyan-Xu commented 8 months ago

Thanks for the reply. I am sure my code is up to date. Could you release the checkpoint trained exclusively on the training set? That would be really helpful.

pabloruizponce commented 7 months ago

I trained for 1500 epochs with a batch size of 16 and got an FID of 12.9409, compared with the 5.9 reported in the paper. Is there any reason for such a difference? All the other parameters in the config files were left as the ones used to train the model reported in the paper.

Thanks :)

tr3e commented 7 months ago

I am looking into it and will get back to you as soon as possible.

pabloruizponce commented 7 months ago

@tr3e Any news on the issue? I trained a model with the same configuration as the one in your repo (except the batch size):

GENERAL:
  EXP_NAME: IG-S-8
  CHECKPOINT: ./checkpoints
  LOG_DIR: ./log

TRAIN:
  LR: 1e-4
  WEIGHT_DECAY: 0.00002
  BATCH_SIZE: 16
  EPOCH: 2000
  STEP: 1000000
  LOG_STEPS: 10
  SAVE_STEPS: 20000
  SAVE_EPOCH: 100
  RESUME: #checkpoints/IG-S/8/model/epoch=99-step=17600.ckpt
  NUM_WORKERS: 2
  MODE: finetune
  LAST_EPOCH: 0
  LAST_ITER: 0

But these are my results using your evaluation script:

========== MM Distance Summary ==========
---> [ground truth] Mean: 3.7844 CInterval: 0.0012
---> [InterGen] Mean: 3.8818 CInterval: 0.0017
========== R_precision Summary ==========
---> [ground truth](top 1) Mean: 0.4306 CInt: 0.0070;(top 2) Mean: 0.6110 CInt: 0.0086;(top 3) Mean: 0.7092 CInt: 0.0060;
---> [InterGen](top 1) Mean: 0.2517 CInt: 0.0071;(top 2) Mean: 0.3818 CInt: 0.0048;(top 3) Mean: 0.4662 CInt: 0.0046;
========== FID Summary ==========
---> [ground truth] Mean: 0.2966 CInterval: 0.0085
---> [InterGen] Mean: 10.7803 CInterval: 0.1791
========== Diversity Summary ==========
---> [ground truth] Mean: 7.7673 CInterval: 0.0440
---> [InterGen] Mean: 7.8075 CInterval: 0.0274
========== MultiModality Summary ==========
---> [InterGen] Mean: 1.5340 CInterval: 0.0615

As you can see, the results are very far from the ones reported in the paper. I am conducting ongoing research using your dataset, and in order to make a fair comparison, we need to be able to replicate your results.
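For context on why FID can swing so much between runs: it compares the mean and covariance of feature embeddings of generated motions against those of ground truth, so a slightly shifted embedding distribution is penalized quadratically. A minimal sketch of the standard computation, assuming (N, D) embedding arrays from the evaluator network (the function and variable names here are illustrative, not the repo's API):

```python
# Standard FID between two sets of motion-feature embeddings; a sketch,
# not the repo's evaluator code.
import numpy as np
from scipy import linalg

def fid(gt_feats: np.ndarray, gen_feats: np.ndarray) -> float:
    """gt_feats, gen_feats: (N, D) arrays of embeddings."""
    mu1, mu2 = gt_feats.mean(axis=0), gen_feats.mean(axis=0)
    sigma1 = np.cov(gt_feats, rowvar=False)
    sigma2 = np.cov(gen_feats, rowvar=False)
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)  # matrix sqrt of product
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from numerics
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```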

Hope you find what's going on :)

tr3e commented 6 months ago

Hello! I have run the newest training code in this repo exactly as-is, with a batch size of 64 (32 on each of 2 GPUs), for 1500 epochs. The results are as follows:

========== MM Distance Summary ==========
---> [ground truth] Mean: 3.7847 CInterval: 0.0007
---> [InterGen] Mean: 4.1817 CInterval: 0.0009
========== R_precision Summary ==========
---> [ground truth](top 1) Mean: 0.4248 CInt: 0.0046;(top 2) Mean: 0.6036 CInt: 0.0044;(top 3) Mean: 0.7026 CInt: 0.0047;
---> [InterGen](top 1) Mean: 0.3785 CInt: 0.0052;(top 2) Mean: 0.5163 CInt: 0.0040;(top 3) Mean: 0.6350 CInt: 0.0032;
========== FID Summary ==========
---> [ground truth] Mean: 0.2981 CInterval: 0.0057
---> [InterGen] Mean: 5.8447 CInterval: 0.0735
========== Diversity Summary ==========
---> [ground truth] Mean: 7.7516 CInterval: 0.0163
---> [InterGen] Mean: 7.8750 CInterval: 0.0324
========== MultiModality Summary ==========
---> [InterGen] Mean: 1.5634 CInterval: 0.0334

We suggest updating to the newest code and increasing the batch size.
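If GPU memory is the limiting factor, gradient accumulation can simulate the larger batch suggested above without needing more memory. A minimal PyTorch sketch with a toy stand-in model (this is not the repo's trainer; all names here are placeholders):

```python
import torch
from torch import nn

model = nn.Linear(8, 1)                                    # stand-in for the real network
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()
loader = [(torch.randn(16, 8), torch.randn(16, 1)) for _ in range(8)]  # dummy batches of 16

accum_steps = 4  # 4 micro-batches of 16 -> effective batch of 64

optimizer.zero_grad()
for i, (x, y) in enumerate(loader):
    loss = loss_fn(model(x), y)
    (loss / accum_steps).backward()   # scale so accumulated gradients average
    if (i + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```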

pabloruizponce commented 6 months ago

@tr3e I am still unable to replicate the results. Could you share a contact method so we can discuss this without filling up this issue thread?

Xiyan-Xu commented 6 months ago

> @tr3e I am still unable to replicate the results. Could you share a contact method so we can discuss this without filling up this issue thread?

Me too.

tr3e commented 6 months ago

My email is lianghan@shanghaitech.edu.cn :)

szqwu commented 2 months ago

> Hello! I have run the newest training code in this repo exactly as-is, with a batch size of 64 (32 on each of 2 GPUs), for 1500 epochs. The results are as follows:
>
> ========== MM Distance Summary ==========
> ---> [ground truth] Mean: 3.7847 CInterval: 0.0007
> ---> [InterGen] Mean: 4.1817 CInterval: 0.0009
> ========== R_precision Summary ==========
> ---> [ground truth](top 1) Mean: 0.4248 CInt: 0.0046;(top 2) Mean: 0.6036 CInt: 0.0044;(top 3) Mean: 0.7026 CInt: 0.0047;
> ---> [InterGen](top 1) Mean: 0.3785 CInt: 0.0052;(top 2) Mean: 0.5163 CInt: 0.0040;(top 3) Mean: 0.6350 CInt: 0.0032;
> ========== FID Summary ==========
> ---> [ground truth] Mean: 0.2981 CInterval: 0.0057
> ---> [InterGen] Mean: 5.8447 CInterval: 0.0735
> ========== Diversity Summary ==========
> ---> [ground truth] Mean: 7.7516 CInterval: 0.0163
> ---> [InterGen] Mean: 7.8750 CInterval: 0.0324
> ========== MultiModality Summary ==========
> ---> [InterGen] Mean: 1.5634 CInterval: 0.0334
>
> We suggest updating to the newest code and increasing the batch size.

Hi, I noticed that the MM Dist presented in the paper is lower than the one here. When I reproduce your work, as well as when I evaluate my own model, the MM Dist is always around 4. Is there any mistake in the calculation?
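For reference, MM Dist in the text-to-motion literature is conventionally the mean Euclidean distance between each text embedding and its paired generated-motion embedding; the evaluator here may differ in details such as normalization. A minimal sketch under that convention:

```python
import numpy as np

def mm_dist(text_emb: np.ndarray, motion_emb: np.ndarray) -> float:
    """text_emb, motion_emb: paired (N, D) embedding arrays."""
    return float(np.linalg.norm(text_emb - motion_emb, axis=1).mean())
```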

RunqiWang77 commented 4 weeks ago

[Screenshot 2024-06-21 110045: evaluation results]

The R_precision of the InterGen model I reproduced is consistently higher than that of the ground truth. Does anyone know the reason for this? Thank you very much.
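For anyone debugging this: R-precision is conventionally computed by ranking each motion's true caption against 31 random distractor captions in the shared embedding space and counting top-k hits, and a model's score can in principle exceed ground truth if its outputs align more tightly with the text embeddings than real captures do. A minimal sketch under those conventions (pool size and distance choice follow common practice, not necessarily this repo's exact evaluator):

```python
import numpy as np

def r_precision(text_emb: np.ndarray, motion_emb: np.ndarray,
                top_k: int = 3, pool: int = 32, seed: int = 0) -> float:
    """text_emb, motion_emb: paired (N, D) embedding arrays; N must exceed pool."""
    rng = np.random.default_rng(seed)
    n = len(text_emb)
    hits = 0
    for i in range(n):
        others = np.delete(np.arange(n), i)
        candidates = np.concatenate(([i], rng.choice(others, pool - 1, replace=False)))
        d = np.linalg.norm(text_emb[candidates] - motion_emb[i], axis=1)
        rank = int(np.argsort(d).tolist().index(0))  # position of the true caption
        hits += rank < top_k
    return hits / n
```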