YouHuang67 / InterFormer


Reproducing results #3

Closed qinliuliuqin closed 10 months ago

qinliuliuqin commented 10 months ago

Dear authors, @YouHuang67

Thanks for releasing the code and models. I am trying to rerun your released light model on DAVIS, but the numbers come out better than those reported in the paper. For example, I got 5.54 for NoC90, whereas the paper reports 6.19. Also, your code crashes on case 008.jpg, so I had to remove it for evaluation. Missing one case should not cause such a big difference, so I may need your help to figure out the issue.

  1. How did you handle the crashing case 008.jpg?
  2. Did you merge all the objects in an image for evaluation, as previous works did?
  3. Did you do any preprocessing on the original DAVIS345 dataset? I saw that you at least renamed the masks.

BTW, the results for Berkeley are the same. I look forward to hearing from you! Thanks in advance.

YouHuang67 commented 10 months ago

Thank you for your interest in our work and for reaching out with your questions.

Regarding the issue with case 008.jpg, the discrepancy arises from the fact that the DAVIS dataset I used was sourced from the FocusCut repository, which can be found here: https://github.com/frazerlin/focuscut#focuscut The corresponding DAVIS dataset link is: https://drive.google.com/file/d/1-ZOxk3AJXb4XYIW-7w1-AXtB9c8b3lvi/view

This version of the dataset has pre-processed ground truths in which all objects are merged, unlike the version provided by the RITM repository (the link mistakenly mentioned in our repository). This explains why I did not encounter the crash you experienced with case 008.jpg.

Our code is tailored to the FocusCut dataset format and may run into issues when the ground truth contains multiple objects, as in the format provided by other sources.
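
For readers using a different DAVIS release, a minimal sketch of the kind of merging involved (an assumption about the mask format, not code from our repository; the helper name and paths are illustrative):

import numpy as np
from PIL import Image

def merge_objects(mask_path):
    """Collapse a multi-object annotation into one binary foreground mask.

    Treats every non-zero object id as foreground, mimicking the pre-merged
    ground truths shipped with the FocusCut release.
    """
    mask = np.array(Image.open(mask_path))  # H x W array of integer object ids
    return (mask > 0).astype(np.uint8)      # 1 = foreground, 0 = background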

I apologize for any confusion caused and appreciate your understanding. Should you have any further questions, please do not hesitate to ask.

qinliuliuqin commented 10 months ago

Thanks, You. Below are my evaluation script and the results reproduced with your provided model and data.

CUDA_VISIBLE_DEVICES=0,1,2,3 \
    bash tools/dist_clicktest.sh \
    work_dirs/interformer_light_coco_lvis_320k/iter_320000.pth 4 \
    --dataset DAVIS \
    --size_divisor 32

Results:
NoC85: 4.54
NoC90: 5.57
NoC95: 12.41 

I also observed that the result numbers vary slightly across runs. Is there any uncontrolled randomness in the evaluation pipeline? It is common for results to differ across environments, but the evaluation pipeline itself should be deterministic. Do you have any insights on this? Thank you so much!

YouHuang67 commented 10 months ago

Thank you for your follow-up and for sharing your test results.

I have noticed slight variations in test outcomes as well. Achieving completely deterministic results in testing can be challenging, and I suspect a couple of factors could be contributing to these minor fluctuations:

The click generation process in our model uses certain approximations, especially in the distance transform and center point identification. For example, our implementation identifies the center with a condition like dist_map > max_dist / (sfc_inner_k + eps) when sfc_inner_k = 1, instead of the exact condition dist_map == max_dist; the same mechanism generates random clicks during training when sfc_inner_k > 1. These approximations might introduce a small degree of randomness. Additionally, not fixing a random seed during the testing phase could be another factor.
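
As a rough illustration only (not the exact code in our repository; the helper name and the use of SciPy's distance transform are assumptions), the click-sampling condition above works roughly like this:

import numpy as np
from scipy.ndimage import distance_transform_edt

def sample_click(error_mask, sfc_inner_k=1.0, eps=1e-6, rng=np.random):
    """Sample a click inside the error region via a distance-transform heuristic.

    With sfc_inner_k = 1 the threshold sits just below the maximum distance,
    so the click lands at (or near) the region center; with sfc_inner_k > 1
    the threshold drops and the click is drawn from a larger inner area,
    which is where the randomness during training comes from.
    """
    dist_map = distance_transform_edt(error_mask)  # distance of each pixel to the nearest background pixel
    max_dist = dist_map.max()
    candidates = np.argwhere(dist_map > max_dist / (sfc_inner_k + eps))
    return tuple(candidates[rng.randint(len(candidates))])  # (row, col) click position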

There is also some inherent randomness in PyTorch computations, such as convolution operations that may use approximations for faster processing. Although the effect of this is expected to be very small, it might contribute to the slight variability, since the evaluations were not run in deterministic mode.
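
If full determinism is needed, a sketch of the usual PyTorch/NumPy settings one could apply before evaluation (standard library calls, not something our test script currently does; the speed cost may be noticeable):

import os
import random
import numpy as np
import torch

def make_deterministic(seed: int = 0) -> None:
    """Fix all random seeds and force deterministic kernels in PyTorch."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True      # select deterministic cuDNN algorithms
    torch.backends.cudnn.benchmark = False         # disable algorithm auto-tuning
    # Some CUDA ops additionally require this environment variable.
    os.environ.setdefault("CUBLAS_WORKSPACE_CONFIG", ":4096:8")
    torch.use_deterministic_algorithms(True)       # raise an error on non-deterministic ops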

These factors together could be the reason behind the observed variances. However, I believe their impact on the overall results is minimal.

I hope this sheds some light on the issue. If you have more questions, feel free to ask.

qinliuliuqin commented 10 months ago

Thanks for your detailed explanation. Yes, randomness in evaluation can be tolerated as long as it is small and bounded. Another way to circumvent this issue is to report the mean and standard deviation over multiple runs, as sketched below. I may reopen this issue if I have something to discuss. Thank you again!
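
For completeness, a tiny sketch of the mean/std reporting mentioned above (using the two NoC90 values that appear in this thread purely as an example):

import numpy as np

def summarize_runs(noc_values):
    """Report mean and sample standard deviation over repeated evaluation runs."""
    values = np.asarray(noc_values, dtype=float)
    return values.mean(), values.std(ddof=1)

# Example with the two NoC90 numbers from this thread (5.54 and 5.57)
mean, std = summarize_runs([5.54, 5.57])
print(f"NoC90: {mean:.2f} ± {std:.2f}")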