minghangz / cpl

CPL: Weakly Supervised Temporal Sentence Grounding with Gaussian-based Contrastive Proposal Learning

Can not reproduce the same results in the paper #3

Closed chaudatascience closed 1 year ago

chaudatascience commented 2 years ago

Hi,

I am trying to reproduce the results for ActivityNet by running

python train.py --config-path config/activitynet/main.json --vote

After 5 hours of training, it returned

| R@1,IoU@0.1 0.8339 | R@1,IoU@0.3 0.5399 | R@1,IoU@0.5 0.2787 | | R@5,IoU@0.1 0.8769 | R@5,IoU@0.3 0.6158 | R@5,IoU@0.5 0.3874 |

These are noticeably off at the higher IoU thresholds compared to the results in the paper: | R@1,IoU@0.1 0.8255 | R@1,IoU@0.3 0.5573 | R@1,IoU@0.5 0.3137 | | R@5,IoU@0.1 0.8724 | R@5,IoU@0.3 0.6305 | R@5,IoU@0.5 0.4313 |

Does the config file contain the right hyper-parameters to reproduce the results in the paper?

I attach the full log below. Please let me know what I should change to be able to reproduce the results. Thank you, Chau

Update: log file: https://drive.google.com/file/d/1BCxHFc0JEMlkXrUkNjaQ4OsHPQ3a40Vv/view?usp=sharing

minghangz commented 2 years ago

Can you share the full log file with me? I can only see logs for the first 8 epochs and the last epoch.

chaudatascience commented 2 years ago

Here's the log file: https://drive.google.com/file/d/1BCxHFc0JEMlkXrUkNjaQ4OsHPQ3a40Vv/view?usp=sharing

Thank you, Chau

chaudatascience commented 2 years ago

Hi Minghang,

I know you may be busy with other things; I just wanted to check whether there are any updates on this.

Thank you, Chau

Eagen-l commented 2 years ago

Hi, I also encountered a similar situation while reproducing the results. Is there an updated version of the parameters? Thank you

hyf015 commented 2 years ago

Hi, I also hit the same issue. On Charades-STA the results can be reproduced, but not on ActivityNet. Maybe it is due to the hyperparameter settings?

dbstjswo505 commented 1 year ago

Hi, I also hit the same issue while reproducing the results for ActivityNet.

ryanjb22 commented 1 year ago

Any update on this? I also have the same issue reproducing the ActivityNet results. Thanks.

Richard-61 commented 1 year ago

Hi, I also hit the same issue reproducing the ActivityNet results (R@1,IoU@0.5: 28.47).

LemonQC commented 1 year ago

Could you help me with the split files? How should I use them?

LemonQC commented 1 year ago

@chaudatascience Is any pre-processing needed for the following files? [screenshot of the six split files]

chaudatascience commented 1 year ago

@LemonQC From the 6 files above, you first need to combine and then unzip them, which can be done by

cat ./data/raw/activitynet/activitynet_v1-3.part-0* > ./data/raw/activitynet/activitynet_raw.zip
unzip ./data/raw/activitynet/activitynet_raw.zip -d ./data/activitynet

It should give you an hdf5 file. Hope this helps.
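
In case it helps anyone without a Unix shell (e.g. on Windows), here is a rough cross-platform Python equivalent of the two commands above; the paths are the repo layout shown in this thread, and the pattern/function names are just illustrative.

```python
import glob
import shutil
import zipfile

def combine_and_unzip(pattern, zip_path, out_dir):
    """Concatenate the split parts (sorted by name) into one zip, then extract.

    Rough cross-platform equivalent of the `cat` + `unzip` commands above.
    """
    parts = sorted(glob.glob(pattern))
    if not parts:
        raise FileNotFoundError(f"no files match {pattern}")
    # Concatenate the parts in order, exactly like `cat part-0* > out.zip`.
    with open(zip_path, "wb") as out:
        for part in parts:
            with open(part, "rb") as f:
                shutil.copyfileobj(f, out)
    # Extract the reassembled archive, like `unzip out.zip -d out_dir`.
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(out_dir)

# combine_and_unzip("./data/raw/activitynet/activitynet_v1-3.part-0*",
#                   "./data/raw/activitynet/activitynet_raw.zip",
#                   "./data/activitynet")
```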

LemonQC commented 1 year ago

Many thanks, I'll give it a try.

LemonQC commented 1 year ago

Could you share new parameters? I can not reproduce the results.

qiwuteng commented 1 year ago

Hi, Do you reproduce the results for ActivityNet?

minghangz commented 1 year ago

Hi, you can try adjusting the hyperparameter lambda in the configuration file (e.g. from 0.125 to 0.135) on your own machine; in our experiments we found the model is quite sensitive to this hyperparameter.
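
For anyone unsure where to make this change, here is a minimal sketch of patching the value programmatically. It assumes the weight sits under a top-level "lambda" key in config/activitynet/main.json, which is a guess; check the actual file for the real key name.

```python
import json

def set_lambda(config_path, value):
    """Load the training config, overwrite lambda, and write it back.

    The top-level "lambda" key is an assumption for illustration;
    inspect config/activitynet/main.json for where the weight actually lives.
    """
    with open(config_path) as f:
        cfg = json.load(f)
    cfg["lambda"] = value
    with open(config_path, "w") as f:
        json.dump(cfg, f, indent=2)

# set_lambda("config/activitynet/main.json", 0.135)
```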

qiwuteng commented 1 year ago

Thanks! I set lambda to 0.135 and finally reproduced the results. We should use the hyperparameters from the source code, not the ones in the paper πŸ˜†

Richard-61 commented 1 year ago

Can you share your log file and config file with me? I varied lambda from 0.125 to 0.135 in steps of 0.01 and still cannot reproduce the R@1,IoU@0.5 0.3137 result. Thanks a lot!
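
Since the working value seems to vary per machine, a finer sweep than 0.01 steps may be worth trying. A tiny helper (hypothetical, just to enumerate candidates without floating-point drift) that could be looped over, rewriting the config and re-running train.py per value:

```python
def lambda_grid(lo=0.125, hi=0.135, step=0.001):
    """Candidate lambda values from lo to hi inclusive, rounded so that
    e.g. 0.125 + 8 * 0.001 comes out as exactly 0.133."""
    n = int(round((hi - lo) / step))
    return [round(lo + i * step, 4) for i in range(n + 1)]

# for v in lambda_grid():
#     # patch the config with v, then re-run training, e.g.:
#     # subprocess.run(["python", "train.py",
#     #                 "--config-path", "config/activitynet/main.json", "--vote"])
#     pass
```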

qiwuteng commented 1 year ago

I just changed lambda to 0.135 and reproduced these results: R@1,mIoU 0.3587 | R@1,IoU@0.1 0.7849 | R@1,IoU@0.3 0.5189 | R@1,IoU@0.5 0.3005 | R@1,IoU@0.7 0.1367 | R@1,IoU@0.9 0.0437 | R@5,mIoU 0.4609 | R@5,IoU@0.1 0.8793 | R@5,IoU@0.3 0.6431 | R@5,IoU@0.5 0.4404 | R@5,IoU@0.7 0.2502 | R@5,IoU@0.9 0.0919 |

abc403 commented 1 year ago

I can get the results with lambda=0.133:

| R@1,mIoU 0.3631 | R@1,IoU@0.1 0.8078 | R@1,IoU@0.3 0.5451 | R@1,IoU@0.5 0.3058 |

Richard-61 commented 1 year ago

Thank you! I will try other lambda values.

qiwuteng commented 1 year ago

Hi! What are your Python, PyTorch, and cudatoolkit versions? I can't reproduce the result with lambda=0.133 on my setup. I use Python 3.9.13, PyTorch 1.13, cudatoolkit 11.7.

APiaoG commented 1 year ago

Hi! Can you share these hdf5 files? I can't download them now because of a 404 error. [screenshot of the 404 error]

Tangkfan commented 1 year ago

Hi! Did you reproduce the results?

Xiyu-AI commented 9 months ago

Hello, did you solve this problem with lambda=0.133?