icml-2020-nlp / semsim

Finetune w/ semsim without rewarder checkpoint? #7

Closed · pltrdy closed this issue 4 years ago

pltrdy commented 4 years ago

Hi,

In the 4th step of the README, the criterion is semsim (which makes sense), but I don't get how it runs considering that I don't have the rewarder model on my system (as far as I know). Isn't it mandatory in order to actually reproduce the results?
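For context, my rough understanding of why the rewarder checkpoint would matter: the semsim criterion presumably mixes the usual cross-entropy loss with a similarity reward computed by a separately trained rewarder model. A hypothetical sketch of that idea (the names and structure below are my own guesses, not the repository's actual code):

```python
import torch
import torch.nn.functional as F

def semsim_style_loss(model, rewarder, src_tokens, tgt_tokens, alpha=0.5):
    """Hypothetical mix of token-level cross-entropy with a reward produced
    by a separately trained rewarder model (hence the need for its checkpoint).
    Not the repository's actual implementation."""
    logits = model(src_tokens, tgt_tokens)                    # (batch, tgt_len, vocab)
    ce = F.cross_entropy(logits.transpose(1, 2), tgt_tokens)  # standard MLE term
    with torch.no_grad():
        reward = rewarder(src_tokens, tgt_tokens)             # similarity score per example
    return ce - alpha * reward.mean()
```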

icml-2020-nlp commented 4 years ago

Hi, thanks for your interest in our manuscript. Unfortunately, we found some mistakes in the current code (please see the first part of the README). The current code might produce misleading results, and we discourage use of this repository.

We are working on revising the model (with the same motivation). Hence, please refrain from using the code and scores until further notice (hopefully for another conference). We sincerely apologize for the inconvenience.

Thank you! Authors

P.S. Thank you so much for sharing your wonderful ROUGE libraries.

Dear visitors,
We have checked the code and found that it contains a mistake. We suspect that this issue was introduced while we were organizing the code before the public release.

We are currently looking into the details of the issue. The examination will take about a couple of weeks, as the code and its history are not available at our current workplace at the moment.

Considering this situation, we have decided to post a notice on the repository to discourage usage until we double-check and update the code.

Please refrain from using the model and scores until further notice.

pltrdy commented 4 years ago

Right, I read it.

Problems should only occur when using the semsim criterion, right? I mean, for comparison purposes I've been fine-tuning vanilla BART models; do you think "normal" training might be compromised as well?

Thanks for your quick reply and the nice PS :) (I'll close the issue, though.)

icml-2020-nlp commented 4 years ago

If you train vanilla BART, then the result will be identical to this version of the BART code. Please download bart.large.tar.gz from here or from the fairseq README. Thanks!
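If it helps, the pretrained bart.large checkpoint can also be pulled through torch.hub; this is the standard fairseq route rather than anything specific to this repository. A minimal sketch:

```python
import torch

# Load the pretrained bart.large checkpoint via torch.hub
# (an alternative to downloading and extracting bart.large.tar.gz by hand).
bart = torch.hub.load('pytorch/fairseq', 'bart.large')
bart.eval()  # disable dropout for evaluation

# Quick sanity check: round-trip a sentence through the BPE encoder/decoder.
tokens = bart.encode('Hello world!')
print(bart.decode(tokens))  # -> 'Hello world!'
```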

pltrdy commented 4 years ago

Alright, thx