Jenonone closed this issue 1 year ago
Same question: I cannot reproduce the paragraph2action results of the paper using the weights from the huggingface repo https://huggingface.co/GT4SD/multitask-text-and-chemistry-t5-small-standard. The BLEU score is supposed to be 0.929, but I measured only 0.659. Does the model need to be further finetuned?
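One common cause of BLEU gaps like this is a mismatch in tokenization or normalization between the paper's evaluation script and one's own setup. As an illustration only (this is not the authors' evaluation code, and the action string below is a made-up example), a minimal whitespace-token corpus BLEU shows how the very same prediction can score far below 1.0 when punctuation is tokenized differently from the reference:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def corpus_bleu(references, hypotheses, max_n=4):
    """Corpus-level BLEU with uniform n-gram weights and brevity penalty.

    Tokenization is plain whitespace splitting, which is exactly the
    kind of detail that can silently differ between evaluation scripts.
    """
    clipped = [0] * max_n  # clipped n-gram matches per order
    total = [0] * max_n    # hypothesis n-gram counts per order
    ref_len = hyp_len = 0
    for ref, hyp in zip(references, hypotheses):
        r, h = ref.split(), hyp.split()
        ref_len += len(r)
        hyp_len += len(h)
        for n in range(1, max_n + 1):
            rc = Counter(ngrams(r, n))
            hc = Counter(ngrams(h, n))
            clipped[n - 1] += sum(min(c, rc[g]) for g, c in hc.items())
            total[n - 1] += max(len(h) - n + 1, 0)
    if min(clipped) == 0:
        return 0.0
    log_prec = sum(math.log(c / t) for c, t in zip(clipped, total)) / max_n
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
    return bp * math.exp(log_prec)

# Hypothetical action sequence, spaced two different ways:
spaced = "ADD SLN ( 1 ) ; STIR for 2 hours"
fused = "ADD SLN (1); STIR for 2 hours"
print(corpus_bleu([spaced], [spaced]))  # exact match: 1.0
print(corpus_bleu([spaced], [fused]))   # same action, fused punctuation: well below 1.0
```

So before finetuning anything, it may be worth checking that the generated actions are tokenized and normalized the same way as the reference set the paper scored against.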
Thanks for checking out our work. I will close this issue and move it to the dedicated repo (https://github.com/GT4SD/multitask_text_and_chemistry_t5), and we will get back to you as soon as possible.
I cannot reproduce the results of the paper using the weights from the huggingface repo GT4SD/multitask-text-and-chemistry-t5-base-augm. The accuracy is supposed to be 0.322, but I measured only 0.196.