A general representation model across vision, audio, language modalities. Paper: ONE-PEACE: Exploring One General Representation Model Toward Unlimited Modalities
Apache License 2.0
Incorrect results when evaluating ONE-PEACE on RefCOCO+ #59
Hi, I evaluated ONE-PEACE following the guidance at https://github.com/OFA-Sys/ONE-PEACE/blob/main/one_peace/README.md, using the finetune_refcoco+.pt weights on dataset/refcoco+/val.tsv, on a single GPU with fp16, but the results are:
INFO:one_peace.evaluate:{'iou_acc': 0.29689533370514964, 'score_sum': 3194.0, 'score_cnt': 10758}
I don't know how to resolve this issue. Do you have any suggestions? Thank you!
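For context on the logged numbers: the reported `iou_acc` appears to simply be `score_sum / score_cnt`, i.e. the fraction of the 10758 evaluated samples whose predicted box passed the IoU check. A quick sanity check of that arithmetic (the interpretation of the fields is my assumption, not confirmed by the repo):

```python
# Sanity check: the logged iou_acc equals score_sum / score_cnt.
# Field meanings (score_sum = number of samples passing the IoU
# threshold, score_cnt = total samples) are assumed from the log.
score_sum = 3194.0   # from the evaluation log
score_cnt = 10758    # from the evaluation log
iou_acc = score_sum / score_cnt
print(iou_acc)  # ≈ 0.2969, matching the logged iou_acc
```

So the metric itself is internally consistent; the problem is that only ~30% of predictions pass, far below the paper's reported RefCOCO+ accuracy, which points to a setup issue (e.g. checkpoint, preprocessing, or fp16 numerics) rather than a metric bug.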