Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework
Apache License 2.0
Why does evaluating the same content give different results each time? #404
I noticed that the gen_out returned by the eval_ocr function is not consistent across runs, and the divergence seems to start from encoder_outs in models/sequence_generator.py.
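For reference, something like the following can be used to confirm whether repeated generation on one batch really produces different tokens (a minimal sketch; generate_fn and sample are hypothetical placeholders for the actual inference call and batch, not names from the OFA codebase):

```python
import torch

@torch.no_grad()
def check_repeatability(generate_fn, sample, n_runs=3):
    """Run the same generation call several times on one batch and
    report whether the decoded token tensors match exactly.

    `generate_fn` and `sample` are placeholders for whatever callable
    the eval_ocr path uses for inference and for a prepared batch.
    """
    outputs = [generate_fn(sample) for _ in range(n_runs)]
    first = outputs[0]
    identical = all(torch.equal(first, out) for out in outputs[1:])
    print("outputs identical across runs:", identical)
    return identical
```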
May I ask which part of the model causes this unstable inference? Can stable results be obtained for OCR tasks, and how should that be implemented?
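For context, this is the kind of seeding I would expect to make inference reproducible, assuming the instability comes from unseeded randomness or nondeterministic cuDNN kernels (a minimal sketch using only standard PyTorch settings; nothing here is specific to OFA):

```python
import random
import numpy as np
import torch

def make_deterministic(seed=42):
    # Seed every RNG that could influence generation.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Prefer deterministic cuDNN kernels over benchmarked ones.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```

If the generator only uses beam search without sampling, I would have expected it to be deterministic already, which makes the divergence starting at encoder_outs more surprising to me.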
Thanks in advance.
I used the same model, ofa_cn_ocr_base.pt (the checkpoint you provided), to evaluate the same content, but the results were different each time.
Environment: Linux, Python 3.7.0, torch 1.10.0+cu102