My collaborators and I were experimenting with FinBERT by comparing the results of your code from the Finbert Model Example notebook against the results of running the same model through a HuggingFace pipeline. Of the 7 examples we tested it on, all came out with different scores, and 3 came out with different labels altogether. Do you know what the reason for that might be? I can upload the code we used if that would be helpful.
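One thing we wondered about is the label-to-index mapping: if the notebook and the pipeline config assume different orderings of the class labels, identical logits would still produce different labels. This is only a guess, not something we've confirmed; here is a minimal self-contained sketch with hypothetical logits and two hypothetical label orderings to show how the same probabilities can flip the reported label:

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Hypothetical logits for one sentence from the same underlying model.
logits = np.array([1.2, 0.3, -0.5])
probs = softmax(logits)

# Two hypothetical label orderings -- e.g. what the notebook assumes vs.
# what a pipeline's id2label config might say. Same probabilities either way.
order_a = ["positive", "negative", "neutral"]
order_b = ["negative", "neutral", "positive"]

idx = int(np.argmax(probs))
label_a = order_a[idx]  # -> "positive"
label_b = order_b[idx]  # -> "negative": same scores, different label
```

If something like this is happening, the scores would match up to the ordering but the reported labels would disagree, which is roughly the pattern we saw on 3 of the 7 examples.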