hsotoparada opened this issue 3 months ago
gabrieltseng replied:

Hi Hugo!
The output of the model contains the un-normalized logits for each class (which is also the input expected by `nn.CrossEntropyLoss`). If you want to go from the un-normalized logits to something that more closely resembles a probability, I recommend applying a softmax to the output.
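For example, a minimal sketch in plain PyTorch (the logit values here are just illustrative):

```python
import torch
import torch.nn.functional as F

# Un-normalized logits for two classes, one row per sample
logits = torch.tensor([[-9.5, 9.8], [-9.5, 9.8]])

# Softmax normalizes each row into probabilities that sum to 1
probs = F.softmax(logits, dim=-1)
print(probs)          # rows like [~4e-9, ~1.0]
print(probs.sum(-1))  # tensor([1., 1.])
```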
However, if you are only predicting two classes, is this a binary classification problem? If so, you can use a single output (`num_outputs = 1`), in which case a sigmoid activation is automatically applied by the model, and train the model using `nn.BCELoss`. You can then interpret the output as the probability of the positive class.
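As a sketch of that setup (assuming the model's output has already passed through the sigmoid, so it is a probability in [0, 1]):

```python
import torch
import torch.nn as nn

criterion = nn.BCELoss()

# Sigmoid outputs: probability of the positive class per sample
preds = torch.tensor([0.9, 0.2, 0.7])
# Ground-truth labels as floats (1.0 = positive, 0.0 = negative)
labels = torch.tensor([1.0, 0.0, 1.0])

loss = criterion(preds, labels)

# At inference time, threshold the probability to get a hard label
classes = (preds > 0.5).long()  # tensor([1, 0, 1])
```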
Original post from hsotoparada:

Hi @gabrieltseng, I've read your paper and find it really interesting work! Thanks a lot for sharing your code as well!
I'm trying to adapt your downstream task notebook for finetuning the pretrained Presto model on the same dataset used in the notebook.
My approach is based on the README instructions, the code in the notebook, and the functions `evaluate` and `finetune` found in `cropharvest_eval.py`. The main part of my code is outlined below.
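This is a simplified sketch rather than the full script; the `finetune` call and the dataloader names are stand-ins for the corresponding pieces of the notebook and `cropharvest_eval.py`:

```python
import torch

from presto import Presto

# Load the pretrained Presto model, as described in the README
pretrained_model = Presto.load_pretrained()

# Finetune on the CropHarvest task, mirroring finetune() in cropharvest_eval.py
finetuned_model = finetune(pretrained_model, train_dataloader)

# Collect predictions for the test set
finetuned_model.eval()
test_preds = []
with torch.no_grad():
    for batch in test_dataloader:
        preds = finetuned_model(*batch)  # stand-in for the model's real inputs
        test_preds.append(preds.cpu().numpy())
```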
From the print output I see, for example, that the predictions in `test_preds[0]` are:
```
predicting with finetuning model... 53
[[-9.505264 ,  9.816492 ],
 [-9.501129 ,  9.811971 ],
 [-9.496433 ,  9.806909 ],
 [-9.49617  ,  9.806579 ],
 [-9.495665 ,  9.805991 ],
 [-9.4937105,  9.803866 ],
 [-9.497982 ,  9.808611 ],
 [-9.507018 ,  9.818317 ],
 [-9.520019 ,  9.832625 ],
 [-9.512251 ,  9.824137 ],
 ...
 [-9.4941025,  9.804452 ],
 [-9.506046 ,  9.817224 ],
 [-9.48634  ,  9.795685 ],
 [-9.496958 ,  9.807267 ]]
```
I get similar numbers for the remaining elements of `test_preds`. But if these numbers are predictions, I would expect them to be probabilities that sum to 1, or should that not be the case here?
I guess there is some step I'm missing, but I can't figure out what it could be. Could you please give me a hint? I would really appreciate your help.
Cheers, Hugo