Please refer to lib/utils/measure.py.
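For reference, a confusion matrix for segmentation is typically accumulated like this (a minimal sketch with illustrative names, not necessarily the actual contents of lib/utils/measure.py):

```python
import numpy as np

def confusion_matrix(pred, gt, num_classes):
    """Accumulate a num_classes x num_classes confusion matrix.

    pred, gt: integer label arrays of the same shape.
    Rows index the ground-truth class, columns the predicted class.
    """
    mask = (gt >= 0) & (gt < num_classes)  # drop ignore/invalid labels
    # Encode each (gt, pred) pair as a single flat index, then histogram it.
    idx = num_classes * gt[mask].astype(int) + pred[mask].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)
```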
There are no confusion-matrix results when running test_submission.py.
test_submission is just for the hold-out test, not for validation. There is no ground truth for the hold-out test set; we only submit the results to the Challenge organizers, who then evaluate the submission against their hold-out ground truth.
Are the metrics reported in the paper from the validation set or the test set?
How can we obtain results for comparative experiments without ground-truth labels for the test set?
Just compare models on your validation set. You cannot get IoU or F1 scores on the hold-out test set now, since the workshop has already ended and closed all submission evaluations.
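For example, on the validation set (where ground truth is available) you can derive per-class IoU and F1 from the accumulated confusion matrix. A minimal sketch, assuming a confusion matrix with rows as ground truth and columns as predictions, like the helper above:

```python
import numpy as np

def iou_and_f1(cm):
    """Per-class IoU and F1 from a confusion matrix (rows = GT, cols = pred)."""
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp  # predicted as class c, but GT differs
    fn = cm.sum(axis=1) - tp  # GT is class c, but predicted otherwise
    iou = tp / np.maximum(tp + fp + fn, 1)       # IoU = TP / (TP + FP + FN)
    f1 = 2 * tp / np.maximum(2 * tp + fp + fn, 1)  # F1 = 2TP / (2TP + FP + FN)
    return iou, f1

# Usage: sum confusion matrices over all validation images, then report means.
# cm = sum of confusion_matrix(pred_i, gt_i, num_classes) over the val set
# iou, f1 = iou_and_f1(cm)
# print("mIoU:", iou.mean(), "mean F1:", f1.mean())
```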
How can I display the test results, i.e., the confusion matrix?