mdsmith-cim closed this issue 3 months ago
Hello @mdsmith-cim, thanks for reporting the issue. We are actively investigating it.
Hi @mdsmith-cim — it's possible that our baselines' code is slightly outdated.
The encoding script used to accept subsampled prediction and confidence arrays, but for simplicity and reliability the current script requires the arrays to have exactly the same size as the input image.
We will update the baselines, but one possible workaround is to upsample the arrays yourself (e.g., nearest-neighbour interpolation for the prediction array and bilinear interpolation for the confidence array).
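A minimal sketch of that workaround, assuming OpenCV is available and that the prediction map holds integer class labels while the confidence map holds floats (the function and variable names here are illustrative, not part of the toolkit):

```python
# Illustrative sketch only, not official toolkit code: upsample subsampled
# prediction/confidence maps back to the full input resolution before encoding.
import cv2
import numpy as np

def upsample_to_input(pred, conf, input_hw):
    """Resize prediction and confidence maps to the original input size.

    pred     : (h, w) integer class-label array (assumed uint8-compatible)
    conf     : (h, w) float confidence array
    input_hw : (H, W) of the original input image
    """
    H, W = input_hw
    # Nearest-neighbour preserves discrete label values (no label mixing).
    pred_up = cv2.resize(pred.astype(np.uint8), (W, H),
                         interpolation=cv2.INTER_NEAREST)
    # Bilinear interpolation gives a smooth confidence map.
    conf_up = cv2.resize(conf.astype(np.float32), (W, H),
                         interpolation=cv2.INTER_LINEAR)
    return pred_up, conf_up
```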
Thanks for pointing me in the right direction. I noticed that in the baselines, the SMIYC set uses image resizing while the other datasets do not. Removing that resize solved the issue, and I've confirmed I can now at least successfully upload a basic submission to the evaluation server.
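For illustration only (this is not the actual RbA baseline code, and the function and argument names are assumptions), the change amounts to dropping the SMIYC-specific resize so the outputs stay at the input image resolution:

```python
# Hypothetical example of the kind of fix described above: skip the
# dataset-specific resize so predictions and confidences keep the same
# resolution as the input image.
import cv2

def load_smiyc_image(path, resize_to=None):
    """Load a SMIYC image; resizing is disabled here."""
    image = cv2.imread(path)
    # Previously something like:
    #     image = cv2.resize(image, resize_to)
    # produced prediction/confidence maps smaller than the input image,
    # which the encoding script's size check rejects.
    return image
```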
Close #1 - updated code to fix resolution mismatches
While trying to encode a submission, I ran into an IndexError in the encode script, which makes it impossible to upload any results. I've seen this with the provided baseline code, so I don't believe it has anything to do with my implementation.
Steps to reproduce:
python evaluate_ood_bravo.py --score_func rba --dataset_mode selective --selected_datasets bravo_ACDC bravo_SMIYC bravo_outofcontext bravo_synflare bravo_synobjs bravo_synrain --model_mode selective --selected_models swin_b_1dl --models_folder ckpts/ --datasets_folder datasets --out_path test_bravocode_out
(Resulting output structure: encoding_source_dir.txt)
python -m bravo_toolkit.util.encode_submission baselines/RbA/test_bravocode_out/ encoded_output.tar --samples ../bravo_SAMPLING.tar
Any suggestions/fixes would be appreciated.