clovaai / wsolevaluation

Evaluating Weakly Supervised Object Localization Methods Right (CVPR 2020)
MIT License

evaluation_test.py failures and a metadata path that cannot be found #18

Closed egundogdu closed 4 years ago

egundogdu commented 4 years ago

Hi,

Thanks for this great work. I have two questions:

(1) When I run `python evaluation_test.py`, it outputs:

```
FAIL: test_compute_bboxes_from_scoremaps_degenerate (__main__.EvalUtilTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "evaluation_test.py", line 98, in test_compute_bboxes_from_scoremaps_degenerate
    self.assertListEqual(boxes, [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0],
AssertionError: First sequence is not a list: ([array([[0, 0, 0, 0]]), array([[0, 0, 0, 0]]), array([[0, 0, 0, 0]]), array([[0, 0, 0, 0]]), array([[0, 0, 0, 0]])], [1, 1, 1, 1, 1])

======================================================================
FAIL: test_compute_bboxes_from_scoremaps_multimodal (__main__.EvalUtilTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "evaluation_test.py", line 125, in test_compute_bboxes_from_scoremaps_multimodal
    self.assertListEqual(boxes, [[0, 0, 4, 3],
AssertionError: First sequence is not a list: ([array([[0, 0, 4, 3]]), array([[0, 0, 2, 2]]), array([[0, 3, 3, 3]]), array([[2, 3, 3, 3]]), array([[0, 3, 1, 3]])], [1, 1, 1, 1, 1])

======================================================================
FAIL: test_compute_bboxes_from_scoremaps_unimodal (__main__.EvalUtilTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "evaluation_test.py", line 110, in test_compute_bboxes_from_scoremaps_unimodal
    self.assertListEqual(boxes, [[1, 1, 4, 3],
AssertionError: First sequence is not a list: ([array([[1, 1, 4, 3]]), array([[1, 1, 4, 3]]), array([[2, 1, 4, 3]]), array([[2, 2, 4, 3]]), array([[2, 2, 3, 3]])], [1, 1, 1, 1, 1])
```

(2) My second problem occurs when I run your suggested script:

```bash
python evaluation.py --scoremap_root=train_log/scoremaps/ --metadata_root=metadata/ --mask_root=dataset/ --dataset_name=CUB --split=val --cam_curve_interval=0.01
```

It gives the following error:

```
Loading and evaluating cams.
Traceback (most recent call last):
  File "evaluation.py", line 528, in <module>
    main()
  File "evaluation.py", line 516, in main
    evaluate_wsol(scoremap_root=args.scoremap_root,
  File "evaluation.py", line 465, in evaluate_wsol
    image_ids = get_image_ids(metadata)
  File "/egundogdu/WSOL/wsolevaluation/data_loaders.py", line 62, in get_image_ids
    with open(metadata['image_ids' + suffix]) as f:
FileNotFoundError: [Errno 2] No such file or directory: 'metadata/image_ids.txt'
```

Do you have any ideas about these issues?

coallaoh commented 4 years ago

Thanks a lot for your comments!

@junsukchoe It looks like evaluation_test.py is broken after the update for MaxBoxAccV2. Could you look into this?

I will look into the second problem ASAP; it looks like there is a path error in the codebase. In the meantime, you can probably try `--metadata_root=metadata/CUB/val` instead.
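For context on why that workaround should help: the traceback shows `get_image_ids` opening `metadata['image_ids' + suffix]`, i.e. an `image_ids.txt` directly under whatever `--metadata_root` points at, so the root has to be the split-level folder. Below is a hypothetical reconstruction of the failing lookup; the `metadata/CUB/val` layout is inferred from the error message and the suggested flag, not confirmed from the code.

```python
import os

# Hypothetical reconstruction of the lookup that fails in the traceback:
# get_image_ids() opens <metadata_root>/image_ids.txt, so --metadata_root
# must point at the split-level folder, e.g. metadata/CUB/val.
metadata_root = 'metadata/CUB/val'
image_ids_path = os.path.join(metadata_root, 'image_ids.txt')

with open(image_ids_path) as f:
    image_ids = [line.strip() for line in f]
print(f'Found {len(image_ids)} image ids under {metadata_root}')
```

With that layout, the original command should work once `--metadata_root=metadata/CUB/val` is swapped in.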

junsukchoe commented 4 years ago

I just fixed the bug in evaluation_test.py. It was caused by the recent update to the box evaluation. Thanks for your feedback!
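For anyone who hits the same assertions before pulling the fix: the error text itself shows that `compute_bboxes_from_scoremaps` now returns a (boxes, box counts) tuple, with each box as a NumPy array rather than a plain list. Here is a rough sketch of the new interface as inferred purely from the failure output; the zero scoremap and thresholds are illustrative placeholders, not the actual test values.

```python
import numpy as np

# Assumes compute_bboxes_from_scoremaps lives in evaluation.py, as the
# failing evaluation_test.py suggests.
from evaluation import compute_bboxes_from_scoremaps

scoremap = np.zeros((4, 4))  # degenerate all-zero scoremap
threshold_list = [0.1, 0.3, 0.5, 0.7, 0.9]

# The AssertionError shows a tuple: a list of (num_boxes, 4) arrays plus a
# list of per-threshold box counts, e.g.
# ([array([[0, 0, 0, 0]]), ...], [1, 1, 1, 1, 1]).
boxes, num_boxes = compute_bboxes_from_scoremaps(scoremap, threshold_list)

# Flatten to plain lists before comparing, as the old test did directly.
boxes_as_lists = [box_array[0].tolist() for box_array in boxes]
assert boxes_as_lists == [[0, 0, 0, 0]] * len(threshold_list)
assert num_boxes == [1] * len(threshold_list)
```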

junsukchoe commented 4 years ago

The function evaluate_wsol in evaluation.py is fixed now (#19). Sorry for the inconvenience, and thanks for your comments!

coallaoh commented 4 years ago

You can now use your original command

```bash
python evaluation.py --scoremap_root=train_log/scoremaps/ --metadata_root=metadata/ --mask_root=dataset/ --dataset_name=CUB --split=val --cam_curve_interval=0.01
```

and this should give no error now. Enjoy! :)

egundogdu commented 4 years ago

Thanks for the fast response. evaluation_test.py works! It would also be great to see the evaluation run on an already-working example. When I run

```bash
python evaluation.py --scoremap_root=train_log/scoremaps/ --metadata_root=metadata/ --mask_root=dataset/ --dataset_name=CUB --split=val --cam_curve_interval=0.01
```

it complains because I don't have the results in train_log. How can I reproduce at least one of the methods' numbers to better understand how to adapt my own scores?

coallaoh commented 4 years ago

> How can I reproduce at least one of the methods' numbers to better understand how to adapt my own scores?

Then try the train+eval code at https://github.com/clovaai/wsolevaluation#6-wsol-training-and-evaluation. That code uses evaluation.py internally: it generates heatmaps as numpy arrays and evaluates them right away. If you want to use evaluation.py standalone as above, you will have to change the inference code of train+eval slightly to save the heatmaps under train_log.

See the response below, originally written for the other issue https://github.com/clovaai/wsolevaluation/issues/3:

> If you run the main.py code, the heatmaps are not saved; they are directly evaluated on the fly (in memory). This happens at https://github.com/clovaai/wsolevaluation/blob/0c476cd115c21900a734a86cb34a8e92b8b7e278/inference.py#L71.
>
> If you wish to save the heatmaps, save the cam_normalized array at https://github.com/clovaai/wsolevaluation/blob/0c476cd115c21900a734a86cb34a8e92b8b7e278/inference.py#L82.
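For example, a minimal sketch of such a save step could look like the following; the helper name and the `<image_id>.npy` layout are assumptions, since the thread only says to save the cam_normalized array.

```python
import os

import numpy as np


def save_scoremap(scoremap_root, image_id, cam_normalized):
    """Save one normalized CAM as a .npy file for later evaluation.

    Hypothetical helper: assumes evaluation.py expects scoremaps laid out
    as <scoremap_root>/<image_id>.npy, mirroring --scoremap_root above.
    """
    path = os.path.join(scoremap_root, image_id + '.npy')
    # CUB image ids contain subdirectories, so create them as needed.
    os.makedirs(os.path.dirname(path), exist_ok=True)
    np.save(path, np.asarray(cam_normalized))
```

Called inside the inference loop, this would populate train_log/scoremaps/ so the original evaluation.py command can pick the heatmaps up.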

coallaoh commented 4 years ago

Closing the issue, assuming the question was answered :) Please re-open the issue as necessary.