Closed YiwenShaoStephen closed 6 years ago
Great! merging.
Yiwen, what do you mean by the algorithm given by madcat? I'm looking at Ashish's scoring code and it seems to use polygons from the shapely library; I don't see binary operations over points. (I do see the use of an '&' operator but I assume it is something overloaded by the shapely library).
However, Ashish's code does need better documentation.
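For context, shapely's geometry types overload `&` and `|` as intersection and union, so an IoU over polygons can be written very compactly. A minimal sketch, not Ashish's actual code (`polygon_iou` is a hypothetical helper):

```python
# Hedged sketch: shapely's '&' is Polygon.intersection and '|' is
# Polygon.union, which makes polygon IoU a one-liner.
from shapely.geometry import Polygon

def polygon_iou(poly_a, poly_b):
    """IoU of two shapely polygons via the overloaded set operators."""
    inter = (poly_a & poly_b).area   # '&' -> intersection
    union = (poly_a | poly_b).area   # '|' -> union
    return inter / union if union > 0 else 0.0

a = Polygon([(0, 0), (2, 0), (2, 2), (0, 2)])  # 2x2 square
b = Polygon([(1, 1), (3, 1), (3, 3), (1, 3)])  # overlapping 2x2 square
print(polygon_iou(a, b))  # intersection 1, union 7 -> 1/7
```

So the `&` in the scoring code is indeed an overloaded operator, not a bitwise operation over points.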
On Wed, May 30, 2018 at 3:31 PM, Yiwen Shao notifications@github.com wrote:
Now we can run from start to end and get a detailed result statistic. Concretely, in exp/unet_../segment we have:
- img/: visualized predicted mask for each image
- rle/: run-length encoding result for each image
- submission.csv: aggregated RLE results for all data
- result.txt: statistical evaluation result, including the final mean average precision and the average precision for each image
The algorithm for computing IOU in scoring.py is well optimized: it is much faster than the naive algorithm given by madcat, which does a logical and/or over two full binary masks, and it reduces the evaluation time from more than a day to just seconds. But I'm not entirely sure it is the fastest possible or bug-free; I couldn't find a reference online.
You can view, comment on, or merge this pull request online at:
https://github.com/waldo-seg/waldo/pull/62

Commit Summary
- added scoring scripts for dsb2018
File Changes
- A egs/dsb2018/v1/local/scoring.py https://github.com/waldo-seg/waldo/pull/62/files#diff-0 (149)
- M egs/dsb2018/v1/local/segment.py https://github.com/waldo-seg/waldo/pull/62/files#diff-1 (3)
- M egs/dsb2018/v1/run.sh https://github.com/waldo-seg/waldo/pull/62/files#diff-2 (8)
Patch Links:
— You are receiving this because you are subscribed to this thread. Reply to this email directly, view it on GitHub https://github.com/waldo-seg/waldo/pull/62, or mute the thread https://github.com/notifications/unsubscribe-auth/ADJVu-AZeSG9rMeHmA59fZ_f7p8Ln9uLks5t3vOdgaJpZM4UT4Pw .
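The RLE-based IoU computation Yiwen describes can be sketched in pure Python: since a run-length encoding is a sorted list of non-overlapping runs over the flattened image, the intersection can be found with a single two-pointer sweep, with no binary masks materialized. A hypothetical illustration, not the actual scoring.py code:

```python
# Hedged sketch (not the actual scoring.py implementation): IoU computed
# directly on run-length encodings, avoiding full-mask logical and/or.
def rle_iou(runs_a, runs_b):
    """runs_* are lists of (start, length) pairs over the flattened image,
    sorted by start and non-overlapping (the usual RLE convention)."""
    def area(runs):
        return sum(length for _, length in runs)

    inter, i, j = 0, 0, 0
    while i < len(runs_a) and j < len(runs_b):
        a_start, a_len = runs_a[i]
        b_start, b_len = runs_b[j]
        a_end, b_end = a_start + a_len, b_start + b_len
        # overlap of the two current runs, clamped at zero
        inter += max(0, min(a_end, b_end) - max(a_start, b_start))
        # advance whichever run ends first
        if a_end <= b_end:
            i += 1
        else:
            j += 1
    union = area(runs_a) + area(runs_b) - inter
    return inter / union if union else 0.0

# masks covering pixels 0..9 and 5..12 share 5 pixels; union is 13
print(rle_iou([(0, 10)], [(5, 8)]))  # 5/13
```

The sweep is linear in the number of runs, which is why it beats per-pixel mask operations by orders of magnitude on large images.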
We can use the shapely library if the input is in text format (rectangle coordinates). But if we want to compute the IOU value based on the actual mask (image) and the predicted mask (image), then it is entirely based on numpy array operations. Yes, Yiwen's implementation for computing the IOU value from a ground-truth mask and a predicted mask is faster than the madcat implementation, since it is based on run-length encoding rather than matrix-level (and/or) operations.
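The matrix-level baseline being referred to -- element-wise logical and/or over two full binary masks -- looks roughly like this (a sketch of the slow approach, not the madcat code itself):

```python
# Hedged sketch of the naive baseline: IoU from two full binary masks
# via element-wise logical and/or over every pixel.
import numpy as np

def mask_iou(mask_a, mask_b):
    """IoU of two boolean masks of the same shape."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

a = np.zeros((4, 4), dtype=bool); a[0:2, 0:2] = True  # 4 pixels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:3] = True  # 4 pixels, 1 shared
print(mask_iou(a, b))  # 1 / (4 + 4 - 1) = 1/7
```

This costs work proportional to the full image area for every pair of masks, which is what makes it so slow when there are many instances per image.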
That sounds super slow. Are you at least limiting the masks to the bounding boxes of where they are nonzero? You could make it much faster by doing that -- maybe creating some Python object corresponding to a bounding mask, with offsets for where its (0,0) co-ordinate lies, a stored area, and an operation to get the overlap with a similar object.
On Wed, May 30, 2018 at 3:56 PM, Ashish Arora notifications@github.com wrote:
Thanks, I am not doing that currently, but I will.
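Dan's suggestion above could be sketched as follows; the class name and details are hypothetical, but the idea is to crop each mask to its nonzero bounding box, remember where the crop's (0,0) corner lies in the original image, store the area, and compute overlaps only on the intersection of the two boxes:

```python
# Hedged sketch of the suggested optimization (names are hypothetical):
# crop each mask to its nonzero bounding box so overlap computations
# touch only the small box intersection, not the full image.
import numpy as np

class BoundedMask:
    """A mask cropped to its nonzero bounding box, with offset and area."""
    def __init__(self, full_mask):
        rows, cols = np.nonzero(full_mask)
        # (r0, c0) is where the crop's (0, 0) corner lies in the image
        self.r0, self.c0 = int(rows.min()), int(cols.min())
        self.patch = full_mask[self.r0:int(rows.max()) + 1,
                               self.c0:int(cols.max()) + 1]
        self.area = int(self.patch.sum())

    def overlap(self, other):
        """Pixels set in both masks, computed only on the box intersection."""
        r0, c0 = max(self.r0, other.r0), max(self.c0, other.c0)
        r1 = min(self.r0 + self.patch.shape[0], other.r0 + other.patch.shape[0])
        c1 = min(self.c0 + self.patch.shape[1], other.c0 + other.patch.shape[1])
        if r1 <= r0 or c1 <= c0:
            return 0  # bounding boxes are disjoint: no overlap possible
        a = self.patch[r0 - self.r0:r1 - self.r0, c0 - self.c0:c1 - self.c0]
        b = other.patch[r0 - other.r0:r1 - other.r0, c0 - other.c0:c1 - other.c0]
        return int(np.logical_and(a, b).sum())

a = np.zeros((4, 4), dtype=bool); a[0:2, 0:2] = True  # 4 pixels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:3] = True  # 4 pixels, 1 shared
m1, m2 = BoundedMask(a), BoundedMask(b)
inter = m1.overlap(m2)
print(inter / (m1.area + m2.area - inter))  # IoU = 1/7
```

Most mask pairs in an image have disjoint bounding boxes, so most comparisons short-circuit to zero without touching any pixels.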