Did you get a chance to confirm how this affected predictions? When I manually bumped the resolution to 3.5e-6, we got many more false positives for localization (rocks and such).
What dataset was that run against? The test data will require 6e-6, as that is the resolution of the post data. I can try it with a small dataset that has both pre and post at a higher resolution, then step through the resolutions and compare the outputs. I have a feeling attempting to upsample is going to be problematic. This code should give you the highest resolution that both the pre and post imagery can support.
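For anyone following along, here is a minimal sketch of that idea (not the actual code in this branch; it assumes rasterio and imagery georeferenced in degrees, and `coarsest_common_resolution` is just an illustrative name):

```python
# Hypothetical sketch: pick the coarsest resolution that both the pre and
# post images can support, so that neither image has to be upsampled.
import rasterio

def coarsest_common_resolution(pre_path, post_path):
    """Return a per-axis (x, y) resolution neither image would need upsampling to reach."""
    with rasterio.open(pre_path) as pre, rasterio.open(post_path) as post:
        pre_x, pre_y = pre.res    # e.g. (5.3e-06, 4.5e-06) in degrees
        post_x, post_y = post.res # e.g. (6e-06, 6e-06)
    # Larger values mean coarser pixels; taking the max avoids upsampling either side.
    return max(pre_x, post_x), max(pre_y, post_y)
```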
So with the test data (at 0.6 m resolution), using 0.6 seemed to work best, albeit with more false positives than 0.4. Downsampling to 0.8 creates additional false positives with no apparent benefit. Still not 100% sure how the difference in imagery resolutions will affect this. My intuition (as useless as that may be) is that the highest resolution that both will support will yield the best results. Input welcome.
What dataset was that run against?
I ran it on the Valley fire. At 3.5e-6 (the source resolution for the Maxar imagery), I got hundreds of false positives. At the default 6e-6, it seemed stable.
My intuition (as useless as that may be) is that the highest resolution that both will support will yield the best results. Input welcome.
I'm also not sure why using the source imagery resolution is leading to worse results. Until we figure that out, I propose we leave this PR open. I'm not the GIS expert either 😬
Ok. I'll do some more testing with it at different resolutions and with different imagery, and see if I can get any useful data from that. A rough sketch of what I have in mind is below.
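This is only a sketch of the sweep (it assumes rasterio; the paths and the `resample_to` helper are placeholders, not anything in this repo):

```python
# Hypothetical sketch: resample an image to a target resolution before each
# test run, so every run sees a consistent ground sample distance.
import rasterio
from rasterio.enums import Resampling

def resample_to(path, target_res, out_path):
    """Write a copy of `path` resampled to `target_res` (map units per pixel)."""
    with rasterio.open(path) as src:
        out_width = int(src.width * src.res[0] / target_res)
        out_height = int(src.height * src.res[1] / target_res)
        data = src.read(
            out_shape=(src.count, out_height, out_width),
            resampling=Resampling.bilinear,
        )
        # Scale the affine transform to match the new pixel grid.
        transform = src.transform * src.transform.scale(
            src.width / out_width, src.height / out_height
        )
        profile = src.profile
        profile.update(height=out_height, width=out_width, transform=transform)
    with rasterio.open(out_path, "w", **profile) as dst:
        dst.write(data)

# Step through candidate resolutions for a small AOI and compare the outputs.
for res in (3e-06, 6e-06, 8e-06):
    resample_to("pre/tile.tif", res, f"pre_resampled_{res:g}.tif")
```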
I ran it on the Valley fire. At 3.5e-6 (the source resolution for the Maxar imagery), I got hundreds of false positives. At the default 6e-6, it seemed stable.
I'm back to working on this. It looks like the resolution of the pre imagery for the Valley Fire AOI is (5.3e-6, 4.5e-6). I'm guessing that up-sampling the pre imagery is the cause of the false positives. I'm going to run this on some imagery that has consistent resolutions at about 3.5e-6 and see what the different resolutions give me for outputs.
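A quick way to sanity-check that guess (again just a hedged rasterio snippet; `would_upsample` is an illustrative helper name, not project code) is to flag any requested destination resolution that is finer than the source:

```python
# Hypothetical helper: warn when a requested destination resolution is finer
# (a smaller number, in map units per pixel) than the source supports,
# since meeting it would force upsampling.
import rasterio

def would_upsample(path, requested_res):
    with rasterio.open(path) as src:
        src_x, src_y = src.res
    return requested_res < src_x or requested_res < src_y

# e.g. would_upsample("pre/valley_fire.tif", 3.5e-06) -> True if the pre
# imagery is around (5.3e-06, 4.5e-06)
```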
Ok, I ran an inference on a small portion of the SMU3 imagery. The source imagery on both the pre and post was <3e-06. I did three runs at 3e-06, 6e-06, and 8e-06. As one might expect, the 8e-06 run was the worst, with a fair number of missed localizations. 6e-06 was next best, but still not the best. 3e-06 was the best. Some screenshots below highlight the interesting areas.
It is worth noting that the 3e-06 inference did have one building that it identified as two buildings. This is something that should be corrected when I get to #33.
The screenshots below are not overlaid on the pre imagery used for the inference (due to my currently slow internet), but I am relatively confident they are good enough for demonstration purposes.
This is the most egregious miss on all but the 3e-06 inference.
This was a miss on almost all, with 3e-06 only picking up a small portion of the building.
The next two are misses on all but 3e-06.
Any thoughts on additional testing in this regard?
No additional thoughts. Use the resolution that most closely matches the resolution of the input imagery, or whichever gives the better results.
Closes #16.