I've got the current version downloading a copy of the scene to be re-projected, which was the only way I could get gdal.Warp to work.
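For reference, here's a minimal sketch of the kind of `gdal.Warp` call I mean, working from a locally downloaded copy of the scene. The paths and target projection below are placeholders, not the actual job parameters:

```python
from osgeo import gdal

# Hypothetical local copy of the downloaded scene and an example target CRS;
# the real workflow picks the projection from the Geogrid/autoRIFT parameters
src = 'LE07_L1TP_063018_20040911_20200915_02_T1_B8.TIF'
dst = 'reprojected_B8.TIF'

gdal.Warp(
    dst,
    src,
    dstSRS='EPSG:3413',   # example polar stereographic target
    xRes=30, yRes=30,     # keep the native 30 m pixel spacing
    resampleAlg='cubic',
    format='GTiff',
)
```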
Now I'm facing an error: `Exception: Upper bound of coregistered image index should be <= size of image1 (and image2) minus 1`. @jhkennedy, @forrestfwilliams, have you come across this error before?
Okay, I got this working for an L5 pair on my laptop. Open questions I'd like @jacquelynsmale's and @forrestfwilliams' thoughts on:
@jhkennedy @forrestfwilliams I would also lean towards filtering then reprojecting.
Looking at these two scenes:
LE07_L1TP_063018_20040911_20200915_02_T1 LE07_L1TP_063018_20040810_20200915_02_T1
I've confirmed we've got the filtering for L7 working (as in, the filtered TIFs look correct), but we're still getting completely zero output products and browse images.
Looking at the outputs:

- `window*.tif` outputs from Geogrid look correct as compared to `develop`
- `autoRIFT_intermediate.nc`: `SearchLimitX`, `SearchLimitY`, and `noDataMask` look correct as compared to `develop`
- `autoRIFT_intermediate.nc`: `Dx`, `Dy`, `InterpMask`, `ChipSizeX` are all zero, leading to :point_down:
- `offset.tif`, and therefore `velocity.tif`, from autoRIFT are all zero

Next place to look is how `Dx`, `Dy`, `InterpMask`, and `ChipSizeX` are set; a quick check of the intermediate file is sketched below.
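A quick way to confirm which variables in the intermediate file are entirely zero (variable names taken from the list above, file name matching what we write out):

```python
import numpy as np
import netCDF4

# Report which intermediate autoRIFT variables are all zero
with netCDF4.Dataset('autoRIFT_intermediate.nc') as ds:
    for name in ('Dx', 'Dy', 'InterpMask', 'ChipSizeX',
                 'SearchLimitX', 'SearchLimitY', 'noDataMask'):
        data = np.ma.filled(ds[name][:], 0)
        print(f'{name}: all zero? {not np.any(data)}')
```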
Okay, I suspect it's something going on with `uniform_data_type`, which is called in `vend/testautoRIFT.py` right after the pre-processing steps: switching `obj.DataType` to `1` instead of the default `0` here:

https://github.com/ASFHyP3/hyp3-autorift/pull/173/files#diff-aec97127e1a0f9c669793fef4650bdfcd75ea2522b8826a4df963ef623d937a7R326

makes it so we output some data, albeit bogus data:
Along a different path, the highpass filter (HPS) is applied to all missions, so:
but we currently don't handle either of these cases (and we removed the filter from the autoRIFT workflow!).
We want to filter before reprojecting:
For L8/9, I don't think reprojecting will have much effect on the low-frequency features, so we likely don't need to filter ahead of time. For S2 and S1, we won't ever be re-projecting, so there's no need to filter ahead of time there either.
Therefore, I will walk back moving the high-pass filter before geogrid/autoRIFT for all missions.
Back looking at these two scenes:
LE07_L1TP_063018_20040911_20200915_02_T1 LE07_L1TP_063018_20040810_20200915_02_T1
With the changes I just pushed, I can now get output data that looks reasonable:
But still not as good as from `develop`:
With the complexities you mentioned @jhkennedy, I'd also favor walking back the changes to the high-pass filter. Thanks!
It looks like we haven't correctly implemented filtering for at least Landsat 7. Post-`uniform_data_type` images created using `develop` clearly look Wallis filtered, but the images created using `landsat_reproject` don't.

`develop` vs. `landsat_reproject` comparison images:
I've confirmed that the problems are also present pre-`uniform_data_type`. The same visual pattern as above is present, but the `develop` branch data is scaled -5 to 5 (correct for the Wallis filter) and the `reproject` branch data is scaled 0 to 255 (incorrect for the Wallis filter).
@forrestfwilliams I think the Wallis filter itself is working correctly.
For the current HEAD of this branch, if you throw the TIFFs from the `filtered` directory into QGIS, you get an image that looks Wallis filtered to me, with a range of approx. -5 to 5. `gdalinfo -stats` on those files confirms they are float32 and have a range of approx. -5 to 5.
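For anyone following along, here's a minimal sketch of what a Wallis-style normalization does (not the exact autoRIFT implementation, just the general idea), which is why correctly filtered data lands roughly in the ±5 range:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wallis_like(image: np.ndarray, window: int = 21, eps: float = 1e-6) -> np.ndarray:
    """Subtract the local mean and divide by the local standard deviation."""
    image = image.astype(np.float32)
    local_mean = uniform_filter(image, window)
    local_sq_mean = uniform_filter(image * image, window)
    local_std = np.sqrt(np.maximum(local_sq_mean - local_mean**2, 0.0))
    # Output is in units of local standard deviations, so most pixels fall in ~[-5, 5]
    return (image - local_mean) / (local_std + eps)
```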
To get the output I do see, I had to uncomment these two lines: https://github.com/ASFHyP3/hyp3-autorift/blob/landsat_reproject/hyp3_autorift/vend/testautoRIFT.py#L156-L157
If they are commented out, we get blank browse images and empty netCDF files again.
So, I think what's happening is that autoRIFT at that point assumes the images are scaled from 0-255 (the Landsat input images are!). The cast to `uint8` causes the scale to go from 0-255 because anything < 0 ends up wrapping to 255-X.
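Here's the wrap-around behavior I mean in a couple of lines of numpy (the exact wrapped values are platform-dependent for a float-to-unsigned cast, but the effect is the same):

```python
import numpy as np

# Wallis-filtered data sits roughly in [-5, 5] as float32
wallis = np.array([-5.0, -0.5, 0.0, 0.5, 5.0], dtype=np.float32)

# A blind cast to uint8 sends the negative values up near 255 instead of clipping them,
# so the image suddenly spans ~0-255 even though it was correctly filtered
print(wallis.astype(np.uint8))

# Clipping (or rescaling) first keeps the data meaningful if uint8 really is required
print(np.clip(wallis, 0, 255).astype(np.uint8))
```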
I'm having a hard time finding where these assumptions are, and really understanding what `uniform_data_type` is even trying to do to the uint8 images.
But that begs the question, why did the L4/5 filter work? Isn't that also producing filtered images that range from -5 to 5??
@jhkennedy I agree that the issue is likely occurring inside of `testAutoRIFT.py` after the images are pre-filtered. The images in the `filtered` directory also look good to me. I'll work on tracking down this issue.
It looks like we hadn't correctly set the path for the zeroMask files we create during pre-filtering, so they weren't being read in, which caused cascading errors. To get decent output I've commented out these lines; they are the source of some issues, but I'll pick this back up later.
Current version works for L7, but with a slight decrease in the number of valid pixels. Not sure where this difference is coming from. If I had to guess, I'd bet it's a datatype issue in the pre-filtering inputs.
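If it is a datatype issue, a quick check of the pre-filtering inputs should show it (the paths here are placeholders for whatever the job writes out):

```python
from osgeo import gdal

# Print the band data type and approximate min/max of each pre-filter input
for path in ('reference.tif', 'secondary.tif'):
    ds = gdal.Open(path)
    band = ds.GetRasterBand(1)
    dtype = gdal.GetDataTypeName(band.DataType)
    stats_min, stats_max = band.ComputeRasterMinMax(True)
    print(f'{path}: {dtype}, range {stats_min:.2f} to {stats_max:.2f}')
```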
Alright! Looks like most everything is good to go for golden testing. @forrestfwilliams, if you're happy with #198, feel free to merge it and this one to `develop`.
Took a look at a test image pair that needed correction. We're producing decent data in comparison to the V1 datasets:

V1 vX dataset:

New reprojected vX dataset (data here):
@jhkennedy and @jacquelynsmale, do we want to deal with the SHA entropy trufflehog issue before merging to develop?
@forrestfwilliams I think we're just going to ignore it and it'll clean itself up with the next release.