Multihuntr opened 8 months ago
I have figured out a preprocessing graph for SNAP which seems to work well. Visually, the outputs are mostly within resampling differences of the provided preprocessed images. The only real difference is that the provided images have more bright-spot artifacts than those produced with this graph.
<graph id="KuroSiwoPreprocessingGraph">
<version>1.0</version>
<node id="OrbitApplied">
<operator>Apply-Orbit-File</operator>
<sources>
<sourceProduct>${source}</sourceProduct>
</sources>
<parameters>
<polyDegree>2</polyDegree>
</parameters>
</node>
<node id="Calibrated">
<operator>Calibration</operator>
<sources>
<source>OrbitApplied</source>
</sources>
<parameters>
<sourceBands>Intensity_VV,Intensity_VH</sourceBands>
<selectedPolarisations>VV,VH</selectedPolarisations>
</parameters>
</node>
<node id="Filtered">
<operator>Speckle-Filter</operator>
<sources>
<source>Calibrated</source>
</sources>
<parameters>
<filter>Lee Sigma</filter>
<filterSizeX>5</filterSizeX>
<filterSizeY>5</filterSizeY>
<sigmaStr>0.9</sigmaStr>
</parameters>
</node>
<node id="Terrain">
<operator>Terrain-Correction</operator>
<sources>
<source>Filtered</source>
</sources>
<parameters>
<demName>SRTM 1Sec HGT</demName>
<pixelSpacingInMeter>10.0</pixelSpacingInMeter>
<sourceBands>Sigma0_VV,Sigma0_VH</sourceBands>
<mapProjection>EPSG:3857</mapProjection>
</parameters>
</node>
</graph>
You can use it with SNAP's `gpt`. I used `gpt graph.xml -c 12G -Ssource=/path/to/zip -t /path/to/output.dim`. I found that exporting to `.dim` was fast, but exporting to `.tif` was slow.
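If you want to batch this over many scenes, here is a minimal sketch using Python's `subprocess`; `gpt` being on `PATH`, the directory layout, and the glob pattern are all assumptions/placeholders:

```python
# Minimal batch-processing sketch, assuming `gpt` is on PATH and the graph
# above is saved as graph.xml; directories are placeholders.
import subprocess
from pathlib import Path

GRAPH = "graph.xml"
IN_DIR = Path("/path/to/s1_zips")    # folder of Sentinel-1 GRD zips (placeholder)
OUT_DIR = Path("/path/to/outputs")   # where the .dim products go (placeholder)
OUT_DIR.mkdir(parents=True, exist_ok=True)

for zip_path in sorted(IN_DIR.glob("S1*.zip")):
    # Export to BEAM-DIMAP, which was the fast path in my tests
    target = OUT_DIR / (zip_path.stem + ".dim")
    subprocess.run(
        ["gpt", GRAPH, "-c", "12G", f"-Ssource={zip_path}", "-t", str(target)],
        check=True,
    )
```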
I have also validated that S1 images created with this graph give extremely similar results using the pretrained models across one site, but the code is too involved to share here.
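As a rough idea of the kind of check I mean (not my actual code), something like this compares two co-registered prediction rasters; `rasterio` and the file names are assumptions:

```python
# Rough sketch of the per-site comparison. Assumes two co-registered,
# same-shape prediction rasters saved as single-band GeoTIFFs;
# both file names are placeholders.
import numpy as np
import rasterio

with rasterio.open("preds_provided_preproc.tif") as a, \
     rasterio.open("preds_this_graph.tif") as b:
    x = a.read(1)  # predictions from the provided preprocessed images
    y = b.read(1)  # predictions from images preprocessed with this graph

assert x.shape == y.shape, "predictions must be on the same grid"
print(f"pixelwise agreement: {(x == y).mean():.4f}")
```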
Hi @Multihuntr,
Sorry for the late reply.
We have updated Kuro Siwo to include more events outside of Europe and the respective raw SLC products (with minimal preprocessing).
You can find the updated preprocessing scripts used to generate the GRD and SLC products in `configs/grd_preprocessing.xml` and `configs/slc_preprocessing.xml`, respectively.
Feel free to check the updated paper for more information.
Nikos.
Hi @ngbountos, thanks for the reply. I think those changes address this issue.
But I also wanted to ask when/if you plan to release the pretrained weights for the new models trained for the updated paper. And, once they are released: which preprocessing was used on the S1 images to train them, `configs/grd_preprocessing.xml` or `configs/slc_preprocessing.xml`?
To use the pretrained models, we need to preprocess Sentinel-1 images identically to the Kuro Siwo dataset; otherwise we risk degraded performance from a subtle data-distribution mismatch. A quick sanity check for this is sketched below.
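As a quick, non-authoritative sanity check, one could compare dB-space backscatter statistics between a tile from your own pipeline and a Kuro Siwo tile over similar terrain; the file names and band index below are placeholders:

```python
# Sanity check for distribution alignment (an assumption-laden sketch):
# compare dB-space backscatter statistics of your preprocessed tile against
# a Kuro Siwo tile over similar terrain. File names and band index are
# placeholders.
import numpy as np
import rasterio

def backscatter_stats(path: str, band: int = 1):
    with rasterio.open(path) as src:
        data = src.read(band).astype("float64")
    data = data[np.isfinite(data) & (data > 0)]  # drop nodata / zero fill
    db = 10 * np.log10(data)                     # sigma0 in dB
    return db.mean(), db.std(), np.percentile(db, [1, 50, 99])

print(backscatter_stats("kuro_siwo_tile_vv.tif"))   # reference (placeholder name)
print(backscatter_stats("my_preprocessed_vv.tif"))  # yours (placeholder name)
```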
In the paper, Section 3 describes the preprocessing, but only in prose. Could you provide a script, configuration file, or something like that which would allow others to exactly replicate this preprocessing on other Sentinel-1 data?
P.S. Sorry for raising so many issues. I'm excited to use the model is all.