AlexeyPechnikov / pygmtsar

PyGMTSAR (Python InSAR): Powerful and Accessible Satellite Interferometry
http://insar.dev/
BSD 3-Clause "New" or "Revised" License

[Bug]: How to unify grids to resolve this error? #128

Open wharfbanger opened 2 months ago

wharfbanger commented 2 months ago

SBAS and PSI analysis on Otmanbozdagh Mud Volcano, Azerbaijan, 2020-2023

Fails at "SBAS Trend Correction":

---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
in ()
      4 inc = decimator(sbas.incidence_angle())
      5 yy, xx = xr.broadcast(topo.y, topo.x)
----> 6 trend_sbas = sbas.regression(unwrap_sbas.phase, [topo, topo*yy, topo*xx, topo*yy*xx, yy, xx, inc], corr_sbas)

/usr/local/lib/python3.10/dist-packages/pygmtsar/Stack_detrend.py in regression(data, variables, weight, valid_pixels_threshold, **kwargs)
    233     vshapes = [v[0].shape if v.ndim==3 else v.shape for v in variables]
    234     equals = np.all([vshape == dshape for vshape in vshapes])
--> 235     assert equals, f'ERROR: shapes of variables slices {vshapes} and data slice {dshape} differ.'
    236     #assert equals, f'{dshape} {vshapes}, {equals}'
    237     variables_stack = [v.chunk(dict(y=chunk2d[0], x=chunk2d[1])) for v in variables]

AssertionError: ERROR: shapes of variables slices [(776, 940), (776, 940), (776, 940), (776, 940), (776, 940), (776, 940), (776, 940)] and data slice (775, 940) differ.



AlexeyPechnikov commented 2 months ago

Use xarray reindex_like() function to unify the grids: array1.reindex_like(array2).

wharfbanger commented 2 months ago

Thanks Alexey. Can you please provide code with the actual array1 and array2? Even better would be an updated Colab notebook.

AlexeyPechnikov commented 2 months ago

Here is the Xarray documentation: https://docs.xarray.dev/en/latest/generated/xarray.DataArray.interp_like.html
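For example, using the array names from your traceback (an assumption; adjust them to your notebook):

# interpolate the regression inputs onto the unwrapped phase grid so all shapes match
topo = topo.interp_like(unwrap_sbas.phase)
inc = decimator(sbas.incidence_angle()).interp_like(unwrap_sbas.phase)
yy, xx = xr.broadcast(topo.y, topo.x)
trend_sbas = sbas.regression(unwrap_sbas.phase, [topo, topo*yy, topo*xx, topo*yy*xx, yy, xx, inc], corr_sbas)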

Example notebooks showing the function usage are shared on my Patreon, but it is pretty straightforward.

wharfbanger commented 2 months ago

Thanks Alexey,

I modified the code using the Xarray function (as suggested by ChatGPT), and the notebook successfully executed that step (SBAS Trend Correction) and the remaining SBAS steps. GREAT.

Unfortunately there was an incomplete red progress indicator at a later PS processing step during materialization:

[screenshot]

In subsequent steps the PS plots looked empty:

[screenshots]

Here is my revised Colab notebook: https://drive.google.com/file/d/100g6LRKmWc-geL6OMFLv6Ytoa59VFNy4/view?usp=sharing

AlexeyPechnikov commented 2 months ago

There are some pixel-wise inconsistencies between the interferograms' valid pixels in the stack, which can be resolved as follows:

import numpy as np

# check pixel stacks: keep only pixels that are finite (valid) in every interferogram
validmask_ps = np.isfinite(intf_ps).all(axis=0)
# optionally, materialize to disk and open
validmask_ps = sbas.sync_cube(validmask_ps, 'validmask_ps')
validmask_ps.where(validmask_ps).plot.imshow()

disp_ps_pairs = sbas.los_displacement_mm(sbas.unwrap1d(
    intf_ps.where(validmask_ps)
    ))
disp_ps_pairs
wharfbanger commented 2 months ago

My notebook completed, but there was no correlation between PS and SBAS (SBAS vs PS Results Comparison). I exported SBAS (disp-sbas) and calculated the rate (mm/year) by linear regression.

Otmanbozdagh_mud gave very different SBAS results from the Imperial Valley SBAS example (same area, same scenes):

Otmanbozdagh_mud (mm/yr): [screenshot]

Imperial_Valley (mm/yr): [screenshot]

You can see this makes a big difference in the maps (points labelled mm/year):

Otmanbozdagh_mud (mm/yr): [screenshot]

Imperial_Valley (mm/yr): [screenshot]

Can you provide an explanation for these different results? What parameters can I adjust?

AlexeyPechnikov commented 2 months ago

The examples are very different. In the Imperial Valley case, we remove very strong stratified plus turbulent atmospheric noise using Gaussian filtering, whereas the Otmanbozdagh mud volcano shows pretty good results with linear regression trend correction. How do you propose we unify the nature of these processes? :)

wharfbanger commented 1 month ago

It would be very useful to have a general-purpose example that can be applied to a variety of areas, and where the parameters are well documented. StaMPS has been very good in this respect, as results with default parameters are very good in a variety of cases (they agree with conventional levelling contours).

I have spent a great deal of time trying to adapt these examples to my own areas, but perhaps they are too specific? For example, I gave up trying to modify the Otmanbozdagh mud volcano because the results were so different from the simpler Imperial Valley (I can't imagine it is different atmospheric filtering). But the Imperial Valley example has very coarse resolution (60 m) with a large wavelength (400 m). I have spent many hours trying to decrease the Imperial Valley wavelength and grid size (to reduce the spatial smoothing) so I can examine smaller-scale phenomena, without success.

Is there anywhere I can access more documentation to allow me to better modify the examples? Would your book be helpful? As a Patreon subscriber, do I have access to the book? Thanks, Mark

AlexeyPechnikov commented 1 month ago

Mark, StaMPS does not have any parameters to process distributed scatterers at all, and it is primarily a persistent scatterers (PS) analysis tool, as mentioned in its name. The PyGMTSAR Imperial Valley notebooks illustrate DS SBAS analysis in the absence of persistent scatterers, making it impossible to produce complete area coverage results using StaMPS and other PSI tools. The Otmanbozdagh mud volcano is a completely different case, having a great density of PS. As illustrated in the notebook, both PSI and SBAS methods can produce very similar results.

If you apply DS analysis as in the Imperial Valley case to sparse persistent scatterers, it does not work, for exactly these physical reasons. PyGMTSAR allows for both PS and DS analysis using PSI and SBAS techniques, but it is impossible to unify the different physics into a single standalone example. PS are smaller than a single Sentinel-1 pixel, affected by speckle noise, and often sparse, whereas DS are larger than a pixel, so speckle noise is not an issue: it is effectively removed using anti-aliasing filtering, and proper selection of the DS analysis scale typically produces dense but lower-resolution pixels.

There are many practical cases where PS are common enough to be grouped into DS, and all analysis methods work well enough, although they still have some limitations, as explained in my publications available on Patreon.

wharfbanger commented 1 month ago

Thanks Alexey, your explanations are useful and appreciated.

Imperial Valley: Is it possible to reduce the grid spacing from 60 to 30 meters (or 15 meters)? If so, which parameters? Please note this is for my own study area in New Zealand; there are many PS in the area. I have already used the Imperial Valley example in this area and the results are consistent with StaMPS. However, I would like to see if results can be improved, e.g. by reducing the averaging of DS and using a smaller grid size. If that is not possible, then perhaps the Otmanbozdagh_mud example would be a better template?

Thanks, Mark


AlexeyPechnikov commented 1 month ago

The Imperial Valley example uses a wavelength of 400 meters for Gaussian filtering (antialiasing and speckle removal) and spatial averaging (multilooking) into a ~60-meter square grid (16x4=64 original pixels aggregated into one) to enhance the signal-noise ratio and correlation, plus a Goldstein filter is applied. This processing is targeted at highly decorrelated areas with high radiometric noise and strong atmospheric effects. It's possible to apply a much lower wavelength, starting from about 4 meters, and to exclude multilooking and the Goldstein filter if you have more frequent and higher-quality interferograms in a well-coherent area like in the Golden Valley example. But why do you need to improve the SBAS results? Use these to start your PSI processing, utilizing SBAS-detected phase ramps to produce higher resolution results, as demonstrated in the Otmanbozdagh mud volcano example. PSI 1D unwrapping is more accurate but requires a priori known phase corrections, which we define using less accurate and lower-resolution SBAS analysis.
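For example, a lighter filtering setup might look like this (a sketch only; the calls mirror the Imperial Valley notebook, and the 100-meter wavelength is illustrative, chosen to stay within SNAPHU-safe limits):

# Gaussian filtering with a 100m cut-off wavelength, keeping the 1x4 multilooking
intensity = sbas.multilooking(np.square(np.abs(data)), wavelength=100, coarsen=(1,4))
phase = sbas.multilooking(sbas.phasediff(pairs), wavelength=100, coarsen=(1,4))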

wharfbanger commented 1 month ago

Thanks Alexey!

To answer your question: why do I need to improve SBAS results?

Here is my situation: I have a study covering a two-year time interval in a geothermal area with a subsidence bowl.

StaMPS results in color shading (see legend). Note: ascending and descending combined to give an estimate of vertical. White contours are from a ground-based levelling survey (same time period) showing a maximum of -90 mm/year. StaMPS is in good agreement with -85 mm/year (at bowl center). [image: image.png]

Now, the Imperial SBAS results in color, showing a maximum of -60 mm/year; quite nice, but I am hoping for better agreement with StaMPS and levelling. So maybe we can just reduce the wavelength to ~100 m and the grid size to ~30 m? I have tried, but get errors because I don't know which parameters to change. [image: image.png]


wharfbanger commented 1 month ago

Thanks Alexey!

Please disregard my last email, which contained an error.

To answer your question: why do I need to improve SBAS results?

Here is my situation: I have a study covering a two-year time interval in a geothermal area with a subsidence bowl.

StaMPS results in color shading (see legend). Note: ascending and descending combined to give an estimate of vertical. White contours are from a ground-based levelling survey (same time period) showing a maximum of -90 mm/year. StaMPS is in good agreement with -85 mm/year (at bowl center). [image: image.png]

Now, the Imperial SBAS results in color, showing a maximum of -30 mm/year; the shape is good, but I am hoping for better agreement with StaMPS and levelling in terms of rate (mm/year). So maybe we can just reduce the wavelength to ~30 m and the grid size to ~30 m? I have tried, but get errors because I don't know which parameters to change. [image: image.png]


AlexeyPechnikov commented 1 month ago

Unfortunately, the images are not visible in your last messages.

In the Imperial Valley case, we use a wavelength of 400 meters to detect larger DS instead of the default ("gold standard") value of 200 meters. For small DS, we can apply a wavelength of 100 meters and even 60 meters for PS detection. However, this is tricky for SNAPHU unwrapping, so I do not recommend using a wavelength less than 100 meters. It's preferable to use PSI analysis after SBAS to enhance your results.

# Gaussian filtering 400m cut-off wavelength with multilooking 1x4 on Sentinel-1 intensity
intensity = sbas.multilooking(np.square(np.abs(data)), wavelength=400, coarsen=(1,4))
phase = sbas.multilooking(sbas.phasediff(pairs), wavelength=400, coarsen=(1,4))

Additionally, we can specify a more detailed output grid like 30 meters using sbas.decimator(30) instead of the default 60-meter one:

# use default 60m resolution
decimator = sbas.decimator()
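# or the more detailed 30m output grid mentioned above
decimator = sbas.decimator(30)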

Commonly, there is no reason for a higher resolution output SBAS grid due to SNAPHU unwrapping error accumulation.

The Goldstein filter can introduce some spatial inaccuracies, although it helps to clean up the interferograms' spatial spectrum:

# Goldstein filter expects square grid cells produced using multilooking
intf_filt = sbas.interferogram(sbas.goldstein(phase, corr, 32))

Typically, we do not need to exclude the filter entirely, because it can be adjusted to use a smaller processing window of 16 pixels, as in sbas.goldstein(phase, corr, 16).

As shown in my other examples, for many InSAR cases, the best approach is to apply SBAS analysis with default or nearly default parameters to produce stable results on a rough grid and enhance it using PS analysis with default or close to default parameters too, to calculate the most accurate displacements on the original Sentinel-1 resolution grid. Of course, SBAS and PSI results need to be consistent to validate the analyses. If you only need an intermediate output resolution like 30 meters, sometimes it is sufficient to apply only SBAS processing.

wharfbanger commented 1 month ago

Perfect, I'll give it a try with your suggestions.

I have attached the images FYI.

Mark


wharfbanger commented 1 month ago

Thanks Alexey,

I took your advice and tried changing the Imperial Valley example to a 30 m grid:

decimator = sbas.decimator(30)

This gives an error at a later step:

tqdm_dask(unwrap_ll := sbas.ra2ll(unwrap.phase).persist(), desc='Geocoding')

Error: Transforming grid spacing (grid_dy, grid_dx) is smaller than transform matrix spacing (trans_dy, trans_dx), call Stack.geocode() with less coarsing

I tried changing coarsen=(1,4) to coarsen=(1,1), then to coarsen=(1,2), but I still get the same error.

What other parameters do I need to change to get Imperial Valley onto a 30 m grid?

Thanks,

Mark



AlexeyPechnikov commented 1 month ago

For geocoding the results, you need to create the proper geocoding matrix like:

# use the original Sentinel-1 resolution (1 pixel spacing)
sbas.compute_geocode(1)

Select a suitable coarsening factor instead of 1 (1 works too, but it creates a matrix much larger than you actually need).
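For example (an illustrative value only, not a recommendation):

# any coarsening small enough that the transform spacing stays at or below
# the output grid spacing avoids the geocoding error above
sbas.compute_geocode(2)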

wharfbanger commented 1 month ago

Thanks Alexey, this solved my issue!
