isce-framework / nisar-workflows


Associating incidence angle and location arrays with specific pixel locations in the image #11

Closed fmonaldo closed 5 months ago

fmonaldo commented 5 months ago

I have read the sample file

NISAR_L2_PR_GCOV_001_005_A_219_4020_SHNA_A_20081012T060910_20081012T060926_P01101_F_N_J_001.h5/

The image in that file is a floating-point 4545 x 6220 array. The incidence angle array, for example, is a 220 x 254 x 21 floating-point array. I know that the last dimension represents the height above the ellipsoid. The 220-long dimension of the incidence angle array maps to the 4545-long dimension of the image, and the 254-long dimension maps to the 6220-long dimension of the image.

I can appreciate that, since the incidence angle is slowly varying, a large array is not needed to represent these data. However, as much as I have examined the documentation and the data cubes, it is not clear to me how the pixels in the incidence angle array map to the pixels in the image array.

dbekaert commented 5 months ago

Hi @fmonaldo,

The idea of the data cubes is to save on product size. For a fixed height level, the incidence angle, baselines, etc. are very smooth, so the data are provided on a coarse lateral grid at multiple height levels. The incidence angles etc. are in the line-of-sight direction for those specific x, y, and z coordinates. Hence, you can reconstruct the full-resolution information by intersecting the 3-D cube with a DEM. The cubes themselves span a larger spatial area than the product and cover a larger vertical range too, so there are no extrapolation artifacts.

The NISAR product specification provides a description, but you might be interested in looking at ARIA-tools, where we have a prototype implemented for the NISAR GUNW from Sentinel-1. See Figure 4 in this example notebook.
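As a rough sketch of the cube-plus-DEM reconstruction described above (toy arrays throughout; the variable names and grid values are illustrative, not actual product dataset names), one could trilinearly interpolate the coarse cube at each pixel's (x, y, z) position, with z taken from a DEM:

```python
# Sketch, not the official NISAR reader: reconstruct full-resolution
# incidence angle by interpolating the coarse 3-D metadata cube at
# each pixel's (x, y, z) map position. All names/values here are toy
# stand-ins for the product's coordinate vectors and cube dataset.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Coarse cube axes (easting, northing in metres; height above ellipsoid)
x_cube = np.linspace(362567.18, 472067.18, 220)
y_cube = np.linspace(3789080.0, 3915580.0, 254)
z_cube = np.linspace(-500.0, 9000.0, 21)

# Toy incidence-angle cube: a smooth function of (x, y, z)
X, Y, Z = np.meshgrid(x_cube, y_cube, z_cube, indexing="ij")
inc_cube = 19.0 + 10.0 * (X - x_cube[0]) / (x_cube[-1] - x_cube[0]) + 1e-5 * Z

# Trilinear interpolator over the cube's regular grid
interp = RegularGridInterpolator((x_cube, y_cube, z_cube), inc_cube)

# A few full-resolution pixel positions, with heights from a DEM
pts = np.array([
    [400000.0, 3800000.0, 120.0],
    [450000.0, 3900000.0, 850.0],
])
inc_full = interp(pts)  # incidence angle at each pixel's 3-D location
print(inc_full)
```

In practice the (x, y, z) triplets would come from the image grid's map coordinates plus a DEM sampled on that grid, rather than a hand-built point list.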

fmonaldo commented 5 months ago

I guess my problem is not so much in appreciating that variables like incidence angle are slowly varying; it is in tying this information to the pixels in the image array. In the file NISAR_L2_PR_GCOV_001_005_A_219_4020_SHNA_A_20081012T060910_20081012T060926_P01101_F_N_J_001.h5, the xcoordinates, ycoordinates, incidenceangle, and image arrays are:

xcoordinates   DBLARR(220)          Min = 362567.18,   Max = 472067.18
ycoordinates   DBLARR(254)          Min = 3789080.0,   Max = 3915580.0
incidenceangle FLTARR(220,254,21)   Min = 19.2872,     Max = 29.8078
image          FLTARR(4545,6220)    Min = 1.74829e-10, Max = Inf

At xcoordinates[i] and ycoordinates[j], I can look up the incidence angle at incidenceangle[i,j,k], where k is determined by the DEM. However, I don't know how to associate this position and incidence angle with a particular set of indices in the image array. For example, if I am interested in the value at image[2000,3000], to which indices in the xcoordinates, ycoordinates, and incidenceangle arrays does it correspond?
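One way to make that association, assuming the full-resolution GCOV image grid also carries its own coordinate vectors in the same map projection as the cube (the numbers below are toy values, and in the real product the cube extends beyond the image footprint), is to convert the image indices to map coordinates and then locate those coordinates in the cube's coarse grid:

```python
# Sketch under stated assumptions: image[ix, iy] -> map (x, y) -> cube (i, j).
# Axis extents are illustrative; real code would read the coordinate
# vectors of both grids from the HDF5 product.
import numpy as np

# Toy full-resolution image axes (same projection/units as the cube axes)
x_img = np.linspace(362567.18, 472067.18, 4545)
y_img = np.linspace(3789080.0, 3915580.0, 6220)

# Toy coarse cube axes (500 m posting)
x_cube = np.linspace(362567.18, 472067.18, 220)
y_cube = np.linspace(3789080.0, 3915580.0, 254)

ix, iy = 2000, 3000            # image pixel of interest
x, y = x_img[ix], y_img[iy]    # its map coordinates

# Fractional position in the coarse cube grid (use for interpolation),
# or round to the nearest cube indices if nearest-neighbour is enough
fi = (x - x_cube[0]) / (x_cube[1] - x_cube[0])
fj = (y - y_cube[0]) / (y_cube[1] - y_cube[0])
i_near, j_near = int(round(fi)), int(round(fj))
print(i_near, j_near)
```

The key point is that the link between the two arrays is the shared map coordinate system, not a fixed index ratio, so interpolating at (fi, fj) is generally preferable to nearest-neighbour lookup.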

