sssangha opened this issue 4 years ago
@dbekaert, I'm posting this after replicating the bug directly through ISCE. I will post a PR on the ARIA-tools end shortly with a temporary fix.
Thanks for looking into this. Tagging @hfattahi and @piyushrpt for their insights.
@sssangha I had a brief look into the differences between Baseline.py and the topsApp runComputeBaseline.py. The equations are the same, with a minor difference in how the target on the ground is calculated (a different function is used). https://github.com/isce-framework/isce2/blob/579dfab3024658422162a3481109cb92fa90e3a7/components/isceobj/TopsProc/runComputeBaseline.py#L71
Did you confirm that this example runs fine with topsApp and only has an issue for stackSentinel?
@dbekaert yes, I get the same sign change through topsApp. I will paste the relevant section from the topsProc.xml file, which was generated by runComputeBaseline.py:
```xml
<baseline>
<IW-1_Number_of_bursts_in_master>5</IW-1_Number_of_bursts_in_master>
<IW-1_First_common_burst_in_master>0</IW-1_First_common_burst_in_master>
<IW-1_Last_common_burst_in_master>5</IW-1_Last_common_burst_in_master>
<IW-1_Number_of_bursts_in_slave>9</IW-1_Number_of_bursts_in_slave>
<IW-1_First_common_burst_in_slave>4</IW-1_First_common_burst_in_slave>
<IW-1_Last_common_burst_in_slave>9</IW-1_Last_common_burst_in_slave>
<IW-1_Number_of_common_bursts>5</IW-1_Number_of_common_bursts>
<IW-1_Bpar_at_midrange_for_first_common_burst>8.301006746969616</IW-1_Bpar_at_midrange_for_first_common_burst>
<IW-1_Bperp_at_midrange_for_first_common_burst>-6.217928376574782</IW-1_Bperp_at_midrange_for_first_common_burst>
<IW-1_Bpar_at_midrange_for_last_common_burst>7.743289980281157</IW-1_Bpar_at_midrange_for_last_common_burst>
<IW-1_Bperp_at_midrange_for_last_common_burst>6.165073711420777</IW-1_Bperp_at_midrange_for_last_common_burst>
<IW-2_Number_of_bursts_in_master>6</IW-2_Number_of_bursts_in_master>
<IW-2_First_common_burst_in_master>0</IW-2_First_common_burst_in_master>
<IW-2_Last_common_burst_in_master>6</IW-2_Last_common_burst_in_master>
<IW-2_Number_of_bursts_in_slave>8</IW-2_Number_of_bursts_in_slave>
<IW-2_First_common_burst_in_slave>2</IW-2_First_common_burst_in_slave>
<IW-2_Last_common_burst_in_slave>8</IW-2_Last_common_burst_in_slave>
<IW-2_Number_of_common_bursts>6</IW-2_Number_of_common_bursts>
<IW-2_Bpar_at_midrange_for_first_common_burst>8.449771270248977</IW-2_Bpar_at_midrange_for_first_common_burst>
<IW-2_Bperp_at_midrange_for_first_common_burst>-7.40172674659679</IW-2_Bperp_at_midrange_for_first_common_burst>
<IW-2_Bpar_at_midrange_for_last_common_burst>7.661899139168533</IW-2_Bpar_at_midrange_for_last_common_burst>
<IW-2_Bperp_at_midrange_for_last_common_burst>7.443353337187633</IW-2_Bperp_at_midrange_for_last_common_burst>
<IW-3_Number_of_bursts_in_master>6</IW-3_Number_of_bursts_in_master>
<IW-3_First_common_burst_in_master>0</IW-3_First_common_burst_in_master>
<IW-3_Last_common_burst_in_master>6</IW-3_Last_common_burst_in_master>
<IW-3_Number_of_bursts_in_slave>8</IW-3_Number_of_bursts_in_slave>
<IW-3_First_common_burst_in_slave>2</IW-3_First_common_burst_in_slave>
<IW-3_Last_common_burst_in_slave>8</IW-3_Last_common_burst_in_slave>
<IW-3_Number_of_common_bursts>6</IW-3_Number_of_common_bursts>
<IW-3_Bpar_at_midrange_for_first_common_burst>8.39632807623885</IW-3_Bpar_at_midrange_for_first_common_burst>
<IW-3_Bperp_at_midrange_for_first_common_burst>8.593347356961905</IW-3_Bperp_at_midrange_for_first_common_burst>
<IW-3_Bpar_at_midrange_for_last_common_burst>7.535711174771366</IW-3_Bpar_at_midrange_for_last_common_burst>
<IW-3_Bperp_at_midrange_for_last_common_burst>8.71268069614796</IW-3_Bperp_at_midrange_for_last_common_burst>
</baseline>
```
Just to confirm: are the two pixels 7.3 km apart? When I compute np.linalg.norm(targxyz - mxyz), which should be the slant range, the difference between the two adjacent pixels from the numbers provided above is about 7.3 km.
Ugh ... the problem could be in the use of the method LLH() - this is heritage code that is not maintained in isceobj.Util.geo.ellipsoid. It should be replaced by xyz_to_llh or llh_to_xyz - I think the error could be in the transformation of targetxyz. All the modules where numbers get used for processing use xyz_to_llh / llh_to_xyz; baselines are only reported and I guess never got updated.
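For reference, this is roughly what the maintained conversions look like - a minimal sketch assuming the usual Planet(pname='Earth').ellipsoid construction and degree/meter return conventions (both assumptions on my part, not part of the original comment):

```python
from isceobj.Planet.Planet import Planet

# Maintained WGS-84 ellipsoid object; these conversions are the ones used by
# the processing modules, unlike the heritage LLH() method.
refElp = Planet(pname='Earth').ellipsoid

targxyz = [13843.735, 5020808.956, 3920247.757]  # ECEF position of a ground target (m)
targllh = refElp.xyz_to_llh(targxyz)             # assumed to return [lat (deg), lon (deg), hgt (m)]
targxyz_back = refElp.llh_to_xyz(targllh)        # round trip back to ECEF
```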
Can you also include the master and slave folders with IW{}.xml files only?
@piyushrpt here are the zip files containing both sets of files. Thanks! master.zip slave.zip
Looks like the scenes are from different datasets than the ones above, so I am not able to reproduce the plots above.
The equations currently don't account for the along-track shift. Bpar and Bperp are 2D concepts, while the imaging geometry is in 3D. There are two solutions:

Solution 1: This keeps the implementation general and should work in both zero Doppler and native Doppler systems (if the Doppler is provided as an additional parameter to rdr2geo and geo2rdr in the future). Replace sxyz with:
```python
mvelunit = mvel / np.linalg.norm(mvel)
sxyz = sxyz - np.dot(sxyz - mxyz, mvelunit) * mvelunit
```
This ensures the baseline has no along-track component, reducing it to the 2D diagrams we see in papers.
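To make the effect concrete, here is a minimal sketch of how the Solution 1 projection could slot into a Bpar/Bperp computation; the variable names follow the Baseline.py debug prints quoted elsewhere in this thread, while the helper itself (baseline_components) is an illustration, not the actual patch:

```python
import numpy as np

def baseline_components(targxyz, mxyz, sxyz, mvel):
    """Sketch: Bpar/Bperp with the along-track part of the baseline removed."""
    targxyz, mxyz, sxyz, mvel = map(np.asarray, (targxyz, mxyz, sxyz, mvel))

    # Solution 1: project the slave position so the baseline vector has no
    # component along the master velocity (along-track) direction.
    mvelunit = mvel / np.linalg.norm(mvel)
    sxyz = sxyz - np.dot(sxyz - mxyz, mvelunit) * mvelunit

    los = targxyz - mxyz                  # master-to-target line of sight
    rng = np.linalg.norm(los)             # slant range
    baseline = sxyz - mxyz                # baseline vector, now effectively 2D

    bpar = np.dot(baseline, los) / rng    # component along the line of sight
    direction = np.sign(np.dot(np.cross(los, baseline), mvel))
    bperp = direction * np.sqrt(max(np.dot(baseline, baseline) - bpar**2, 0.0))
    return bpar, bperp
```

With the projection in place, the along-track offset between the two acquisitions no longer inflates the baseline magnitude, so a sign change can only occur where Bperp actually passes through zero.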
Solution 2: This only works for zero Doppler geometry, since the Doppler curve reduces to a plane. You can use a single slave orbit position for the entire range line, found as the closest point between the two orbits:
```python
mllh = refElp.xyz_to_llh(mxyz)
stime, srng = sOrb.geo2rdr(mllh)
ssv = sOrb.interpolate(stime, method='hermite')
sxyz = np.array(ssv.GetPosition())
```
This way all three points - mxyz, sxyz, and the target - lie on the zero Doppler plane, reducing the problem to a 2D representation. You can reuse the same sxyz for all slant ranges.
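To illustrate the reuse, a short sketch of the per-range-line loop; targets_xyz (the ground targets for each slant range bin) and the baseline_components helper sketched above are placeholders, not names from the codebase:

```python
import numpy as np

# sxyz is computed once per range line as above; it already lies in the master
# zero-Doppler plane, so the same value is valid for every slant-range bin.
bperp_line = np.zeros(len(targets_xyz))
for jj, targxyz in enumerate(targets_xyz):
    # No per-target slave orbit interpolation is needed here.
    _, bperp_line[jj] = baseline_components(targxyz, mxyz, sxyz, mvel)
```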
A quick check with the annotation files you sent shows that both agree to sub-mm level for zero Doppler. If you can verify that the fix works with the troublesome data, that would be great.
@piyushrpt I will test out your suggestions, but I realized I had mistakenly attached the wrong files from a separate, unrelated run; I apologize for the confusion.
Attached are the requested files for this particular IFG. master.zip slave.zip
Thanks for the quick response. The step appears to be gone now, with the baseline transitioning smoothly from -1.x to 1.x. There was a zero crossing of the baseline in the scene, which allowed us to catch the bug and the offset due to the along-track shift.
I am just seeing this now. Am I understanding correctly that this bug only affects the Baseline.py and topsApp runComputeBaseline.py calculations, not the actual interferogram processing? Then the main effect is on downstream applications like the ARIA standard product pipeline, which includes the baseline grid, and on other people who use the output of the topsStack processor.
Correct, no impact to the IFG workflow. To be checked whether this is also an issue for other SAR sensors; perhaps @piyushrpt can clarify?
At least it impacts: topsApp, stackSentinel, and ARIA derived products.
So it may also affect the stripmapApp and stripmapStack baseline outputs; perhaps @hfattahi can clarify that. I think alos2App may use a different baseline method, since @CunrenLiang wrote it separately, and it does not write out a baseline grid anyway.
Reached out to @CunrenLiang as well, since we also leverage a baseline grid for the ARIA processing to mimic NISAR.
As an update, "Solution 1" proposed by @piyushrpt seems to fix the major metadata issues I've encountered so far, but one more subtle artifact persists.
My post-processing strategy for fixing these issues in the ARIA standard product is addressed on the ARIA-tools GitHub page (https://github.com/aria-tools/ARIA-tools/pull/170), but I will share the results of the proposed solutions for the same case studies run through topsApp directly (master/slave folders for each are attached below):
(1) Large offset sign-flip in perpendicular baseline file across track 121 in Tibet. Resolved after reprocessing with solution 1. In this case, only the perpendicular baseline file was originally affected. T121_xmls.zip
(2) Large offsets across frame boundaries along track 87 in Saudi Arabia. Resolved after reprocessing with solution 1. In this case, the artifact was restricted to just the azimuth angle layer, as the others were already stable. T087_xmls.zip
(3) Subtle artifacts in track 144 in the Californian Central Valley. These still persist after reprocessing with solution 1; the output seems identical. In this case, the artifact was pervasive across all the metadata fields (e.g. baseline grids, azimuth angle, incidence angle, look angle). T144_xmls.zip
I could issue a PR with these fixes for now, but I am curious to see how minor artifacts such as those seen in (3) can be resolved. It's hard for me to decipher their origin, but the fact that they are consistent amongst all metadata layers suggests to me they may still be related to the calculated satellite positions. @dbekaert let me know if you have any thoughts regarding not only these findings, but also my implementation in ARIA-tools.
@sssangha thanks for testing; it looks like solution 1 is the way to go. If possible, it would be nice if the subtle changes could be addressed in the same PR so that we only need to increment the ARIA product version once. @piyushrpt any thoughts on that?
@sssangha Will have a look at the post-fix solution on ARIA-tools git repo.
@piyushrpt per the recommendation in PR #141, I generated issue ticket #142 for the other workflows. This includes stripmap, but I also added ScanSAR (TBC by @CunrenLiang).
@piyushrpt @hfattahi before closing this ticket, do you have any idea about the subtle variation that @sssangha still sees (example 3 above)?
I have documented a bug whereby there is an abrupt sign-change when generating perpendicular baseline grids. I have generated these products using the "baselineGrid.py" script packaged through stackSentinel, which is essentially a wrapper of this ISCE script "components/zerodop/baseline/Baseline.py". I have attached "baselineGrid.py" as a reference here within a zip file.
The master scene used was "S1B_IW_SLC__1SDV_20181210T000734_20181210T000752_013972_019ECE_FB9D.zip", and the slave scenes used were "S1B_IW_SLC__1SDV_20171203T000703_20171203T000730_008547_00F2A8_F803.zip" and "S1B_IW_SLC__1SDV_20171203T000727_20171203T000745_008547_00F2A8_DCE6.zip".
Within this zip file the extracted baseline grids may also be found, but I will illustrate the problem below:
I added some print statements to track changes in the variables involved in the estimate of the direction (i.e. "direction = np.sign(np.dot(np.cross(targxyz - mxyz, sxyz - mxyz), mvel))"), and here is an example of two sets of variables corresponding to adjacent pixels along range for which there was a sign flip:
" ############################################################################################ direction -1.0 targxyz [ 13843.73542219 5020808.95559147 3920247.75736679] mxyz [-431237.83432596 5613389.08465543 4277645.57748178] sxyz [-431234.7701128 5613379.85014268 4277646.91694975] mvel [ 1404.18442248 4599.24232307 -5877.04480614] Bperp[jj] -6.0630693 ############################################################################################ direction 1.0 targxyz [ 26673.98226566 5019284.63081567 3922120.34709414] mxyz [-431237.83432596 5613389.08465543 4277645.57748178] sxyz [-431234.80521498 5613379.73516128 4277647.06387625] mvel [ 1404.18442248 4599.24232307 -5877.04480614] Bperp[jj] 6.2536173 "
After looking carefully across this sign flip in the field, my impression is that the field should be smoothly varying; i.e., if I wrote out the absolute value of all pixels in the field, there would no longer be an abrupt jump. Thus, I believe there could be an issue in the baseline equation, but I haven't been able to pinpoint a bug.
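As a small sanity check, the printed values above can be plugged back into the slant-range and direction expressions directly (a self-contained snippet using only the numbers quoted above; it is meant to reproduce the sign flip, not the full Baseline.py run):

```python
import numpy as np

mxyz = np.array([-431237.83432596, 5613389.08465543, 4277645.57748178])
mvel = np.array([1404.18442248, 4599.24232307, -5877.04480614])

# The two adjacent range pixels that straddle the sign flip
targ1 = np.array([13843.73542219, 5020808.95559147, 3920247.75736679])
sxyz1 = np.array([-431234.7701128, 5613379.85014268, 4277646.91694975])
targ2 = np.array([26673.98226566, 5019284.63081567, 3922120.34709414])
sxyz2 = np.array([-431234.80521498, 5613379.73516128, 4277647.06387625])

# Slant ranges differ by roughly 7.3 km, matching the earlier comment
print(np.linalg.norm(targ2 - mxyz) - np.linalg.norm(targ1 - mxyz))

# The direction term from Baseline.py flips from -1 to +1 between the two pixels
for targ, sxyz in ((targ1, sxyz1), (targ2, sxyz2)):
    direction = np.sign(np.dot(np.cross(targ - mxyz, sxyz - mxyz), mvel))
    print(direction)
```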
Please let me know if any additional information or clarification is needed to help diagnose the problem. bPerp_bug_track121.zip