Per-interferogram masking occurs in previous steps, which reduces the total number of usable observations in the time series step.
The configuration file has a parameter (`ts_pthr`) that sets the user-defined threshold for the minimum number of valid observations required before a time series and rate calculation is performed.
Based on the resulting `linear_samples.tif`, we are seeing areas with fewer observations than the set threshold.
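For reference, the threshold is set with a line like the following in the PyRate configuration file (assuming the usual `key: value` config syntax; the comment is mine):

```
# minimum number of valid observations per pixel for time series inversion
ts_pthr: 10
```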
Expected behaviour
If we set `ts_pthr` to 10, then we should see a velocity map that includes results only for those pixels with 10 or more observations.
Summary of findings
The input parameter `ts_pthr` is being used correctly in the step that calculates the time series.
However, more observations are then removed beyond this threshold when the linear rate is calculated, because rank-deficient rows are removed before the linear rate calculation.
Hence we can still end up with a velocity calculation based on as few as 2 observations, even if the user-set threshold is higher than that.
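The effect can be sketched with a toy example: a pixel whose observation count passes the `ts_pthr` check, but whose design matrix contains redundant rows, so fewer samples survive to the rate step. The matrix values and the greedy row filter below are illustrative stand-ins, not PyRate's actual rank-deficiency logic:

```python
import numpy as np

TS_PTHR = 4  # user-set threshold, as in the data example below

# Hypothetical per-pixel design matrix: 4 observations, 2 parameters.
# Rows 1 and 2 are multiples of row 0, so the system is redundant.
B = np.array([
    [1.0, 1.0],
    [2.0, 2.0],
    [3.0, 3.0],
    [0.0, 1.0],
])

nobs = B.shape[0]
assert nobs >= TS_PTHR  # the ts_pthr check in the time series step passes

# Greedily keep only rows that increase the matrix rank -- a crude
# stand-in for the rank-deficient-row removal, not PyRate's code.
kept = []
for i in range(nobs):
    candidate = B[kept + [i], :]
    if np.linalg.matrix_rank(candidate) > len(kept):
        kept.append(i)

nsamp = len(kept)
print(nsamp)  # 2 -- below ts_pthr, yet a rate would still be computed
```

Four observations pass the threshold check, but only two samples survive to the rate calculation, matching the behaviour reported below.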
Where is this happening?
_These code links are from the `timeseries.py` script_
Line 180 shows the if condition that uses the user-set threshold to decide whether to calculate the time series.
Line 191 then calls a function to remove rank-deficient rows. https://github.com/GeoscienceAustralia/PyRate/blob/dde59c67bc8eb637012dfa5a80e3a535896c8ec2/pyrate/core/timeseries.py#L173-L212
Further along, when the linear rate is being calculated, the number of samples is returned from line 322. This is the final number of samples used after the previous steps, and it can be below the user-set threshold. https://github.com/GeoscienceAustralia/PyRate/blob/dde59c67bc8eb637012dfa5a80e3a535896c8ec2/pyrate/core/timeseries.py#L292-L329
Data Example
_Here I am working on a small cropped data set to document the issue. There are 17 IFGs and 8 time series epochs. I have `ts_pthr` set to 4, so our time series and rates should only include pixels with 4 or more observations. But we see as few as two for some pixels._
Linear rate map:
Linear samples map:
Print out of relevant variables
The array `sel` contains the IFGs after MST selection. I printed its length per pixel after the if statement that uses the `ts_pthr` value of 4 as a threshold:
Note that there are no results below 4.
I did the same for the variable `nsamp`, which is the final number of samples per pixel:
Note that there are now pixels with sample counts below 4.
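The kind of per-pixel sample count shown above can be reproduced generically with numpy by counting non-NaN observations across the IFG stack. The array shapes and the masked pixel here are hypothetical, chosen only to mirror the 17-IFG example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical IFG stack: 17 interferograms on a 3 x 3 pixel crop,
# with some observations masked out as NaN in earlier steps.
stack = rng.normal(size=(17, 3, 3))
stack[0:15, 0, 0] = np.nan  # heavily masked pixel: only 2 valid obs remain

# Per-pixel count of valid observations, analogous to linear_samples.tif
samples = np.count_nonzero(~np.isnan(stack), axis=0)

print(samples[0, 0])  # 2
print(samples[1, 1])  # 17
```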
Remaining questions
Is the way to solve this to apply the `ts_pthr` threshold again at the rate calculation step as well?
Is the threshold operating on the total number of observations, or on the number of sequential observations? I think it's the former.
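If the answer to the first question is yes, one minimal sketch of such a fix would be to re-check the surviving sample count against the threshold in the rate step and mask the pixel when it falls short. The variable names and values below are hypothetical, not PyRate's actual code:

```python
import numpy as np

TS_PTHR = 4   # user-set threshold from the config
nsamp = 2     # final per-pixel sample count after rank-deficient rows
              # were removed (as observed in the data example above)
rate = 0.013  # hypothetical linear rate computed from those samples

# Re-apply the threshold at the rate calculation step: if the surviving
# sample count is below ts_pthr, mask the result instead of reporting it.
if nsamp < TS_PTHR:
    rate = np.nan

print(rate)  # nan -- the under-sampled pixel no longer reaches the output
```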