Open FuncUny opened 12 months ago
Hi, are you using an evenly distributed vertical grid? Currently xinvert only works on evenly distributed grids.
For large arrays, you need chunking to help you. Which problem are you looking at? Usually the data are chunked along time, but it also depends on your specific case.
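To illustrate what chunking along time means, here is a minimal conceptual sketch in plain numpy (the array shape is hypothetical; in practice you would use xarray/dask chunks, e.g. `da.chunk({'time': 1})`):

```python
import numpy as np

# Hypothetical 4-D field with dims (time, lev, lat, lon).
# Chunking along time means each piece holds the full spatial
# field for a subset of time steps, so pieces are independent.
field = np.zeros((12, 20, 50, 40))

# Split into 4 chunks of 3 time steps each; the inversion can
# then be run chunk by chunk (or in parallel).
chunks = np.array_split(field, 4, axis=0)
print([c.shape for c in chunks])
```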
That makes sense! I've tried an evenly distributed vertical grid with 20 levels (1000 m in total), and it gives a nice result. Thank you for your patience. BTW, is 'dz' important in the calculation? Because it seems that keeping an evenly distributed grid matters more: both evenly distributed cases (200 or 20 levels over 1000 m in total) show good accuracy.
> BTW, is 'dz' important in the calculation?
That's right: dx, dy, and dz are all important in the calculation of derivatives along each direction. If the grid points are not evenly distributed, the results are not correct. Right now it is not easy to add support for unevenly distributed grids, but it is possible in a future release.
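A small sketch of why this matters: a derivative computed with a single spacing taken from the first grid interval goes wrong as soon as the grid is uneven, while a scheme that uses the true coordinates stays accurate. (This uses `np.gradient` for illustration only; it is not xinvert's internal scheme.)

```python
import numpy as np

# f(z) = z**2, so df/dz = 2z exactly.
z_uneven = np.array([0.0, 1.0, 3.0, 6.0, 10.0])
f = z_uneven**2

# A solver that assumes uniform spacing would take one dz from the first pair:
dz = z_uneven[1] - z_uneven[0]
df_assumed = np.gradient(f, dz)        # treats the grid as evenly spaced
df_correct = np.gradient(f, z_uneven)  # accounts for the true spacing

print(df_assumed)  # wrong away from the first interval
print(df_correct)  # matches 2*z at interior points (exact for quadratics)
```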
If xinvert requires an equally spaced vertical grid, then perhaps an assert statement is in order, and perhaps an error with an informative message to the user if the assertion fails?
Hi @navidcy, that's indeed necessary. I am thinking about it and will fix this soon.
Something like this would suffice:

```python
# compute dz from the first grid interval
dz = z[1] - z[0]
# ensure that all intervals equal dz
assert np.max(np.abs(np.diff(z) - dz)) < 1e-14, (
    "provided z coordinate must be uniformly spaced"
)
```
You're right. I am thinking of using `np.isclose` for this. Also, lat/lon (in degrees) should be checked too.
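A sketch of such a check, using `np.allclose` and reusable for any coordinate (the function name `check_uniform` is hypothetical, not part of xinvert):

```python
import numpy as np

def check_uniform(coord, name):
    """Raise a ValueError if a 1-D coordinate is not uniformly spaced."""
    coord = np.asarray(coord, dtype=float)
    diffs = np.diff(coord)
    if not np.allclose(diffs, diffs[0]):
        raise ValueError(f"provided {name} coordinate must be uniformly spaced")

# Uniform grids pass silently; uneven ones raise.
check_uniform(np.linspace(0.0, 1000.0, 21), "z")    # OK
check_uniform(np.arange(-90.0, 90.1, 2.0), "lat")   # OK
try:
    check_uniform(np.array([0.0, 10.0, 30.0]), "z")
except ValueError as e:
    print(e)
```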
Just curious, what's the difference between `if ...: raise Exception('...')` and `assert`? I only use `assert` in pytest.
Oh, I'm not sure... yeah, `if ...: raise Exception('...')` would also do the job!
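One practical difference worth noting: `assert` statements are stripped when Python runs with the `-O` flag, so an explicit `raise` is safer for user-facing validation. A minimal sketch (function names are hypothetical):

```python
def validate_with_assert(z_is_uniform):
    # Vanishes entirely under `python -O` -- the check may never run.
    assert z_is_uniform, "z must be uniformly spaced"

def validate_with_raise(z_is_uniform):
    # Always enforced, regardless of optimization flags.
    if not z_is_uniform:
        raise ValueError("z must be uniformly spaced")

validate_with_raise(True)  # passes silently
try:
    validate_with_raise(False)
except ValueError as e:
    print(e)
```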
Hi,
I'm trying to use the method in a domain with 19 depth levels (1000 m in total), and I found it gives a poor inversion result. However, the result is much better after I set my vertical levels to match the data used in the oceanic example (200 levels for 1000 m), which convinced me that 'dz' matters a lot in the calculation. But with that improvement comes a large data array ([360, 200, 1700, 650]) that is very hard to process. So I'd like to know, from your perspective, what value of 'dz' (or number of depth levels) would allow me to reduce the size of the data while maintaining good accuracy?
Thanks a lot!