nansencenter / DAPPER

Data Assimilation with Python: a Package for Experimental Research
https://nansencenter.github.io/DAPPER
MIT License

iEnKS implementation #18

Closed: yumengch closed this issue 3 years ago

yumengch commented 3 years ago

I was looking at the code for iEnKS in DAPPER and I realised it is a bit different from the Bocquet and Sakov 2013 (BS13) paper.

Within the Gauss-Newton iteration, BS13 does not update the ensemble anomalies, whereas DAPPER updates them at each iteration. For SDA, the observation anomalies are 'de-conditioned', but this de-conditioning is not applied for MDA. This treatment seems intuitively reasonable, since it propagates the assimilation of the new observation through the whole DA window, but do you know of any literature/reference that backs it up?

Thanks!

patnr commented 3 years ago

> I was looking at the code for iEnKS in DAPPER and I realised it is a bit different from the Bocquet and Sakov 2013 (BS13) paper.
>
> Within the Gauss-Newton iteration, BS13 does not update the ensemble anomalies, whereas DAPPER updates them at each iteration.

Yes, I'm aware, as you can see in the comments here. Good spot, though. It makes very little difference in my benchmarks, so I have yet to note it in the docstring. I chose this way because I think it is more rigorous to update the entire ensemble (Bocquet views the method more as a "var"-type method, which is why he prefers not to update the anomalies that way), and it is a lot more natural for the stochastic (non-sqrt) formulations.
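To make the distinction concrete, here is a minimal, self-contained sketch of an ensemble-space Gauss-Newton loop for a single DA window. It is not DAPPER's actual code: the toy model, observation operator, dimensions, and variable names are all made up for illustration. `update_anomalies=False` mimics keeping the prior anomalies (bundle-scaled, BS13-style), while `True` propagates the current transformed ensemble and then refers the observation anomalies back to the prior ones via `Tinv`:

```python
import numpy as np

rng = np.random.default_rng(42)
Nx, Ny, N = 3, 2, 8                          # state dim, obs dim, ensemble size (toy)
step = lambda x: 0.9 * x + 0.1 * np.sin(x)   # toy model over the window (assumption)
obs  = lambda x: x[:Ny]                      # toy observation operator (assumption)
Rinv = np.eye(Ny)                            # obs error precision (assumption)

E0 = rng.normal(size=(Nx, N))                # prior ensemble at the window start
x0 = E0.mean(axis=1, keepdims=True)
A0 = E0 - x0                                 # prior anomalies (columns)
y  = np.array([[0.7], [-0.3]])               # observation at the window end (toy)

def ienks_like(update_anomalies, nIter=8, eps=1e-4):
    w, T, Tinv = np.zeros((N, 1)), np.eye(N), np.eye(N)
    for _ in range(nIter):
        if update_anomalies:
            A = A0 @ T                       # propagate the current (updated) anomalies
        else:
            A = eps * A0                     # BS13-like: prior anomalies, bundle-scaled
        E  = x0 + A0 @ w + A                 # ensemble about the current iterate
        Eo = np.column_stack([obs(step(E[:, i])) for i in range(N)])
        Y  = Eo - Eo.mean(axis=1, keepdims=True)
        Y  = Y @ Tinv if update_anomalies else Y / eps   # sensitivities w.r.t. A0
        dy = y - Eo.mean(axis=1, keepdims=True)
        rhs  = Y.T @ Rinv @ dy - (N - 1) * w             # minus the cost gradient
        hess = (N - 1) * np.eye(N) + Y.T @ Rinv @ Y      # Gauss-Newton Hessian
        w = w + np.linalg.solve(hess, rhs)
        U, s, _ = np.linalg.svd(hess)                    # hess is SPD
        T    = U @ np.diag(np.sqrt((N - 1) / s)) @ U.T   # posterior sqrt transform
        Tinv = U @ np.diag(np.sqrt(s / (N - 1))) @ U.T
    return x0 + A0 @ w, A0 @ T               # posterior mean and anomalies

xa_bs13, _        = ienks_like(update_anomalies=False)
xa_dapper_like, _ = ienks_like(update_anomalies=True)
print(np.abs(xa_bs13 - xa_dapper_like).max())  # typically small, i.e. "little difference"
```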

> For SDA, the observation anomalies are 'de-conditioned', but this de-conditioning is not applied for MDA. This treatment seems intuitively reasonable, since it propagates the assimilation of the new observation through the whole DA window, but do you know of any literature/reference that backs it up?

I'm not sure what exactly you're pointing at here... Is it related to the previous point? Anyway, there is a discussion around de-conditioning in my paper.

yumengch commented 3 years ago

Hi Patrick

I vaguely feel the current implementation might not actually update the ensemble anomalies.

The Tinv here seems to de-update the ensemble anomalies, while Tinv is kept as the identity in the MDA implementation.
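As a toy linear check of what I mean (my own variable names, not DAPPER's code): if the propagated anomalies are `A0 @ T`, the resulting observation anomalies are conditioned on `T`, and right-multiplying by `Tinv` expresses them with respect to the prior anomalies `A0` again:

```python
import numpy as np

rng = np.random.default_rng(1)
Nx, Ny, N = 4, 2, 6
G  = rng.normal(size=(Ny, Nx))                  # a linear "model + obs operator" (toy)
A0 = rng.normal(size=(Nx, N))
A0 -= A0.mean(axis=1, keepdims=True)            # prior anomalies
T  = np.eye(N) + 0.1 * rng.normal(size=(N, N))  # some invertible transform
Tinv = np.linalg.inv(T)

Y_cond = G @ (A0 @ T)                        # obs anomalies of the transformed ensemble
print(np.allclose(Y_cond @ Tinv, G @ A0))    # True: back to sensitivities w.r.t. A0
```

So it looks to me as if the effect of updating the anomalies is partly undone by Tinv in the SDA branch, whereas MDA keeps Tinv as the identity. Is that right?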

yumengch commented 3 years ago

I just noticed the paper. I will have a look then. Thanks!

yumengch commented 3 years ago

Hi Patrick

Thanks for your comments.

I hope I understand these correctly.

Thanks.

patnr commented 3 years ago
> • DAPPER blends the bundle method (EPS) and the transform/projection method (T) to approximate the Jacobian of the observation operator.

Yes.

Yes, it uses annealing. The original paper for it is this one, which I have now cited in the docstring (dev1 branch). Yes, the paper you cite builds on that. It's the same idea as in BS14, but they use it "on top of" Gauss-Newton, whereas DAPPER only has code to do it instead of Gauss-Newton.

> • In annealing/DAPPER, one observation at time step k, y(k), is progressively assimilated within one fixed DA window (0:L). At each step/iteration, the state vector x(0:L) assimilates a part of y(k).

Yes. The DAPPER implementation only does what BS14 calls SDA: x(k-L) assimilates y(k) fully.
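For intuition about why the annealing weights must sum to one: in the linear-Gaussian case, assimilating the same observation several times with inflated error covariances R/beta_j (with sum(beta_j) = 1) is exactly equivalent to one full (SDA) assimilation. A toy check with plain Kalman updates (not DAPPER code, all numbers made up):

```python
import numpy as np

H = np.array([[1.0, 0.0]])               # toy obs operator (assumption)
R = np.array([[0.5]])                    # toy obs error covariance
B = np.array([[1.0, 0.3], [0.3, 2.0]])   # toy prior covariance
x = np.array([0.0, 0.0])
y = np.array([1.2])

def kf_update(x, P, y, H, R):
    # Standard Kalman analysis step.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (y - H @ x), (np.eye(len(x)) - K @ H) @ P

# SDA: assimilate y once, in full.
x_sda, P_sda = kf_update(x, B, y, H, R)

# MDA-style annealing: several partial assimilations of the same y.
betas = [0.25, 0.25, 0.5]                # must sum to 1
x_mda, P_mda = x.copy(), B.copy()
for b in betas:
    x_mda, P_mda = kf_update(x_mda, P_mda, y, H, R / b)

print(np.allclose(x_sda, x_mda), np.allclose(P_sda, P_mda))  # True True
```

In the nonlinear/ensemble setting the equivalence is only approximate, which is part of why the annealing can help with nonlinearity.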

> • In BS14, the DA window moves forward. For example, in DA window (0:L), one observation at time step k, y(k), is partly (say, half) assimilated. Then, in DA window (1:L+1), the other half of y(k) is assimilated. Each part of y(k) is assimilated by a different DAW.

Yes, that is correct. And DAPPER does not yet support it, but it has been added as a TODO item in the dev1 branch. If you wish, you could try to tackle it; let me know. I suspect it won't be very easy, but at least the tests for the iEnKS are in pretty good shape, which should help.
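Purely for illustration of the bookkeeping involved (not an implementation; the uniform weights and the window indexing convention are assumptions): with lag L and shift 1, each y(k) lies inside L consecutive DAWs, and its weights over those windows must sum to 1. With L = 2 and weights of one half each, this reduces to the example above.

```python
L = 3                                    # lag (toy choice)
n_obs = 8
schedule = {}                            # obs time k -> list of ((t0, t0+L), weight)
for k in range(L, n_obs):
    daws = [(k - L + s, k + s) for s in range(L)]   # the L windows (t0:t0+L] containing k
    schedule[k] = [(daw, 1.0 / L) for daw in daws]  # uniform split; other choices possible

for k, parts in schedule.items():
    total = sum(w for _, w in parts)
    pieces = ", ".join(f"{w:.2f} in DAW ({a}:{b})" for (a, b), w in parts)
    print(f"y({k}): {pieces} | weights sum to {total:.2f}")
```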