Even in the case of well-specified models supported by good data, Gadget estimates are rarely (if ever) identical across two identical runs. This is no surprise, as it is the nature of the optimisation procedure. In fact, once we've verified that the estimates are "sufficiently" stable, seeding is a common practice for certain applications (i.e. assessment, multispecies keyruns).
However, achieving such stability is not always easy, and in some cases not possible given the available data and knowledge of the stock.
If this instability were thoroughly explored (with a sufficiently large number of runs), could it be explicitly represented (for instance with an ensemble) in the form of an uncertainty? What is the view of the Gadget community on this?
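For concreteness, a minimal sketch of what I have in mind is below. It assumes a hypothetical wrapper `run_gadget_fit` around a single optimisation run (the wrapper and the parameter names are placeholders, not part of Gadget or Rgadget): repeat the fit under many seeds and report the spread of the converged estimates as an empirical interval.

```python
# Sketch of the ensemble idea: repeat the optimisation with different seeds,
# collect the converged parameter estimates, and report their spread as an
# empirical uncertainty. `run_gadget_fit` is a hypothetical placeholder for
# however a single Gadget run is launched and its estimates read back.
import numpy as np

def run_gadget_fit(seed: int) -> dict[str, float]:
    """Placeholder: launch one optimisation run with `seed` and return the
    estimated parameters as {name: value}. Replace with your own wrapper
    around Gadget / Rgadget."""
    rng = np.random.default_rng(seed)
    # Fake estimates standing in for a real run, only so the sketch executes.
    return {"linf": 120 + rng.normal(0, 2), "k": 0.15 + rng.normal(0, 0.01)}

def ensemble_summary(n_runs: int = 100) -> dict[str, tuple[float, float, float]]:
    """Run the optimisation n_runs times and summarise each parameter by its
    median and a central 90% interval across the ensemble."""
    runs = [run_gadget_fit(seed) for seed in range(n_runs)]
    summary = {}
    for name in runs[0]:
        values = np.array([r[name] for r in runs])
        lo, med, hi = np.percentile(values, [5, 50, 95])
        summary[name] = (lo, med, hi)
    return summary

if __name__ == "__main__":
    for name, (lo, med, hi) in ensemble_summary(100).items():
        print(f"{name}: median={med:.3f}, 90% ensemble interval=({lo:.3f}, {hi:.3f})")
```

The open question, of course, is whether such an ensemble interval would be a defensible representation of uncertainty, or merely a description of the optimiser's behaviour on a difficult likelihood surface.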