Closed by hannorein 1 year ago
@aryaakmal I agree that figure 6 would be the right one to look at, but there are no jumps...?!
@hannorein - certainly no discontinuities. But I imagine @dmhernan is referring to the large change in magnitude in the right panel?
I am just trying to understand the concern. I should let @dmhernan speak for himself.
Hi all, I was referring to the scatter plot in Fig. 2, the penultimate point in time, which has the largest error of all points. It may be due to a time step jump, as we've seen happen in several plots in this thread. That wouldn't be surprising, and it would clarify that figure significantly. Whether it is or not isn't important; the point is that these error jumps have caused a headache for me, and possibly for all of us, and they are potential headaches for other researchers.
I agree that Figure 2 caused us some headaches. But it's very contrived and not an interesting test from a physical point of view. The scale is arbitrary. There is no a priori reason to think that we need a precision of 100m over 1e5 days. Nothing will break down if we only have 1km or 10km precision.
OK, that logic seems sound. Unless a user demonstrates other impacts of time step jumps, I am fine assuming there are none.
While coding up the roundtrip test from the paper as a unit test, I've noticed something that doesn't seem quite right. The error seems to depend on the range in a very non-smooth way (have a look at the output below). The plot in the notebook and paper doesn't have that many data points, so I'm not sure if this is a new problem. In any case, I don't think this is correct as of right now. I suspect this is related to how the `reb_integrate` function handles the last timestep, which, depending on where the timesteps fall, might have to be much smaller than a "normal" timestep (but it could also be something completely different).
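To illustrate the suspicion above: if the integrator takes fixed steps and then clamps the final step to land exactly on the target time, the size of that last step depends discontinuously on the integration range. This is a hypothetical sketch of that effect, not the actual `reb_integrate` implementation:

```python
import math

def last_step_size(t_final, dt):
    """Size of the final step when integrating from 0 to t_final with
    fixed step dt. Hypothetical model of end-of-integration clamping,
    not the actual reb_integrate logic."""
    n_full = math.floor(t_final / dt)   # number of complete steps
    remainder = t_final - n_full * dt   # leftover interval to cover
    return remainder if remainder > 0 else dt

# The last step jumps discontinuously as t_final varies slightly:
for t in (8.9, 9.0, 9.1):
    print(t, last_step_size(t, dt=3.0))
```

A last step that swings between nearly zero and a full `dt` as the range changes would produce exactly this kind of non-smooth error behavior.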