mcr222 closed this issue 8 years ago.
In function H_k, the input k is an integer >= 3. So this would happen only when err_k is 0, which means that our estimated solution at that step happened to have no error. I found that this can happen on certain problems and initial conditions when I was testing the code and developing the test cases.
Understood.
But digging deeper, I am seeing that in these cases this returns h_new = inf, and then the line h = min(h_new, t_max - t_curr) chooses the new step as t_max - t_curr, which means we will take only one more step (which doesn't seem right if t_curr is still far from t_max).
The test cases I am seeing with this problem always hit it on the last step (t_curr = t_max), so it doesn't affect the solution (and the tests pass). Maybe this should be treated more carefully.
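To make the failure mode concrete, here is a minimal sketch of what happens (the controller is the one quoted at the end of this thread; the values of h, k, t_curr and t_max are made up for illustration):

```python
import numpy as np

# Step-size controller quoted at the end of this thread; it divides by err_k.
H_k = lambda h, k, err_k: h * 0.94 * (0.65 / err_k) ** (1 / (2 * k - 1))

t_curr, t_max, h = 1.0, 10.0, 0.1   # illustrative values
err_k = np.float64(0.0)             # the step happened to have no error

h_new = H_k(h, 4, err_k)            # inf, plus the divide-by-zero RuntimeWarning
h = min(h_new, t_max - t_curr)      # t_max - t_curr, i.e. one single final step
print(h_new, h)                     # inf 9.0
```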
I think the usual approach is to add a small epsilon to the denominator. The trouble is that an appropriate size for epsilon seems to be problem dependent. There is probably something in H&W (Hairer & Wanner) on this.
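For reference, the epsilon guard would look roughly like this (the value of eps is just a placeholder; as said, an appropriate size is probably problem dependent):

```python
eps = 1e-15  # placeholder value; the right size is probably problem dependent

# Same controller, but with err_k bounded away from zero.
H_k = lambda h, k, err_k: h * 0.94 * (0.65 / max(err_k, eps)) ** (1 / (2 * k - 1))

print(H_k(0.1, 4, 0.0))  # large but finite, instead of inf
```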
In my branch I left the current approach: it takes the step as big as possible (up to the final time), which is like taking a very small epsilon.
We can discuss further.
A related issue also shows up sometimes: RuntimeWarning: divide by zero encountered in double_scalars, raised at the line h_int = H*((1/err_int)**(1/(u+4)))
This one is raised when err_int = 0 (the interpolation error). It doesn't affect the execution: if err_int == 0, then h_int is not used.
Solved with the last commit to the mcr-implicit branch.
Added a factor that limits the relative increase or decrease of the new estimated step (relative to the last step).
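Roughly, the limiting factor works like this (the bounds 0.2 and 5.0 below are illustrative, not necessarily the ones in the commit):

```python
def limit_step(h_new, h_old, fac_min=0.2, fac_max=5.0):
    # Keep the new step within [fac_min * h_old, fac_max * h_old], so an
    # err_k of 0 (h_new = inf) can no longer blow up the next step.
    return min(max(h_new, fac_min * h_old), fac_max * h_old)

print(limit_step(float("inf"), 0.1))  # 0.5, i.e. at most fac_max * h_old
```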
In the 2nd test in the regression tests, although the test passes, this warning is shown:
divide by zero encountered in double_scalars, in the line: H_k = lambda h, k, err_k: h*0.94*(0.65/err_k)**(1/(2*k-1))