Closed pancetta closed 2 years ago
Was it after one of my PRs on master, or was it always the case?
I don't know, to be honest. I came across this when I removed warnings in v5, but could not wrap my head around this one. If you have an idea, shoot. If not, we'll just ignore it.
Challenge accepted!
Ok, got it. It seems to be scipy-related, in their implementation of the BarycentricInterpolator: in the evaluation formula for the barycentric interpolation (see https://parallel-in-time.org/pySDC/pySDC/core.html#module-core.Lagrange), there is a division by zero when an evaluated point coincides with a node. You can reproduce it with the following code:
from scipy.interpolate import BarycentricInterpolator
interp = BarycentricInterpolator([0, 1], [1, 2])
yi = interp(interp.xi)
print(yi)
This still produces the correct result, but since np.sum(c, axis=-1) produces some zeros, you get a division by zero at some point. NumPy allows this, but raises a warning. The scipy implementation, on the other hand, does not care, and simply fixes those points "manually" after the comment # Now fix where x==some xi.
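The NumPy behavior described above can be checked in isolation (a minimal sketch, independent of scipy): dividing by an array that contains a zero still produces the expected inf, but a RuntimeWarning is emitted, and np.errstate(divide='ignore') silences it without changing the values.

```python
import warnings
import numpy as np

x = np.array([0.0, 1.0])

# Plain division: correct result, but NumPy emits a RuntimeWarning.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    y = 1.0 / (x - x[0])  # denominator is zero at the node itself
print(y)            # [inf  1.]
print(len(caught))  # 1 (a RuntimeWarning was recorded)

# Same division with the warning explicitly ignored: identical values, no warning.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    with np.errstate(divide='ignore'):
        y2 = 1.0 / (x - x[0])
print(len(caught))  # 0
```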
And since the interpolation_matrix_1d and restriction_matrix_1d functions in transfer_helper.py still use the BarycentricInterpolator class, the test_Q_transfer.py script that uses those functions raises the warning.
Now there are several possibilities to remove this warning:
1. replace the use of BarycentricInterpolator in the transfer_helper.py functions. The getInterpolationMatrix method does something similar to the evaluation of the interpolating polynomial, so only some adaptations are needed here to evaluate the interpolating polynomial with given values at the nodes. The warning won't appear there, since it is specifically ignored where it is expected to happen. However, this is less efficient than the scipy implementation, since it requires computing the interpolation matrix for each set of evaluation points and performing a matrix-vector multiplication. For a few points (which is the case in many pySDC applications), the scipy implementation is probably faster.
2. (Lazy) in test_Q_transfer.py, ignore the division-by-zero warning when computing the Pcoll and Rcoll matrices, using the statement
with np.errstate(divide='ignore'):
    # ...
3. (Less Lazy) in transfer_helper.py, wrap the following line (which appears 4 times) in the with np.errstate(divide='ignore') statement:
M[i, nn] = np.asarray(list(map(lambda x: x(p), bary_pol)))
4. (Contributing to the Community) do a PR on the scipy repo that wraps the problematic line in the BarycentricInterpolator class in the with np.errstate(divide='ignore') statement, hope that they accept it, and wait for the next scipy version to be released.
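To illustrate what option 1 amounts to, here is a minimal sketch of a barycentric interpolation matrix (a hypothetical helper, not pySDC's actual getInterpolationMatrix): it deliberately ignores the division-by-zero warning and, like scipy, fixes afterwards the points that coincide with a node.

```python
import numpy as np

def interpolation_matrix(nodes, points):
    """Sketch of a barycentric interpolation matrix P such that P @ u
    evaluates the polynomial interpolating the values u at `nodes`
    on the given `points`. Warnings from points coinciding with a
    node are deliberately ignored; those rows are fixed afterwards."""
    nodes = np.asarray(nodes, dtype=float)
    points = np.asarray(points, dtype=float)
    # barycentric weights: w_j = 1 / prod_{k != j} (x_j - x_k)
    diff = nodes[:, None] - nodes[None, :]
    np.fill_diagonal(diff, 1.0)
    w = 1.0 / diff.prod(axis=1)
    with np.errstate(divide='ignore', invalid='ignore'):
        d = points[:, None] - nodes[None, :]          # contains zeros on nodes
        P = (w / d) / np.sum(w / d, axis=1)[:, None]  # 0/0 -> nan where point == node
    # fix the rows where a point coincides with a node (as scipy does manually)
    exact = d == 0.0
    rows = exact.any(axis=1)
    P[rows] = exact[rows].astype(float)
    return P

# Usage: interpolate the line through (0, 1) and (1, 2) at 0, 0.5 and 1;
# the first and last points coincide with nodes, yet no warning is raised.
P = interpolation_matrix([0.0, 1.0], [0.0, 0.5, 1.0])
print(P @ np.array([1.0, 2.0]))  # [1.  1.5 2. ]
```

This follows the same "compute, ignore the warning, then patch the exact hits" pattern as scipy, but building the full matrix for every new set of points is exactly the overhead mentioned above.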
Wow, great analysis, thank you! I guess 3. makes the most sense for now. I can do this in the next round of clean-up. 4. is also something to consider, though.
Just to mention: solution 4 has been done (it wasn't too complicated after all), and I think the warnings should disappear with the upcoming 1.10 version of scipy.
For some reason, test_Q_transfer_minimal[Equidistant] always seems to lead to a RuntimeWarning in _polyint, see https://github.com/Parallel-in-Time/pySDC/runs/7786219453?check_suite_focus=true#step:5:134. This seems to happen each time, but I have no idea why. @tlunet?