Closed: oesteban closed this pull request 1 year ago.
Patch coverage: 100.00% and project coverage change: -0.06% :warning:
Comparison is base (a608927) 80.53% compared to head (2bf307f) 80.48%.
I've read and re-read the BSpline docs, and I cannot for the life of me figure out how we're supposed to get the derivatives to generate the Jacobian. Do you know how that's supposed to work?
I guess https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.BSpline.derivative.html. Something like BSpline.derivative().design_matrix(...)?
The coefficients are the same. It's just the weights that change. Pretty convenient :)
design_matrix() is a classmethod. If we find a way to write generate_design_matrix(BSpline()), then we could do generate_design_matrix(BSpline().derivative()).
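A minimal sketch of that idea, assuming scipy >= 1.8 (generate_design_matrix is a hypothetical helper, not part of scipy's API):

import numpy as np
from scipy.interpolate import BSpline

def generate_design_matrix(spl, x):
    """Evaluate every basis function of ``spl`` at the points ``x``.

    Replacing the spline's coefficients with the identity turns plain
    evaluation into a design matrix: column j holds basis function j.
    """
    n_bases = len(spl.t) - spl.k - 1
    return BSpline(spl.t, np.eye(n_bases), spl.k)(x)

Note that generate_design_matrix(spl.derivative(), x) would yield the basis of the lower-degree derivative spline rather than the derivatives of the original basis functions; the snippets below instead call .derivative() on the identity-coefficient spline, which is what lets the fitted coefficients be reused unchanged.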
I see. We can initialize a BSpline because t, c, and k are known when we want to calculate the Jacobian. However, we will need to see how the design matrix is generated for an object, because we need to calculate the tensor product.
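For reference, a minimal sketch of that tensor product (the degree, knot vectors, and sample points below are illustrative, not the actual sdcflows values): each point's weight for coefficient (i, j, k) is the product of the three per-axis basis values, i.e. a row-wise Kronecker product of the per-axis design matrices.

import numpy as np
from scipy.interpolate import BSpline

degree = 3
# One identity-coefficient B-spline per axis -> per-axis design matrices
knots = [np.arange(-degree, n + degree + 1, dtype=float) for n in (4, 5, 6)]
splines = [BSpline(t, np.eye(len(t) - degree - 1), degree) for t in knots]

# Sample points (x, y, z) where the field is evaluated
points = np.random.default_rng(0).uniform(0, 3, size=(10, 3))

# Per-axis weights, each of shape (n_points, n_coeffs_axis)
wx, wy, wz = (spl(points[:, ax]) for ax, spl in enumerate(splines))

# Tensor-product design matrix: row-wise Kronecker product,
# shape (n_points, n_coeffs_x * n_coeffs_y * n_coeffs_z)
design = (
    wx[:, :, None, None] * wy[:, None, :, None] * wz[:, None, None, :]
).reshape(len(points), -1)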
Now I'm hoping my Cython implementation is still in my local git repository...
Well, they do have an equivalence in https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.BSpline.design_matrix.html#scipy.interpolate.BSpline.design_matrix:
from scipy.interpolate import BSpline
import numpy as np

k = 2
t = [-1, 0, 1, 2, 3, 4, 5, 6]
x = [1, 2, 3, 4]
# Identity coefficients turn evaluation into the design matrix:
# column j of BSpline(t, c, k)(x) is basis function j evaluated at x
c = np.eye(len(t) - k - 1)
design_matrix = BSpline.design_matrix(x, t, k)
design_matrix_gh = BSpline(t, c, k)(x)
np.allclose(design_matrix.toarray(), design_matrix_gh, atol=1e-14)  # True
So we could try:
# knots, locs, wd0/wd1, and jacobian come from the surrounding sdcflows code;
# identity coefficients make orig(locs) the cubic (k=3) design matrix
orig = BSpline(knots, np.eye(len(knots) - 3 - 1), 3)
wd0.append(scipy.sparse.csr_matrix(orig(locs).T))
if jacobian:
    # derivative of every basis function; the fitted coefficients are reused as-is
    deriv = orig.derivative()
    wd1.append(scipy.sparse.csr_matrix(deriv(locs).T))
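A quick, self-contained sanity check of that approach (the knot vector and sample locations here are made up for illustration): each column of deriv(locs) should match a central finite difference of the corresponding column of orig(locs).

import numpy as np
from scipy.interpolate import BSpline

knots = np.arange(-3, 10, dtype=float)   # illustrative cubic knot vector
locs = np.linspace(0.5, 5.5, 25)         # illustrative sample locations

orig = BSpline(knots, np.eye(len(knots) - 3 - 1), 3)
deriv = orig.derivative()

# Central finite difference of each basis function
h = 1e-6
approx = (orig(locs + h) - orig(locs - h)) / (2 * h)
assert np.allclose(deriv(locs), approx, atol=1e-6)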
I don't see why this would not work.
I think it's the opposite, if I understood your message correctly. If we don't drop weights, we will need to pad here.
On Fri, Aug 18, 2023, 18:42 Chris Markiewicz wrote (commenting on this pull request):
In sdcflows/transform.py https://github.com/nipreps/sdcflows/pull/388#discussion_r1298655014:
- coeffs_data.append(np.pad(
- level.get_fdata(dtype="float32"),
- ((1, 1), (1, 1), (1, 1)),
- ).reshape(-1))
If we're not removing weights when fitting, the coefficients should match the size of the design matrix, so we should not need to pad them here, correct?
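For the shape bookkeeping behind that exchange, a small illustrative example (the grid sizes are made up, not the actual coefficient volumes): padding by one on each side grows every axis by two, so the flattened coefficients can line up with a design matrix that kept the boundary basis functions.

import numpy as np

# Hypothetical coefficient grid of shape (nx, ny, nz)
level_data = np.zeros((5, 6, 7), dtype="float32")

# ((1, 1), (1, 1), (1, 1)) pads one element on each side of every axis
padded = np.pad(level_data, ((1, 1), (1, 1), (1, 1)))
assert padded.shape == (7, 8, 9)

# Flattened, the coefficients now have 7 * 8 * 9 entries, matching a
# design matrix with that many columns (rather than 5 * 6 * 7)
coeffs = padded.reshape(-1)
assert coeffs.size == 7 * 8 * 9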
Math is hard, and I should probably trust the tests. If you're comfortable that we're testing everything thoroughly, I'm okay merging. I'm working on a branch off of here to add Jacobians based on my above comment.
Replaced by #393.
Resolves: #370.