kinnala / scikit-fem

Simple finite element assemblers
https://scikit-fem.readthedocs.io
BSD 3-Clause "New" or "Revised" License

apply Dirichlet conditions by setting rows to I, rhs to value #591

Closed nschloe closed 3 years ago

nschloe commented 3 years ago

For the first time, I'm looking at scikit-fem in a serious manner. Diving into it, I notice that the common way to "apply" Dirichlet conditions is to eliminate the respective entries from the matrix/rhs à la

A, b = condense(A, b, I=mesh.interior_nodes())

I, on the other hand, would like to see those conditions applied by setting the respective rows in A to 1 on the diagonal and 0 otherwise, and the respective rhs entries to the prescribed values. Something like

apply_dirichlet(A, b, mesh.boundary_nodes(), vals)

perhaps. The resulting matrix has the same spectral properties as the condensed A, so the number of iterations with Krylov methods will be the same, too. Advantages of this approach include the fact that neither A nor b need to be rewritten in memory. Also, the solution will include the values of all degrees of freedom, making postprocessing easier.
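
In dense form the requested behaviour is just the following (a minimal in-place sketch; apply_dirichlet_dense is a hypothetical name used only to illustrate the proposal, not an existing function):

import numpy as np

def apply_dirichlet_dense(A, b, dofs, vals):
    """Set each constrained row of A to the corresponding row of the identity
    and the rhs entry to the prescribed value, in place."""
    A[dofs, :] = 0.0
    A[dofs, dofs] = 1.0  # ones on the diagonal of the constrained rows
    b[dofs] = vals

A = np.array([[2.0, -1.0], [-1.0, 2.0]])
b = np.array([1.0, 1.0])
apply_dirichlet_dense(A, b, [1], [0.5])  # A -> [[2, -1], [0, 1]], b -> [1, 0.5]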

gdmcbain commented 3 years ago

G'day, Nico.

There was a very brief reference to the alternative apply_dirichlet technique a couple of years ago as part of a broader discussion of the many ways to deal with essential boundary conditions in the finite element method #150; as noted there, Ern & Guermond (2004) called it ‘keeping the essential nodes’ (§8.4.3) to distinguish it from ‘eliminating the essential nodes’ (§8.4.2).

skfem.utils.condense was already here before I stumbled upon scikit-fem, so I can't speak of the original motivation, but I was pleased to find it as I had already been using this technique for some years in my own in-house finite element (& other) codes. I first learned of it from comments by Roy Stogner on libmesh-users; here's a (slightly redacted) excerpt from my notes from 2012–2013:

Many evolution problems for dynamical systems involve constraints on some of the unknowns, such as the pressure being specified at certain nodes in [incompressible fluid mechanics], or essential boundary conditions in the finite element method.

A matrix-oriented representation of constraining dynamical systems by mapping degrees of freedom is suggested by Roy Stogner's comments on essential boundary conditions in the finite element method posted to the libmesh-users list under the thread ‘interaction between subdomain_id and dof constraints?’. Stogner discussed the case of a steady-state matrix problem Kx = f, subject to a constraint x = Cy + h; here K is some stiffness matrix, x the complete vector of nodal values, f the right-hand side forcing vector, y the vector of unknown nodal values, h a vector supplying nonzero nodal constraints (e.g. essential boundary conditions) and C a matrix mapping the free degrees of freedom into the complete nodal vector.

The mapping technique is particularly useful for modal analyses since it does not introduce spurious eigenvalues as some other techniques do. For example, to apply the built-in boundary conditions to the finite element […] model, compare […] which zeros the offdiagonal elements and puts ones on the diagonal in slots corresponding to the fixed degrees of freedom with […] using the present technique. The former generates one spurious mode with unit eigenvalue for each fixed degree of freedom.
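
A minimal sketch of the mapping technique described in those notes, with hypothetical names (skfem.utils.condense performs an essentially equivalent reduction): substituting x = Cy + h into Kx = f and premultiplying by Cᵀ gives CᵀKC y = Cᵀ(f − Kh).

import numpy as np
import scipy.sparse as sp

def condense_by_mapping(K, f, free, values):
    """Reduce K x = f subject to x = C y + h, where C injects the free dofs
    into the full vector and h carries the prescribed values."""
    n = K.shape[0]
    constrained = np.setdiff1d(np.arange(n), free)
    C = sp.eye(n, format="csc")[:, free]
    h = np.zeros(n)
    h[constrained] = values
    Kc = C.T @ K @ C
    fc = C.T @ (f - K @ h)
    return Kc, fc, C, h

# solving Kc y = fc and setting x = C @ y + h recovers all degrees of freedom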

Now I do remember being very pleased at being able to avoid spurious eigenvalues as these had been a constant headache in some of my earlier work using orthogonal collocation methods; however this seems to be at odds with your claim about the preservation of the spectral properties. Let me go back and check!

I think it should be pretty easy to implement apply_dirichlet and I think it would also be good to have it in skfem.utils (possibly with a name more descriptive of how it applies Dirichlet conditions seeing as it's only one way of doing it). I'm afraid #150 hasn't had much attention lately, but I would very much like to see it completed in a comprehensive way, including apply_dirichlet, the penalty method (E. & G., §8.4.4), &c., &c. PRs welcome, of course!

gdmcbain commented 3 years ago

Rather than

apply_dirichlet(A, b, mesh.boundary_nodes(), vals)

it should have the same signature as condense:

enforce(A, b, vals, D=mesh.boundary_nodes())

kinnala commented 3 years ago

A is scipy.sparse.csr_matrix, so I think we need to do some low-level trickery using A.data and A.indptr to make it efficient. Another option would be to specify these values inside asm before even creating scipy.sparse.csr_matrix.

nschloe commented 3 years ago

Something like

for i in dofs:
    matrix.data[matrix.indptr[i] : matrix.indptr[i + 1]] = 0.0  # zero out row i

d = matrix.diagonal()
d[dofs] = 1.0
matrix.setdiag(d)  # write the ones back onto the constrained diagonal

would set the rows to I, but perhaps one can do better than the python loop here. I've added a feature request in scipy, but the above would already work well as a makeshift solution.

gdmcbain commented 3 years ago

Interesting, thanks. I'm watching scipy/scipy#13746 now. Yes, changing sparsity patterns is bad.

Ern & Guermond (ibid.) give some hints on implementation, I'll reread that. (Although it's probably for compiled languages and so may not address the special needs of Python and scipy.sparse.)

And yes the makeshift solution, correct if not optimally efficient, would still be worthwhile for exploring the extent of

Advantages of this approach include the fact that neither A nor b need to be rewritten in memory. Also, the solution will include the values of all degrees of freedom, making postprocessing easier.

If the interface is correct, there's no problem improving the implementation later.

kinnala commented 3 years ago

Here is a proposal. First, I was looking into the following diff:

modified   skfem/assembly/form/bilinear_form.py
@@ -81,8 +81,8 @@ class BilinearForm(Form):
         # initialize COO data structures
         sz = ubasis.Nbfun * vbasis.Nbfun * nt
         data = np.zeros(sz, dtype=self.dtype)
-        rows = np.zeros(sz)
-        cols = np.zeros(sz)
+        rows = np.zeros(sz, dtype=np.int64)
+        cols = np.zeros(sz, dtype=np.int64)

         # loop over the indices of local stiffness matrix
         for j in range(ubasis.Nbfun):
@@ -98,6 +98,19 @@ class BilinearForm(Form):
                     dx
                 )

+        # dofs that will be modified
+        D = ubasis.mesh.nodes_satisfying(lambda x: x[0] == 0)
+
+        # zero rows
+        rows_mapping = np.ones(np.max(vbasis.N))
+        rows_mapping[D] = 0
+        data = rows_mapping[rows] * data
+
+        # ones to diagonal
+        data = np.concatenate((data, np.ones(len(D))))
+        rows = np.concatenate((rows, D))
+        cols = np.concatenate((cols, D))
+
         # TODO: allow user to change, e.g. cuda or petsc
         return self._assemble_scipy_matrix(
             data,

Now while this is a prototype, it seems to work as expected(?) and I see no bottlenecks:

In [1]: from skfem import *                                                                                            

In [2]: m = MeshTri().refined()                                                                                        

In [3]: basis = InteriorBasis(m, ElementTriP1())                                                                       

In [4]: from skfem.models.poisson import laplace                                                                       

In [5]: laplace.assemble(basis).todense()                                                                              
Out[5]: 
matrix([[ 1. ,  0. ,  0. ,  0. ,  0. ,  0. ,  0. ,  0. ,  0. ],
        [ 0. ,  1. ,  0. ,  0. , -0.5,  0. ,  0. , -0.5,  0. ],
        [ 0. ,  0. ,  1. ,  0. ,  0. ,  0. ,  0. ,  0. ,  0. ],
        [ 0. ,  0. ,  0. ,  1. ,  0. ,  0. ,  0. , -0.5, -0.5],
        [-0.5, -0.5,  0. ,  0. ,  2. ,  0. , -1. ,  0. ,  0. ],
        [ 0. ,  0. ,  0. ,  0. ,  0. ,  1. ,  0. ,  0. ,  0. ],
        [ 0. ,  0. ,  0. ,  0. , -1. , -1. ,  4. , -1. , -1. ],
        [ 0. , -0.5,  0. , -0.5,  0. ,  0. , -1. ,  2. ,  0. ],
        [ 0. ,  0. , -0.5, -0.5,  0. ,  0. , -1. ,  0. ,  2. ]])

A practical implementation could (finally) add a flag to assemble which returns the unassembled triplet (data, rows, cols); that triplet could later be modified via similar steps, maybe using a function in skfem.utils.
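
For concreteness, the same post-assembly modification on a raw triplet might look like this (a hypothetical helper mirroring the diff above; rows and cols are assumed to be integer arrays):

import numpy as np
import scipy.sparse as sp

def enforce_coo(data, rows, cols, D, shape):
    """Zero the rows listed in D and append unit diagonal entries, working
    directly on the (data, rows, cols) triplet before building the matrix."""
    keep = np.ones(shape[0])
    keep[D] = 0.0
    data = keep[rows] * data  # zero out the constrained rows
    data = np.concatenate((data, np.ones(len(D))))
    rows = np.concatenate((rows, D))
    cols = np.concatenate((cols, D))
    return sp.coo_matrix((data, (rows, cols)), shape=shape).tocsr()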

What do you think?

kinnala commented 3 years ago

Here is a prototype with these ideas: https://github.com/kinnala/scikit-fem/pull/596

gdmcbain commented 3 years ago

Great!

Rather than a flag, though (cyclomatic complexity and a union return type-hint), I'd suggest factorization, as in from_meshio and from_file: one function does almost all the work, the other is the one called routinely, but the first can be called directly to get the raw data and indices when required. And, right, I see that those are encapsulated as UnassembledMatrix in #596.

Besides cuda and PETSc, I'm also keeping an eye on sparse.pydata.org and Dask. It seems likely to me that Python will continue to accrete increasingly powerful numerical backends and that scikit-fem will be well placed to take advantage of these.

gdmcbain commented 3 years ago

On

The resulting matrix has the same spectral properties as the condensed A, so the number of iterations with Krylov methods will be the same, too.

I initially baulked at this as the condensed A obviously has len(D) fewer eigenvalues. The Dirichlet-modified matrix of full shape has a unit eigenvalue for each constrained degree of freedom; however, it turns out that in a Krylov scheme this is exactly cancelled by the modification of the right-hand side. This is sketched by Ern & Guermond in the reference given above.

The cases that previously haunted me weren't static solves but rather involved mass matrices: either generalized eigenvalue problems or unsteady evolution. The corresponding change to the mass matrix is a zero on the diagonal, which makes it singular. The corresponding generalized eigenvalues are infinite: Dirichlet degrees of freedom have zero time constants, being 'algebraic' in the sense of DAEs. Those might be a problem even for Krylov-like methods (Arnoldi), depending on where in the spectrum one is looking, but if that's nearer the origin than infinity it might be O. K.

kinnala commented 3 years ago

So maybe instead of using a coo flag we should have an alternative to Form.assemble? Say Form.unassemble (or whatever)? I mean, this would keep the return type constant.

gdmcbain commented 3 years ago

Yeah, I think that that's consistent with the distinction between the elemental and assemble methods.

‘Unassemble’ sounds a bit like an inverse; maybe…hmm, no, I don't know. coo_data? Is there much to a coo_matrix other than this? I mean, it's mostly just a data structure with data and indices, and so not really much more than UnassembledMatrix?

kinnala commented 3 years ago

Is there much to a coo_matrix other than this? I mean, it's mostly just a data structure with data and indices, and so not really much more than UnassembledMatrix?

I suppose it's not exactly the same, because scipy.sparse.coo_matrix sums duplicate entries (at the latest when converting to CSR/CSC). We have also traditionally been calling coo_matrix.eliminate_zeros() because in some cases there are many close-to-zero values in the resulting matrix.
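
For reference, duplicate entries are kept in the COO arrays and summed when converting, which is exactly what finite element assembly relies on:

import numpy as np
from scipy.sparse import coo_matrix

data = np.array([1.0, 2.0, 3.0])
rows = np.array([0, 0, 1])
cols = np.array([0, 0, 1])
A = coo_matrix((data, (rows, cols)), shape=(2, 2))
print(A.tocsr().toarray())  # [[3. 0.]
                            #  [0. 3.]]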

Maybe rename UnassembledMatrix to COOData and then we could use Form.coo_data?

kinnala commented 3 years ago

Made some updates to #596.

You can do stuff like:

In [7]: m = MeshTri().refined()                                                                                        

In [8]: basis = InteriorBasis(m, ElementTriP1())                                                                       

In [9]: laplace.coo_data(basis).enforce(m.boundary_nodes()).tocsr()                                                    
Out[9]: 
<9x9 sparse matrix of type '<class 'numpy.float64'>'
    with 13 stored elements in Compressed Sparse Row format>

In [10]: laplace.coo_data(basis).enforce(m.boundary_nodes()).tocsr().todense()                                         
Out[10]: 
matrix([[ 1.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
        [ 0.,  1.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  1.,  0.,  0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  1.,  0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  1.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.,  1.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0., -1., -1.,  4., -1., -1.],
        [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  1.,  0.],
        [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  1.]])

kinnala commented 3 years ago

Example usage: https://github.com/kinnala/scikit-fem/pull/596/files#diff-95667c4df66488611725f70d1a0242f0a8f522a6eed0931e9117d1644ff40632R317

nschloe commented 3 years ago

Hopefully I'll have some time to check this out tomorrow.

nschloe commented 3 years ago

So I checked out the PR and I must say that

u = basis.zeros()
A = laplace.coo_data(basis).enforce(boundary_dofs).tocsr()
u[boundary_dofs] = projection(dirichlet, boundary_basis, I=boundary_dofs)
u = solve(A, u)

still looks a bit foreign to me. One computational thing to note is that it is more efficient to eliminate rows in csr matrices as opposed to coo matrices. Beyond that there are many functions involved and I (the user) don't have any idea what they're doing: enforce, projection? I'd just like to set Dirichlet values! I had something like

skfem.apply_dirichlet(A, b, boundary_indices, lambda x: ((x[0] + 1.j * x[1]) ** 2).real)

or

skfem.apply_dirichlet(A, boundary_indices)

in mind. Perhaps that's not in the spirit of scikit-fem though.

gdmcbain commented 3 years ago

I'd say that the PR is WIP and that eventually the syntax would be more like that of skfem.utils.condense, as above

enforce(A, b, vals, D=mesh.boundary_nodes())

so instead of skfem.apply_dirichlet(A, boundary_indices) it'd only be necessary to insert b and prepend D= to the last argument, which isn't too bad.

(I wonder whether it would be reasonable to let the RHS b default to zero in condense and its forthcoming siblings? zeros_like does appear fairly often in the second slot for condense.)

The λ is interesting but might be considered against the spirit of the finite element method since that's not necessarily limited to Lagrangian elements; in that sense, projection is the right thing to do. But for simple cases one can get at Mesh.p or InteriorBasis.doflocs, yes; for example, ex14 can be simplified to

u = solve(
    *condense(A, np.zeros(A.shape[0]), dirichlet(basis.doflocs), D=basis.find_dofs())
)

And for ElementTriP1 the third argument, x, can even be replaced with dirichlet(m.p).

I don't like the name apply_dirichlet so much, as there are lots of ways to apply Dirichlet conditions (condense, enforce, penalize, introduce Lagrange multipliers, Nitsche, …) and eventually all of these should be included (if only for pedagogical purposes), but they should pretty much share a syntax. And then perhaps an umbrella function, which might indeed be called apply_dirichlet, like minimize and root in scipy.optimize, that takes a method="condense" kwarg. Do you think ‘condense’ and ‘enforce’ are a reasonable balance of brevity and descriptiveness for §§8.4.2 & 8.4.3 of Ern & Guermond?
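
A rough sketch of such an umbrella (the names and signatures are purely illustrative: condense exists in skfem.utils, while enforce is the subject of this issue and penalize would be §8.4.4):

from skfem.utils import condense  # "enforce" and "penalize" would join this map

def apply_dirichlet(A, b=None, x=None, D=None, method="condense", **kwargs):
    """Dispatch to one way of applying essential conditions, in the style of
    scipy.optimize.minimize(method=...)."""
    methods = {
        "condense": condense,  # eliminate the essential nodes (E. & G. 8.4.2)
        # "enforce": enforce,  # keep the essential nodes (8.4.3), proposed here
        # "penalize": penalize,  # penalty method (8.4.4), hypothetical
    }
    if method not in methods:
        raise ValueError(f"unknown method {method!r}")
    return methods[method](A, b, x=x, D=D, **kwargs)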

I haven't finished convincing myself that §8.4.3 is the one true way, although

Also, the solution will include the values of all degrees of freedom, making postprocessing easier.

does have appeal for nonlinear (#119) and unsteady (#531) problems, especially if the nonlinear or unsteady solver is abstracted into a function.

kinnala commented 3 years ago

One computational thing to note is that it is more efficient to eliminate rows in csr matrices as opposed to coo matrices.

I see your point, maybe it's possible to modify the CSR data structure directly to achieve a similar thing. Now that I kind of understand what's going on here I can try it out.

I'd just like to set Dirichlet values!

I think "Dirichlet values" is not very exact terminology because it's only in the context of CG elements that setting Dirichlet boundary condition has anything to do with what's going on here.

I don't know why FEniCS decided to use apply_dirichlet. Maybe it's doing something else under the hood to deal with, e.g., H(div)-conforming elements?

kinnala commented 3 years ago

Something like

for i in dofs:
    matrix.data[matrix.indptr[i] : matrix.indptr[i + 1]] = 0.0  # zero out row i

d = matrix.diagonal()
d[dofs] = 1.0
matrix.setdiag(d)  # write the ones back onto the constrained diagonal

would set the rows to I, but perhaps one can do better than the python loop here. I've added a feature request in scipy, but the above would already work well as a makeshift solution.

Alright, I'm a little slow. You already had a solution here and it remains to vectorize it.

kinnala commented 3 years ago

I'm currently working on vectorizing the above idea, will soon push to #596.
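
One possible vectorization of that loop, for reference (a sketch only, not necessarily what ends up in #596; it assumes the diagonal entries are already stored, as they are for typical finite element matrices):

import numpy as np
import scipy.sparse as sp

def rows_to_identity(A: sp.csr_matrix, dofs) -> None:
    """Zero the rows listed in dofs and put ones on their diagonal, in place,
    by masking the CSR data array instead of looping in Python."""
    entry_rows = np.repeat(np.arange(A.shape[0]), np.diff(A.indptr))
    A.data[np.isin(entry_rows, dofs)] = 0.0
    d = A.diagonal()
    d[dofs] = 1.0
    A.setdiag(d)  # free rows keep their old diagonal, constrained rows get 1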

kinnala commented 3 years ago

In #596, skfem.utils.enforce should be now pretty much equal to skfem.utils.condense but using a different strategy.

E.g., this should work:

from skfem import *
from skfem.models import laplace, unit_load
m = MeshTri().refined(3)
basis = InteriorBasis(m, ElementTriP1())
A = laplace.assemble(basis)
b = unit_load.assemble(basis)
x = solve(*enforce(A, b, D=m.boundary_nodes()))

One can specify a different diagonal value, e.g. diag=0., which would be useful for the mass matrix:

from skfem.models import mass
enforce(mass.assemble(basis), diag=0., D=m.boundary_nodes())

nschloe commented 3 years ago

The more I think about this, the more I think that condense isn't very good. Consider the very first example in the docs:

from skfem import *
from skfem.models.poisson import laplace, unit_load

m = MeshTri().refined(4)

e = ElementTriP1()
basis = InteriorBasis(m, e)

A = asm(laplace, basis)
b = asm(unit_load, basis)

x = solve(*condense(A, b, I=m.interior_nodes()))

if __name__ == "__main__":
    from os.path import splitext
    from sys import argv
    from skfem.visuals.matplotlib import plot, savefig

    plot(m, x, shading='gouraud', colorbar=True)
    savefig(splitext(argv[0])[0] + '_solution.png')

First of all, from a numerical standpoint, condense does a few steps of Gaussian elimination. Why not let spsolve implicitly take care of it? That'd also save the rewrite in memory and the usual u[interior_idx] = sol that I see sometimes in skfem. Curiously, this is absent in the above plot, and I'm wondering how plot(m, x) can succeed: the mesh m has all nodes, x only the interior ones, right? This looks more understandable imo:

from skfem import *
from skfem.models.poisson import laplace, unit_load

# MeshTri is a bit weird, MeshTri of _what_?
m = MeshTri().refined(4)

e = ElementTriP1()
basis = InteriorBasis(m, e)   # TODO find a better name for "Interior"

A = assemble(laplace, basis)
b = assemble(unit_load, basis)
apply_dirichlet(A, b, lambda x: x[0] ** 2 - 0.5, basis.boundary_dofs)

x = solve(A, b)

if __name__ == "__main__":
    # as above
    pass

kinnala commented 3 years ago

condense will pass on the boundary values and dofs to solve, which then expands the solution vector. So x actually also contains the boundary values.

In some cases you'd never want to turn a symmetric problem into an unsymmetric one just for the sake of setting boundary conditions. People sometimes go to great lengths to preserve symmetry, to be able to use Cholesky decomposition instead of LU decomposition, or more efficient iterative solvers or preconditioners.

However, while enforce is a great and welcome addition to scikit-fem, all this is somewhat tangential to the point of the library, which is the creation of the matrices and not the solution of the resulting systems.
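
For reference, the usual way to keep the essential dofs without losing symmetry is to move the known values to the right-hand side and zero both the rows and the columns; a sketch with hypothetical names (not what #596 does, which only zeros rows):

import numpy as np
import scipy.sparse as sp

def enforce_symmetric(A, b, dofs, vals):
    """Shift the known boundary values to the rhs, zero the constrained rows
    and columns of A and put ones on their diagonal, preserving symmetry."""
    n = A.shape[0]
    x_D = np.zeros(n)
    x_D[dofs] = vals
    b = b - A @ x_D  # move the known contributions to the right-hand side
    b[dofs] = vals
    keep = np.ones(n)
    keep[dofs] = 0.0
    P = sp.diags(keep)
    A = P @ A @ P + sp.diags(1.0 - keep)  # zero rows/columns, ones on diagonal
    return A.tocsr(), b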

gdmcbain commented 3 years ago

That'd also save the rewrite in memory and the usual u[interior_idx] = sol that I see sometimes in skfem. Curiously, this is absent in the above plot, and I'm wondering how plot(m, x) can succeed: the mesh m has all nodes, x only the interior ones, right?

The u[interior_idx] = sol was more often seen in skfem before #228 ‘Make expand=True the default behavior’ (in condense), merged 2019-08-03; with expand=True it's no longer needed for static problems but does still arise in unsteady algorithms like

https://github.com/kinnala/scikit-fem/blob/56fe1564d39c6ae0ea94c98c4ace2f864e4db106/docs/examples/ex19.py#L84

I intend to look at trying enforce instead of condense next time I look at an initial-value problem #531.

For generalized eigenvalue problems, with a mass-matrix, I'm more skeptical of the benefits of enforce but will also take a look; I see that it has already been adopted in

https://github.com/kinnala/scikit-fem/blob/56fe1564d39c6ae0ea94c98c4ace2f864e4db106/docs/examples/ex21.py#L88

(although not very smoothly: see #609).

gdmcbain commented 3 years ago

For the primitive Orr–Sommerfeld equation (a complex nonhermitian generalized eigenvalue problem with symmetric positive-definite mass matrix) replacing

https://github.com/kinnala/scikit-fem/blob/56fe1564d39c6ae0ea94c98c4ace2f864e4db106/docs/examples/ex29.py#L122

which produces

[image: ex29.png]

with

pencil = enforce(stiffness, mass_matrix, D=walls)

fails badly:

[image: ex29]

gdmcbain commented 3 years ago

Some further investigation on enforcing Dirichlet conditions in generalized eigenvalue problems. As perhaps the simplest test case, I'm taking (∇u, ∇v) + (∂/∂t)(u, v) = 0 on 0 < x < 1 with u'(0) = 0 and u(1) = 0: the cooling of a slab with insulated and cold faces. The eigensolutions are u(x, t) = cos((m + ½)πx) exp(−(m + ½)²π²t) for m = 0, 1, 2, ….

The simplest discretization is

b = InteriorBasis(
  MeshLine().with_boundaries({"cold": lambda x: x[0] == 1.0}), 
  ElementLineP1())

for which L = asm(laplace, b) is [[1, -1], [-1, 1]] and M = asm(mass, b) is [[2, 1], [1, 2]]/6.

The condensed pencil is [[1]], [[1/3]], which has the sole eigenvalue −3, a reasonable approximation of s0 = −(π/2)² ≐ −2.467. The condensed eigenvector is [1], which uncondenses as [1, 0] and is exactly equal to cos(πx/2) at the nodes.

The enforced pencil is [[1, -1], [0, 1]], [[2, 1], [0, 0]]/6. The finite eigenpair is the same as in the condensed case but there's also an infinite eigenvalue with eigenfunction [-1, 2]. The eigenfunction corresponding to the infinite eigenvalue does not satisfy the Dirichlet condition! I think that this is correct though: the interpretation is that this mode would be forced to satisfy the Dirichlet condition infinitely quickly, i.e. with zero relaxation time. The vector [-1, 2] is also M-orthogonal to [1, 0].
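
A quick numerical check of those claims, using the 2 × 2 matrices written out above (in homogeneous coordinates the infinite eigenvalue shows up as beta == 0):

import numpy as np
from scipy.linalg import eig

L = np.array([[1.0, -1.0], [0.0, 1.0]])  # enforced stiffness
M = np.array([[2.0, 1.0], [0.0, 0.0]]) / 6.0  # enforced mass (diag=0)

(alpha, beta), v = eig(-L, M, homogeneous_eigvals=True)
print(alpha, beta)  # eigenvalues are alpha/beta: one is -3, one has beta == 0 (infinite)
print(v)            # columns proportional to [1, 0] and [-1, 2]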

That was still a little troubling, so I also considered penalization (Ern & Guermond 2004, §8.4.4) which approximates the Dirichlet condition with a Robin:

cold_basis = FacetBasis(b.mesh, b.elem, facets=b.mesh.boundaries["cold"])
epsilon = 1e-3
penalization = asm(mass, cold_basis) / epsilon

L_epsilon = L + penalization

i.e. the penalized stiffness matrix is [[1, -1], [-1, 1 + 1/ε]] while the mass matrix is unchanged. For ε > 0 this has two finite eigenvalues: s = −3 + 27ε/4 + O(ε²) and −4/ε − 9 + O(ε). I didn't work out the eigenvectors analytically but numerically:

from scipy.linalg import eig

(sl, sr), vr = eig(
    -L_epsilon.toarray(),
    M.toarray(),
    homogeneous_eigvals=True,
)

gives eigenvectors [-0.99999888, -0.00149887] (which approximates [1, 0]) and [-0.44801683, 0.89402512] (a unit-norm approximation of [-1, 2]). The penalized matrices and eigensolution approach the enforced ones as ε→0, so I conclude that enforce is correct.

However, it does imply some special Dirichlet modes which do not satisfy the Dirichlet condition and which, if not deemed actually ‘spurious’, will at least have to be treated with care. Of course, Krylov methods might avoid these.

(Another catch which fooled me for a while when I was testing enforce before penalization: enforce mutates! I assume that that's deliberate for efficiency.)

gdmcbain commented 3 years ago

A quick look at unsteady problems. Say a θ-method is used, as in

https://github.com/kinnala/scikit-fem/blob/8f0f0be7290142ac6dafd8a19a909152f17f52ec/docs/examples/ex19.py#L65-L67

After condensing L and M, the iteration is

(⅓ + θh) u = {⅓ − (1 − θ)h} u_old

which has eigenvalue (log λ) / h = −3 + 9(θ −½)h + O(h²), which looks O. K.
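
A quick numerical check of that rate (with, say, θ = 1 and h = 10⁻³):

import numpy as np

theta, h = 1.0, 1e-3
lam = (1 / 3 - (1 - theta) * h) / (1 / 3 + theta * h)  # condensed one-step factor
print(np.log(lam) / h)  # about -2.9955, i.e. -3 + 9 * (theta - 0.5) * h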

In the enforced method, the boundary row of the one-step system decouples and reduces to θh u_b,new = −(1 − θ)h u_b,old, i.e. the boundary ordinate is multiplied by −(1 − θ)/θ at each step.

If θ = 1 (backward Euler), the Dirichlet condition is enforced at each step. If ½ < θ < 1, any boundary data is successively damped; for θ = ½ (Crank–Nicolson) any boundary data flip-flops in sign and for lower θ it's amplified. That's probably O. K. for ½ < θ < 1, and if the boundary data is zero then the other ordinate evolves exactly as in the condensed case.
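
The corresponding check on the enforced 2 × 2 pencil from the comments above (the boundary dof is the second one):

import numpy as np

theta, h = 0.6, 0.01
L = np.array([[1.0, -1.0], [0.0, 1.0]])  # enforced stiffness
M = np.array([[2.0, 1.0], [0.0, 0.0]]) / 6.0  # enforced mass (diag=0)

step = np.linalg.solve(M + theta * h * L, M - (1 - theta) * h * L)
print(step[1])  # [0, -(1 - theta)/theta]: boundary data damped for theta > 1/2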

gdmcbain commented 3 years ago

This does not seem to carry across to ex19 though:

L = diffusivity * asm(laplace, basis)
M = asm(mass, basis)
enforce(L, M, D=basis.find_dofs())

dt = .01
print(f'dt = {dt} µs')
theta = 1.0  # backward Euler
A = M + theta * L * dt
B = M - (1 - theta) * L * dt

# transpose as splu prefers CSC
backsolve = splu(A.T).solve

u_init = (np.cos(np.pi * mesh.p[0, :] / 2 / halfwidth[0])
          * np.cos(np.pi * mesh.p[1, :] / 2 / halfwidth[1]))

def step(t: float,
         u: np.ndarray) -> Tuple[float, np.ndarray]:
    return t + dt, backsolve(B @ u)

produces

0.01, 0.947, -0.0097
0.02, 0.906, -0.0085
0.03, 0.868, -0.0071
0.04, 0.832, -0.0052
0.05, 0.798, -0.0025
0.06, 0.767, +0.0015
0.07, 0.739, +0.0070
0.08, 0.714, +0.0142
0.09, 0.693, +0.0232
0.10, 0.675, +0.0341
0.11, 0.659, +0.0467
0.12, 0.647, +0.0611
0.13, 0.638, +0.0772
0.14, 0.631, +0.0949
0.15, 0.627, +0.1142
0.16, 0.625, +0.1350
0.17, 0.626, +0.1572
0.18, 0.629, +0.1808
0.19, 0.635, +0.2058
0.20, 0.642, +0.2320
0.21, 0.652, +0.2595
0.22, 0.664, +0.2882
0.23, 0.677, +0.3182
0.24, 0.693, +0.3494
0.25, 0.710, +0.3819
0.26, 0.730, +0.4156
0.27, 0.751, +0.4506

which is diverging: the temperature at the centre (second column) stops decreasing and starts increasing after t = 0.17.

Shame: the function step is much simpler. I'm not sure what's gone wrong here.

gdmcbain commented 3 years ago

Ah, of course:

backsolve = splu(A.T).solve

the left stepping matrix A is no longer symmetric so can't just be transposed willy-nilly to avoid the

SparseEfficiencyWarning: splu requires CSC matrix format

Replacing that with backsolve = splu(A).solve makes it run O. K.
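
(For what it's worth, converting explicitly keeps the CSC efficiency without transposing the now-unsymmetric A from the snippet above:)

backsolve = splu(A.tocsc()).solve  # explicit conversion instead of A.T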