JuliaLang / julia

The Julia Programming Language
https://julialang.org/
MIT License

Make PETSc solvers available #2645

Closed stevengj closed 9 years ago

stevengj commented 11 years ago

Rather than rolling our own sparse-matrix library, it seems like it would be much better to use PETSc, which is the de-facto standard here.

PETSc would give us access to a huge number of sparse-direct back-ends (#2632), iterative solvers, sparse eigensolvers (#1573) via the SLEPc library, optimized implementations (#942), multiple output formats, and parallel support (currently completely missing and quite nontrivial to implement well).

Re-inventing the wheel here is a little like rewriting LAPACK.

lindahua commented 11 years ago

PETSc seems a great package.

When I looked at #942, I was surprised that sparse matrix multiplication was implemented in pure Julia with a simple loop.

High-performance sparse numerical computation is known to be very difficult to optimize properly, and typically takes hundreds of thousands of lines of C/ASM code (someone working at MathWorks told me this). This is something that should be delegated to highly optimized C libraries (e.g. Sparse BLAS).

lindahua commented 11 years ago

In terms of backend, it seems that Sparse BLAS + ScaLAPACK is also widely used.

stevengj commented 11 years ago

ScaLAPACK is for parallel dense solvers, totally different problem. (And not as useful: the N^3 scaling means that even if you have 1000x more processors and get linear speedup, you can only do a 10x bigger problem.)

Sparse BLAS is very low-level, and my understanding is that PETSc can utilize optimized sparse BLAS routines in its backend.

lindahua commented 11 years ago

Yes, you are right that ScaLAPACK is not for this purpose.

For Sparse BLAS, I have been using the version provided by MKL, which seems fine to me. However, the interface looks somewhat different from the one in Netlib.

mlubin commented 11 years ago

Wasn't this discussed at some point earlier? PETSc is great, but it's a huge and complex dependency (how do you deal with optional solver interfaces?). It should definitely live in a package for some time before considering making it part of base.

stevengj commented 11 years ago

I agree that PETSc is a huge dependency (with an insane build system to boot). Distributing it with Julia is probably not in the cards. But I would rather see work on a solid PETSc package than a lot of effort re-inventing the wheel in Julia.

ViralBShah commented 11 years ago

While PETSc has all the various sparse solvers and such, which we should certainly leverage, I did not know that it had a drop-in sparse matrix library for things such as matrix indexing, concatenation, and various matrix operations on all the various Number types. The current model is to have a basic sparse matrix library in Julia, with all the solvers pulled in from various libraries, including PETSc.

stevengj commented 11 years ago

You can't really take advantage of using PETSc without using its storage format (which is based on CSR by the way) and primitives. Other missing operations can be built on top of this.

ViralBShah commented 11 years ago

We can certainly convert from the existing data structure to CSR, and most solvers also allow for transposed solves. Sequential sparse libraries typically use CSC storage, and a lot of sparse code ported from Matlab will end up with poor performance if we use CSR.

stevengj commented 11 years ago

Even if we use CSR, I'm not sure if we can use PETSc without making a copy; some rooting around in the guts of the library may be required. And even then that would only be the serial case.

CSR and CSC are just transposes, and given a factorization of the transpose of a matrix it is trivial to infer a factorization of the matrix. Because of this, PETSc is able to use sparse-direct libraries that use either CSR or CSC, and it is able to solve problems with either the matrix or its transpose using the same factorization (i.e. it has A\ and A'\ routines).

Nor is this like the situation with dense matrices where all your loops are in the wrong order if you have a row-major array and were expecting column-major, because most codes don't traverse sparse matrices by explicit loops in this way. Can you give an example of a typical Matlab code using sparse matrices that would have problems?
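To make the transpose point concrete, here is a small Julia illustration (using the current SparseArrays field names, which are assumptions about internals rather than a public API): the three arrays stored in a SparseMatrixCSC are exactly the CSR arrays of its transpose.

```julia
using SparseArrays

A = sparse([1 2 0; 0 3 4; 5 0 6.0])

# A is stored in CSC form: column pointers, row indices, values.
# Reading the very same arrays as CSR describes transpose(A):
A.colptr   # == CSR row pointers of A'
A.rowval   # == CSR column indices of A'
A.nzval    # == nonzero values of A', traversed row by row
```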

stevengj commented 11 years ago

I think the right thing here is to have a PETSc package with a new PETScSparseMatrix subtype of AbstractSparseMatrix, which is a wrapper around a semi-opaque PETSc matrix object and which uses PETSc operations to access the data in the matrix, with all operations built on top of PETSc primitives.

The current SparseCSC stuff could be kept around for people who don't have PETSc and who aren't too picky about performance, but optimization of that code could be placed on a back burner.
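As a rough sketch of the shape such a wrapper might take (the type name, field layout, library name, and ccall signatures below are illustrative assumptions, not an existing API; it also assumes a PETSc build with 32-bit PetscInt and double-precision real PetscScalar):

```julia
using SparseArrays

# Hypothetical wrapper: an opaque PETSc Mat handle behind Julia's AbstractSparseMatrix.
mutable struct PETScSparseMatrix{T} <: SparseArrays.AbstractSparseMatrix{T,Int}
    mat::Ptr{Cvoid}   # PETSc Mat handle; the data is owned by PETSc
    m::Int
    n::Int
end

Base.size(A::PETScSparseMatrix) = (A.m, A.n)

# Element access delegated to a PETSc primitive (MatGetValues), converting to 0-based
# indices. Assumes PetscInt == Int32 and PetscScalar == Float64 for this build, and
# that the matrix has already been assembled.
function Base.getindex(A::PETScSparseMatrix{Float64}, i::Integer, j::Integer)
    v = Ref{Float64}(0.0)
    ccall((:MatGetValues, "libpetsc"), Cint,
          (Ptr{Cvoid}, Cint, Ptr{Cint}, Cint, Ptr{Cint}, Ptr{Float64}),
          A.mat, 1, Ref(Cint(i - 1)), 1, Ref(Cint(j - 1)), v)
    return v[]
end
```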

ViralBShah commented 11 years ago

Agree. That is the general direction I was thinking of. The performance for CSC is comparable to Matlab in most cases.

With the PetSc sparse backend, would PetSc own the sparse matrix, or would Julia? One option would be to have a barebones SparseMatrixCSR in Julia, but use as much of PetSc as possible. Some of the CSC code can be refactored so that it can be used for CSR as well. What I dread is having multiple sparse matrix formats, and having sparse functions work with multiple types of sparse inputs.

SciPy has taken this path, with 3 implementations (CSC, CSR, and COO) and I always found myself getting annoyed. This was almost 2 years back, and perhaps things have significantly improved now.

lindahua commented 11 years ago

Yes, things in Python change a lot. SciPy has now implemented 7 different kinds of sparse matrices (http://docs.scipy.org/doc/scipy/reference/sparse.html).

lindahua commented 11 years ago

Actually, it is not only SciPy: a lot of other important libraries support multiple sparse formats, notably Intel MKL and cuSPARSE.

From a user's perspective, I don't think I would have any issue if Julia provided different kinds of sparse matrices as options, as long as they have consistent interfaces. I would simply choose the one that fits my problem naturally.

I understand that it would involve more work on the Julia developers' side. But many of the functions defined on sparse matrices would just be wrappers around C functions of the underlying library of choice, so it should not be a huge amount of work if we do not choose to implement everything in Julia.

There is a Sparse BLAS interface standard (http://www.netlib.org/blas/blast-forum/chapter3.pdf). Here is a complete F95 implementation of the entire standard (http://www.netlib.org/toms/818), which supports 18 different formats (including CSC, CSR, COO, etc.). I believe it is in the public domain, so you can just grab and use it.

It would be great to have PETScSparseMatrix. But for SparseMatrixCSC and SparseMatrixCSR, why not just use Sparse BLAS? MKL also provides an implementation of Sparse BLAS. So, users who set USE_MKL=1 can enjoy the amazing performance provided by MKL.

I am not just being picky about performance. In applications that use sparse matrices, sparse matrix multiplication and solving sparse systems are usually the performance bottleneck. It is really important to get these fast.

mlubin commented 11 years ago

I've personally used SparseMatrixCSC with a non-numeric element type in a couple of cases. The parameterized and transparent implementation is really convenient and shouldn't become a second-class object. I strongly support having basic operations like transpose, mat-vec, and mat-mat efficiently implemented in Julia for SparseMatrixCSC so that they work for all element types. Making a special case for numeric types and calling Sparse BLAS is fine, but I'd like to see benchmarks on how much faster the reference implementation is versus Julia. Only a small minority of users is going to have MKL available.
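For illustration, this is the sort of non-BLAS element type that only works because the SparseMatrixCSC kernels are generic Julia code:

```julia
using SparseArrays

# Exact rational arithmetic: no sparse or dense BLAS handles this element type.
A = sparse([1, 2, 3, 1], [1, 2, 3, 3], [1//2, 2//3, 3//4, 1//5])
x = [1//1, 1//1, 1//1]

A * x    # generic sparse mat-vec, stays exact
A * A    # generic sparse mat-mat, stays exact
```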

It's really unavoidable to end up dealing with different sparse formats because different formats are more efficient in some applications than others.

mlubin commented 11 years ago

PETSc uses CSR of course because it's more convenient for mat-vec.

dmbates commented 11 years ago

Sorry for coming late to this party. I agree with @mlubin that keeping a general implementation of SparseMatrixCSC is desirable. As far as the direct sequential solvers go, my impression is that the SuiteSparse UMFPACK and CHOLMOD libraries are competitive with others in performance. They are a pain to link to because of the somewhat fanciful naming system, but that work has now been done.

One thing I have tried to do in the umfpack.jl and cholmod.jl code is to integrate the sparse matrix solvers into the factorization framework so that a factorization can be reused. For sparse matrices it is also important to be able to separate the symbolic and numeric steps in a factorization. Right now there is the somewhat obscure chm_factorize! method to update a Cholesky factor from a matrix of the same sparsity pattern as was used to generate the factor. It would be useful to come up with a better name for updating a factorization and standardize the naming across interfaces to solvers. It is a bit confusing in CHOLMOD where "update" and "downdate" refer to modifications of the factorizations according to adding or removing data.
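For concreteness, the reuse pattern I have in mind looks roughly like the following sketch (written with cholesky/cholesky! spellings rather than chm_factorize!, purely as an illustration of the workflow):

```julia
using SparseArrays, LinearAlgebra

S = sprand(100, 100, 0.05)
A = S + S' + 100I      # sparse, symmetric, diagonally dominant (so positive definite)

F = cholesky(A)        # symbolic analysis + numeric factorization via CHOLMOD
x = F \ rand(100)

# Later: a matrix with the *same* sparsity pattern but new values; keep the symbolic
# analysis and redo only the numeric factorization in place.
cholesky!(F, 2A)
x2 = F \ rand(100)
```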

I haven't looked in depth at PETSc, but I do know that it is huge, and I suspect that there must be some overhead in providing a unified interface to many different solvers. (By the way, has anyone looked at the rudimentary Julia interface in the PETSc sources?) It also seems that the main purpose of PETSc is solving partial differential equations. I appreciate that it will provide an interface to sparse solvers, but pulling in the whole thing to get one part is perhaps overkill. It is also important to keep in mind that it is easier to write Julia interfaces to C/Fortran libraries than it is to write interfaces from Matlab/Octave or R. I am the co-author of the Matrix package for R, and it contains a huge amount of glue code written in C. The Julia interface for UMFPACK/CHOLMOD now requires only two very brief C functions, and those exist only to obtain the size of the cholmod_common struct and offsets into it.

There is always a balance in a new language between rewriting code and interfacing to existing code. Before going the PETSc route exclusively, I would suggest trying to write Julia interfaces to other sparse solvers to see how far that approach can be carried. The PaStiX home page lists, in addition to PaStiX:

MUMPS : http://mumps.enseeiht.fr/
SuperLU : http://crd-legacy.lbl.gov/~xiaoye/SuperLU/
UMFPACK : http://www.cise.ufl.edu/research/sparse/umfpack/
TAUCS : http://www.tau.ac.il/~stoledo/taucs/
PARDISO : http://www.pardiso-project.org/
WSMP : http://researcher.ibm.com/view_project.php?id=1426
PSPASES : http://www-users.cs.umn.edu/~mjoshi/pspases/
HSL : http://www.hsl.rl.ac.uk/
BCSLib : http://www.boeing.com/phantom/bcslib/

Some of these may not be in contention: TAUCS, WSMP and PSPASES have not been updated in over 10 years, and PARDISO, HSL and BCSLib have unsuitable licenses even for creating a Julia package.

The CHOLMOD home page links to a 2005 paper, http://doi.acm.org/10.1145/1236463.1236465 , that provides some comparisons on speed and reliability.

stevengj commented 11 years ago

@lindahua, the sparse BLAS is not in the same position as the dense BLAS. If you are doing dense linear algebra, you are strongly motivated to use BLAS: there are multiple competing free fast implementations, and the most widely used linear-algebra library (LAPACK) is built on top of it. The sparse BLAS does not have so many implementations (the reference BLAS is of course rather simplistic), nor does it dominate the solver libraries (and it is irrelevant to sparse-direct solvers). The dominant solution is PETSc (with Trilinos being a runner-up).

@dmbates, while PETSc is motivated in part by sparse PDE problems, the PETSc library does not solve PDEs. Rather, most popular PDE solvers (libMesh, FENICS, deal.II) are built on top of PETSc. The PETSc library itself only provides sparse linear algebra (and closely related solvers like parallel Newton and sparse integrators). Moreover, there is a lot more out there than "just" (!!!) rolling our own interfaces to half a dozen sparse-direct solvers---there is the whole world of iterative solvers and approximate preconditioners as well. By interfacing one library, Julia gets access to all of these. It is such a huge waste of effort to roll our own interfaces and implementations of all these solvers.

lindahua commented 11 years ago

How about we just keep the sparse module as it is in Base temporarily, and experiment with ideas in packages?

For example, we can create PETSc.jl to experiment with @stevengj's idea. (There is a PETSc.jl here: https://www.stce.rwth-aachen.de/trac/petsc/browser/bin/julia/PETSc.jl?rev=23099%3Ab8e3a1c265f3, but it seems far from complete.)

stevengj commented 11 years ago

@lindahua, we should definitely keep Base.SparseCSC as-is for now, and have a serious PETSc.jl package to experiment with. My main point is that using existing libraries (of which PETSc is currently the dominant choice) should be the focus of future development of Julia's sparse-matrix support, not re-inventing the wheel in Julia.

ViralBShah commented 11 years ago

It may be worth also checking in with the person who wrote that PetSc.jl interface to see if he's interested in collaborating further, and perhaps even advertising the effort on the PetSc mailing list. We may be able to get collaborators for PetSc.jl that way.

ViralBShah commented 11 years ago

I found this PETSc package: https://github.com/qsnake/petsc/blob/master/bin/julia/PETSc.jl

@qsnake There seems to be more general interest in having Julia+PETSc.

stevengj commented 11 years ago

That package doesn't actually define a SparseMatrix subtype. It would be better to have integration with Julia's matrix types.

MySchizoBuddy commented 11 years ago

The Trilinos project by Sandia labs (http://trilinos.sandia.gov/packages/) is mentioned above and should be considered seriously. It is more modular than the monolithic PETSc.

stevengj commented 11 years ago

Trilinos is C++, which makes it much harder to interface to Julia. (There is a CTrilinos package, but it seems to be poorly documented and much more limited.) (And frankly, I find Trilinos' subdivision into 50+ subpackages a dubious advantage.)

jedbrown commented 11 years ago

Hi, PETSc developer here. We are interested in assisting to provide PETSc to Julia users. A couple comments:

The PETSc.jl linked above is in our PETSc repository (https://github.com/petsc/petsc/blob/master/bin/julia/PETSc.jl). It surprises me that two unmaintained third-party mirrors are showing up in google search for "petsc.jl" before ours (bitbucket is our primary, but github.com/petsc/petsc is our mirror, and is updated regularly). @BarrySmith can comment further on those bindings.

PETSc can operate on arbitrary user-provided data structures, but to do that, you would have to provide the matrix kernels. Some applications do this to access Krylov methods, eigensolvers (via SLEPc), and nonlinear solvers. It makes a lot of sense for regularized inverse problems and some integral equations, where the matrix is dense, but can be applied in O(n) and has spectral decay inherited from the "compact+identity" structure of the underlying continuous problem. However, for sparse PDE problems, it is generally worth using our matrix implementations for preconditioning (crucial for iterative methods) and direct solvers. We provide a common interface to many third-party sparse direct solvers and preconditioners, including ML (from Trilinos), Hypre, MUMPS, UMFPACK, CHOLMOD, SuperLU, and PaStiX. We also support Elemental, a modern dense linear algebra library that is roundly better than ScaLAPACK.

Our recommendation for assembly is to allow PETSc to hold the data structure and use our API to set values. Internally, we can use any of several matrix formats based purely on run-time options. Additional formats can be supported via plugins. It would still be useful to have a "conversion" function, but the best performance and lowest memory use comes from assembling into the final data structure. (For standard CSR, you can use MatCreateSeqAIJWithArrays to avoid a copy, but other formats perform better in certain contexts, so we don't recommend cooking that into an interface.)

Finally, what is your story on parallelism? Is distributed memory via MPI something that you want to target? Do you want to call PETSc from within threads, or is it enough to call PETSc from one thread, but have PETSc use threads internally? We also have some GPU support that may be interesting.

I can answer further questions here, on your mailing list (I subscribe), or on ours (petsc-dev@mcs.anl.gov).

stevengj commented 11 years ago

@jedbrown, thanks for chiming in.

I think what we had in mind was precisely what you are suggesting: implement a new sparse-matrix type in Julia (a subtype of AbstractSparseMatrix) that wraps a native PETSc matrix and uses PETSc to set/access values and perform other operations (hidden behind the Julia interface).

In the longer run, it would be nice to also support passing any arbitrary AbstractMatrix type to PETSc by passing it as a matrix kernel. In principle, this is no problem since C callback functions can be written in Julia.
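For example, a callback could be set up along these lines (the signature mirrors the usual residual-callback shape, but it is an illustration rather than the exact SNES API):

```julia
# A Julia function exposed as a C-callable function pointer, suitable for passing to a
# ccall that registers a PETSc callback (e.g. a nonlinear residual function for SNES).
function residual(snes::Ptr{Cvoid}, x::Ptr{Cvoid}, f::Ptr{Cvoid}, ctx::Ptr{Cvoid})::Cint
    # ... read the Vec behind `x`, write the residual into the Vec behind `f` ...
    return Cint(0)   # PETSc error code: 0 means success
end

const residual_c = @cfunction(residual, Cint,
                              (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}))
# `residual_c` is now a raw function pointer that PETSc can call back into.
```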

Regarding threads, in the short run (since Julia does not support shared-memory parallelism, issue #1790) we would probably just be calling PETSc from a single thread. PETSc can and should use threads internally (it might be nice to use OpenMP so that it can share the same thread pool as OpenBLAS and FFTW).

It is also important in the longer run to support distributed-memory parallelism, but that is probably on a back burner for now. Julia can call MPI, of course (like any C library), but Julia also has its own distributed-memory and distributed-array interface and figuring out how that should interact with MPI and PETSc distributed arrays is a longer-run problem.

jedbrown commented 11 years ago

Good to hear that callbacks are fully supported now. That was @BarrySmith's roadblock with petsc.jl: nonlinear solvers need callbacks. Let us know if there is anything specific that you'd like our assistance with. How would you prefer to package a PETSc interface? It could be in the PETSc repository, in the Julia repository, or as an independent package.

stevengj commented 11 years ago

Probably it would make the most sense as a separate repository (hosted wherever is convenient for you), but pointed to from Julia's package-system metadata so that users can install it with Pkg.install("PETSc").

ViralBShah commented 11 years ago

@jedbrown Welcome! It would be great if the discussion on integration with Julia can continue here or on the julia mailing lists, as a number of people are interested and already on the list. Your proposal, which is essentially what @stevengj had suggested, is certainly the right way to go.

MPI co-existence is hopefully not too far away. We would love to get distributed array capabilities by hooking into PetSc, and using Elemental that way. Although we will have the SuiteSparse solvers directly plugged in, we would prefer to get all the other sparse direct solvers and various iterative solvers through PetSc.

Clang.jl can help generate much of the interface to PetSc quickly, and help us focus on the Julia-facing APIs.

MySchizoBuddy commented 11 years ago

Off-topic, but Jack Dongarra has published an updated survey of open-source linear algebra packages: http://www.netlib.org/utk/people/JackDongarra/la-sw.html

stevengj commented 11 years ago

I've started working on some improved PETSc wrappers at stevengj/PETSc.jl. They don't really do much yet, I'm just getting a feel for how things are working.

@jedbrown, one thing that would be helpful would be a way to find out the PETSc configuration at runtime (without parsing the header file), in order to determine the correct types for PetscReal, PetscScalar, and PetscInt (all of which are determined by configure-time options). Is there any way to get this information from a library call somewhere?

BarrySmith commented 11 years ago

http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscDataTypeGetSize.html

For example

PetscDataTypeGetSize(PETSC_INT,&sz);
PetscDataTypeGetSize(PETSC_REAL,&sz);

You can see if PETSc is built for complex with

PetscDataTypeGetSize(PETSC_REAL,&szreal);
PetscDataTypeGetSize(PETSC_SCALAR,&szscalar);
if (szreal == szscalar) then you are using real values; else you are using complex.

Let us know if you have any difficulties.

Barry


stevengj commented 11 years ago

@BarrySmith, perfect, I had missed that function!

stevengj commented 11 years ago

@BarrySmith, that function doesn't seem to provide sizes for PetscReal, although PetscScalar is apparently given by the PETSC_COMPLEX constant. How am I to tell whether PETSc was configured with single precision or double precision, for example? (I can apparently tell whether it was configured for real or complex scalars by the fact that PetscDataTypeGetSize throws an error for PETSC_COMPLEX if scalars are real.)

jedbrown commented 11 years ago

@stevengj What makes you say that? Just pass PETSC_SCALAR or PETSC_REAL. The header petscsys.h defines them to PETSC_DOUBLE, PETSC_FLOAT, PETSC_COMPLEX, etc., as necessary. Or do you not have access to those definitions?

@BarrySmith We could make PETSC_REAL and PETSC_SCALAR variables instead of macros, but then we couldn't use them in switch statements. Should we store this configuration information some other way?

stevengj commented 11 years ago

@jedbrown, PETSC_SCALAR and PETSC_REAL are preprocessor macros, so they aren't available at runtime if you are just linking libpetsc. I could try to find petscsys.h and parse it, but that's what I'm trying to avoid.

Another alternative for you would be to define petsc_real and petsc_scalar global variables that mirror the values of the macros.

jedbrown commented 11 years ago

@stevengj How do you have access to the enum values? If we add something to the enum, will it break the code that depended on those values? (Of course doing this would change the ABI, so it only happens in major releases.) FWIW, I'd probably rather provide a string-to-enum translation than a bunch of symbols that mirror the enum.

Meanwhile, lots of libraries use compile-time constants so this seems like a recurring issue. How would Julia use MPI, for example? (MPI_INT is a macro, etc.)

BarrySmith commented 11 years ago

@stevengj Following on what Jed said we should do the following.

1) make PETSC_SCALAR and PETSC_REAL distinct enum values from all the other ones we currently have.

2) make a function that maps from string name to enum value for these values

We have most of the infrastructure for this already so this is easy.

jedbrown commented 11 years ago

I don't think we need or want (1). Just do (2), where "real" and "scalar" just map to the appropriate fully-resolved values.

StefanKarpinski commented 11 years ago

How would Julia use MPI, for example? (MPI_INT is a macro, etc.)

This is an ongoing problem but it strikes me as a library design issue. Well-designed libraries intended to be used as shared libraries should be callable without having to write a C wrapper that includes their header files. In particular, this means that a well-designed library shouldn't have essential functionality that is only available through macros. Otherwise providing a shared library is just a farce.

BarrySmith commented 11 years ago

@jedbrown I was thinking of (1) making them unique so that in PETSc we could go in both directions, from string to enum and back to string (which we can't do if real and scalar are not unique enums).

Maybe this is not important?

BarrySmith commented 11 years ago

@jedbrown If I did (1), then I would have the full list of strings in PetscDataTypes[] and could use that exclusively to implement the new string-to-enum function. Without (1) I have to special-case the "real" and "scalar" cases or duplicate all the string names somewhere to do the mapping; less desirable code, I believe.

jedbrown commented 11 years ago

@BarrySmith Our switches will be horribly messy cpp beasts if we keep the enum values distinct. I would just translate string to enum and put the generic names last so that we never get them when searching in the other direction. I don't think there is any value in retaining the context of whether the user referred to the type as PetscScalar or the more specific name.

@StefanKarpinski Well, I would say the original value in shared libraries is being able to upgrade the library for a period of time without needing to recompile client code. Different projects set different conditions under which a binary interface remains stable. (Often a major release, whatever that means.) Macros (value and function-style) and inline functions must indeed remain stable for the shared library to be stable, but this is different from asking that the headers avoid the preprocessor, enums, and typedefs entirely.

If the next release of PETSc adds an argument to one of its functions, how will we know to update the Julia interface? The header is what normally allows that type checking, but if you don't use it, the situation is as bad as Fortran 77.

stevengj commented 11 years ago

@jedbrown, I access the enum values by copying them into equivalent constant declarations in the Julia source. Yes, this depends on you not breaking the ABI, but this is true of everything else as well (e.g. I depend on you not changing the function signatures). You can add new enum values without a problem, as long as you don't change the old ones (just add the new ones onto the end).

In some cases (e.g. the MPI library) we may have no choice but to parse the header file or to write a little C glue, but this is something I only do as a last resort.

stevengj commented 11 years ago

@jedbrown, yes, if you break your ABI, then all non-C code will break until it is updated, and compiled C code will break until it is recompiled. This is why it is not a good idea to break the ABI. If you do break it, please announce it prominently in the NEWS, and we will try to track it. Not breaking the ABI casually is part of quality library maintenance.

(Modern Fortran is not really in a better situation, since it can't directly use the C header files, it is just that you guys are updating your Fortran interface declarations in sync with your C ABI changes rather than relying on someone else doing it.)

If you have to add a parameter to one of your functions, the best thing is to change the function name in the ABI, so that compiled code (including Julia, but also including compiled C code) will get a link error rather than producing crashes or mysterious results.

stevengj commented 11 years ago

There are two basic problems with making compile-time configuration options, like the type of PetscReal, available only as preprocessor defines.

In a dynamic language like Julia, it is easy for us to load differently compiled versions of Petsc on the fly, but only if there is an easy way to interrogate the library to determine the ABI to use (e.g. for the real and scalar types).

BarrySmith commented 11 years ago

@stevengj @jedbrown I have added PetscDataTypeFromString() and the test example src/sys/examples/tutorials/ex11.c in the branch barry/petscdatatypefromstring

Please let us know if there is any problem.

@jedbrown I do not want to add "scalar" and "real" to the end of the PetscDatatypes[] array, because the PETSc standard for the enum string list dictates what goes in that array and in what order, and treating PetscDatatypes[] differently from all the other XXXs[] is not good.

BarrySmith commented 11 years ago

@stevengj You wrote "I access the enum values by copying them into equivalent constant declarations in the Julia source"; not manually, I hope. Since this part is done at Julia compile time, why can't you just use the PETSc includes? I understand not using the PETSc includes at run time. Anyway, for the Julia runtime, with respect to PetscDatatypes you will be using strings, so the enum values don't matter.
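From the Julia side, that might look roughly like the following sketch (the library name, the accepted strings, and the Cint mappings for PetscDataType/PetscBool are assumptions, and PetscDataTypeFromString() exists only in the branch mentioned above at this point):

```julia
const libpetsc = "libpetsc"   # assumed library name

function petsc_datatype(name::AbstractString)
    dtype = Ref{Cint}(0)
    found = Ref{Cint}(0)
    ccall((:PetscDataTypeFromString, libpetsc), Cint,
          (Cstring, Ref{Cint}, Ref{Cint}), name, dtype, found)
    found[] != 0 || error("PETSc does not recognize the datatype name \"$name\"")
    return dtype[]
end

function petsc_sizeof(name::AbstractString)
    sz = Ref{Csize_t}(0)
    ccall((:PetscDataTypeGetSize, libpetsc), Cint,
          (Cint, Ref{Csize_t}), petsc_datatype(name), sz)
    return Int(sz[])
end

# Deduce the configured types without parsing any headers:
const PetscInt    = petsc_sizeof("int")  == 8 ? Int64 : Int32
const PetscReal   = petsc_sizeof("real") == 8 ? Float64 : Float32
const PetscScalar = petsc_sizeof("scalar") == 2 * petsc_sizeof("real") ?
                    Complex{PetscReal} : PetscReal
```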

jedbrown commented 11 years ago

We generate F90 interfaces from the C code, so that can be caught at compile time. The Python interface (petsc4py) uses Cython, which actually includes the headers, so we get compile-time checking. Any system that does not use the header to match calling sequences relies on by-hand maintenance of the association. ABI stability is a compromise of many things, and unfortunately, decade-scale binary stability is only maintainable when the interface is thin (i.e., there is a clear separation between things that are done in a library and things that are done by users) or when the problem domain is extremely mature (new algorithms aren't being developed, machines aren't changing too much, etc.).

PETSc has at least two important groups of users: those that arrive wanting a "black-box" solver, which for many applications evolves into a mildly customized solver, and those doing algorithmic research to develop better methods for various application areas. Note that the difference between a good method and an old method can be significantly different asymptotics, leading to orders-of-magnitude differences in performance, or to just plain not converging. Since methods are evolving, we want to give the algorithm developers (which include ourselves) the best possible set of tools. We take it as a principle that everything that can be implemented inside the PETSc library can also be implemented in user code and distributed as a binary plugin. There are few libraries in any domain that have a plugin infrastructure like that and an ABI that never changes.

Note that core functions used by the "black box" crowd almost never change. If that's all you want Julia to bind to, most of the issues go away. But if you want to be able to implement new multilevel solvers, new spatial discretizations, and new multiscale time integrators as plugin components for PETSc, then you'll probably rub against interfaces that change occasionally.