JuliaParallel / PETSc.jl

Julia wrappers for the PETSc library

MPI Friendly GC #178

Open jkozdon opened 2 years ago

jkozdon commented 2 years ago

We should consider implementing what GridapPETSc.jl has done for GC with MPI objects.

Basically, the Julia finalizer registers the object for destruction with PetscObjectRegisterDestroy; see for example PETScLinearSolverNS.
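
For concreteness, here is a minimal sketch of that pattern. PetscObjWrapper, register_lazy_destroy!, and libpetsc are hypothetical names used only for illustration (libpetsc standing for a global constant with the PETSc shared library); the real wrappers in PETSc.jl / GridapPETSc.jl look different:

# Minimal sketch with hypothetical names, not the actual PETSc.jl API.
mutable struct PetscObjWrapper
  ptr::Ptr{Cvoid}  # underlying PETSc object handle (Vec, Mat, KSP, ...)
end

function register_lazy_destroy!(obj::PetscObjWrapper)
  finalizer(obj) do o
    if o.ptr != C_NULL
      # Instead of destroying the MPI-distributed object from the finalizer,
      # only register it; the actual destruction is deferred until
      # PetscObjectRegisterDestroyAll() or PetscFinalize() runs.
      ccall((:PetscObjectRegisterDestroy, libpetsc), Cint, (Ptr{Cvoid},), o.ptr)
      o.ptr = C_NULL
    end
  end
  return obj
end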

Of course, this means the object is not destroyed until PETSc is finalized. If the user wants to destroy things sooner, they can call the function gridap_petsc_gc:

# In an MPI environment context,
# this function has global collective semantics.
function gridap_petsc_gc()
  GC.gc()
  @check_error_code PETSC.PetscObjectRegisterDestroyAll()
end

By first calling GC.gc(), all objects are properly registered via PetscObjectRegisterDestroy, and the call to PetscObjectRegisterDestroyAll then actually destroys them.

The only change I would make is to still allow manual destruction of objects, if this is desired for performance reasons (though I don't know if this is ever really needed).
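
As a sketch of what that opt-in manual path could look like, reusing the hypothetical PetscObjWrapper and libpetsc from the sketch above and assuming the wrapped object is a Vec (every rank would have to call it collectively):

# Hypothetical manual destruction for a wrapped Vec: all ranks call this at
# the same point, so the collective VecDestroy is safe; the finalizer later
# finds a C_NULL handle and does nothing.
function manual_destroy_vec!(obj::PetscObjWrapper)
  if obj.ptr != C_NULL
    r = Ref(obj.ptr)
    ccall((:VecDestroy, libpetsc), Cint, (Ptr{Ptr{Cvoid}},), r)
    obj.ptr = C_NULL
  end
  return nothing
end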

h/t: @amartinhuertas in https://github.com/JuliaParallel/PETSc.jl/issues/146#issuecomment-987425710

amartinhuertas commented 2 years ago

@jkozdon Note that there is a caveat here with the use of PetscObjectRegisterDestroy. PETSc holds a global data structure of objects registered for lazy destruction, and it has a maximum capacity: 256 by default (although it can be increased via the corresponding CPP macro at configuration time). If you exceed that size, an error is produced (see https://github.com/gridap/GridapPETSc.jl/pull/42 for more details). Our workaround is to inject calls to gridap_petsc_gc() at strategic points within GridapPETSc.jl. I know it is far from ideal, but this is the best idea that came to my mind given these constraints.
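
As an illustration of that workaround, the usage pattern is roughly the following (nsteps, solve_step!, and state are hypothetical placeholders; the important point is that every MPI rank reaches the gridap_petsc_gc() call):

# The lazy-destroy registry holds 256 entries by default, so flush it
# collectively at points all ranks are guaranteed to reach, e.g. once per
# outer iteration.
for step in 1:nsteps
  solve_step!(state)  # hypothetical work that creates and drops PETSc objects
  gridap_petsc_gc()   # GC.gc() registers dead objects, then they are destroyed
end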

jkozdon commented 2 years ago

Good to know. Thanks!