When wrapping extractions in an MPI python program, it is beneficial to load the PSF from disk in the root process and then broadcast it to the other processes (rather than having dozens of processes read the file concurrently).
This works fine, for example, with the spotgrid PSF:
```python
import cPickle as pickle
from specter.psf import load_psf

spotpsf = load_psf('/project/projectdirs/desi/software/edison/desimodel/master/data/specpsf/psf-b.fits')
spotpkl = pickle.dumps(spotpsf)
```
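For context, the broadcast amounts to pickling on the root process and unpickling on each receiver (mpi4py's `comm.bcast` does this internally for generic Python objects). A minimal sketch with a stand-in object whose state is plain data, as the spotgrid PSF's evidently is; the `SpotGridLike` class here is hypothetical, for illustration only:

```python
import pickle

class SpotGridLike(object):
    """Hypothetical stand-in for a PSF whose state is plain data."""
    def __init__(self, wavemin, wavemax):
        self.wavemin = wavemin
        self.wavemax = wavemax

# On the root process: serialize the loaded PSF to a byte string.
psf = SpotGridLike(3570.0, 5950.0)
buf = pickle.dumps(psf)

# On every other process: reconstruct an equivalent copy from the bytes.
received = pickle.loads(buf)
assert received.wavemin == psf.wavemin
```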
However, when trying the same with a SPECEX gauss-hermite PSF:
```python
ghpsf = load_psf('/project/projectdirs/desi/spectro/redux/dc3c/exposures/20160408/00000002/psf-b0-00000002.fits')
ghpkl = pickle.dumps(ghpsf)
```

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/global/common/edison/contrib/desi/edison/hpcports_gnu-8.2/python-2.7.11_36ed69e6-8.2/lib/python2.7/copy_reg.py", line 70, in _reduce_ex
    raise TypeError, "can't pickle %s objects" % base.__name__
TypeError: can't pickle function objects
```
Some googling suggests this kind of error can occur when one tries to pickle a class rather than an instance of a class, but I have not explored this further yet.
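For what it's worth, the same kind of error can be reproduced in plain Python by pickling an instance that stores a locally defined function as an attribute, which may be what the gauss-hermite PSF does internally; the `GaussHermiteLike` class below is hypothetical:

```python
import pickle

class GaussHermiteLike(object):
    """Hypothetical stand-in: stores a locally defined function."""
    def __init__(self):
        def evaluate(x):
            return 2.0 * x
        # Instances holding non-module-level functions cannot be pickled,
        # because pickle serializes functions by reference to an importable name.
        self._eval = evaluate

obj = GaussHermiteLike()
try:
    pickle.dumps(obj)
except Exception as err:  # TypeError on Python 2; AttributeError/PicklingError on Python 3
    print('pickle failed: %s' % err)
```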