Not immediately, no. That's really strange. But it sounds most likely that this is happening somewhere in __getitem__.
@low-sky can't reproduce the errors when reading in ALMA data (http://jvo.nao.ac.jp/portal/alma/archive.do?action=dataset.info&datasetId=ALMA01003453), which fails on read-in for me. I think this is a problem with my system. I'll close this issue and hopefully it doesn't come up for anyone else.
I'm going to continue in this issue in case this crops up for someone else.
Problem persists after rebuilding my anaconda distribution.
I can produce a similar error when slicing a SpectralCube (cube[chan]) using multiprocessing.imap_unordered if the chunksize is too large (regardless of the number of processes spawned). This does NOT occur when slicing filled_data. It seems that there is a memory leak in __getitem__, but this is somehow system dependent.
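For reference, here is a minimal sketch of the kind of loop that triggers it for me (not my exact script; the file name, worker count, and chunksize are placeholders):

from multiprocessing import Pool
from spectral_cube import SpectralCube

cube = SpectralCube.read('my_cube.fits')  # placeholder file name

def get_plane(chan):
    # slices through __getitem__ (cube[chan]); cube.filled_data[chan] does not trigger the error
    return cube[chan]

if __name__ == '__main__':
    pool = Pool(processes=4)
    # a large chunksize is what triggers the buffer overflow on my system
    for plane in pool.imap_unordered(get_plane, range(cube.shape[0]), chunksize=100):
        pass
    pool.close()
    pool.join()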
I'm getting a similar error message when accessing the header:
>>> from spectral_cube import SpectralCube
>>> OneOneFile = 'Per2_VLA_NH3_11_mscale.fits'
>>> cube11sc = SpectralCube.read(OneOneFile)
PC01_01 = 1.000000000000E+00
indices in parameterized keywords must not have leading zeroes.
PC02_01 = 0.000000000000E+00
indices in parameterized keywords must not have leading zeroes.
PC03_01 = 0.000000000000E+00
indices in parameterized keywords must not have leading zeroes.
PC01_02 = 0.000000000000E+00
indices in parameterized keywords must not have leading zeroes.
PC02_02 = 1.000000000000E+00
indices in parameterized keywords must not have leading zeroes.
PC03_02 = 0.000000000000E+00
indices in parameterized keywords must not have leading zeroes.
PC01_03 = 0.000000000000E+00
indices in parameterized keywords must not have leading zeroes.
PC02_03 = 0.000000000000E+00
indices in parameterized keywords must not have leading zeroes.
PC03_03 = 1.000000000000E+00
indices in parameterized keywords must not have leading zeroes.
>>> import spectral_cube
>>> spectral_cube.__version__
Out[4]: u'0.3.3.dev1252'
>>> cube11sc.unit
Out[5]: Unit("Jy")
>>> cube11sc.header
*** buffer overflow detected ***: /alma/home/jpineda/anaconda/envs/linefit/bin/python terminated
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x7338f)[0x7fce9e34238f]
/lib/x86_64-linux-gnu/libc.so.6(__fortify_fail+0x5c)[0x7fce9e3d9c9c]
/lib/x86_64-linux-gnu/libc.so.6(+0x109b60)[0x7fce9e3d8b60]
/lib/x86_64-linux-gnu/libc.so.6(+0x109069)[0x7fce9e3d8069]
/lib/x86_64-linux-gnu/libc.so.6(_IO_default_xsputn+0xbc)[0x7fce9e34a70c]
/lib/x86_64-linux-gnu/libc.so.6(_IO_vfprintf+0x1cd5)[0x7fce9e31a9c5]
/lib/x86_64-linux-gnu/libc.so.6(__vsprintf_chk+0x84)[0x7fce9e3d80f4]
/lib/x86_64-linux-gnu/libc.so.6(__sprintf_chk+0x7d)[0x7fce9e3d804d]
/alma/home/jpineda/.local/lib/python2.7/site-packages/astropy-1.2.1-py2.7-linux-x86_64.egg/astropy/wcs/_wcs.so(+0x6f419)[0x7fce87a58419]
/alma/home/jpineda/.local/lib/python2.7/site-packages/astropy-1.2.1-py2.7-linux-x86_64.egg/astropy/wcs/_wcs.so(wcshdo+0x54e3)[0x7fce87a5f123]
/alma/home/jpineda/.local/lib/python2.7/site-packages/astropy-1.2.1-py2.7-linux-x86_64.egg/astropy/wcs/_wcs.so(+0x8108b)[0x7fce87a6a08b]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x8942)[0x7fce9f0bc5a2]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x89e)[0x7fce9f0bd1ce]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x8596)[0x7fce9f0bc1f6]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x89e)[0x7fce9f0bd1ce]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(+0x797e1)[0x7fce9f0387e1]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(PyObject_Call+0x53)[0x7fce9f008dc3]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(PyObject_CallFunction+0xac)[0x7fce9f00c08c]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(_PyObject_GenericGetAttrWithDict+0x17b)[0x7fce9f0540cb]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x3b05)[0x7fce9f0b7765]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x89e)[0x7fce9f0bd1ce]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(PyEval_EvalCode+0x32)[0x7fce9f0bd2e2]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x788d)[0x7fce9f0bb4ed]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x89e)[0x7fce9f0bd1ce]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x8596)[0x7fce9f0bc1f6]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x89e)[0x7fce9f0bd1ce]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x8596)[0x7fce9f0bc1f6]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x89e)[0x7fce9f0bd1ce]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x8596)[0x7fce9f0bc1f6]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x89e)[0x7fce9f0bd1ce]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x8596)[0x7fce9f0bc1f6]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x89e)[0x7fce9f0bd1ce]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x8596)[0x7fce9f0bc1f6]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x89e)[0x7fce9f0bd1ce]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x8596)[0x7fce9f0bc1f6]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x89e)[0x7fce9f0bd1ce]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(+0x798e8)[0x7fce9f0388e8]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(PyObject_Call+0x53)[0x7fce9f008dc3]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x62a7)[0x7fce9f0b9f07]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x89e)[0x7fce9f0bd1ce]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x8596)[0x7fce9f0bc1f6]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x89e)[0x7fce9f0bd1ce]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(PyEval_EvalCode+0x32)[0x7fce9f0bd2e2]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(PyRun_FileExFlags+0xb0)[0x7fce9f0dd960]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(PyRun_SimpleFileExFlags+0xef)[0x7fce9f0ddb3f]
/alma/home/jpineda/anaconda/envs/linefit/bin/../lib/libpython2.7.so.1.0(Py_Main+0xca4)[0x7fce9f0f3484]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5)[0x7fce9e2f0ec5]
/alma/home/jpineda/anaconda/envs/linefit/bin/python[0x400649]
The FITS header is found here: https://file.io/5itaT7
Any idea what is going on?
Could you run fitscheck on the file and see if there is any problem with it?
Does calling cube11sc._nowcs_header still cause a crash?
I had encountered something similar (different from what's above) opening a cube with an older version of astropy. The issue there was an older version of wcslib. But you're using the latest release version already.
@keflavich this is what I got from fitscheck:
>>> fitscheck Per2_VLA_NH3_11_mscale.fits
MISSING 'Per2_VLA_NH3_11_mscale.fits' .. Checksum not found in HDU #0
1 errors
@e-koch Actually cube11sc._nowcs_header does not crash!
your file is 404: {"success":false,"error":404,"message":"Not Found"}
but I still want to access one of the keywords: cube11sc.header['NAXIS3'] ...
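As a sketch of a possible workaround rather than a fix: the keyword can be read straight from the file with astropy, and it may also survive in the stripped header that does not crash:

from astropy.io import fits
nchan = fits.getheader(OneOneFile)['NAXIS3']  # read the keyword directly from the FITS file

nchan = cube11sc._nowcs_header['NAXIS3']  # may also work, if NAXIS3 is kept in the WCS-stripped header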
@keflavich I guess the file vanished, here is a more permanent link
oh... I can't test spectral-cube reading on a header alone
you can get the data here ... but it is a 900MB file, so it will take some time for you to download
aye, true. Can you tell me what NAXIS1/2/3 are supposed to be?
here is the beginning of the header
SIMPLE = T /Standard FITS
BITPIX = -32 /Floating point (32 bit)
NAXIS = 3
NAXIS1 = 512
NAXIS2 = 512
NAXIS3 = 900
EXTEND = T
BSCALE = 1.000000000000E+00 /PHYSICAL = PIXEL*BSCALE + BZERO
BZERO = 0.000000000000E+00
BMAJ = 9.152630302641E-04
BMIN = 7.598075601790E-04
BPA = 8.740307617188E+01
BTYPE = 'Intensity'
OBJECT = 'Per2 '
BUNIT = 'Jy/beam ' /Brightness (pixel) unit
EQUINOX = 2.000000000000E+03
this works for me:
from astropy.io import fits
import numpy as np
from spectral_cube import SpectralCube

hdu = fits.PrimaryHDU(data=np.empty([900, 512, 512]), header=header)  # header: the FITS header posted above
cube = SpectralCube.read(hdu)
cube.header
I suspect either a problem with wcslib or with your data.
ok, I ran the same piece of code you posted... and I get the same error. So I guess the problem is with my wcslib.
Does wcslib come with astropy?
yes. what version of astropy do you have?
In [1]: import astropy
In [2]: astropy.__version__
Out[2]: u'1.2.1'
try upgrading to dev. might be fixed in 1.2.2
do you know how to install the dev version in conda?
fixed! the problem was that the latest version of astropy was installed by pip as a dependency of another package :)
replying to your earlier message, you can use pip from within conda
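Something along these lines should do it from inside the active conda environment (the URL below is the standard astropy GitHub repository; adjust the source/branch as needed):

pip install git+https://github.com/astropy/astropy.git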
I'm getting this

*** buffer overflow detected ***: /home/eric/anaconda/bin/python terminated

when using large VaryingResolutionSpectralCubes. This happens regardless of whether memmap is used. Also, I can't reproduce the error with the multiple beam testing cube (vda_beams.fits).

Accessing 2D planes in the data works via cube.filled_data[100], as does slicing out a 1D spectrum: cube[:, 500, 500]. But directly slicing the cube to return a Slice (cube[200]) gives the wonderful looking:

The issue appears to only be 2D slices that should return a Slice. The moment calculations work, slicing out a 3D subcube works, and using a version of the same data without the beam table extension in the FITS file works fine (so it's a normal SpectralCube). A short sketch of what works vs. what crashes follows below.

@keflavich - Any idea what's going on?
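For concreteness, the sketch mentioned above (the cube file name is a placeholder for a large VaryingResolutionSpectralCube):

from spectral_cube import SpectralCube

cube = SpectralCube.read('big_vrsc_cube.fits')  # placeholder file name

plane_ok = cube.filled_data[100]   # 2D plane via filled_data: fine
spec_ok = cube[:, 500, 500]        # 1D spectrum: fine
subcube_ok = cube[100:200]         # 3D subcube: fine
mom0_ok = cube.moment0()           # moment calculations: fine
plane_bad = cube[200]              # 2D Slice via __getitem__: buffer overflow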