fermiPy / fermipy

Fermi-LAT Python Analysis Framework
http://fermipy.readthedocs.org/
BSD 3-Clause "New" or "Revised" License

Problem at the end of "gta.setup()": "TypeError: buffer is too small for requested array" #569

Closed: JoaoPaiva21 closed this issue 7 months ago

JoaoPaiva21 commented 7 months ago

Hi, I'm getting an error at the beginning of a Fermipy analysis, running on Windows with WSL 2 (Ubuntu) and 10 GB of RAM allocated to WSL 2. It happens in gta.setup(); from what I can tell, it occurs in gtsrcmaps, when that step tries to finish. Everything before this step seems fine (it is able to produce the ltcube file, for example).
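For reference, a minimal sketch of the workflow that triggers the error, following the fermipy quickstart (the file name config.yaml is a placeholder for whatever configuration the analysis actually uses):

from fermipy.gtanalysis import GTAnalysis

# config.yaml is a placeholder for the analysis configuration file.
gta = GTAnalysis('config.yaml', logging={'verbosity': 3})
gta.setup()  # runs gtselect/gtltcube/gtsrcmaps, then builds the binned likelihood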

Here is the tail of the final step (srcmaps) and the error itself:

.....
2024-02-23 17:52:00 INFO    GTBinnedAnalysis.run_gtapp(): Generating SourceMap for 3FGL J1801.5+4403 24....................!
2024-02-23 17:53:12 INFO    GTBinnedAnalysis.run_gtapp(): Generating SourceMap for galdiff 24....................!
2024-02-23 17:54:05 INFO    GTBinnedAnalysis.run_gtapp(): Generating SourceMap for isodiff 24....................!
2024-02-23 17:56:14 INFO    GTBinnedAnalysis.run_gtapp(): Finished gtsrcmaps. Execution time: 300.01 s
2024-02-23 17:56:14 INFO    GTBinnedAnalysis.setup(): Finished setup for component 00
2024-02-23 17:56:14 INFO    GTBinnedAnalysis._create_binned_analysis(): Creating BinnedAnalysis for component 00.
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[2], line 1
----> 1 gta.setup()

File ~/miniconda3/envs/fermipy/lib/python3.9/site-packages/fermipy/gtanalysis.py:1090, in GTAnalysis.setup(self, init_sources, overwrite, **kwargs)
   1087     c.setup(overwrite=overwrite)
   1089 # Create likelihood
-> 1090 self._create_likelihood()
   1092 # Determine tmin, tmax
   1093 for i, c in enumerate(self._components):

File ~/miniconda3/envs/fermipy/lib/python3.9/site-packages/fermipy/gtanalysis.py:1117, in GTAnalysis._create_likelihood(self, srcmdl)
   1115 self._like = SummedLikelihood()
   1116 for c in self.components:
-> 1117     c._create_binned_analysis(srcmdl)
   1118     self._like.addComponent(c.like)
   1120 self.like.model = self.like.components[0].model

File ~/miniconda3/envs/fermipy/lib/python3.9/site-packages/fermipy/gtanalysis.py:5499, in GTBinnedAnalysis._create_binned_analysis(self, xmlfile, **kwargs)
   5496     self.like.logLike.saveSourceMaps(str(self.files['srcmap']))
   5498 # Apply exposure corrections
-> 5499 self._scale_srcmap(self._src_expscale)

File ~/miniconda3/envs/fermipy/lib/python3.9/site-packages/fermipy/gtanalysis.py:5539, in GTBinnedAnalysis._scale_srcmap(self, scale_map, check_header, names)
   5535     hdu.data *= scale / old_scale
   5536     hdu.header['EXPSCALE'] = (scale,
   5537                               'Exposure correction applied to this map')
-> 5539 srcmap.writeto(self.files['srcmap'], overwrite=True)
   5540 srcmap.close()
   5542 # Force reloading the map from disk

File ~/miniconda3/envs/fermipy/lib/python3.9/site-packages/astropy/io/fits/hdu/hdulist.py:1021, in HDUList.writeto(self, fileobj, output_verify, overwrite, checksum)
   1019         for hdu in self:
   1020             hdu._prewriteto(checksum=checksum)
-> 1021             hdu._writeto(hdulist._file)
   1022             hdu._postwriteto()
   1023 finally:

File ~/miniconda3/envs/fermipy/lib/python3.9/site-packages/astropy/io/fits/hdu/base.py:710, in _BaseHDU._writeto(self, fileobj, inplace, copy)
    707     dirname = None
    709 with _free_space_check(self, dirname):
--> 710     self._writeto_internal(fileobj, inplace, copy)

File ~/miniconda3/envs/fermipy/lib/python3.9/site-packages/astropy/io/fits/hdu/base.py:716, in _BaseHDU._writeto_internal(self, fileobj, inplace, copy)
    714 if not inplace or self._new:
    715     header_offset, _ = self._writeheader(fileobj)
--> 716     data_offset, data_size = self._writedata(fileobj)
    718     # Set the various data location attributes on newly-written HDUs
    719     if self._new:

File ~/miniconda3/envs/fermipy/lib/python3.9/site-packages/astropy/io/fits/hdu/base.py:663, in _BaseHDU._writedata(self, fileobj)
    658         size += len(padding)
    659 else:
    660     # The data has not been modified or does not need need to be
    661     # rescaled, so it can be copied, unmodified, directly from an
    662     # existing file or buffer
--> 663     size += self._writedata_direct_copy(fileobj)
    665 # flush, to make sure the content is written
    666 fileobj.flush()

File ~/miniconda3/envs/fermipy/lib/python3.9/site-packages/astropy/io/fits/hdu/base.py:692, in _BaseHDU._writedata_direct_copy(self, fileobj)
    682 def _writedata_direct_copy(self, fileobj):
    683     """Copies the data directly from one file/buffer to the new file.
    684 
    685     For now this is handled by loading the raw data from the existing data
   (...)
    690     If this proves too slow a more direct approach may be used.
    691     """
--> 692     raw = self._get_raw_data(self._data_size, "ubyte", self._data_offset)
    693     if raw is not None:
    694         fileobj.writearray(raw)

File ~/miniconda3/envs/fermipy/lib/python3.9/site-packages/astropy/io/fits/hdu/base.py:547, in _BaseHDU._get_raw_data(self, shape, code, offset)
    545     return np.ndarray(shape, dtype=code, buffer=self._buffer, offset=offset)
    546 elif self._file:
--> 547     return self._file.readarray(offset=offset, dtype=code, shape=shape)
    548 else:
    549     return None

File ~/miniconda3/envs/fermipy/lib/python3.9/site-packages/astropy/io/fits/file.py:378, in _File.readarray(self, size, offset, dtype, shape)
    375             else:
    376                 raise
--> 378     return np.ndarray(
    379         shape=shape, dtype=dtype, offset=offset, buffer=self._mmap
    380     )
    381 else:
    382     count = reduce(operator.mul, shape)

TypeError: buffer is too small for requested array
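For context on the mechanism: astropy memory-maps the FITS file and asks NumPy for an array spanning the HDU's data block. If the mapping holds fewer bytes than the header (BITPIX/NAXIS) implies, for example because the file on disk is shorter than expected, NumPy raises exactly this error. A minimal sketch of just the NumPy side, with toy sizes:

import numpy as np

buf = bytes(8)  # stand-in for the memory-mapped FITS data block
# Requesting a 16-byte array from an 8-byte buffer reproduces the error:
np.ndarray(shape=(16,), dtype='ubyte', offset=0, buffer=buf)
# TypeError: buffer is too small for requested array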

I analysed different time periods (a year, several months, one month) and at least one other source, but I got the same result every time. Am I missing something? I followed the installation procedure described on the documentation page (using mamba and pip).

At first I thought my memory was the limiting factor, but I have 10 GB, and I remember running a similar analysis on a computer with native Ubuntu and 8 GB of memory.
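As an aside, a quick way to rule out memory pressure is to check how much memory WSL 2 actually exposes inside the guest; a small sketch reading /proc/meminfo:

# Print the total and currently available memory seen by the Linux guest.
with open('/proc/meminfo') as f:
    for line in f:
        if line.startswith(('MemTotal', 'MemAvailable')):
            print(line.strip())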

JoaoPaiva21 commented 7 months ago

Just an update on the matter: apparently the problem occurred because WSL was out of date. After updating it, fermipy seems to run properly.
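For anyone hitting the same symptom: the WSL update is run from a Windows PowerShell prompt, and the guest's memory limit can be pinned in %UserProfile%\.wslconfig (both are standard, documented WSL options; the 10GB figure just mirrors this setup's allocation):

wsl --update
wsl --shutdown

# %UserProfile%\.wslconfig
[wsl2]
memory=10GB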