prerakgarg07 opened this issue 2 years ago
I have figured out a workaround. I want to run it by you to make sure that this makes sense.
As an example, suppose I want peeled images for a galaxy with 4500 wavelength points between 0.3 and 15 microns. I first divide them into chunks of 500 wavelength points, with chunk 1 covering 0.3 to 0.45 microns, chunk 2 covering 0.45 to 0.7 microns, and so on. Then I call add_peeled_images() for each chunk, setting the wavelength range accordingly:
```python
lam_lim = [0.3, 0.47, 0.7, ..., 15]  # chunk boundaries in microns

# Add one peeled image group per wavelength chunk
for k in range(len(lam_lim) - 1):
    image = m_imaging.add_peeled_images(sed=False, image=True)
    image.set_wavelength_range(500, lam_lim[k], lam_lim[k + 1])

# Write the input file and run the model once, with all groups attached
m_imaging.write(model.inputfile + '.image', overwrite=True)
m_imaging.run(model.outputfile + '.image', mpi=True, n_processes=par.n_MPI_processes, overwrite=True)
```
By doing this, Hyperion saves the information for each chunk in a different group in the rtout file, which I can then access by passing the group number to the get_image() function. Does this approach seem reasonable? I want to make sure it isn't breaking anything in the background that I should be worried about.
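For reference, a minimal sketch of how the per-chunk groups could be read back with ModelOutput.get_image(); the number of chunks follows from 4500/500 = 9, and the distance and unit choices are placeholders:

```python
from hyperion.model import ModelOutput
from hyperion.util.constants import pc

mo = ModelOutput(model.outputfile + '.image')  # same file the run above wrote

chunks = []
for k in range(9):  # one group per wavelength chunk added above
    img = mo.get_image(group=k, inclination=0, distance=10e6 * pc, units='mJy')
    chunks.append(img)

# img.wav holds the 500 wavelengths of each chunk, img.val the image cube
```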
Thanks
I am trying to run binned images on 5000 wavelength points, and the code throws an error when writing out the image file.
The error is thrown because the maximum chunk size allowed by HDF5 is 4 GB. Is there a way to set a smaller chunk size?
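For what it's worth, plain h5py lets you pick the chunk shape explicitly when a dataset is created, which is the quantity the 4 GB limit applies to. This is just a generic illustration of that HDF5 feature, not Hyperion's actual writing code, and the cube dimensions are made up:

```python
import numpy as np
import h5py

n_wav, ny, nx = 5000, 400, 400  # hypothetical image cube dimensions

with h5py.File('images.h5', 'w') as f:
    # Explicit chunk shape: one wavelength slice per chunk (~1.3 MB),
    # far below the 4 GB-per-chunk limit that HDF5 enforces.
    dset = f.create_dataset('images', shape=(n_wav, ny, nx), dtype='f8',
                            chunks=(1, ny, nx), compression='gzip')
    for k in range(n_wav):
        dset[k] = np.zeros((ny, nx))  # write one wavelength slice at a time
```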
The full error traceback is as follows: