ACCarnall / bagpipes

Bagpipes is a state-of-the-art code for generating realistic model galaxy spectra and fitting these to spectroscopic and photometric observations. Users should install with pip, not by cloning the repository.
http://bagpipes.readthedocs.io
GNU General Public License v3.0

Data Fitting issue #45

Closed: siangphillips closed this issue 1 month ago

siangphillips commented 1 year ago

Hi,

I am excited to use Bagpipes - thank you for making this code accessible! I am new to using Bagpipes and am trying to perform a fit on NIRCam observations of a redshift ~ 7 galaxy. I am using filter files from the JWST documentation, in which the filter wavelengths are in Angstroms. This is my input photometry:

FILTER  FLUX [muJy]  FLUX_ERROR [muJy]
F090W   0.046524     0.023233
F115W   0.313166     0.059797
F150W   0.326406     0.061017
F200W   0.387495     0.066461
F277W   0.469915     0.077651
F356W   0.992203     0.112826
F410M   1.503777     0.138910
F444W   0.829922     0.103326
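For reference, photometry in this form is normally passed to bagpipes through a load_data function that returns one (flux, flux_error) row per filter. A minimal sketch under that assumption; the filter-curve paths and the commented-out galaxy construction are illustrative placeholders, not part of the original post:

```python
import numpy as np
# import bagpipes as pipes  # assumed installed via pip

def load_data(ID):
    """Return photometry for one object as a 2D array of
    (flux, flux_error) pairs, both in microjanskys, one row per
    filter in the same order as filt_list."""
    photometry = np.array([
        [0.046524, 0.023233],   # F090W
        [0.313166, 0.059797],   # F115W
        [0.326406, 0.061017],   # F150W
        [0.387495, 0.066461],   # F200W
        [0.469915, 0.077651],   # F277W
        [0.992203, 0.112826],   # F356W
        [1.503777, 0.138910],   # F410M
        [0.829922, 0.103326],   # F444W
    ])
    return photometry

# Hypothetical paths to the JWST filter curve files (wavelengths in
# Angstroms), one file per band, matching the row order above:
# filt_list = ["filters/F090W.dat", "filters/F115W.dat", ...]
# galaxy = pipes.galaxy("0", load_data, filt_list=filt_list,
#                       photometry_exists=True, spectrum_exists=False)
```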

I have managed to run the fit successfully with an exponential SFH model, but when I change the fit instructions by adding nebular emission or using a different SFH model, it fails. Copied below is an example of the errors that I get from these fits.

Any advice for resolving this problem would be appreciated, I have seen similar errors in previous Github issues (eg #9) but I have not been able to fix the issue myself from referring to the previous answers.

Thank you!

ln(ev)=  -7.3335434946919520E-016 +/-  1.3540258024398900E-009
Total Likelihood Evaluations:          400
Sampling finished. Exiting MultiNest

Completed in 4.2 seconds.


---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Input In [25], in <cell line: 3>()
      1 fit = pipes.fit(galaxy, fit_instructions, run='git_test_constant_burst_sfh')
----> 3 fit.fit(verbose=False)

File ~/opt/anaconda3/envs/bagpipesenv/lib/python3.10/site-packages/bagpipes/fitting/fit.py:136, in fit.fit(self, verbose, n_live, use_MPI)
    133 print("\nCompleted in " + str("%.1f" % runtime) + " seconds.\n")
    135 # Load MultiNest outputs and save basic quantities to file.
--> 136 samples2d = np.loadtxt(self.fname + "post_equal_weights.dat")
    137 lnz_line = open(self.fname + "stats.dat").readline().split()
    139 self.results["samples2d"] = samples2d[:, :-1]

File ~/opt/anaconda3/envs/bagpipesenv/lib/python3.10/site-packages/numpy/lib/npyio.py:1338, in loadtxt(fname, dtype, comments, delimiter, converters, skiprows, usecols, unpack, ndmin, encoding, max_rows, quotechar, like)
   1335 if isinstance(delimiter, bytes):
   1336     delimiter = delimiter.decode('latin1')
-> 1338 arr = _read(fname, dtype=dtype, comment=comment, delimiter=delimiter,
   1339             converters=converters, skiplines=skiprows, usecols=usecols,
   1340             unpack=unpack, ndmin=ndmin, encoding=encoding,
   1341             max_rows=max_rows, quote=quotechar)
   1343 return arr

File ~/opt/anaconda3/envs/bagpipesenv/lib/python3.10/site-packages/numpy/lib/npyio.py:999, in _read(fname, delimiter, comment, quote, imaginary_unit, usecols, skiplines, max_rows, converters, ndmin, unpack, dtype, encoding)
    996 data = _preprocess_comments(data, comments, encoding)
    998 if read_dtype_via_object_chunks is None:
--> 999     arr = _load_from_filelike(
   1000         data, delimiter=delimiter, comment=comment, quote=quote,
   1001         imaginary_unit=imaginary_unit,
   1002         usecols=usecols, skiplines=skiplines, max_rows=max_rows,
   1003         converters=converters, dtype=dtype,
   1004         encoding=encoding, filelike=filelike,
   1005         byte_converters=byte_converters)
   1007 else:
   1008     # This branch reads the file into chunks of object arrays and then
   1009     # casts them to the desired actual dtype. This ensures correct
   1010     # string-length and datetime-unit discovery (like arr.astype()).
   1011     # Due to chunking, certain error reports are less clear, currently.
   1012     if filelike:

ValueError: could not convert string '0.521501846754526715-309' to float64 at row 0, column 2.

ACCarnall commented 1 year ago

This kind of error happens whenever there is a problem with the input model or data that results in the likelihood function within the code always returning NaN values. If you make changes to the model or data to try to fix this, you'll need to delete the files MultiNest generated in the pipes/posterior folder during the bad run in order to reset things. If you do that and still get the same error, please post here the model you're trying to fit and I'll let you know if there's anything wrong with it.
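That cleanup step can be scripted. A small sketch, assuming the default pipes/ output layout; the helper name is mine, not part of bagpipes:

```python
from pathlib import Path

def clear_run(run_name, pipes_dir="pipes"):
    """Delete leftover MultiNest output files from a failed run so the
    next fit starts from scratch instead of resuming from a bad state.
    Returns the number of files removed."""
    posterior_dir = Path(pipes_dir) / "posterior" / run_name
    if not posterior_dir.is_dir():
        return 0  # nothing to clean up
    removed = 0
    for path in posterior_dir.iterdir():
        if path.is_file():
            path.unlink()
            removed += 1
    return removed
```

For example, `clear_run("git_test_constant_burst_sfh")` before re-running the failed fit.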

siangphillips commented 1 year ago

Thank you for responding! I'm sorry it has taken me so long to reply with an update. As you suggested, I have been deleting the files generated in pipes/posterior after every failed run, and the error recurs.

I am able to run some fits successfully. This is an example of a fit which ran and produced expected output:

exp = {}
exp["age"] = (0.1, 15.)
exp["tau"] = (0.1, 1.)
exp["massformed"] = (0.1, 11)
exp["metallicity"] = (0., 1)
exp["sfr"] = 18.9

nebular = {}
nebular['logU'] = (-3, -2)

fit_instructions = {}
fit_instructions["redshift"] = 6.854
fit_instructions['exponential'] = exp
fit_instructions["nebular"] = nebular

Other fits, particularly where a burst is included, fail. Here is an example of a fit which fails, for the same reason as the error I posted initially (although the specific ValueError is 'could not convert string '0.521501970767691538-309' to float64 at row 0, column 4.'):

burst = {}
burst["age"] = 8
burst["metallicity"] = (0., 1)
burst["massformed"] = (0., 10)

constant = {}
constant["age"] = 12
constant["agemin"] = 3

dust = {}
dust["type"] = "Calzetti"
dust["Av"] = (0., 5.)

fit_instructions = {}
fit_instructions["redshift"] = 6.854
fit_instructions["burst"] = burst
fit_instructions["constant"] = constant
fit_instructions["dust"] = dust

I see that in the 0_summary.txt file produced in pipes/posterior, the value 0.521501970767691538-309 appears amongst other values which are all formatted with properly signified exponents. I copy the contents of that file below:

0.505966229587793315E+01 0.507775479257107087E+00 0.246627289876341926E+01 0.277371888377242781E+01 0.293075173268171119E+00 0.143699198555776197E+01 0.668139159679412842E+01 0.766310453414916992E+00 0.367748022079467773E+01 0.694995760917663574E+01 0.838348448276519775E+00 0.496512204408645630E+01 -0.100000000000000002+101 0.521501970768758720-309
0.505966229587788163E+01 0.507775479257101314E+00 0.246627289876339262E+01 0.277371888377246556E+01 0.293075173268174338E+00 0.143699198555778063E+01 0.668139159679412842E+01 0.766310453414916992E+00 0.367748022079467773E+01 0.694995760917663574E+01 0.838348448276519775E+00 0.496512204408645630E+01 -0.733354349469195199E-15 0.521501970768758720-309

Similarly, in the 0_.txt file produced in pipes/posterior, the values in some columns have correctly formatted exponents (e.g. 0.250000000000000309E-02) whereas others do not (e.g. -0.104300394153538308-308).
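For what it's worth, these malformed values look like standard Fortran output behaviour: when an exponent needs three digits, the exponent letter is dropped from the fixed-width field, so 0.521501970768758720-309 means 0.5215...E-309 (a denormal produced by the always-failing likelihood). A sketch of a repair function for reading such tokens back into Python; the helper name is mine, not part of bagpipes or MultiNest:

```python
import re

# Fortran drops the exponent letter when the exponent has three digits,
# producing tokens like '0.521501970768758720-309' or
# '-0.100000000000000002+101'. Re-insert the missing 'E' before parsing.
_MISSING_E = re.compile(r"(\d)([+-]\d{3})$")

def repair_fortran_float(token):
    """Parse a Fortran-formatted float even if the 'E' was dropped."""
    return float(_MISSING_E.sub(r"\1E\2", token))
```

Normally formatted values pass through unchanged, since the regex only matches a three-digit exponent glued directly to a digit.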

I would appreciate any advice or help with this problem. I am aware of other people who have encountered this same error in their work and have managed to avoid it by making an alteration to MultiNest.

ACCarnall commented 1 year ago

I think the issue here is with your constant component: you need the key "agemax" instead of just "age".

By the way, it's not possible to fix the star-formation rate as you're trying to do in your exponential component, the "sfr" argument won't be acknowledged by the code.

ACCarnall commented 1 year ago

Just realised these need underscores as well: age_min and age_max.

siangphillips commented 1 year ago

Thank you for the input - yes, I had made an oversight in my constant component, and I have rewritten it according to the specifications in the documentation. Since then I've tested various SFH components to see what would run and at what point the fit would fail, and I've found that even with a simple constant SFH component the error above persists.

This model, for example:

constant = {}
constant["massformed"] = (0, 10)
constant["metallicity"] = (0, 0.2)
constant["age_max"] = 12
constant["age_min"] = 3

fit_instructions = {}
fit_instructions["redshift"] = 6.854
fit_instructions["constant"] = constant

Produces this error:

Bagpipes: fitting object 0

MultiNest Warning: no resume file found, starting from scratch


MultiNest v3.10
Copyright Farhan Feroz & Mike Hobson
Release Jul 2015

no. of live points =  400
dimensionality =    2


ln(ev)=  -9.9900000000000002E+099 +/-  NaN
Total Likelihood Evaluations:          400
Sampling finished. Exiting MultiNest

Completed in 0.3 seconds.

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Input In [12], in <cell line: 3>()
      1 fit = pipes.fit(galaxy, fit_instructions, run='constant_test')
----> 3 fit.fit(verbose=False)

File ~/opt/anaconda3/envs/bagpipesenv/lib/python3.10/site-packages/bagpipes/fitting/fit.py:136, in fit.fit(self, verbose, n_live, use_MPI)
    133 print("\nCompleted in " + str("%.1f" % runtime) + " seconds.\n")
    135 # Load MultiNest outputs and save basic quantities to file.
--> 136 samples2d = np.loadtxt(self.fname + "post_equal_weights.dat")
    137 lnz_line = open(self.fname + "stats.dat").readline().split()
    139 self.results["samples2d"] = samples2d[:, :-1]

[... identical numpy loadtxt frames as in the first traceback ...]

ValueError: could not convert string '-0.999000000000000022+100' to float64 at row 0, column 3.

siangphillips commented 1 year ago

I realise I was misinterpreting the 'time since SF switched on' as lookback time rather than time in the galaxy reference frame, so I was unintentionally asking Bagpipes to calculate a star formation history from before the beginning of the Universe - I have changed my ages to physically possible values now, and the fitting is working well. Thank you!
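For anyone hitting the same wall: the Universe is well under a gigayear old at this redshift, so any SFH age parameter expressed in Gyr must stay below that. A stdlib-only sanity check, assuming a flat Lambda-CDM cosmology with Planck-like parameters (H0 = 67.7 km/s/Mpc, Omega_m = 0.31; not necessarily the exact cosmology bagpipes uses internally):

```python
import math

def age_of_universe_gyr(z, h0=67.7, omega_m=0.31, omega_l=0.69, steps=100_000):
    """Age of a flat Lambda-CDM universe at redshift z, in Gyr, via a
    midpoint-rule integral of dt = da / (a * H(a))."""
    hubble_time_gyr = 977.8 / h0        # 1/H0 in Gyr for H0 in km/s/Mpc
    a_max = 1.0 / (1.0 + z)             # scale factor at redshift z
    da = a_max / steps
    total = 0.0
    for i in range(steps):
        a = (i + 0.5) * da
        total += da / math.sqrt(omega_m / a + omega_l * a * a)
    return hubble_time_gyr * total

print(f"Age at z = 6.854: {age_of_universe_gyr(6.854):.2f} Gyr")
```

At z = 6.854 this gives roughly 0.8 Gyr, so an age or age_max of 12 (interpreted in Gyr) can never be physical, and every likelihood evaluation returns NaN, which is what produced the malformed MultiNest output files above.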