@yuanli57 Developer of nautilus here. You report that the issue persists with multinest. This seems to suggest this is not an issue with nautilus. But please let me know if I missed something. Also, if you want more advice on how to run nautilus and what parameters to choose, please feel free to reach out to me.
Thanks for the detailed info. I suspect you're accidentally introducing a NaN into your spectrum with whichever binning causes the issue. Please could you check for this? If this isn't it, please could you post the error message you get when running bagpipes with multinest?
Yes, sure! Thank you so much. I'll let you know if the issue persists.
Hi ACCarnall,
Thank you for your really quick reply! I've just re-tested with multinest and the issue persists; I've put all the relevant details below. Looking forward to your opinion on this!
Best, Yuan
import numpy as np

def bin(spectrum, binn):
    """ Bins up two or three column spectral data by a specified factor. """
    binn = int(binn)
    nbins = len(spectrum) // binn
    binspec = np.zeros((nbins, spectrum.shape[1]))

    for i in range(binspec.shape[0]):
        spec_slice = spectrum[i*binn:(i+1)*binn, :]
        binspec[i, 0] = np.mean(spec_slice[:, 0])
        binspec[i, 1] = np.mean(spec_slice[:, 1])

        # For three-column data, sum the errors in quadrature and
        # divide by the bin factor.
        if spectrum.shape[1] == 3:
            binspec[i, 2] = (1./float(binn)
                             * np.sqrt(np.sum(spec_slice[:, 2]**2)))

    return binspec
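Just to illustrate what this does (made-up numbers, using the bin function above): wavelengths and fluxes are averaged within each bin, and the error column is summed in quadrature and divided by the bin factor.

# Tiny synthetic (wavelength, flux, error) spectrum, binned by a factor of 3.
spectrum_example = np.array([[5000., 1.0, 0.1],
                             [5001., 1.2, 0.1],
                             [5002., 0.8, 0.1],
                             [5003., 1.1, 0.2],
                             [5004., 0.9, 0.2],
                             [5005., 1.0, 0.2]])
print(bin(spectrum_example, 3))
# -> two rows: roughly (5001, 1.0, 0.058) and (5004, 1.0, 0.115)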
from astropy.io import fits
from astropy.table import Table

def load_spec(obj_name):
    '''Loads an SDSS spectrum from a FITS file using an astropy Table.'''
    # jn and work_dir are defined elsewhere in my script
    # (a path-join helper, e.g. os.path.join, and the data directory).
    spec_file = 'SDSS Spectra/{}.fits'.format(obj_name)
    hdul = fits.open(jn(work_dir, spec_file))
    t = Table(hdul[1].data)
    t['wave'] = 10**t['loglam']
    t['err'] = np.sqrt(t['ivar'])
    wave = t['wave']
    flux = t['flux']/1e17
    err = t['err']/1e17
    spectrum = np.c_[wave, flux, err]
    return bin(spectrum, 6)  # I vary the binning factor here (1, 5 or 6)
With bin(spectrum, 1) (no binning):
load_both(obj_name)[0].shape
Out [1]: (4586, 3)
np.isnan(load_both(obj_name)[0]).any()
Out [2]: False
Command used: fit.fit(sampler='multinest',verbose=True)
With bin(spectrum, 5), it just stalls there (for at least 10 minutes):
MultiNest Warning: no resume file found, starting from scratch
*****************************************************
MultiNest v3.10
Copyright Farhan Feroz & Mike Hobson
Release Jul 2015
no. of live points = 400
dimensionality = 13
*****************************************************
Starting MultiNest
generating live points
Bagpipes: fitting object J091207+523960
With bin(spectrum, 6), it starts running as expected (in at most 1 minute):
Bagpipes: Latex distribution not found, plots may look strange.
Starting dense_basis. Failed to load FSPS, only GP-SFH module will be available.
running without emcee
MultiNest Warning: no resume file found, starting from scratch
*****************************************************
MultiNest v3.10
Copyright Farhan Feroz & Mike Hobson
Release Jul 2015
no. of live points = 400
dimensionality = 13
*****************************************************
Starting MultiNest
generating live points
Bagpipes: fitting object J091207+523960
live points generated, starting sampling
Acceptance Rate: 1.000000
Replacements: 450
Total Samples: 450
Nested Sampling ln(Z): **************
Acceptance Rate: 0.988142
Replacements: 500
Total Samples: 506
Nested Sampling ln(Z): **************
Acceptance Rate: 0.964912
Replacements: 550
Total Samples: 570
Nested Sampling ln(Z): -344191.419191
Acceptance Rate: 0.917431
Replacements: 600
Total Samples: 654
Nested Sampling ln(Z): -42728.238755
Acceptance Rate: 0.874832
Replacements: 650
Total Samples: 743
Nested Sampling ln(Z): -27196.219099
Acceptance Rate: 0.815851
Replacements: 700
Total Samples: 858
Nested Sampling ln(Z): -19038.296888
Acceptance Rate: 0.782065
Replacements: 750
Total Samples: 959
Nested Sampling ln(Z): -13572.054302
Acceptance Rate: 0.741427
Replacements: 800
Total Samples: 1079
Nested Sampling ln(Z): -10187.653879
Acceptance Rate: 0.676213
Replacements: 850
Total Samples: 1257
Nested Sampling ln(Z): -8491.406932
Acceptance Rate: 0.647948
Replacements: 900
Total Samples: 1389
Nested Sampling ln(Z): -6804.005314
Acceptance Rate: 0.605096
Replacements: 950
Total Samples: 1570
Nested Sampling ln(Z): -5841.596660
Acceptance Rate: 0.566893
Replacements: 1000
Total Samples: 1764
Nested Sampling ln(Z): -5144.119656
Acceptance Rate: 0.570342
Replacements: 1050
Total Samples: 1841
Nested Sampling ln(Z): -4597.742294
Acceptance Rate: 0.571726
Replacements: 1100
Total Samples: 1924
Nested Sampling ln(Z): -4309.480572
Acceptance Rate: 0.572995
Replacements: 1150
Total Samples: 2007
Nested Sampling ln(Z): -4046.398725
Acceptance Rate: 0.573888
Replacements: 1200
Total Samples: 2091
Nested Sampling ln(Z): -3852.642584
Acceptance Rate: 0.569995
Replacements: 1250
Total Samples: 2193
Nested Sampling ln(Z): -3746.489883
Please could you run the following on your spectrum and share the results for binning factors of 1, 5 and 6:
spec = load_both(obj_name)[0]
print(np.isnan(spec).any(), np.min(spec[:, 1]), np.max(spec[:, 1]), np.min(spec[:, 2]), np.max(spec[:, 2]))
Yes, sure! Thank you for your reply! Here are the results:
#bin by 1 (no binning)
spec = load_both(obj_name)[0]
print(np.isnan(spec).any(), np.min(spec[:, 1]), np.max(spec[:, 1]), np.min(spec[:, 2]), np.max(spec[:, 2]))
False -1.7772859551340496e-16 9.842963304227542e-16 0.0 5.701526336989975e-17
spec.shape
Out [2]: (4586, 3)
#bin by 5
spec = load_both(obj_name)[0]
print(np.isnan(spec).any(), np.min(spec[:, 1]), np.max(spec[:, 1]), np.min(spec[:, 2]), np.max(spec[:, 2]))
False -2.1819045943123643e-17 5.443863811344443e-16 0.0 2.2449515065077682e-17
spec.shape
Out [2]: (917, 3)
#bin by 6
spec = load_both(obj_name)[0]
print(np.isnan(spec).any(), np.min(spec[:, 1]), np.max(spec[:, 1]), np.min(spec[:, 2]), np.max(spec[:, 2]))
False -4.700063401583455e-18 5.40884746930495e-16 7.782434356751894e-19 2.0528833887628065e-17
spec.shape
Out [2]: (764, 3)
Hi Yuan,
The error you're getting is probably being caused by the zero(s) in the error spectrum for the bin by 5 case. Zeros in the error spectrum cause a division by zero error in the likelihood function. Not sure why specifically this would be happening with that particular binning, but I'm sure you can figure it out by looking at the input data and the bin function you're using. Either way it's clearly an input issue rather than a bug within bagpipes, though it might be useful to have a check for this kind of thing somewhere in the code.
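To make that concrete, here's a minimal standalone sketch (illustrative names and numbers, not the actual bagpipes likelihood code) of how a zero in the error column turns a Gaussian chi-squared term into inf/nan, and a quick way to locate the offending rows:

import numpy as np

def gaussian_chisq(flux, model, err):
    # Standard chi-squared sum; any err == 0 divides by zero.
    return np.sum(((flux - model) / err)**2)

# Made-up binned spectrum rows: (wavelength, flux, error).
spec = np.array([[5000.0, 1.0e-17, 2.0e-18],
                 [5005.0, 1.2e-17, 0.0],      # zero error
                 [5010.0, 0.9e-17, 2.1e-18]])

model = np.full(len(spec), 1.0e-17)
with np.errstate(divide="ignore", invalid="ignore"):
    print(gaussian_chisq(spec[:, 1], model, spec[:, 2]))  # inf

# Find which rows have zero errors before fitting.
print(np.where(spec[:, 2] == 0)[0])  # [1]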
Cheers, Adam
Hi Adam,
Thank you for your help! I just masked off these 0-error pixels and it ran successfully! It is definitely not a bug within bagpipes. I didn't know 0 errors would be an issue, but everything works now!
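In case it helps anyone else, this is roughly what the masking looks like (a sketch; the helper name and the exact cut are just illustrative):

import numpy as np

def mask_zero_errors(spectrum):
    # Keep only rows whose error column is finite and strictly positive.
    err = spectrum[:, 2]
    return spectrum[np.isfinite(err) & (err > 0)]

# e.g. in load_spec, just before returning:
#     return mask_zero_errors(bin(spectrum, 5))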
Thanks a lot for helping solve the problem!
Best, Yuan
To Whom It May Concern,
I'm experiencing an issue with bagpipes when using the nautilus sampler for spectral fitting. The problem arises when I bin my spectrum by smaller factors, e.g., 5 instead of 6, resulting in more data points. Specifically, the fitting process stalls and reports logZ = -inf. Below are the details:

Spectrum Data:
- The unbinned spectrum has shape (4586, 3), i.e. (wavelength, flux, error).
- Binned by a factor of 6 the spectrum has shape (764, 3), and the fit runs for both combined (spec+phot) and spectrum-only data.
- Binned by a factor of 5 the spectrum has shape (917, 3); the fit reports logZ = -inf and does not progress, again for both spec+phot and spectrum-only data.

Model Independence:
- The problem appears whether I use the iyer2019 model or a simple burst fit instruction.

Attempted Solutions:
- Increased n_live and other related parameters, but there was no effect. Example of fit parameters from failed attempts:
- Tried using the multinest sampler instead of nautilus, but the issue persisted.

Fit Instruction Dictionary:
- iyer2019 instructions dict:
- simple burst instructions dict:

Observations:
- When the spectrum has more data points (due to less aggressive binning), the fitting process fails.
- Adjusting sampler parameters like n_live, n_eff, and n_networks does not resolve the issue.
- The process gets stuck like this, with only Calls increasing:
I'd appreciate any guidance on resolving this issue or suggestions on what might be causing it. Thank you so much!
Best, Yuan