My first thought is that Stage 2 should be run locally on your machine so you can skip the photom step - this step is run by default but does bad things to time series observations, and it's possible that the unit conversion from MJy/sr to e/s in our Stage 3 is slightly buggy as well. Running Stage 2 yourself and setting skip_photom to True might help alleviate some of those issues. The next thing I'd check is that you're properly fitting for the noise in Stage 5, so make sure you're fitting for something like scatter_mult, which should come out around 1 or a bit higher (1 being the idealized limit). It's possible you're not fitting for that properly, which would result in your fitted values being overly constrained by your explicit or implicit priors.
As for the wavelengths, you can increase the xwindow in Stage 3 to capture the longer wavelengths if you want. Another option is to change your Stage 4 wave_max to something like 5.20, which is safely within the wavelength range you have to work with. Alternatively, leaving things as-is shouldn't mess anything up too much; it's just that your last wavelength bin will, I believe, be narrower than the others.
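For concreteness, the relevant control-file entries would look something like the excerpt below. The parameter names are the ones discussed in this thread; the exact syntax and defaults should be checked against the ECF templates shipped with your Eureka version.

```
# S2 ECF (run Stage 2 yourself so you control this step):
skip_photom    True    # the photom step / unit conversion can harm time-series data

# S4 ECF:
wave_max       5.20    # safely inside the wavelength range you actually extracted
```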
Thank you very much for your response.
I made the following changes:
1) used actual JWST Stage 1 rateints data (4 segments, ~21,000 ints total)
2) set the 'skip_photom' variable in S2 to True
3) changed the S4 ECF 'wave_max' to None
4) changed the S5 EPF 'scatter_mult' prior to a median of 2.0 and sdev of 1.0
5) started Eureka from S2
The run started off smoothly. It read the first segment's rateints.fits file and started extracting the integrations and applying the aperture correction. This ran through the first segment of 6100 integrations. Then it generated a log file and the first x1dints.png, as well as the related calints and x1dints.fits files. It did the same for the second segment, with another 6100 ints. Then it started the third segment, and this is where things went off the rails. Clearly the problem is in S2, and oddly it occurs somewhere during the seg 3 analysis.
I've attached a terminal dump file that hopefully will help to sort this out. Again, I would appreciate any suggestions. Terminaldump(20220930T0850).txt
Hmm, well basically the only time I've ever seen that "Killed" message is when your computer runs out of RAM. It's surprising that this would only happen on the third segment, but perhaps STScI's Stage 2 code has a bit of a memory leak. NIRSpec is very much the instrument I'd expect this issue to be encountered on, though, as its data files are very large. As the Stage 2 code isn't our own, there isn't much I can suggest other than trying to get access to a computer with more RAM or trying to break the Stage 1 output files into even smaller segments (while being sure to adjust all of the FITS header values and data extensions).
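If splitting the Stage 1 output is the route you end up taking, below is a rough sketch of how it might be done with astropy. It is illustrative only: the extension names and the INTSTART/INTEND keywords are my assumptions about the rateints format, so check them (and any other per-integration extensions) against your own files before trusting the output.

```python
# Illustrative sketch: split one *_rateints.fits segment into smaller pieces so
# that each piece fits in RAM.  Extension names and header keywords below are
# assumptions about the rateints format -- verify them in your own files.
from astropy.io import fits
import numpy as np

def split_rateints(infile, n_pieces=2):
    with fits.open(infile) as hdul:
        nints = hdul['SCI'].data.shape[0]        # integrations along axis 0
    edges = np.linspace(0, nints, n_pieces + 1, dtype=int)

    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        with fits.open(infile) as piece:         # fresh copy for each output file
            names = [hdu.name for hdu in piece]
            # Slice every per-integration extension that is actually present
            for ext in ('SCI', 'ERR', 'DQ', 'VAR_POISSON', 'VAR_RNOISE', 'INT_TIMES'):
                if ext in names:
                    piece[ext].data = piece[ext].data[lo:hi]
            hdr = piece[0].header
            start0 = hdr.get('INTSTART', 1)      # keywords tracking which ints are in this file
            hdr['INTSTART'] = start0 + lo
            hdr['INTEND'] = start0 + hi - 1
            piece.writeto(infile.replace('.fits', f'_part{i + 1}.fits'), overwrite=True)
```

The MAST segment files already use INTSTART/INTEND to record which slice of the full exposure each file holds, which is why the sketch only shifts those two keywords; whether anything else needs updating is worth checking against a real multi-segment delivery.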
Maybe skip the extract_1d step
Thanks for the responses/suggestions. I think that @taylorbell57 was exactly right about the RAM problem. I monitored the RAM usage during a run, and it hit the ceiling just about the time that Eureka loaded the 3rd segment of rateints from JWST Stage 1. Over the past week I've set myself up with a lot more compute power and memory. I now have 28 vCPUs and 64 GB of RAM. This appears to solve the memory problem. I've made several runs and it works -- much faster too.
I'm now also skipping the extract_1d step in Stage 2 per @kevin218's suggestion.
I'm still having problems with the error bounds on the transit depth among other things.
I was originally using a 'scatter_mult' prior with a mean of 1.1 and sdev of 0.1. This didn't work (extremely small error bounds), so I tried several other normal distributions with higher means, trying to keep the low-end tail above ~1.0. I also tried a uniform distribution with an upper limit of 10 and a lower limit of 1. This worked, sort of: I got larger y-error bounds. After some consideration, it seemed to me that a normal distribution would probably be more appropriate, as long as I could keep the tail below 1.0 very small while extending the upper tail, so I tried a mean of 12 and sdev of 3. This has a very small tail below 1.
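To put numbers on those tails (scipy is already a Eureka dependency), here is roughly how much prior mass sits below the physical floor of 1 for each distribution mentioned above:

```python
# How much prior probability falls below scatter_mult = 1?
from scipy.stats import norm, uniform

print(norm.cdf(1, loc=1.1, scale=0.1))   # ~0.16: a sizable tail below 1
print(norm.cdf(1, loc=12, scale=3))      # ~1e-4: a negligible tail below 1
print(uniform.cdf(1, loc=1, scale=9))    # 0.0: a uniform prior on [1, 10] excludes values below 1
```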
This run ran to completion. I've plotted the results along with the published ERS-1366 results (attached). As you can see, the mean value points in my run are consistent with the ERS-1366 results up to ~3.8 microns.
Issues: [1] At longer wavelengths (> 3.8 microns) my results for transit depth appear to drift high by ~ 0.1% to 0.2%.
[2] In addition, my error bars across the spectrum are too large by about a factor of 7 or so. Finally,
[3] my longest wavelength channel center was 4.9755 microns. I set 'wave_max' = None in the S4_ecf, so I was expecting the max wavelength center to be ~ 5.288 microns (as published).
Any suggestions that you might have regarding my issues would be appreciated.
The next step would probably be to apply group level background subtraction. Unfortunately, that pull request is still in the works. @erinmmay might be able to share a copy of her code.
I can push my Stage 1 files to my fork tomorrow for the GLBS and provide you the link to that code. @cdfortenbach, we can communicate over email for the new inputs as they're not yet documented. I did notice that you're using the rateints from MAST, so just an FYI that the GLBS will require you to run S1 yourself.
This may partially address the larger than expected uncertainties, but I don't think it can fully explain the 7x larger uncertainties. I think we'll need to see more inputs/outputs of various stages to work out the problem.
Thanks for your comments. I thought I would give things another try. I downloaded the uncal files from MAST (4 segments) and set up the S1_ecf for the Stage 1 reduction. I used the S1_ecf template from GitHub with the default parameters (with updated paths, etc.). In the S4_ecf, I reduced the number of spec channels from 94 to 5 just to speed things up for a test. I also reset wave_max to 5.28.
Upon execution, the code appeared to load the 1st uncal segment and work its way through 6100 ints (the 1st segment, I think). Then it seems to have lost track (see terminal dump attached). I waited 45 minutes, but there was no longer any terminal activity. It did not write the log for segment 1, and appeared to just be hung. Any thoughts/suggestions on this would be most appreciated.
I tried another Stage 1 start. As before, I used the uncal files from MAST (4 segments) and set up the S1_ecf for the Stage 1 reduction. This time I changed the S1 ECF 'ramp_fit_max_cores' from 'none' (not sure what that means) to 'half'. The run started out more smoothly. It proceeded through Stage 1, generating the full set of outputs including the rateints files. It also made it through Stage 2 and generated all of the calints files. It then seems to have gone partway through Stage 3 and failed. I don't think it likes what it sees in the calints files. The only change in this run from my previous runs that went to completion is that I'm now using the JWST uncal files to start and the Eureka Stage 1 reduction to generate the rateints files. I've attached a terminal dump that may give some clue about the failure. Any thoughts would be appreciated. Terminal_Dump_20221012T0929.txt
That's a new error message for me - not sure why the Stage 3 centroiding algorithm broke like that...
Thanks for letting us know what you had to do to get over that Stage 1 issue though - one of us devs should add that to our FAQ page as I'm pretty sure you're not the first one to have encountered that
It would appear that there may be some sort of flaw in the Eureka Stage 1 products (rateints) that causes this Stage 3 centroiding failure. I decided to revert to using the JWST Stage 1 rateints and start Eureka at Stage 2. I went for a full run of 94 spec channels again to match the ERS-1366 results (from 3.0 to 5.28 microns). Everything went well through Stage 4, and I believe I cured one of my issues. Previously I had set wave_max to 'None' in the S4_ecf. This had caused a strange cutoff of the Stage 4 results at 4.968 (?). So, I reset wave_max to 5.28 and that seems to have solved the problem - at least for Stage 4; it went through all 94 channels. Now the run continued smoothly into Stage 5 and worked through 83 of 94 spec channels and then failed. I have attached the S4_log, the S5_log, and the terminal dump for the failure. It appears to have been a dynesty problem. Seems like an odd place to fail. Any thoughts would be appreciated. S4_wasp39b.log S5_wasp39b.log Terminal_dump_20221013T1125.txt
I just noticed something that could help with diagnosing this problem. The log shows that during S4, at Bandpass 82, the 'Identified outliers' count jumped from ~1 to 21500 and stayed that high through the end of the stage (Bandpass 93). S5 starts smoothly, but when it reaches Bandpass/Channel 83, dynesty fails. These events seem to be related. Any thoughts/suggestions would be appreciated.
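One way to check whether those channels are genuinely empty after sigma clipping is to look at the Stage 4 light-curve output directly. A hedged sketch is below; the file name and variable names are placeholders (print the opened dataset to see what your own S4 output actually contains):

```python
# Placeholder diagnostic: count finite points per spectroscopic channel in the
# Stage 4 light-curve file.  The file name and variable names are NOT the real
# Eureka output names -- inspect your own *.h5 output (e.g. print(ds)) first.
import numpy as np
import xarray as xr

ds = xr.open_dataset('S4_wasp39b_LCData.h5', engine='h5netcdf')   # placeholder path
lc = ds['data']                                      # placeholder: binned light curves (channel x time)
for i, wave in enumerate(ds['wavelength'].values):   # placeholder coordinate name
    ngood = int(np.isfinite(lc[i].values).sum())
    print(f'channel {i:2d}  {float(wave):.3f} um  good points: {ngood}')
```

If the channels from Bandpass 82 onward come back with essentially zero good points, that would explain dynesty failing once it reaches them.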
Back on 10/8/22, I summarized a set of issues that I had encountered.
Issues: [1] At longer wavelengths (> 3.8 microns) my results appear to drift high by ~ 0.1% to 0.2%.
[2] In addition, my error bars across the spectrum are too large by about a factor of 5 to 10 or so. Finally,
[3] my longest wavelength channel center was ~ 4.976 microns. I set 'wave_max' = None in the S4_ecf, so I was expecting the max wavelength to be ~ 5.294 microns (consistent with the values given in the NASA Exoplanet Archive).
I believe that I've solved one of the issues: item #3. The strange failures that I was experiencing while trying to push the long-wavelength end of the analysis were caused by not setting 'xwindow' in the S3_ecf more carefully. The default shown in the example control files is [60,410]. Unfortunately this trims both the low and high ends of the spectrum and will not allow a clean reproduction of the published results. After much trial and error I have reset 'xwindow' to [11,480], and it works. I get reasonable results out to ~5.294 microns. I haven't tested things carefully at the short end, so I may need to move that limit back up a bit to avoid edge issues. I guess this looks obvious in hindsight, but at the time it wasn't to me.
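For reference, the change amounts to a single S3 control-file line (values are the ones quoted above; pick limits for your own data and keep a little margin from the detector edges):

```
# S3 ECF:
xwindow    [11,480]    # widened from the example default of [60,410] to keep the full ~3.0-5.3 micron range
```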
Regarding items #1 and #2, I've tried many different things, but so far no luck in improving the results. The 'scatter_mult' value seems to have a very strong influence on the error bounds (and probably on the mean values of the data too). The original setup of scatter_mult (mean=1.1, sdev=1) eliminates the error bounds altogether on all of the fit parameters. I opened up the scatter_mult prior by using a uniform distribution from 0.1 to 50. This seems to work well: the fit runs cleanly and the scatter_mult posterior peaks at ~5 to 25. Now all of the parameters have much more reasonable errors, except that the rp error is still high (by 5x to 10x). Again, any thoughts/suggestions would be appreciated.
To help others avoid the first of your issues, perhaps someone can extend that warning in Stage 4 to suggest widening the xwindow in Stage 3 if wave_min or wave_max fall outside the bounds of the array.
As for your other two issues, looking at Erin May's forked repository and her message above is likely the way to move forward. I believe each of the ERS teams made a fork of the Eureka repository to be able to make the rapid changes needed to handle their ERS data, and hopefully we'll see those changes get merged into the main branch soon, but for now those are all living in their respective forks.
Thanks very much for your quick response. I'll take another look at @erinmmay's note. It may just be time for me to take a pause and wait until 'those changes' are merged into the main branch.
Hi @cdfortenbach - I'm working on getting a variety of updates to S1 pushed to my fork here: https://github.com/erinmmay/Eureka and a PR will be opened soon. It's a combination of updates from a variety of people and includes options to run group level background subtraction (which helps make the ramp fits more accurate). You're welcome to start taking a look at my fork now and email me with questions (erin.may@jhuapl.edu) while I work on documentation.
Some possible answers to other issues you've been having (a rough control-file sketch follows this list):
One possible reason your own S1 rateints files weren't working well is the jump step: for the data set you're working on there are only 5 groups, and the 1/f noise results in jumps being flagged in the ramp that aren't real, so we recommend not running that step at the moment (I believe that's noted in the methods of the paper you're trying to reproduce).
I've also had similar issues with the centroiding in S3 when trying to use flux-calibrated data, due to problems with the current flat field available for NIRSpec. I recommend not running the flat field or photom steps in S2; that may help.
For the dynesty failure: this happens to me when there are no good data points in the bin being fit, which is common at the edges of the detector. I've added options to skip bad light curves, but those have a longer lead time for getting onto GitHub.
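The suggestions above translate into control-file changes along these lines. The skip_jump and skip_flat_field names are my assumption by analogy with skip_photom; double-check the option names in the S1/S2 ECF templates for your Eureka version.

```
# S1 ECF (assumed option name -- check your template):
skip_jump        True    # 1/f noise causes spurious jump flags with only 5 groups

# S2 ECF (skip_flat_field is likewise assumed; skip_photom is discussed above):
skip_flat_field  True    # current NIRSpec flat field causes centroiding problems downstream
skip_photom      True    # photom step is harmful for time-series observations
```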
Thanks very much @erinmmay. This is very helpful info. I will study the details and try the suggested actions. I may take a look at your fork and will email questions as you suggested.
Hi @cdfortenbach, as we've discussed this in person and over email and I was able to send you ECFs that were able to reasonably reproduce the ERS PRISM results, I am now going to close this issue.
FAQ check
Instrument
NIRSpec (Stages 1-3)
What happened?
I’m currently using Eureka v0.5 (c.9/20/2022). It is working, but I'm having some problems.
I've been trying to duplicate the WASP-39b transmission spectrum shown in Figure 2 of your (ERS-1366 Team) recent Nature paper (arXiv:2208.11692).
I'm starting with the full JWST Stage 2 calints (4 segments, ~21,000 ints) and then running Eureka Stages 3-6. It runs to completion and generates all of the various graphs and tables.
The problem is that I can get a spectrum that’s in the general ballpark, but it is significantly off. I've attached the spectrum plot here.
The error bounds on the transit depth and other fit parameters are several orders of magnitude smaller than they should be. Given that, there are of course no visible y-error bars on the final spectrum plot. The binned median values of the spectrum also don't really track Figure 2. For the LC fits I'm using dynesty with 1024 live points. I'm only considering 12 spectral channels to keep the test run time down, and I've constrained the extracted wavelength range to 3.0-5.25 microns.
The S6 log file did show a WARNING: The selected meta.wave_max (5.25) is larger than the longest wavelength (5.206340312957764). This might throw off the value in the last bin, but would it affect the others?
Any help would be appreciated.
Error traceback output
WARNING: The selected meta.wave_max (5.25) is larger than the longest wavelength (5.206340312957764).
What operating system are you using?
Ubuntu 18.04
What version of Python are you running?
python 3.9.7
What Python packages do you have installed?
# packages in environment at /home/cfvm/anaconda3/envs/eureka:
#
# Name                    Version           Build    Channel
_libgcc_mutex 0.1 main
_openmp_mutex 5.1 1_gnu
alabaster 0.7.12 pypi_0 pypi asdf 2.13.0 pypi_0 pypi asdf-astropy 0.2.2 pypi_0 pypi asdf-coordinates-schemas 0.1.0 pypi_0 pypi asdf-standard 1.0.3 pypi_0 pypi asdf-transform-schemas 0.3.0 pypi_0 pypi asdf-wcs-schemas 0.1.1 pypi_0 pypi asteval 0.9.27 pypi_0 pypi astraeus 0.2 pypi_0 pypi astropy 5.1 pypi_0 pypi astropy-healpix 0.7 pypi_0 pypi astroquery 0.4.6 pypi_0 pypi astroscrappy 1.1.0 pypi_0 pypi asttokens 2.0.8 pypi_0 pypi attrs 22.1.0 pypi_0 pypi babel 2.10.3 pypi_0 pypi backcall 0.2.0 pypi_0 pypi batman-package 2.4.9 pypi_0 pypi bayesicfitting 3.0.1 pypi_0 pypi beautifulsoup4 4.11.1 pypi_0 pypi bokeh 2.4.3 pypi_0 pypi bottleneck 1.3.5 pypi_0 pypi ca-certificates 2022.07.19 h06a4308_0
ccdproc 2.3.1 pypi_0 pypi celerite 0.4.2 pypi_0 pypi certifi 2022.9.14 py39h06a4308_0
cffi 1.15.1 pypi_0 pypi cftime 1.6.2 pypi_0 pypi charset-normalizer 2.1.1 pypi_0 pypi cloudpickle 2.2.0 pypi_0 pypi contourpy 1.0.5 pypi_0 pypi corner 2.2.1 pypi_0 pypi crds 11.16.13 pypi_0 pypi cryptography 38.0.1 pypi_0 pypi cycler 0.11.0 pypi_0 pypi cython 0.29.32 pypi_0 pypi dask 2022.6.0 pypi_0 pypi decorator 5.1.1 pypi_0 pypi docutils 0.19 pypi_0 pypi drizzle 1.13.6 pypi_0 pypi dynesty 1.2.3 pypi_0 pypi emcee 3.1.2 pypi_0 pypi eureka 0.5 pypi_0 pypi executing 1.0.0 pypi_0 pypi exotic-ld 2.0.0 pypi_0 pypi filelock 3.8.0 pypi_0 pypi fonttools 4.37.3 pypi_0 pypi fsspec 2022.8.2 pypi_0 pypi future 0.18.2 pypi_0 pypi george 0.4.0 pypi_0 pypi gwcs 0.18.2 pypi_0 pypi h5netcdf 1.0.2 pypi_0 pypi h5py 3.1.0 pypi_0 pypi html5lib 1.1 pypi_0 pypi idna 3.4 pypi_0 pypi imageio 2.22.0 pypi_0 pypi imagesize 1.4.1 pypi_0 pypi importlib-metadata 4.12.0 pypi_0 pypi iniconfig 1.1.1 pypi_0 pypi ipython 8.5.0 pypi_0 pypi jaraco-classes 3.2.2 pypi_0 pypi jedi 0.18.1 pypi_0 pypi jeepney 0.8.0 pypi_0 pypi jinja2 3.1.2 pypi_0 pypi jmespath 1.0.1 pypi_0 pypi jsonschema 4.9.1 pypi_0 pypi jwst 1.6.0 pypi_0 pypi keyring 23.9.3 pypi_0 pypi kiwisolver 1.4.4 pypi_0 pypi ld_impl_linux-64 2.38 h1181459_1
libffi 3.3 he6710b0_2
libgcc-ng 11.2.0 h1234567_1
libgomp 11.2.0 h1234567_1
libstdcxx-ng 11.2.0 h1234567_1
lmfit 1.0.3 pypi_0 pypi locket 1.0.0 pypi_0 pypi lxml 4.9.1 pypi_0 pypi markupsafe 2.1.1 pypi_0 pypi matplotlib 3.6.0 pypi_0 pypi matplotlib-inline 0.1.6 pypi_0 pypi more-itertools 8.14.0 pypi_0 pypi ncurses 6.3 h5eee18b_3
netcdf4 1.6.1 pypi_0 pypi networkx 2.8.6 pypi_0 pypi numpy 1.23.3 pypi_0 pypi numpydoc 1.4.0 pypi_0 pypi openssl 1.1.1q h7f8727e_0
packaging 21.3 pypi_0 pypi pandas 1.5.0 pypi_0 pypi parsley 1.3 pypi_0 pypi parso 0.8.3 pypi_0 pypi partd 1.3.0 pypi_0 pypi pexpect 4.8.0 pypi_0 pypi photutils 1.5.0 pypi_0 pypi pickleshare 0.7.5 pypi_0 pypi pillow 9.2.0 pypi_0 pypi pip 22.1.2 py39h06a4308_0
pluggy 1.0.0 pypi_0 pypi poppy 1.0.3 pypi_0 pypi prompt-toolkit 3.0.31 pypi_0 pypi psutil 5.9.2 pypi_0 pypi ptyprocess 0.7.0 pypi_0 pypi pure-eval 0.2.2 pypi_0 pypi py 1.11.0 pypi_0 pypi pycparser 2.21 pypi_0 pypi pyerfa 2.0.0.1 pypi_0 pypi pygments 2.13.0 pypi_0 pypi pyparsing 3.0.9 pypi_0 pypi pyrsistent 0.18.1 pypi_0 pypi pysynphot 2.0.0 pypi_0 pypi pytest 7.1.3 pypi_0 pypi python 3.9.7 h12debd9_1
python-dateutil 2.8.2 pypi_0 pypi pytz 2022.2.1 pypi_0 pypi pyvo 1.3 pypi_0 pypi pywavelets 1.4.1 pypi_0 pypi pyyaml 6.0 pypi_0 pypi readline 8.1.2 h7f8727e_1
reproject 0.9 pypi_0 pypi requests 2.28.1 pypi_0 pypi scikit-image 0.19.3 pypi_0 pypi scipy 1.9.1 pypi_0 pypi secretstorage 3.3.3 pypi_0 pypi semantic-version 2.10.0 pypi_0 pypi setuptools 63.4.1 py39h06a4308_0
setuptools-scm 7.0.5 pypi_0 pypi six 1.16.0 pypi_0 pypi snowballstemmer 2.2.0 pypi_0 pypi soupsieve 2.3.2.post1 pypi_0 pypi spherical-geometry 1.2.22 pypi_0 pypi sphinx 5.1.1 pypi_0 pypi sphinxcontrib-applehelp 1.0.2 pypi_0 pypi sphinxcontrib-devhelp 1.0.2 pypi_0 pypi sphinxcontrib-htmlhelp 2.0.0 pypi_0 pypi sphinxcontrib-jsmath 1.0.1 pypi_0 pypi sphinxcontrib-qthelp 1.0.3 pypi_0 pypi sphinxcontrib-serializinghtml 1.1.5 pypi_0 pypi sqlite 3.39.2 h5082296_0
stack-data 0.5.0 pypi_0 pypi stcal 1.1.0 pypi_0 pypi stdatamodels 0.4.3 pypi_0 pypi stpipe 0.4.2 pypi_0 pypi stsci-image 2.3.5 pypi_0 pypi stsci-imagestats 1.6.3 pypi_0 pypi stsci-stimage 0.2.5 pypi_0 pypi svo-filters 0.4.4 pypi_0 pypi tifffile 2022.8.12 pypi_0 pypi tk 8.6.12 h1ccaba5_0
tomli 2.0.1 pypi_0 pypi toolz 0.12.0 pypi_0 pypi tornado 6.2 pypi_0 pypi tqdm 4.64.1 pypi_0 pypi traitlets 5.4.0 pypi_0 pypi tweakwcs 0.8.0 pypi_0 pypi typing-extensions 4.3.0 pypi_0 pypi tzdata 2022c h04d1e81_0
uncertainties 3.1.7 pypi_0 pypi urllib3 1.26.12 pypi_0 pypi wcwidth 0.2.5 pypi_0 pypi webencodings 0.5.1 pypi_0 pypi wheel 0.37.1 pyhd3eb1b0_0
wiimatch 0.3.1 pypi_0 pypi xarray 2022.6.0 pypi_0 pypi xz 5.2.5 h7f8727e_1
zipp 3.8.1 pypi_0 pypi zlib 1.2.12 h5eee18b_3