taylorbell57 closed this 4 months ago
Hi Taylor!
Thanks for helping resolve this issue.
For the HST data I'm testing the code fixes on, I was receiving the error message attached,
where the centroid_sy attribute wasn't being created. To correct this, I replaced lines 381-396 in s5_fit.py with the code shown below:
```python
if hasattr(lc_whites[pi], 'centroid_x'):
    xpos_temp = np.ma.masked_invalid(
        lc_whites[pi].centroid_x.values)
else:
    xpos_temp = None
if hasattr(lc_whites[pi], 'centroid_sx'):
    xwidth_temp = np.ma.masked_invalid(
        lc_whites[pi].centroid_sx.values)
else:
    xwidth_temp = None
if hasattr(lc_whites[pi], 'centroid_y'):
    ypos_temp = np.ma.masked_invalid(
        lc_whites[pi].centroid_y.values)
else:
    ypos_temp = None
if hasattr(lc_whites[pi], 'centroid_sy'):
    ywidth_temp = np.ma.masked_invalid(
        lc_whites[pi].centroid_sy.values)
else:
    ywidth_temp = None
```
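As an aside, the four near-identical `hasattr` checks above could be collapsed into one small helper. This is only a sketch: `get_masked_attr` is a hypothetical function name, and the `SimpleNamespace` objects below just mimic the `lc_whites[pi]` object's `.values` interface for illustration.

```python
from types import SimpleNamespace

import numpy as np


def get_masked_attr(lc, name):
    """Return np.ma.masked_invalid(lc.<name>.values), or None if absent.

    Hypothetical helper collapsing the repeated hasattr/else blocks.
    """
    attr = getattr(lc, name, None)
    if attr is None:
        return None
    return np.ma.masked_invalid(attr.values)


# Mock light curve with only an x-centroid, mimicking HST data where
# centroid_sy was never created
lc = SimpleNamespace(
    centroid_x=SimpleNamespace(values=np.array([1.0, np.nan, 3.0])))

xpos_temp = get_masked_attr(lc, 'centroid_x')    # masked array, NaN masked
ywidth_temp = get_masked_attr(lc, 'centroid_sy')  # missing attribute -> None
```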
After making this revision, the manual clipping seems to be working as intended!! Woo!
But one thing that might have broken in the process is the HST ramp model. A screenshot of the error encountered is attached. For context, I'm fitting 3 light curves simultaneously, but in the error displayed, the data for the second light curve seems to be triple the size of what it should be.
Alright, I implemented a patch for that first error you encountered. As for the `ValueError: operands could not be ...` error, those three array lengths are not individual light curves: it's saying the model should be 171 integrations long, but the model was 3x too long for some reason. I've merged in recent commits from the main branch in the hope that that'll help. If it doesn't resolve the issue, can you add a `print(component.name)` on the line just above where it crashed, so that we at least know which of the models is returning an array of the wrong shape? I suspect you're right that it's probably the HSTRampModel, but it'd be nice to confirm that, since a cursory glance at that model doesn't show any obvious issues.
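To illustrate the debugging idea, here is a minimal self-contained sketch. `MockComponent` and the component names are hypothetical stand-ins for Eureka's model components; the point is just that printing each component's name and output shape right before the broadcast error pinpoints the one returning a 3x-too-long array.

```python
import numpy as np


class MockComponent:
    """Hypothetical stand-in for a Eureka model component."""

    def __init__(self, name, n):
        self.name = name
        self._n = n

    def eval(self):
        return np.ones(self._n)


# Assumed setup: 171 integrations expected, but one component
# returns an array that is 3x too long (3 * 171 = 513)
components = [MockComponent('PolynomialModel', 171),
              MockComponent('HSTRampModel', 513)]

for component in components:
    # The name printed just before the crash identifies the culprit
    print(component.name, component.eval().shape)
```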
The issue that @astroRez is having with the three LCs should be solved in #609. Essentially, Line 129 of HSTRampModel.py is using the wrong time variable.
@kevin218, alright I'll try to review that PR today or on Monday
@astroRez, alright the code in this PR should now solve all of the problems you mentioned - can you please test this on an HST dataset to make sure it all works well?
Everything is working now, with the help of a workaround for one last identified bug! Exciting times!!
If multwhite == True and fixed parameters are used for the hstramp coefficients, the result is a kernel crash (no error message, because it's a kernel crash). For now, I'm using Kevin's clever workaround of just setting very tight bounds and running them as free parameters. This seems to get the job done in the meantime, and I'm finally getting some spectra! Woo!
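For anyone hitting the same crash, the workaround can be illustrated generically. This is not Eureka's actual parameter syntax; the names and values below are assumptions, and the idea is simply that a very tight uniform prior around the intended value keeps a "free" parameter effectively fixed.

```python
import numpy as np

# Assumed intended (fixed) value for an hstramp coefficient
fixed_value = 0.005
# Tight half-width so the "free" parameter is effectively pinned
eps = 1e-8
lower, upper = fixed_value - eps, fixed_value + eps

# Any sample the fitter draws from this prior stays within eps
# of the intended fixed value
rng = np.random.default_rng(0)
sample = rng.uniform(lower, upper)
```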
Thank you both for your help. I for one appreciate it a lot.
Very strange! Since that's an unrelated issue, you should definitely open a new issue documenting it, and hopefully we can determine the source of that bug (likely with a ton of temporary print statements). @kevin218, I think you should be fine to approve this PR then.
@kevin218, just had some merge conflicts to resolve and should be good to go for a final approval
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 58.82%. Comparing base (413299e) to head (e328c77).
This hopefully resolves #601
Storing the scandir values in meta is a pain since they aren't treated the same way as the other data (e.g., manual clipping needs to be applied separately to the meta variable). These values were already being stored in the data/spec/lc variables, so I now exclusively use those wherever possible.
I know that the tests all pass (at least on my machine), but I don't know whether I messed anything up along the way, and I don't have a quick and easy WFC3 dataset that I can test this on. If @kevin218 or @astroRez could test these edits out on some real data, that'd be really helpful.