Closed: hughbg closed this pull request 3 weeks ago.
Codecov report: all modified and coverable lines are covered by tests. Project coverage is 93.34%, comparing base (4ed6ae8) to head (b950bff). Report is 19 commits behind head on main.
Thanks @hughbg. I can't seem to make comments directly on the notebooks in reviewnb for some reason, so I will just leave comments here:
hera_sim_defaults:
- The private functions `_check_path` and `_read_npy` are re-defined in the notebook. Where did these functions go (@r-pascua?) and is there a better way to present this?

hera_sim_simulator:
- In the output of `list_all_components` there is a new section called "example" with entries `testadd`, `testmult`, `testalias`, which I presume aren't meant to be there? @r-pascua? Hmmm, maybe it's due to running the notebook cells out of order? I think they're supposed to show up later.

hera_sim_tour:
- The same private functions show up here (`_check_path`, `_read_npy`, `_get_hera_bm_poly`, ...).

polybeam_simulation:
- Use `black` style in the notebooks as much as possible. When constructing the `PerturbedPolyBeam`, for instance, put the `perturb_coeffs=` on the next line, indented.
- In `run_sim`, you shouldn't make the list the same length as the number of antennas. You should only give unique beams. If there is only one unique beam, you can simply give it a list of one beam (if you have more than one unique beam, you also need to pass the `beam_ids` list). What you've done will work but it is not the most efficient.

visibility_simulator:
- The `HealVis` description should be prefaced with the warning that it is deprecated.
- There is also a point about the `VisibilitySimulator` class, but addressing that in another PR is fine.

Thanks for the updates @hughbg. I'm a little confused about some of the things that I'm seeing. According to some of the output from the `Simulator` tutorial notebook, it looks like your version is up-to-date, so I think the parts that are confusing me are likely the result of running cells out of order. When you get a chance, could you re-run the notebooks with freshly started kernels and push the results?
I'm just going to add some comments in response to the things @steven-murray raised in his review, as well as maybe a few of my own (or some suggestions), with the hope that the input is useful.
hera_sim_defaults:
- `hera_sim` does not initially set the defaults—I can't remember exactly why, but there was something in one of the changes that made it impossible (as far as I could tell) to set the defaults on importing `hera_sim` (so they're initially inactive). Running a test with a freshly installed version of `hera_sim` confirms that the defaults are not set when `hera_sim` is imported, so we can clear this up by running the notebook with a fresh kernel.
- The `_check_path` and `_read_npy` functions live in the `interpolators` module, so there is no need to re-define those functions here. We shouldn't actually be using these anywhere in these notebooks—this issue is actually coupled with @steven-murray's next question...
- The `_get_hera_bm_poly` private function no longer exists, and neither does the one for getting the bandpass. Copying their old definitions here does not expose them to the `defaults` manager, so changing the defaults won't give you different results. The notebook should instead do something like this:

```python
freqs = np.linspace(0.1, 0.2, 100)

hera_sim.defaults.set('h1c')
h1c_omega_p = hera_sim.defaults('omega_p')(freqs)
h1c_bandpass = hera_sim.defaults('bp_poly')(freqs)

hera_sim.defaults.set('h2c')
h2c_omega_p = hera_sim.defaults('omega_p')(freqs)
h2c_bandpass = hera_sim.defaults('bp_poly')(freqs)
```

This only goes through the public interface of the `defaults` object, and doesn't require using private methods. It also means we only need the `defaults`-related stuff to know about the models we're using.

hera_sim_simulator:
- The `example_blah` stuff definitely is not in there when you do a fresh import of `hera_sim`. This has to be due to the cells being run out of order.

hera_sim_tour:
- The point-source simulation has changed (see `foregrounds.py`), so no point source simulation should look so stripy. Try adding `Tsky_mdl=hera_sim.noise.HERA_Tsky_mdl['xx']` to the function call.
- The RFI models are called as `rfi_func(lsts, freqs, **kwargs)`. So the call signature is wrong, and we shouldn't expect the simulation to give something sensible. Maybe we could protect against this a bit by adding something like `if np.any(lsts < 0) or np.any(lsts > 2*np.pi): raise ValueError(...)` to things that require both `lsts` and `freqs`.
- It would be nice to use `uvtools.plot.labeled_waterfall` for generating the waterfalls (so that the waterfall axes are... labeled). Definitely not a priority, but at some point I'd like to see these improvements implemented.

As for the other notebooks, I don't have any comments beyond what @steven-murray has pointed out. Thanks again for your work @hughbg! Hopefully it'll only be a few iterations for us to get everything wrangled.
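The suggested guard against mixed-up `lsts`/`freqs` arguments can be sketched as a small standalone check. The helper name `validate_lsts` is hypothetical, not part of hera_sim; only the range condition comes from the review comment.

```python
import numpy as np

def validate_lsts(lsts):
    """Raise if LST values fall outside the valid [0, 2*pi] radian range.

    A guard like this (hypothetical helper, not hera_sim API) catches
    accidentally passing frequencies where LSTs are expected.
    """
    lsts = np.asarray(lsts)
    if np.any(lsts < 0) or np.any(lsts > 2 * np.pi):
        raise ValueError("lsts must be in radians, within [0, 2*pi].")
    return lsts

# A proper LST grid passes the check...
validate_lsts(np.linspace(0, 2 * np.pi, 10, endpoint=False))
# ...while frequencies in Hz are rejected loudly.
try:
    validate_lsts(np.linspace(100e6, 200e6, 10))
except ValueError as err:
    print("caught:", err)
```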
I've uploaded a new version of hera_sim_defaults. Rerunning the cells in order fixed some issues. I switched the beam and bandpass code to:

```python
freqs = np.linspace(0.1, 0.2, 100)

hera_sim.defaults.set('h1c')
h1c_omega_p = hera_sim.defaults('omega_p')(freqs)
h1c_bandpass = hera_sim.defaults('bp_poly')(freqs)

hera_sim.defaults.set('h2c')
h2c_omega_p = hera_sim.defaults('omega_p')(freqs)
h2c_bandpass = hera_sim.defaults('bp_poly')(freqs)
```
The old code was returning model values, whereas this code returns values interpolated from the models. That is in accord with the y-axis labels, which have always referred to beam and bandpass values (area and gains) rather than models. The plots are quite different now.
@r-pascua and @hughbg, I have finally gotten back to this PR.
I did the following:
- Ran `ruff` over the notebooks, so they all look a bit better now.
- Re-ran them with the current `hera_sim`.
r-pascua commented on 2024-08-27T22:44:23Z ----------------------------------------------------------------
Line #7. ax1.set_xlabel(None)
I think a better way to do this is to replace this line and line 8 with ax1.xaxis.set_visible(False)
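For reference, a minimal matplotlib sketch of the suggested replacement (toy data; the figure layout here is made up, not the notebook's):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for scripted use
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot([0, 1], [0, 1])
ax2.plot([0, 1], [1, 0])

# Instead of clearing the label and ticks in two separate calls
# (ax1.set_xlabel(None) plus a tick call), hide the whole x-axis at once:
ax1.xaxis.set_visible(False)
```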
r-pascua commented on 2024-08-27T22:44:23Z ----------------------------------------------------------------
Is the change to `np.int64(ant)` a `ruff` formatting thing?
steven-murray commented on 2024-09-09T12:54:56Z ----------------------------------------------------------------
no, it's a numpy 2 thing.
r-pascua commented on 2024-08-27T22:44:24Z ----------------------------------------------------------------
I'm guessing `False` -> `np.False_` is another `ruff` thing?
steven-murray commented on 2024-09-09T12:55:12Z ----------------------------------------------------------------
Also a numpy 2 thing.
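For context, NumPy 2 changed scalar reprs to spell out the type, which is why regenerated notebook outputs show `np.int64(...)` and `np.False_`. The values themselves are unchanged, as this small check illustrates:

```python
import numpy as np

ant = np.int64(3)
flag = np.False_

# Under NumPy 2, repr() includes the scalar type (e.g. "np.int64(3)"),
# but equality with plain Python values is unaffected.
print(repr(ant), repr(flag))
assert ant == 3
assert not bool(flag)
```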
r-pascua commented on 2024-08-27T22:44:25Z ----------------------------------------------------------------
Something has broken. I'm suspecting that it has to do with the change to `(baseline, pol)` evident from all the warning messages, but this needs to be dug into. The simplest check is to see whether these two are equivalent by relaxing the check from `a == b` to `np.isclose(a, b)`.
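The suggested relaxed comparison, sketched with toy arrays (the arrays below are illustrative, not the notebook's visibilities):

```python
import numpy as np

# Two visibility-like complex arrays that differ only at floating-point level,
# mimicking a write-to-file-and-read-back round trip.
a = np.exp(1j * np.linspace(0, 1, 8))
b = a * (1 + 1e-12)

# Exact equality is brittle for round-tripped floats...
print(np.array_equal(a, b))
# ...while np.isclose tolerates tiny numerical differences.
print(np.isclose(a, b).all())
```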
steven-murray commented on 2024-09-09T13:08:28Z ----------------------------------------------------------------
OK, I tried some stuff out and it turns out that the differences are due to the original simulation having some negative real parts of the autos. When this is saved to file and re-read, the imaginary parts are forced to zero, and the real parts are made positive. Not sure what to do about that.
r-pascua commented on 2024-09-09T21:32:29Z ----------------------------------------------------------------
OK, my fix was pretty coarse: I just made it so that autocorrelations are modified via `vis = np.abs(vis).astype(complex)` in the `hera_sim.eor.NoiselikeEoR.__call__` method. Figure it shouldn't be too much of an issue since I imagine this functionality really isn't used all that frequently.
steven-murray commented on 2024-09-10T10:29:34Z ----------------------------------------------------------------
That sounds fine to me
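The coarse fix described above amounts to forcing autocorrelations to be real and non-negative. A standalone sketch with toy data (not the actual `NoiselikeEoR` code):

```python
import numpy as np

rng = np.random.default_rng(0)
# Noise-like "auto" visibilities that may have small negative real parts.
vis = (rng.normal(size=16) * 0.01 - 0.005).astype(complex)

# Force non-negative real values, matching how the uvh5 round trip
# treats autos (imaginary part zeroed, real part made positive).
vis = np.abs(vis).astype(complex)

assert np.all(vis.real >= 0)
assert np.all(vis.imag == 0)
```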
r-pascua commented on 2024-08-27T22:44:25Z ----------------------------------------------------------------
Something has broken here--the first value should be `False`. Will need to dig into this.
r-pascua commented on 2024-09-09T21:33:19Z ----------------------------------------------------------------
Fixed this issue. The `Simulator` just wasn't caching the baseline filters.
r-pascua commented on 2024-08-27T22:44:26Z ----------------------------------------------------------------
Line #5. np.any(vis_A[blt_inds]), # The first application give (0, 1) noise,
Looks like I forgot to add "didn't" to the comment.
r-pascua commented on 2024-08-27T22:44:27Z ----------------------------------------------------------------
Line #18. Smin=0.5,
Might be a good idea to bump this up quite a bit, since it leads to a bit of cognitive dissonance in the following plot (not all parts of the "bright sources" plot are higher amplitude than any part of the "faint sources" plot---this is fine, but the reason why it's fine is perhaps a bit nuanced).
I set it to 5-20 instead of 0.5-10. I think it looks pretty reasonable now.
r-pascua commented on 2024-08-27T22:44:27Z ----------------------------------------------------------------
Ah. I think in order to fix this bug we need to add `attrs_to_pull = dict(ants="antpos")` as a class attribute after line 3. I forget exactly when I added this change, but it's a more explicit way of telling the `Simulator` which of its attributes need to be used as parameter values when adding a particular systematic effect. When the `Simulator` is trying to figure out how to set all of the model parameters, it first queries the model to see which parameters can be retrieved from the `Simulator` (meta)data and determines the default parameter settings according to the `defaults` singleton, then modifies the parameters that are passed to the model in accordance with any keyword arguments provided by the user.
steven-murray commented on 2024-09-09T13:17:26Z ----------------------------------------------------------------
Good - fixed.
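A minimal sketch of the lookup mechanism described above. The classes below are toy stand-ins, not hera_sim's real `Simulator` or model base class; only the `attrs_to_pull = dict(ants="antpos")` attribute is taken from the review comment.

```python
# Toy illustration of an attrs_to_pull-style lookup: the simulator maps a
# model parameter name ("ants") onto one of its own attributes ("antpos").
class ToySimulator:
    def __init__(self):
        self.antpos = {0: (0.0, 0.0, 0.0), 1: (14.6, 0.0, 0.0)}

class ToyEffect:
    # Tells the simulator which of its attributes supply model parameters.
    attrs_to_pull = dict(ants="antpos")

def pull_attrs(sim, model):
    # Build the keyword arguments the model should receive from the simulator.
    return {param: getattr(sim, attr) for param, attr in model.attrs_to_pull.items()}

kwargs = pull_attrs(ToySimulator(), ToyEffect)
print(sorted(kwargs))  # the model will receive an "ants" keyword argument
```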
r-pascua commented on 2024-08-27T22:44:28Z ----------------------------------------------------------------
It looks like mutual coupling isn't showing up in the list for some reason?
steven-murray commented on 2024-09-09T13:19:10Z ----------------------------------------------------------------
Huh. Just ran it again and it's there
r-pascua commented on 2024-09-09T17:45:40Z ----------------------------------------------------------------
weird
r-pascua commented on 2024-08-27T22:44:29Z ----------------------------------------------------------------
Line #1. print("\n".join(str(f) for f in tempdir.iterdir()))
Maybe change this to `sorted(tempdir.iterdir())` so it looks more sensible?
Done.
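The suggestion, sketched against a throwaway temp directory (the file names here are made up):

```python
import tempfile
from pathlib import Path

tempdir = Path(tempfile.mkdtemp())
for name in ["b.uvh5", "a.uvh5", "c.uvh5"]:
    (tempdir / name).touch()

# iterdir() yields entries in arbitrary filesystem order,
# so sort for a stable, readable listing.
print("\n".join(str(f) for f in sorted(tempdir.iterdir())))
```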
r-pascua commented on 2024-08-27T22:44:29Z ----------------------------------------------------------------
Line #1. print("\n".join(str(f) for f in tempdir.iterdir() if str(f).endswith("files.uvh5")))
Same as previous comment
Done
r-pascua commented on 2024-08-27T22:44:30Z ----------------------------------------------------------------
Maybe change the reflections to a bandpass, or make the reflection amplitude lower, since the 1%-level reflections are making the figure below look kind of insane.
steven-murray commented on 2024-09-09T13:21:43Z ----------------------------------------------------------------
I set it to `amp=1e-4`.
r-pascua commented on 2024-09-09T21:36:51Z ----------------------------------------------------------------
Huh, looks like that didn't help much. Probably some annoying thing with a small set of observing times leading to the mock EoR component dominating the signal? Looks like this has been an issue for a while, so I suppose I'm fine with leaving it as-is.
steven-murray commented on 2024-09-10T10:30:17Z ----------------------------------------------------------------
Yeah, something to dig into later if we need to.
r-pascua commented on 2024-08-27T22:44:31Z ----------------------------------------------------------------
Line #2. import aipy
We're not using `aipy` anymore, are we?
Not in general. I've swapped this out to use `astropy.units.sday`.
r-pascua commented on 2024-08-27T22:44:32Z ----------------------------------------------------------------
Line #1. uvtools.plot.labeled_waterfall(nos_jy, mode="log", freqs=fqs*1e9, lsts=lsts)
Can we restrict the dynamic range on this plot? Right now we have too much dynamic range, so it's very hard to see that there are noise fluctuations (it just looks kind of grainy). I think it's as simple as setting `dynamic_range=3` in the call to `labeled_waterfall`.
Done.
r-pascua commented on 2024-08-27T22:44:33Z ----------------------------------------------------------------
Looks like we lost the impulse and scatter RFI? I'm guessing that it's there, but the amplitude is too low.
steven-murray commented on 2024-09-09T14:32:35Z ----------------------------------------------------------------
I can see it if I take the other RFI components away, so it is there. Their magnitude is so different that they don't show well in this plot. Plotting on log scale doesn't work either because most of the array is zero so it just bombs out.
r-pascua commented on 2024-09-09T21:39:01Z ----------------------------------------------------------------
Gotcha. Thanks for checking!
r-pascua commented on 2024-08-27T22:44:33Z ----------------------------------------------------------------
Line #8. MXG = MX + np.log10(gainscale)
We can remove this line, as well as the one above it.
Yup, good.
r-pascua commented on 2024-08-27T22:44:34Z ----------------------------------------------------------------
Line #3. uvtools.plot.labeled_waterfall(vis_total, mode="log", freqs=fqs*1e9, lsts=lsts)
I think this plot will also benefit from limiting the dynamic range.
Done
r-pascua commented on 2024-08-27T22:44:35Z ----------------------------------------------------------------
Line #1. uvtools.plot.labeled_waterfall(vis_xtalk, mode="log", freqs=fqs*1e9, lsts=lsts)
Same dynamic range comment
Done
OK, I think I've fixed the last of the issues and removed debug things from the notebooks. @steven-murray please take one last look to ensure we didn't miss anything before merging!
Updated notebooks. They run, but may not make use of all the new features properly, so that needs to be examined.