HERA-Team / hera_qm

HERA Data Quality Metrics
MIT License

Fix test files to be compatible with pyuvdata 2.4 #443

Closed: bhazelton closed this 1 year ago

bhazelton commented 1 year ago

This updates old test files to be compatible with pyuvdata 2.4. The only tests failing on pyuvdata 2.4 were in test_xrfi.py, so these file updates are specifically for many of the files used by that module. Edit: I also fixed a test file (antenna_flags.h5) that does not appear to be used by tests in this repo but is used by hera_cal's tests; fixing it here resolves the hera_cal errors. It should probably be moved to that repo instead, but I'll leave that up to the owners of these repos.

I do worry that these test files are old enough that they no longer represent what actually comes off of HERA, so it would be better to replace them with newer files. But this patch at least keeps the tests working.
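For context, refreshing a stale test file is often just a read/write round-trip under the new pyuvdata version; a minimal sketch, assuming a calfits file like the ones touched here (the file name is illustrative):

```python
from pyuvdata import UVCal

# Read an old calfits test file with pyuvdata 2.4 and write it back out so
# the on-disk metadata matches what the new version expects.
cal = UVCal()
cal.read_calfits("zen.2457698.40355.HH.omni.calfits")
cal.write_calfits("zen.2457698.40355.HH.omni.calfits", clobber=True)
```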

I'll note that two tests are failing on my machine. They still fail when I check out older versions of pyuvdata, so I think they're related to some other dependency change, not pyuvdata. Edit: those tests pass on the python 3.8 builds, which use numpy < 1.25, so the failures appear to be specific to numpy >= 1.25.
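If that's right and the remaining differences are pure float rounding, one way to make these comparisons robust would be an explicit tolerance. The assertions currently rely on np.isclose's defaults (rtol=1e-5, atol=1e-8); a minimal sketch, where the helper name and the rtol/atol values are only illustrative:

```python
import numpy as np

def metric_arrays_match(uvf1, uvf2, rtol=1e-4, atol=1e-6):
    """Compare two UVFlag metric arrays with an explicit, looser tolerance."""
    return np.allclose(uvf1.metric_array, uvf2.metric_array, rtol=rtol, atol=atol)
```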

For reference, the test output with the two failures is copied below:

============================================= FAILURES =============================================
________________________________________ test_xrfi_run_step ________________________________________

tmpdir = local('/private/var/folders/l8/d94924jn1yv3jpz9mkwt2r480000gr/T/pytest-of-bryna/pytest-212/test_xrfi_run_step0')

    def test_xrfi_run_step(tmpdir):
        # setup files in tmpdir.
        tmp_path = tmpdir.strpath
        fake_obs = 'zen.2457698.40355.HH'
        ocal_file = os.path.join(tmp_path, fake_obs + '.omni.calfits')
        shutil.copyfile(test_c_file, ocal_file)
        acal_file = os.path.join(tmp_path, fake_obs + '.abs.calfits')
        shutil.copyfile(test_c_file, acal_file)
        raw_dfile = os.path.join(tmp_path, fake_obs + '.uvh5')
        shutil.copyfile(test_uvh5_file, raw_dfile)
        model_file = os.path.join(tmp_path, fake_obs + '.omni_vis.uvh5')
        shutil.copyfile(test_uvh5_file, model_file)
        a_priori_flag_integrations = os.path.join(tmp_path, 'a_priori_flags_integrations.yaml')
        shutil.copyfile(test_flag_integrations, a_priori_flag_integrations)
        # if run_filter is false, then uv should not be None but everything else should be None
        uv1, uvf1, uvf_f1, uvf_a1, metrics1, flags1 = xrfi.xrfi_run_step(uv_files=raw_dfile, run_filter=False, dtype='uvdata')
        assert issubclass(uv1.__class__, UVData)
        assert uvf1 is None
        assert uvf_f1 is None
        assert uvf_a1 is None
        assert len(metrics1) == 0
        assert len(flags1) == 0

        # test expected output formats if run_filter is True.
        uv1, uvf1, uvf_f1, uvf_a1, metrics1, flags1 = xrfi.xrfi_run_step(uv_files=raw_dfile, run_filter=True, dtype='uvdata')
        assert len(flags1) == 1
        assert len(metrics1) == 1
        assert uvf_a1 is None

        # test expected output formats when calculate_uvf_apriori is True
        uv1, uvf1, uvf_f1, uvf_a1, metrics1, flags1 = xrfi.xrfi_run_step(uv_files=raw_dfile,
        calculate_uvf_apriori=True, run_filter=True, dtype='uvdata', wf_method='mean')
        assert len(flags1) == 2
        assert len(metrics1) == 1
        assert uvf_a1 is not None

        # now test partial i/o
        uv2, uvf2, uvf_f2, uvf_a2, metrics2, flags2 = xrfi.xrfi_run_step(uv_files=raw_dfile,
        calculate_uvf_apriori=True, run_filter=True, dtype='uvdata', Nwf_per_load=1, wf_method='mean')
        assert len(flags2) == 2
        assert len(metrics2) == 1
        assert np.all(np.isclose(uvf_f1.flag_array, uvf_f2.flag_array))
        assert np.all(np.isclose(uvf_a1.flag_array, uvf_a2.flag_array))
>       assert np.all(np.isclose(uvf1.metric_array, uvf2.metric_array))
E       assert False
E        +  where False = <function all at 0x10faef130>(array([[[ True],\n        [ True],\n        [ True],\n        [ True],\n        [ True],\n        [ True],\n        [ True],...       [ True],\n        [ True],\n        [ True],\n        [ True],\n        [ True],\n        [ True],\n        [ True]]]))
E        +    where <function all at 0x10faef130> = np.all
E        +    and   array([[[ True],\n        [ True],\n        [ True],\n        [ True],\n        [ True],\n        [ True],\n        [ True],...       [ True],\n        [ True],\n        [ True],\n        [ True],\n        [ True],\n        [ True],\n        [ True]]]) = <function isclose at 0x10faff3f0>(array([[[ 1.02109335e+03],\n        [ 6.48398590e-01],\n        [ 5.36547043e-01],\n        [ 2.34808101e-01],\n        [ ...338e-02],\n        [-6.75277721e-01],\n        [-7.68172854e-01],\n        [-6.75277721e-01],\n        [-1.47319597e+00]]]), array([[[ 1.02109322e+03],\n        [ 6.48398106e-01],\n        [ 5.36546764e-01],\n        [ 2.34808168e-01],\n        [ ...498e-02],\n        [-6.75277721e-01],\n        [-7.68172930e-01],\n        [-6.75277721e-01],\n        [-1.47319604e+00]]]))
E        +      where <function isclose at 0x10faff3f0> = np.isclose
E        +      and   array([[[ 1.02109335e+03],\n        [ 6.48398590e-01],\n        [ 5.36547043e-01],\n        [ 2.34808101e-01],\n        [ ...338e-02],\n        [-6.75277721e-01],\n        [-7.68172854e-01],\n        [-6.75277721e-01],\n        [-1.47319597e+00]]]) = <pyuvdata.uvflag.uvflag.UVFlag object at 0x1b9b37550>.metric_array
E        +      and   array([[[ 1.02109322e+03],\n        [ 6.48398106e-01],\n        [ 5.36546764e-01],\n        [ 2.34808168e-01],\n        [ ...498e-02],\n        [-6.75277721e-01],\n        [-7.68172930e-01],\n        [-6.75277721e-01],\n        [-1.47319604e+00]]]) = <pyuvdata.uvflag.uvflag.UVFlag object at 0x1b9b99910>.metric_array

hera_qm/tests/test_xrfi.py:1306: AssertionError
__________________________________________ test_xrfi_run ___________________________________________

tmpdir = local('/private/var/folders/l8/d94924jn1yv3jpz9mkwt2r480000gr/T/pytest-of-bryna/pytest-212/test_xrfi_run0')

    def test_xrfi_run(tmpdir):
        # The warnings are because we use UVFlag.to_waterfall() on the total chisquareds
        # This doesn't hurt anything, and lets us streamline the pipe
        mess1 = ['This object is already a waterfall']
        messages = 8 * mess1
        cat1 = [UserWarning]
        categories = 8 * cat1
        # Spoof a couple files to use as extra inputs (xrfi_run needs two cal files and two data-like files)
        tmp_path = tmpdir.strpath
        fake_obs = 'zen.2457698.40355.HH'
        ocal_file = os.path.join(tmp_path, fake_obs + '.omni.calfits')
        shutil.copyfile(test_c_file, ocal_file)
        acal_file = os.path.join(tmp_path, fake_obs + '.abs.calfits')
        shutil.copyfile(test_c_file, acal_file)
        raw_dfile = os.path.join(tmp_path, fake_obs + '.uvh5')
        shutil.copyfile(test_uvh5_file, raw_dfile)
        model_file = os.path.join(tmp_path, fake_obs + '.omni_vis.uvh5')
        shutil.copyfile(test_uvh5_file, model_file)

        # check warnings
        with pytest.warns(None) as record:
            xrfi.xrfi_run(ocal_file, acal_file, model_file, raw_dfile, history='Just a test', kt_size=3)
        assert len(record) >= len(messages)
        n_matched_warnings = 0
        for i in range(len(record)):
            if mess1[0] in str(record[i].message) and cat1[0] == record[i].category:
                n_matched_warnings += 1
        assert n_matched_warnings == 8

        outdir = os.path.join(tmp_path, 'zen.2457698.40355.xrfi')
        ext_labels = {'ag_flags1': 'Abscal gains, median filter. Flags.',
                      'ag_flags2': 'Abscal gains, mean filter. Flags.',
                      'ag_metrics1': 'Abscal gains, median filter.',
                      'ag_metrics2': 'Abscal gains, mean filter.',
                      'apriori_flags': 'A priori flags.',
                      'ax_flags1': 'Abscal chisq, median filter. Flags.',
                      'ax_flags2': 'Abscal chisq, mean filter. Flags.',
                      'ax_metrics1': 'Abscal chisq, median filter.',
                      'ax_metrics2': 'Abscal chisq, mean filter.',
                      'omnical_chi_sq_flags1': 'Omnical overall modified z-score of chisq. Flags.',
                      'omnical_chi_sq_flags2': 'Omnical overall z-score of chisq. Flags.',
                      'omnical_chi_sq_renormed_metrics1': 'Omnical overall modified z-score of chisq.',
                      'omnical_chi_sq_renormed_metrics2': 'Omnical overall z-score of chisq.',
                      'abscal_chi_sq_flags1': 'Abscal overall modified z-score of chisq. Flags.',
                      'abscal_chi_sq_flags2': 'Abscal overall z-score of chisq. Flags.',
                      'abscal_chi_sq_renormed_metrics1': 'Abscal overall modified z-score of chisq.',
                      'abscal_chi_sq_renormed_metrics2': 'Abscal overall z-score of chisq.',
                      'combined_flags1': 'Flags from combined metrics, round 1.',
                      'combined_flags2': 'Flags from combined metrics, round 2.',
                      'combined_metrics1': 'Combined metrics, round 1.',
                      'combined_metrics2': 'Combined metrics, round 2.',
                      'cross_flags1': 'Crosscorr, median filter. Flags.',
                      'cross_flags2': 'Crosscorr, mean filter. Flags.',
                      'auto_flags1': 'Autocorr, median filter. Flags.',
                      'auto_flags2': 'Autocorr, mean filter. Flags.',
                      'auto_metrics2': 'Autocorr, mean filter.',
                      'auto_metrics1': 'Autocorr, median filter.',
                      'cross_metrics2': 'Crosscorr, mean filter.',
                      'cross_metrics1': 'Crosscorr, median filter.',
                      'flags1': 'ORd flags, round 1.',
                      'flags2': 'ORd flags, round 2.',
                      'og_flags1': 'Omnical gains, median filter. Flags.',
                      'og_flags2': 'Omnical gains, mean filter. Flags.',
                      'og_metrics1': 'Omnical gains, median filter.',
                      'og_metrics2': 'Omnical gains, mean filter.',
                      'ox_flags1': 'Omnical chisq, median filter. Flags.',
                      'ox_flags2': 'Omnical chisq, mean filter. Flags.',
                      'ox_metrics1': 'Omnical chisq, median filter.',
                      'ox_metrics2': 'Omnical chisq, mean filter.',
                      'v_flags1': 'Omnical visibility solutions, median filter. Flags.',
                      'v_flags2': 'Omnical visibility solutions, mean filter. Flags.',
                      'v_metrics1': 'Omnical visibility solutions, median filter.',
                      'v_metrics2': 'Omnical visibility solutions, mean filter.'}
        for ext, label in ext_labels.items():
            # by default, only cross median filter / mean filter is not performed.
            if not ext in['cross_metrics1', 'cross_flags1']:
                out = os.path.join(outdir, '.'.join([fake_obs, ext, 'h5']))
                assert os.path.exists(out)
                uvf = UVFlag(out, use_future_array_shapes=True)
                assert uvf.label == label
        # cleanup
        for ext, label in ext_labels.items():
            out = os.path.join(outdir, '.'.join([fake_obs, ext, 'h5']))
            if os.path.exists(out):
                os.remove(out)

        # now really do everything.
        uvf_list1 = []
        uvf_list1_names = []
        with pytest.warns(None) as record:
            xrfi.xrfi_run(ocal_file, acal_file, model_file, raw_dfile,
                          history='Just a test', kt_size=3, cross_median_filter=True)
        assert len(record) >= len(messages)
        n_matched_warnings = 0
        for i in range(len(record)):
            if mess1[0] in str(record[i].message) and cat1[0] == record[i].category:
                n_matched_warnings += 1
        assert n_matched_warnings == 8

        for ext, label in ext_labels.items():
            out = os.path.join(outdir, '.'.join([fake_obs, ext, 'h5']))
            assert os.path.exists(out)
            uvf = UVFlag(out, use_future_array_shapes=True)
            uvf_list1.append(uvf)
            uvf_list1_names.append(out)
            assert uvf.label == label
        # cleanup
        for ext, label in ext_labels.items():
            out = os.path.join(outdir, '.'.join([fake_obs, ext, 'h5']))
            if os.path.exists(out):
                os.remove(out)
        # now do partial i/o and check equality of outputs.
        uvf_list2 = []
        uvf_list2_names = []
        with pytest.warns(None) as record:
            xrfi.xrfi_run(ocal_file, acal_file, model_file, raw_dfile, Nwf_per_load=1,
                          history='Just a test', kt_size=3, cross_median_filter=True)
        assert len(record) >= len(messages)
        n_matched_warnings = 0
        for i in range(len(record)):
            if mess1[0] in str(record[i].message) and cat1[0] == record[i].category:
                n_matched_warnings += 1
        assert n_matched_warnings == 8

        for ext, label in ext_labels.items():
            out = os.path.join(outdir, '.'.join([fake_obs, ext, 'h5']))
            assert os.path.exists(out)
            uvf = UVFlag(out, use_future_array_shapes=True)
            uvf_list2.append(uvf)
            uvf_list2_names.append(out)
            assert uvf.label == label
        # cleanup
        for ext, label in ext_labels.items():
            out = os.path.join(outdir, '.'.join([fake_obs, ext, 'h5']))
            if os.path.exists(out):
                os.remove(out)
        # compare
        for uvf1, uvf2 in zip(uvf_list1, uvf_list2):
            if uvf1.mode == 'flag':
                assert np.all(np.isclose(uvf1.flag_array, uvf2.flag_array))
            elif uvf1.mode == 'metric':
>               assert np.all(np.isclose(uvf1.metric_array, uvf2.metric_array))
E               assert False
E                +  where False = <function all at 0x10faef130>(array([[[ True],\n        [ True],\n        [ True],\n        [ True],\n        [ True],\n        [ True],\n        [ True],...       [ True],\n        [ True],\n        [ True],\n        [ True],\n        [ True],\n        [ True],\n        [ True]]]))
E                +    where <function all at 0x10faef130> = np.all
E                +    and   array([[[ True],\n        [ True],\n        [ True],\n        [ True],\n        [ True],\n        [ True],\n        [ True],...       [ True],\n        [ True],\n        [ True],\n        [ True],\n        [ True],\n        [ True],\n        [ True]]]) = <function isclose at 0x10faff3f0>(array([[[ 1.00024272e+04],\n        [ 2.04849239e+00],\n        [ 2.00204721e+00],\n        [-6.60763167e-01],\n        [-...990e-01],\n        [ 6.42813182e-01],\n        [ 0.00000000e+00],\n        [ 1.51638818e+00],\n        [ 2.80091438e+00]]]), array([[[ 1.00024270e+04],\n        [ 2.04849230e+00],\n        [ 2.00204721e+00],\n        [-6.60763149e-01],\n        [-...003e-01],\n        [ 6.42813281e-01],\n        [ 0.00000000e+00],\n        [ 1.51638780e+00],\n        [ 2.80091498e+00]]]))
E                +      where <function isclose at 0x10faff3f0> = np.isclose
E                +      and   array([[[ 1.00024272e+04],\n        [ 2.04849239e+00],\n        [ 2.00204721e+00],\n        [-6.60763167e-01],\n        [-...990e-01],\n        [ 6.42813182e-01],\n        [ 0.00000000e+00],\n        [ 1.51638818e+00],\n        [ 2.80091438e+00]]]) = <pyuvdata.uvflag.uvflag.UVFlag object at 0x1b9c96190>.metric_array
E                +      and   array([[[ 1.00024270e+04],\n        [ 2.04849230e+00],\n        [ 2.00204721e+00],\n        [-6.60763149e-01],\n        [-...003e-01],\n        [ 6.42813281e-01],\n        [ 0.00000000e+00],\n        [ 1.51638780e+00],\n        [ 2.80091498e+00]]]) = <pyuvdata.uvflag.uvflag.UVFlag object at 0x1b9ca3010>.metric_array

hera_qm/tests/test_xrfi.py:1646: AssertionError
========================================= warnings summary =========================================
hera_qm/xrfi.py:22
  Please use `convolve` from the `scipy.ndimage` namespace, the `scipy.ndimage.filters` namespace is deprecated.

hera_qm/tests/test_ant_class.py: 1 warning
hera_qm/tests/test_auto_metrics.py: 1 warning
hera_qm/tests/test_xrfi.py: 14 warnings
  Casting complex values to real discards the imaginary part

hera_qm/tests/test_ant_metrics.py::test_calc_corr_stats
hera_qm/tests/test_ant_metrics.py::test_find_totally_dead_ants
hera_qm/tests/test_ant_metrics.py::test_ant_metrics_run_and_load_antenna_metrics
  invalid value encountered in divide

hera_qm/tests/test_ant_metrics.py::test_calc_corr_stats
hera_qm/tests/test_ant_metrics.py::test_find_totally_dead_ants
hera_qm/tests/test_ant_metrics.py::test_ant_metrics_run_and_load_antenna_metrics
hera_qm/tests/test_auto_metrics.py::test_get_auto_spectra
hera_qm/tests/test_auto_metrics.py::test_auto_metrics_run
hera_qm/tests/test_auto_metrics.py::test_auto_metrics_run
hera_qm/tests/test_auto_metrics.py::test_auto_metrics_run
hera_qm/tests/test_auto_metrics.py::test_auto_metrics_run
  Mean of empty slice

hera_qm/tests/test_ant_metrics.py: 2 warnings
hera_qm/tests/test_auto_metrics.py: 6 warnings
hera_qm/tests/test_firstcal_metrics.py: 2 warnings
hera_qm/tests/test_xrfi.py: 3 warnings
  All-NaN slice encountered

hera_qm/tests/test_auto_metrics.py::test_auto_metrics_run
hera_qm/tests/test_auto_metrics.py::test_auto_metrics_run
  K1 value 8 is larger than the data of dimension 4; using the size of the data for the kernel size

hera_qm/tests/test_auto_metrics.py::test_auto_metrics_run
  Degrees of freedom <= 0 for slice.

hera_qm/tests/test_firstcal_metrics.py: 12 warnings
hera_qm/tests/test_omnical_metrics.py: 29 warnings
hera_qm/tests/test_xrfi.py: 1 warning
  ['antenna_positions'] are not set or being overwritten. Using known values for HERA.

hera_qm/tests/test_firstcal_metrics.py: 145 warnings
  The optimal value found for dimension 0 of parameter k1__length_scale is close to the specified lower bound 0.01. Decreasing the bound and calling fit again may find a better value.

hera_qm/tests/test_firstcal_metrics.py: 33 warnings
  The optimal value found for dimension 0 of parameter k1__length_scale is close to the specified upper bound 1.0. Increasing the bound and calling fit again may find a better value.

hera_qm/tests/test_firstcal_metrics.py::test_write_load_metrics
hera_qm/tests/test_firstcal_metrics.py::test_write_load_metrics
  JSON-type files can still be written but are no longer written by default.
  Write to HDF5 format for future compatibility.

hera_qm/tests/test_firstcal_metrics.py::test_write_load_metrics
hera_qm/tests/test_metrics_io.py::test_process_ex_ants_string_and_file[json]
hera_qm/tests/test_metrics_io.py::test_process_ex_ants_string_and_file[json]
hera_qm/tests/test_utils.py::test_metrics2mc
  JSON-type files can still be read but are no longer written by default.
  Write to HDF5 format for future compatibility.

hera_qm/tests/test_firstcal_metrics.py::test_write_load_metrics
hera_qm/tests/test_firstcal_metrics.py::test_write_load_metrics
  Pickle-type files can still be written but are no longer written by default.
  Write to HDF5 format for future compatibility.

hera_qm/tests/test_firstcal_metrics.py::test_write_load_metrics
  Pickle-type files can still be read but are no longer written by default.
  Write to HDF5 format for future compatibility.

hera_qm/tests/test_firstcal_metrics.py::test_init_two_pol
hera_qm/tests/test_firstcal_metrics.py::test_init_two_pol
hera_qm/tests/test_firstcal_metrics.py::test_run_metrics_two_pols
hera_qm/tests/test_firstcal_metrics.py::test_run_metrics_two_pols
hera_qm/tests/test_firstcal_metrics.py::test_run_metrics_two_pols
  Mean of empty slice.

hera_qm/tests/test_firstcal_metrics.py::test_init_two_pol
hera_qm/tests/test_firstcal_metrics.py::test_init_two_pol
hera_qm/tests/test_firstcal_metrics.py::test_run_metrics_two_pols
hera_qm/tests/test_firstcal_metrics.py::test_run_metrics_two_pols
hera_qm/tests/test_firstcal_metrics.py::test_run_metrics_two_pols
  invalid value encountered in scalar divide

hera_qm/tests/test_utils.py::test_apply_yaml_flags_uvcal[a_priori_flags_integrations.yaml]
hera_qm/tests/test_utils.py::test_apply_yaml_flags_uvcal[a_priori_flags_jds.yaml]
hera_qm/tests/test_utils.py::test_apply_yaml_flags_uvcal[a_priori_flags_lsts.yaml]
hera_qm/tests/test_utils.py::test_apply_yaml_flags_uvcal[a_priori_flags_no_integrations.yaml]
hera_qm/tests/test_utils.py::test_apply_yaml_flags_uvcal[a_priori_flags_no_chans.yaml]
  Cannot preserve total_quality_array when changing number of antennas; discarding

hera_qm/tests/test_utils.py: 4 warnings
hera_qm/tests/test_xrfi.py: 15 warnings
  Passing None has been deprecated.
  See https://docs.pytest.org/en/latest/how-to/capture-warnings.html#additional-use-cases-of-warnings-in-tests for alternatives in common use cases.

hera_qm/tests/test_vis_metrics.py::test_check_noise_variance
hera_qm/tests/test_vis_metrics.py::test_check_noise_variance_inttime_error
hera_qm/tests/test_vis_metrics.py::test_vis_bl_cov
hera_qm/tests/test_vis_metrics.py::test_plot_bl_cov
hera_qm/tests/test_vis_metrics.py::test_plot_bl_bl_scatter
hera_qm/tests/test_vis_metrics.py::test_sequential_diff
  The shapes of several attributes will be changing in the future to remove the deprecated spectral window axis. You can call the `use_future_array_shapes` method to convert to the future array shapes now or set the parameter of the same name on this method to both convert to the future array shapes and silence this warning. See the UVData tutorial on ReadTheDocs for more details about these shape changes.

hera_qm/tests/test_vis_metrics.py::test_vis_bl_cov
hera_qm/tests/test_vis_metrics.py::test_plot_bl_cov
hera_qm/tests/test_vis_metrics.py::test_plot_bl_bl_scatter
  It is not clear from the file if the data are projected or not. Since the 'epoch' variable is not present it will be labeled as unprojected. If that is incorrect you can use the 'projected' parameter on this method to set it properly.

hera_qm/tests/test_vis_metrics.py::test_vis_bl_cov
hera_qm/tests/test_vis_metrics.py::test_plot_bl_cov
hera_qm/tests/test_vis_metrics.py::test_plot_bl_bl_scatter
  Fixing auto-correlations to be be real-only, after some imaginary values were detected in data_array. Largest imaginary component was 3.4088176725788344e-09, largest imaginary/real ratio was 1.912405234172354e-10.

hera_qm/tests/test_xrfi.py: 173 warnings
  K1 value 8 is larger than the data of dimension 3; using the size of the data for the kernel size

hera_qm/tests/test_xrfi.py: 302 warnings
  The shapes of several attributes will be changing in the future to remove the deprecated spectral window axis. You can call the `use_future_array_shapes` method to convert to the future array shapes now or set the parameter of the same name on this method to both convert to the future array shapes and silence this warning.

hera_qm/tests/test_xrfi.py::test_xrfi_h1c_run_uvfits_no_xrfi_path
hera_qm/tests/test_xrfi.py::test_xrfi_h1c_run_uvfits_xrfi_path
hera_qm/tests/test_xrfi.py::test_xrfi_h1c_run_uvfits_model
  Fixing auto-correlations to be be real-only, after some imaginary values were detected in data_array. Largest imaginary component was 9.119837107718354e-10, largest imaginary/real ratio was 6.280261727331649e-11.

hera_qm/tests/test_xrfi.py::test_xrfi_h1c_run_incorrect_model
  Fixing auto-correlations to be be real-only, after some imaginary values were detected in data_array. Largest imaginary component was 1.084438761012052e-09, largest imaginary/real ratio was 3.159844386146915e-09.

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
===================================== short test summary info ======================================
FAILED hera_qm/tests/test_xrfi.py::test_xrfi_run_step - assert False
FAILED hera_qm/tests/test_xrfi.py::test_xrfi_run - assert False
===================== 2 failed, 256 passed, 798 warnings in 141.98s (0:02:21) ======================
codecov[bot] commented 1 year ago

Codecov Report

Patch coverage: 100.00% and no project coverage change.

Comparison is base (fc599c3) 97.09% compared to head (cc65ee3) 97.09%.

:exclamation: Current head cc65ee3 differs from pull request most recent head 7d95af4. Consider uploading reports for the commit 7d95af4 to get more accurate results

Additional details and impacted files

```diff
@@           Coverage Diff           @@
##             main     #443   +/-   ##
=======================================
  Coverage   97.09%   97.09%
=======================================
  Files          11       11
  Lines        3540     3540
=======================================
  Hits         3437     3437
  Misses        103      103
```

| [Files Changed](https://app.codecov.io/gh/HERA-Team/hera_qm/pull/443?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=HERA-Team) | Coverage Δ | |
|---|---|---|
| [hera\_qm/xrfi.py](https://app.codecov.io/gh/HERA-Team/hera_qm/pull/443?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=HERA-Team#diff-aGVyYV9xbS94cmZpLnB5) | `99.49% <100.00%> (ø)` | |


steven-murray commented 1 year ago

Looks like we need to update some calls to np.product -> np.prod. Shall we do that in this PR as well?
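For reference, the rename is mechanical; np.product is a deprecated alias of np.prod (DeprecationWarning on numpy >= 1.25, removed in numpy 2.0):

```python
import numpy as np

shape = (2, 3, 4)
# np.product(shape) now raises a DeprecationWarning; np.prod is the
# supported spelling and behaves identically.
n_elements = np.prod(shape)  # was: np.product(shape)
assert n_elements == 24
```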

bhazelton commented 1 year ago

The idea is to make UVFlag more self-sufficient. Adding telescope metadata was requested by users. We considered not requiring it on waterfall objects, but after discussing it in several telecons the consensus was that doing so made the code overly complicated and UVFlag less functional.

We can revisit that decision, although it would have been nice to hear about concerns when the deprecation warnings were added rather than after the functionality was actually removed.

I do think that, now that waterfall objects carry telescope metadata, we could make it optional to pass UVData and UVCal objects to the to_antenna and to_baseline methods. If you don't pass them in, the methods could use all the antennas in the metadata to do the inflation; see the sketch below.
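A minimal sketch of the pattern that change would relax; the file name is illustrative, and the commented-out no-argument call is the hypothetical future form, not current API:

```python
from pyuvdata import UVData, UVFlag

# Current pattern: inflating a waterfall UVFlag to baseline shape requires
# a UVData object to supply the antenna/baseline layout.
uvd = UVData.from_file("zen.2457698.40355.HH.uvh5")  # illustrative file name
uvf = UVFlag(uvd, mode="flag", waterfall=True)
uvf.to_baseline(uvd)

# Hypothetical future form floated above: with telescope metadata now stored
# on waterfall objects, the UVData argument could become optional and the
# inflation could use all antennas in the metadata.
# uvf.to_baseline()
```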

bhazelton commented 1 year ago

> Looks like we need to update some calls to np.product -> np.prod. Shall we do that in this PR as well?

Sure, feel free.

It looks like the two failing tests only fail on numpy >= 1.25. The python 3.8 tests pass, and they use numpy 1.24.

bhazelton commented 1 year ago

I just realized that there's a file in this repo (antenna_flags.h5) that is apparently not used by tests in this repo but is used by hera_cal tests, and it also needed to be updated. I fixed it, but it seems like bad organization to keep this file in this repo.