tardis-sn / tardis

TARDIS - Temperature And Radiative Diffusion In Supernovae
https://tardis-sn.github.io/tardis

Move to qgridnext #2814

Closed · sarthak-dv closed 2 months ago

sarthak-dv commented 2 months ago

:pencil: Description

Type: :roller_coaster: infrastructure

To avoid future errors caused by the deprecated qgrid library, we are temporarily moving to qgridnext until we find a reliable long-term alternative.
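For context, a minimal sketch of what the swap can look like at the import level (illustrative only, assuming the widgets currently call `qgrid.show_grid`; this is not the actual diff of this PR):

```python
# Illustrative sketch only -- not the actual change made in this PR.
# qgridnext is published as a maintained continuation of qgrid, so the
# simplest migration is an import swap, with a fallback for environments
# that still ship the deprecated upstream package.
try:
    import qgridnext as qgrid  # preferred: actively maintained fork
except ImportError:
    import qgrid  # fallback: deprecated upstream qgrid

import pandas as pd

# A DataFrame-backed widget call stays the same under either import.
df = pd.DataFrame({"Wavelength": [6562.8, 4861.3], "Species": ["H I", "H I"]})
grid_widget = qgrid.show_grid(df)
```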

:pushpin: Resources

Examples, notebooks, and links to useful references.

:vertical_traffic_light: Testing

How did you test these changes?

:ballot_box_with_check: Checklist

Note: If you are not allowed to perform any of these actions, ping (@) a contributor.

tardis-bot commented 2 months ago

*beep* *bop* Hi human, I ran ruff on the latest commit (31569583732e2845c5fccecc4778a3e7c723a6cd). Here are the outputs produced. Results can also be downloaded as artifacts here. Summarised output:

```diff
2  I001  [*] Import block is un-sorted or un-formatted
2  E999  [ ] SyntaxError: Expected an expression
1  E712  [*] Avoid equality comparisons to `False`; use `if not ...:` for false checks
1  F401  [*] `re` imported but unused
```

Complete output (might be large):

```diff
.github/workflows/tests.yml:7:4: E999 SyntaxError: Expected an expression
tardis/visualization/widgets/line_info.py:3:1: I001 [*] Import block is un-sorted or un-formatted
tardis/visualization/widgets/line_info.py:3:8: F401 [*] `re` imported but unused
tardis/visualization/widgets/line_info.py:305:21: E712 Avoid equality comparisons to `False`; use `if not ...:` for false checks
tardis/visualization/widgets/util.py:3:1: I001 [*] Import block is un-sorted or un-formatted
tardis_env3.yml:3:10: E999 SyntaxError: Expected an expression
Found 6 errors.
[*] 3 fixable with the `--fix` option (1 hidden fix can be enabled with the `--unsafe-fixes` option).
```
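For readers unfamiliar with the E712 rule flagged above, a minimal, hypothetical example of the pattern and its fix (not the actual `line_info.py` code) looks like this:

```python
# Hypothetical illustration of ruff rule E712 -- not the actual widget code.
rows_selected = False

# Flagged pattern: equality comparison against the literal `False`.
if rows_selected == False:
    print("nothing selected")

# Fix suggested by the rule: rely on truthiness instead.
if not rows_selected:
    print("nothing selected")
```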
codecov[bot] commented 2 months ago

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Project coverage is 70.79%. Comparing base (b79b61e) to head (ecc90b1). Report is 1 commit behind head on master.

Additional details and impacted files

```diff
@@            Coverage Diff             @@
##           master    #2814      +/-   ##
==========================================
- Coverage   70.96%   70.79%   -0.18%
==========================================
  Files         209      209
  Lines       15638    15637       -1
==========================================
- Hits        11098    11070      -28
- Misses       4540     4567      +27
```

:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.

jamesgillanders commented 2 months ago

@andrewfullard We think it makes sense that this benchmark test fails, but we would like a second opinion. If it's a problem, let us know; if it isn't, we will merge once Jaladh has reviewed it.

andrewfullard commented 2 months ago

It seems that qgridnext doesn't exist in conda; it should be a pip install instead.

sarthak-dv commented 2 months ago

> It seems that qgridnext doesn't exist in conda; it should be a pip install instead.

Yes, that seems to be the issue here. I've added `pip install qgridnext` in tests.yml; do I need to add it somewhere else for the benchmarks to pass?
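As a hypothetical follow-up check (not part of this PR), one way to confirm early that a CI environment actually provides the package after the pip install step is a tiny import probe:

```python
# Hypothetical sanity check, not part of this PR: fail fast in CI if the
# pip-installed qgridnext package is missing from the environment.
import importlib.util
import sys

if importlib.util.find_spec("qgridnext") is None:
    sys.exit("qgridnext not found; add `pip install qgridnext` to the workflow/environment")
print("qgridnext is importable")
```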

jaladh-singhal commented 2 months ago

@sarthak-dv benchmarks "should" pass after the suggested change.

Unrelated, @andrewfullard do we need to update conda lock files as well?

andrewfullard commented 2 months ago

> @sarthak-dv benchmarks "should" pass after the suggested change.
>
> Unrelated, @andrewfullard do we need to update conda lock files as well?

Yes, we will need to regenerate the environment.

tardis-bot commented 2 months ago

*beep* *bop* Hi human, I ran benchmarks as you asked comparing master (be4ec9a4f9423392bc1aa4a6f3316267faa70093) and the latest commit (31569583732e2845c5fccecc4778a3e7c723a6cd). Here are the logs produced by ASV. Results can also be downloaded as artifacts here.

Significantly changed benchmarks:

```diff

```

All benchmarks:

```diff
Benchmarks that have stayed the same:

| Change | Before [be4ec9a4] | After [31569583] | Ratio | Benchmark (Parameter) |
|--------|-------------------|------------------|-------|-----------------------|
| | 22.7±6μs | 28.6±8μs | ~1.26 | transport_montecarlo_packet_trackers.BenchmarkTransportMontecarloPacketTrackers.time_generate_rpacket_last_interaction_tracker_list |
| | 2.59±0.5ms | 2.89±0.4ms | ~1.12 | transport_montecarlo_single_packet_loop.BenchmarkTransportMontecarloSinglePacketLoop.time_single_packet_loop |
| | 252±7ns | 204±0.01ns | ~0.81 | spectrum_formal_integral.BenchmarkTransportMontecarloFormalIntegral.time_intensity_black_body |
| | 3.80±1μs | 2.99±2μs | ~0.79 | transport_montecarlo_vpacket.BenchmarkMontecarloMontecarloNumbaVpacket.time_trace_bad_vpacket |
| | 1.67±0.01ms | 1.82±0.01ms | 1.09 | transport_montecarlo_main_loop.BenchmarkTransportMontecarloMontecarloMainLoop.time_montecarlo_main_loop |
| | 3.19±0.02ms | 3.44±0.01ms | 1.08 | opacities_opacity_state.BenchmarkOpacitiesOpacityState.time_opacity_state_initialize('scatter') |
| | 44.0±30μs | 47.0±20μs | 1.07 | transport_montecarlo_interaction.BenchmarkTransportMontecarloInteraction.time_line_emission |
| | 65.7±0.3ms | 69.5±2ms | 1.06 | transport_montecarlo_packet_trackers.BenchmarkTransportMontecarloPacketTrackers.time_rpacket_trackers_to_dataframe |
| | 7.10±2μs | 7.36±2μs | 1.04 | transport_montecarlo_vpacket.BenchmarkMontecarloMontecarloNumbaVpacket.time_trace_vpacket_volley |
| | 39.4±0.1s | 40.5±0.3s | 1.03 | run_tardis.BenchmarkRunTardis.time_run_tardis |
| | 1.04±0m | 1.07±0m | 1.03 | run_tardis.BenchmarkRunTardis.time_run_tardis_rpacket_tracking |
| | 714±0.3ns | 737±1ns | 1.03 | transport_montecarlo_interaction.BenchmarkTransportMontecarloInteraction.time_thomson_scatter |
| | 3.72±0.01ms | 3.77±0.01ms | 1.01 | opacities_opacity_state.BenchmarkOpacitiesOpacityState.time_opacity_state_initialize('macroatom') |
| | 2.07±0m | 2.07±0m | 1.00 | spectrum_formal_integral.BenchmarkTransportMontecarloFormalIntegral.time_FormalIntegrator_functions |
| | 31.4±0.09μs | 31.5±0.02μs | 1.00 | transport_montecarlo_packet_trackers.BenchmarkTransportMontecarloPacketTrackers.time_generate_rpacket_tracker_list |
| | 1.20±0μs | 1.16±0μs | 0.97 | transport_geometry_calculate_distances.BenchmarkTransportGeometryCalculateDistances.time_calculate_distance_boundary |
| | 2.21±2μs | 2.14±1μs | 0.97 | transport_montecarlo_estimators_radfield_estimator_calcs.BenchmarkMontecarloMontecarloNumbaPacket.time_update_line_estimators |
| | 521±100ns | 501±100ns | 0.96 | opacities_opacity.BenchmarkMontecarloMontecarloNumbaOpacities.time_compton_opacity_calculation |
| | 561±200ns | 531±100ns | 0.95 | opacities_opacity.BenchmarkMontecarloMontecarloNumbaOpacities.time_pair_creation_opacity_calculation |
| | 6.95±0.5μs | 6.63±0.8μs | 0.95 | transport_montecarlo_vpacket.BenchmarkMontecarloMontecarloNumbaVpacket.time_trace_vpacket |
| | 631±100ns | 591±100ns | 0.94 | opacities_opacity.BenchmarkMontecarloMontecarloNumbaOpacities.time_photoabsorption_opacity_calculation |
| | 1.60±0.4μs | 1.50±0.4μs | 0.94 | transport_geometry_calculate_distances.BenchmarkTransportGeometryCalculateDistances.time_calculate_distance_line |
| | 3.37±0.6μs | 3.16±0.3μs | 0.94 | transport_montecarlo_vpacket.BenchmarkMontecarloMontecarloNumbaVpacket.time_trace_vpacket_within_shell |
| | 46.4±30μs | 43.3±30μs | 0.93 | transport_montecarlo_interaction.BenchmarkTransportMontecarloInteraction.time_line_scatter |
```
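As a reading aid (not part of the bot output), the Ratio column above is simply the after/before timing; for example, the first row:

```python
# Illustrative arithmetic for the Ratio column above (after / before).
before_us = 22.7  # 22.7 ± 6 μs on master (be4ec9a4)
after_us = 28.6   # 28.6 ± 8 μs on this PR's head (31569583)
print(round(after_us / before_us, 2))  # 1.26, matching the reported ~1.26
```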

If you want to see the graph of the results, you can check it here.

sarthak-dv commented 2 months ago

The tests in test_spectrum_solver.py are showing errors, but I don't think they are related to the changes made in this PR. Other than that, the benchmarks build is passing after making the changes @jaladh-singhal suggested.

andrewfullard commented 2 months ago

> The tests in test_spectrum_solver.py are showing errors, but I don't think they are related to the changes made in this PR. Other than that, the benchmarks build is passing after making the changes @jaladh-singhal suggested.

Yeah, that's fine; the test failures are because of changes to regression data in support of another PR.

tardis-bot commented 2 months ago

*beep* *bop*

Hi, human.

The docs workflow has failed :x:

Click here to see the build log.

andrewfullard commented 2 months ago

It looks like the docs are built from the lock file, so they won't build correctly until that is updated too.