
[REVIEW]: GraphNeT: Graph neural networks for neutrino telescope event reconstruction #4971

Closed · editorialbot closed this issue 1 year ago

editorialbot commented 1 year ago

Submitting author: @asogaard (Andreas Søgaard)
Repository: https://github.com/graphnet-team/graphnet
Branch with paper.md (empty if default branch):
Version: v1.0.0
Editor: @dfm
Reviewers: @JostMigenda, @GageDeZoort
Archive: 10.5281/zenodo.7928487

Status

[![status](https://joss.theoj.org/papers/eecab02fb1ecd174a5273750c1ea0baf/status.svg)](https://joss.theoj.org/papers/eecab02fb1ecd174a5273750c1ea0baf)

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/eecab02fb1ecd174a5273750c1ea0baf"><img src="https://joss.theoj.org/papers/eecab02fb1ecd174a5273750c1ea0baf/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/eecab02fb1ecd174a5273750c1ea0baf/status.svg)](https://joss.theoj.org/papers/eecab02fb1ecd174a5273750c1ea0baf)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@JostMigenda & @GageDeZoort, your review will be checklist-based. Each of you will have a separate checklist that you should update when carrying out your review. First of all, you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions or concerns, please let @dfm know.

Please start on your review when you are able, and be sure to complete it within the next six weeks at the very latest.

Checklists

📝 Checklist for @JostMigenda

📝 Checklist for @GageDeZoort

editorialbot commented 1 year ago

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf
editorialbot commented 1 year ago
Software report:

github.com/AlDanial/cloc v 1.88  T=2.19 s (62.6 files/s, 7816.7 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                         114           2570           3532           9745
YAML                            11             21             24            341
TeX                              1             32             12            327
Markdown                         6            114              0            281
DOS Batch                        1              8              1             26
JSON                             1              0              0             24
TOML                             1              0              0             10
make                             1              4              7              9
reStructuredText                 1              6              5              7
-------------------------------------------------------------------------------
SUM:                           137           2755           3581          10770
-------------------------------------------------------------------------------

gitinspector failed to run statistical information for the repository
editorialbot commented 1 year ago

Wordcount for paper.md is 1193

editorialbot commented 1 year ago

👉 📄 Download article proof · 📄 View article proof on GitHub 👈

dfm commented 1 year ago

@JostMigenda, @GageDeZoort — This is the review thread for the paper. All of our correspondence will happen here from now on. Thanks again for agreeing to participate!

👉 Please read the "Reviewer instructions & questions" in the first comment above, and generate your checklists by commenting @editorialbot generate my checklist on this issue ASAP. As you go over the submission, please check any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.

The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention openjournals/joss-reviews#4971 so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.

We aim for the review process to be completed within about 4–6 weeks, but please try to make a start ahead of this: JOSS reviews are by their nature iterative, and any early feedback you may be able to provide to the author will be very helpful in meeting this schedule.

editorialbot commented 1 year ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.5281/zenodo.6720188 is OK
- 10.5281/zenodo.3828935 is OK
- 10.5281/zenodo.3952674 is OK
- 10.1088/1748-0221/12/03/P03012 is OK
- 10.1016/j.nima.2011.06.103 is OK
- 10.1088/0954-3899/43/8/084001 is OK
- 10.1088/1361-6471/abbd48 is OK
- 10.1088/1361-6471/44/5/054006 is OK
- 10.1051/epjconf/201819101006 is OK
- 10.1038/s41550-020-1182-4 is OK
- 10.1016/j.astropartphys.2011.01.003 is OK
- 10.3847/1538-3881/aa9709 is OK
- 10.1016/j.nima.2004.01.065 is OK
- 10.1088/1748-0221/16/07/P07041 is OK
- 10.1088/1748-0221/15/10/P10005 is OK
- 10.1088/1748-0221/9/03/P03009 is OK
- 10.1088/1748-0221/16/08/P08034 is OK
- 10.1016/j.nima.2013.10.074 is OK

MISSING DOIs

- None

INVALID DOIs

- None
JostMigenda commented 1 year ago

Review checklist for @JostMigenda

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

GageDeZoort commented 1 year ago

Review checklist for @GageDeZoort

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

dfm commented 1 year ago

@JostMigenda, @GageDeZoort, @asogaard — Happy new year! I'm writing to check in on the progress of this review, and to keep it on your radars. Please let me know if there are any major stoppers or if there's anything I can do to help move things along. Thanks!

JostMigenda commented 1 year ago

Happy new year, everyone! 👋 Here’s a bit of a fundamental question that I’ve been mulling over the holidays; I would love your thoughts on this and on how to proceed. Basically, GraphNeT is largely reliant on IceTray, which is proprietary software by the IceCube collaboration, though GraphNeT itself can also be installed stand-alone. Both modes have advantages and disadvantages, which impact this JOSS review. I’ll discuss my understanding of them here; @asogaard, please correct me if I misunderstood something!

Stand-Alone Installation

I’ve installed the software as stand-alone thus far. This has worked in principle (kudos to @asogaard for the very quick response to graphnet-team/graphnet#373!), but I have been unable to run any example scripts (see graphnet-team/graphnet#374). Looking at the code of GraphNeT itself, it appears that this is not a simple bug in the script, but that major parts of functionality are not available in this mode. For example, graphnet.deployment currently has a single sub-module i3modules, which requires imports from icecube.icetray to function; so it’s apparently not possible to use any trained GraphNeT models in stand-alone mode.

Stand-alone installation is valuable to ensure that GraphNeT is usable without IceTray and can be adopted by other experiments. This is a good example of how to decouple open-source software from a proprietary dependency, so I commend you for the work on this! At the same time, adopting GraphNeT for use with another experiment probably requires months of work to integrate with that experiment’s data formats and tools; so it is not feasible to test this as part of a JOSS review.
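To make the decoupling pattern concrete, here is a minimal sketch of the kind of optional-dependency guard that lets a package degrade gracefully when IceTray is absent. The `icecube.icetray` import is the real dependency discussed above; the flag and decorator names are hypothetical, not GraphNeT's actual API:

```python
# Hypothetical optional-dependency guard; names are illustrative only.
try:
    from icecube import icetray  # proprietary IceCube dependency
    HAS_ICETRAY = True
except ImportError:
    HAS_ICETRAY = False


def require_icetray(func):
    """Raise a clear error at call time if IceTray is unavailable."""
    def wrapper(*args, **kwargs):
        if not HAS_ICETRAY:
            raise ImportError(
                f"`{func.__name__}` requires IceTray; use an installation "
                "or Docker image that includes it."
            )
        return func(*args, **kwargs)
    return wrapper
```

With a pattern like this, IceTray-dependent functionality fails with an actionable message rather than an opaque import error deep inside a sub-module.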

Tangent Regarding Adoption by Other Experiments

Let’s say I work on JANE (Jost’s Amazing Neutrino Experiment) with its TARZAN toolchain (Totally Awesome Reconstruction Zoftware for Astrophysical Neutrinos)—what do I need to do to adopt GraphNeT?

To return to the example above—if I write a `graphnet.deployment` submodule to integrate into TARZAN, does GraphNeT require any specific code structure, class or method names, etc.? Right now, I think I would need to look at IceCube-specific submodules as an example; but if I’m not already familiar with IceTray, it may not be clear which parts of the submodule design are required by GraphNeT and which are IceTray-specific and may not make sense for me.

Installation with IceTray

IceTray is IceCube-proprietary software. Non-members of IceCube can apparently access it in principle, but I’ve run into issues while trying. (@asogaard, I think that should be opened as an issue in the IceTray repo (which I cannot do, since it appears to be private); but please let me know if you’d like me to file an issue in the GraphNeT repo about that, even if it’s just so you can refer to it in internal discussions.)

Most parts of the functionality should be available in this mode, so this is suitable for testing as part of this JOSS review. The proprietary dependencies are not ideal, of course, but looking at JOSS’ policy on proprietary languages and development environments, that’s not a dealbreaker.

Next Steps

Since the stand-alone mode is not reasonably testable as part of a JOSS review, I think the best way forward is for me to test the IceTray mode only and have a high-level look at the code itself to ensure there aren’t any major issues in stand-alone mode and that documentation is reasonably complete and accurate. @dfm, does that sound alright?

Independent of how we proceed with this review, I think the overall issue requires a bit more documentation: What GraphNeT functionality (example scripts, submodules, …) is available in which mode? When should I use which mode? (Is it as simple as using IceTray mode to work on IceCube and stand-alone mode to work on another experiment? Or are there cases where I should use IceTray even if I don’t work on IceCube?) Right now, the README file says that the main difference between the two installation modes is support for I3 files, but that example scripts should work either way; implying that functionality is otherwise largely identical. I was quite surprised initially to find that’s not the case.

asogaard commented 1 year ago

Hi @dfm, @JostMigenda,

Happy New Year! 😊 I will be on my other gig for the rest of the week, but I will try to provide input here in a timely fashion. For now, just a quick note on @JostMigenda's points above:

Dependence on IceTray

It is true that parts of GraphNeT depend on IceTray: graphnet.data and graphnet.deployment, specifically; whereas graphnet.models and graphnet.training don't. This is a deliberate choice to facilitate the end-to-end physics pipeline, from experiment-specific data to deployment in experiment-specific code. We're working on onboarding other experiments (P-ONE, KM3NeT), which would mean extending these two modules with the requisite experiment-specific code. But the "main" parts of the package — the parts that actually build and train GNNs — are decoupled from IceTray. Based on @JostMigenda's request, I am working on restructuring our example scripts to make it clearer which depend on other software and which are entirely stand-alone, as well as providing additional experiment-independent test data for use in these examples.
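A quick way to see this split in practice, using only the module names mentioned above (a sketch, assuming a stand-alone installation; whether the deployment sub-module fails at import time or only at use time depends on how its imports are guarded):

```python
# In a stand-alone installation (no IceTray), the model-building and
# training modules should import cleanly:
import graphnet.models
import graphnet.training

# The IceTray-dependent deployment sub-module is expected to fail
# without the `icecube` package:
try:
    import graphnet.deployment.i3modules
except ImportError as err:
    print(f"Expected in stand-alone mode: {err}")
```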

Installation

The stand-alone installation is simply a subset of the installation with IceTray: installing without IceTray means that the IceCube-specific data conversion and deployment are not available (see above). We also have a Docker image that comes with IceTray and GraphNeT installed together, so if you're keen to try the IceTray build, @JostMigenda, I think that would be the easiest way to do it. (We don't use it that much ourselves currently, so it might be a wee bit rusty, but I'd be happy to assist in getting it working for you.)

dfm commented 1 year ago

Thanks @JostMigenda for this thorough summary and interesting questions!! And thanks to @asogaard for the response!

It sounds like after https://github.com/graphnet-team/graphnet/pull/378 has been merged, it would be possible to properly review the standalone version as well. My (weak!) preference would be to try to provide feedback on both the standalone and the version including the public binary of IceTray if possible, but if that's not feasible or timely, I'd be perfectly happy with just a review for the full version, as long as the docs are clear about any limitations of the standalone version. This is exactly the kind of issue that we're looking to diagnose with these JOSS reviews so thanks both for all your work on this!!

GageDeZoort commented 1 year ago

Apologies for my delayed response here - my feedback largely echoes @JostMigenda's comments above, specifically that the examples need more thorough documentation about what's required to run them. So far, I've had no trouble installing the standalone GraphNeT, but had similar trouble running the examples due to their dependencies on IceTray.

The changes outlined so far (e.g. graphnet-team/graphnet#378) make a lot of sense. I'd also suggest adding (at the top of each example .py file, or in a bulleted list in the README) info about how to run each script, what the expected output is, which hard-coded lines need to be changed (e.g. W&B config info), and what dependencies are required to run the script (stand-alone vs. IceTray).
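For illustration, a minimal sketch of the kind of header I have in mind; the script name, flag, paths, and variable below are hypothetical, purely to show the format:

```python
"""Train a GNN on the included experiment-independent example data.

Usage:
    python examples/02_train_model.py --max-epochs 5

Dependencies:
    A stand-alone installation is sufficient; IceTray is NOT required.

Before running:
    Set WANDB_ENTITY below to your own Weights & Biases entity,
    or disable W&B logging.

Expected output:
    Training progress logs, then a model checkpoint under ./checkpoints/.
"""

# Hypothetical hard-coded value a user would need to edit:
WANDB_ENTITY = "your-wandb-entity"
```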

I have not yet tried the IceTray-dependent installation, but would be happy to try using the IceTray+GraphNeT Docker image as @dfm and @asogaard suggested. I'll look more into this in the afternoon.

asogaard commented 1 year ago

Hi @dfm, @JostMigenda, and @GageDeZoort,

A brief status update: We just merged https://github.com/graphnet-team/graphnet/pull/378. It was a considerable effort, but I think the package is all the better for it. Thanks for the suggestions, @JostMigenda and @GageDeZoort!

Please let me know if this does the job, once you get a chance to have a second look. I will move on to have a look at the Docker image to make sure it still works as intended. I will report back here once ready.

JostMigenda commented 1 year ago

Thanks @asogaard for this major reworking of the examples! It indeed looks a lot better now; I’ve gone through most of the example scripts this afternoon and have opened issues for the few cases where I ran into problems. (And it looks like you already fixed one of them before I was done—very speedy! 🚀)

I see the Docker image is ready now; I’ll try to run that in a bit and try out the scripts in examples/01_icetray/. Just one more thing for now: The README file says that PISA isn’t available in the Docker image—is there any particular reason not to include that?

JostMigenda commented 1 year ago

I tested the Docker image in the last few days and ran the IceTray examples; apart from one remaining bug, this looks quite good now. Once the fixes in graphnet-team/graphnet#396 are merged, I think the paper is also ready. I think we’re getting fairly close to the end of my review; here’s a list of all open questions that I see remaining right now:

A few examples of what I would do to improve the autogenerated graphnet.training page:

In my experience, sphinx.ext.autodoc’s automodule and autoclass directives are flexible enough that it should be possible to make all those changes while still keeping most of the benefits of completely auto-generated docs.

GageDeZoort commented 1 year ago

I've been going through the code and compiling some feedback. All in all, things appear to be running well. I'll post some more detailed items soon, but in the meantime I've got a few high-level questions/comments for the authors:

asogaard commented 1 year ago

Hi @dfm, @GageDeZoort, @JostMigenda,

I am just back from parental leave and wanted to circle back to this. It seems like the review checklists are very close to complete. Therefore, and considering the review has been ongoing since November 2022, I'd be keen to push towards acceptance in JOSS while this effort is still a priority to the GraphNeT community.

There have been a number of questions, requests, etc. surfaced by the reviewers over the past few months. To allow us to assess the scope of the remaining work and to work effectively towards acceptance, could I ask @GageDeZoort and @JostMigenda to each summarise what you see as the outstanding issues that must be resolved before you can consider your reviews complete?

GageDeZoort commented 1 year ago

Hi, hope everything went well on your parental leave! My remaining checklist items really depend on your answers to my questions from Feb 14th (see above).

asogaard commented 1 year ago

Thanks, @GageDeZoort, I'm providing some answers below. I hope this provides enough context for you to itemise the remaining acceptance-blockers as you see them.


  • The EdgeConv-based architecture is clearly quite performant, but if this package is billed as a more all-encompassing package, I'd expect to see some additional commonly used GNN layers available. For example, Interaction Networks pop up frequently at the LHC. There are also FPGA-accelerated variants of EdgeConv (called GravNet/GarNet) available, so it could be nice to provide versions of these algorithms here. I think the PyG strategy is best: provide the layers themselves, but let the users explore how to integrate them into architectures.

The goal is not to provide an all-encompassing package, but rather a framework that supports the end-to-end use of GNNs and is easy to extend with new layers and architectures. We provide two example GNN layers/architectures, and users are more than welcome to contribute new ones, like the two you suggest.
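For reference, the PyG-style approach mentioned above looks roughly like this: a minimal sketch of defining and applying an EdgeConv layer with PyTorch Geometric (standard PyG API; the feature sizes and random graph are made up for illustration):

```python
import torch
from torch.nn import Linear, ReLU, Sequential
from torch_geometric.nn import EdgeConv

# EdgeConv applies an MLP to the concatenated (x_i, x_j - x_i) features
# of each edge, so the MLP input size is twice the node feature size.
mlp = Sequential(Linear(2 * 4, 16), ReLU(), Linear(16, 16))
conv = EdgeConv(nn=mlp, aggr="max")

x = torch.randn(10, 4)                      # 10 nodes, 4 features each
edge_index = torch.randint(0, 10, (2, 40))  # 40 random directed edges
out = conv(x, edge_index)                   # -> shape [10, 16]
```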

  • The documentation/paper could benefit from additional clarity about when to interface with external models, clustering algorithms, etc., vs. using the versions provided in GraphNeT.

Which external models, clustering algorithms, etc. are you thinking of?

  • Are there plans to include hyperparameter scanner utilities? I'm not convinced they need to be provided here, I'm mainly curious how you thought about this. Perhaps this could also go into the workflow diagram in the paper?

We have users who have used W&B to do hyperparameter sweeps, but while we plan to provide examples of how to do this at some point, it is not considered core functionality.
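For the curious, a minimal sketch of what such a sweep looks like with the standard wandb API (the project name, parameters, and training stub here are hypothetical):

```python
import wandb

# Hypothetical sweep over two training hyperparameters.
sweep_config = {
    "method": "random",
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 1e-4, "max": 1e-2},
        "batch_size": {"values": [128, 256, 512]},
    },
}

def train():
    with wandb.init() as run:
        # ... build and train a model using run.config ...
        run.log({"val_loss": 0.0})  # placeholder metric

sweep_id = wandb.sweep(sweep_config, project="graphnet-sweeps")
wandb.agent(sweep_id, function=train, count=10)
```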

  • Can you please explain the choice to provide custom implementations of GNNs instead of leveraging the PyG MessagePassing class?

I think there is a good argument to be made that our GNN-type modules could benefit from leveraging MessagePassing. This choice was mainly made because we wanted consistency across our different module types (Detector, GNN, Coarsening, Task, and so on), and not all of these modules are message-passing-type, so they wouldn't easily fit the MessagePassing.forward syntax. For a later release we are thinking about how to make our models jittable to support simpler deployment, but that is a ways further down the road.
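For context, the standard PyG pattern under discussion is roughly the following; a minimal sketch of a MessagePassing subclass (standard torch_geometric API; this layer is purely illustrative, not one of GraphNeT's modules):

```python
import torch
from torch_geometric.nn import MessagePassing

class SimpleEdgeLayer(MessagePassing):
    """Illustrative message-passing layer, not a GraphNeT module."""

    def __init__(self, in_channels, out_channels):
        super().__init__(aggr="max")  # max-aggregation over neighbours
        self.mlp = torch.nn.Linear(2 * in_channels, out_channels)

    def forward(self, x, edge_index):
        # Triggers message(), aggregation, and update() internally.
        return self.propagate(edge_index, x=x)

    def message(self, x_i, x_j):
        # x_i: receiving-node features; x_j: neighbouring-node features.
        return self.mlp(torch.cat([x_i, x_j - x_i], dim=-1))
```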

  • The documentation is still relatively sparse - it might be nice to populate it with write-ups of some of the example scripts. Otherwise, a write-up (e.g. text surrounding Jupyter notebook cells) showing an example workflow you might hand to a student looking to develop their own GNN might be useful.

We are working on a tutorial notebook of sorts that works through the main training example scripts and tries to do a more detailed job of explaining the different bits, how they fit together, and how they can be changed.

asogaard commented 1 year ago

Hi @dfm, @GageDeZoort, @JostMigenda,

I just wanted to follow up on this to check whether, in addition to the answers to @GageDeZoort's questions above, there is anything you need from me in order to itemise the remaining acceptance-blockers as you see them?

JostMigenda commented 1 year ago

Sorry for the slow response; between strikes at UK universities and an upcoming conference I’m co-organising, I’ve got a bit of a backlog right now. Congratulations on the new kid, @asogaard! Hope you enjoyed the parental leave and are occasionally getting some sleep … 😉

The only significant issue I still see is related to the dependence on IceTray. For graphnet.data, I think the experiment-independent examples are very helpful, so that’s resolved. However, for graphnet.deployment I’m not sure the current status is clear enough. To quote my tangent from an earlier post:

Let’s say I work on JANE (Jost’s Amazing Neutrino Experiment) with its TARZAN toolchain (Totally Awesome Reconstruction Zoftware for Astrophysical Neutrinos)—what do I need to do to adopt GraphNeT?

To return to the example above—if I write a graphnet.deployment submodule to integrate into TARZAN, does GraphNeT require any specific code structure, class or method names, etc.? Right now, I think I would need to look at IceCube-specific submodules as an example; but if I’m not already familiar with IceTray, it may not be clear which parts of the submodule design are required by GraphNeT and which are IceTray-specific and may not make sense for me.

Ideally, there would be full documentation on this that allows a researcher on a new experiment to deploy GraphNeT without requiring help from GraphNeT developers. In practice, due to the complexity of operating such a neutrino telescope, maybe it’s reasonable to say that integration details can’t be documented well because they differ too much between experiments? (You said that you’re already working with P-ONE and KM3NeT—maybe you have some experience of how much this differs already?)

If you say that, in practice, it will almost certainly be necessary for a new experiment to collaborate directly with GraphNeT developers anyway, then at the very least I think it would be good to have a brief explanation in the graphnet.deployment docstring. (Very roughly: “Deployment submodules are closely linked to each experiment’s proprietary software/DAQ/…, so they will differ significantly between experiments and there is no generic documentation. GraphNeT developers are happy to collaborate with new experiments on this integration task; please contact us by opening a GitHub issue, if you are interested.”)

asogaard commented 1 year ago

Thanks for the update, @JostMigenda — it's good to have you back. 😊 I think the above is a very reasonable point, and a pretty manageable one. I have opened an issue in graphnet to let us discuss potential solutions and work on them: https://github.com/graphnet-team/graphnet/issues/472. Once we reach an acceptable solution to this, can I take it that you would find the graphnet package and companion paper acceptable for JOSS publication?

asogaard commented 1 year ago

@GageDeZoort, I just wanted to highlight that we have added a GETTING_STARTED.md file at the root of the repository. This is exactly aimed at being "a write-up (e.g. text surrounding Jupyter notebook cells) showing an example workflow you might hand to a student looking to develop their own GNN" which you requested early on. Please let me know how it looks to you, and if you have any other things that you consider to be acceptance-blockers.

GageDeZoort commented 1 year ago

Thanks for the update - this looks very nice, I'm glad to see the mention of users providing more GNN functionality. With this and the answers you provided above considered, I'm satisfied and have checked all the review boxes!

JostMigenda commented 1 year ago

The GETTING_STARTED.md looks very nice indeed! I see there are still a few @TODO markers left, so I hope you will be able to fill in those sections soon. However, that should not delay this review any longer.

@dfm I am very happy to recommend that JOSS accept this submission! 🎉

dfm commented 1 year ago

@editorialbot generate pdf

dfm commented 1 year ago

@editorialbot check references

editorialbot commented 1 year ago

👉 📄 Download article proof · 📄 View article proof on GitHub 👈

dfm commented 1 year ago

@JostMigenda, @GageDeZoort — Thanks for your thorough and constructive reviews!!

@asogaard — I'm going through the final checks here, and I may have some small edits to the paper which I'll provide as a PR. In the meantime, can you:

  1. Take one last read through the manuscript (linked by the bot above ☝️) to make sure that you're happy with it (it's harder to make changes later!), especially the author names and affiliations.
  2. Increment the version number of the software and report that version number back here.
  3. Create an archived release of that version of the software (using Zenodo or something similar). Please make sure that the metadata (title and author list) exactly match the JOSS paper. Then report the DOI of the release back to this thread.
asogaard commented 1 year ago

Hi @dfm,

Thanks for getting us to the finish line! 🚀 I will read through the manuscript again and report back here with a DOI for the archived release of the software corresponding to the paper. Related to that: Should I just report the incremented version number here in the thread, or do I need to enter it somewhere?

JostMigenda commented 1 year ago

@editorialbot commands

editorialbot commented 1 year ago

Hello @JostMigenda, here are the things you can ask me to do:


# List all available commands
@editorialbot commands

# Get a list of all editors' GitHub handles
@editorialbot list editors

# Check the references of the paper for missing DOIs
@editorialbot check references

# Perform checks on the repository
@editorialbot check repository

# Adds a checklist for the reviewer using this command
@editorialbot generate my checklist

# Set a value for branch
@editorialbot set joss-paper as branch

# Generates the pdf paper
@editorialbot generate pdf

# Generates a LaTeX preprint file
@editorialbot generate preprint

# Get a link to the complete list of reviewers
@editorialbot list reviewers
dfm commented 1 year ago

@asogaard just reporting it here is fine! Thank you!!

JostMigenda commented 1 year ago

Huh. Wasn't there some command for the editorialbot to set the version? Or did it not list this because as a reviewer, I don't have permission to do that? 🤔

dfm commented 1 year ago

@JostMigenda — that's right! The bot can do that, but only when I ask it to 😀

dfm commented 1 year ago

@editorialbot check references

We're not sure why this didn't work last time...

editorialbot commented 1 year ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.5281/zenodo.6720188 is OK
- 10.5281/zenodo.3828935 is OK
- 10.5281/zenodo.3952674 is OK
- 10.1088/1748-0221/12/03/P03012 is OK
- 10.1016/j.nima.2011.06.103 is OK
- 10.1088/0954-3899/43/8/084001 is OK
- 10.1088/1361-6471/abbd48 is OK
- 10.1088/1361-6471/44/5/054006 is OK
- 10.1051/epjconf/201819101006 is OK
- 10.1038/s41550-020-1182-4 is OK
- 10.1016/j.astropartphys.2011.01.003 is OK
- 10.3847/1538-3881/aa9709 is OK
- 10.1016/j.nima.2004.01.065 is OK
- 10.1088/1748-0221/16/07/P07041 is OK
- 10.1088/1748-0221/15/10/P10005 is OK
- 10.1088/1748-0221/9/03/P03009 is OK
- 10.1088/1748-0221/16/08/P08034 is OK
- 10.1016/j.nima.2013.10.074 is OK

MISSING DOIs

- Errored finding suggestions for "PyTorch: An Imperative Style, High-Performance Dee...", please try later

INVALID DOIs

- None
dfm commented 1 year ago

@asogaard — I wanted to check to see if you've had a chance to mint a DOI for the archived software yet. Let me know - thanks!

asogaard commented 1 year ago

Hi @dfm,

Sorry for the wait, we just needed to tie up the last few loose ends.

  1. I have added a missing line to the main figure in the paper and updated the author list to reflect contributors since the initial submission.
  2. I have incremented the version number to v1.0.0 (as of today).
  3. The Zenodo DOI corresponding to this version is 10.5281/zenodo.7928487. I have checked that the metadata match the paper draft exactly.

Please let me know if you need anything else from me. :)

dfm commented 1 year ago

@editorialbot set v1.0.0 as version

editorialbot commented 1 year ago

Done! version is now v1.0.0

dfm commented 1 year ago

@editorialbot set 10.5281/zenodo.7928487 as archive

editorialbot commented 1 year ago

Done! archive is now 10.5281/zenodo.7928487

dfm commented 1 year ago

@editorialbot check references

dfm commented 1 year ago

@editorialbot generate pdf

editorialbot commented 1 year ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.5281/zenodo.6720188 is OK
- 10.5281/zenodo.3828935 is OK
- 10.5281/zenodo.3952674 is OK
- 10.1088/1748-0221/12/03/P03012 is OK
- 10.1016/j.nima.2011.06.103 is OK
- 10.1088/0954-3899/43/8/084001 is OK
- 10.1088/1361-6471/abbd48 is OK
- 10.1088/1361-6471/44/5/054006 is OK
- 10.1051/epjconf/201819101006 is OK
- 10.1038/s41550-020-1182-4 is OK
- 10.1016/j.astropartphys.2011.01.003 is OK
- 10.3847/1538-3881/aa9709 is OK
- 10.1016/j.nima.2004.01.065 is OK
- 10.1088/1748-0221/16/07/P07041 is OK
- 10.1088/1748-0221/15/10/P10005 is OK
- 10.1088/1748-0221/9/03/P03009 is OK
- 10.1088/1748-0221/16/08/P08034 is OK
- 10.1016/j.nima.2013.10.074 is OK

MISSING DOIs

- None

INVALID DOIs

- doi.org/10.48550/arXiv.1912.01703 is INVALID because of 'doi.org/' prefix
editorialbot commented 1 year ago

👉 📄 Download article proof · 📄 View article proof on GitHub 👈

dfm commented 1 year ago

Thanks @asogaard! We're just about there! I've opened a tiny PR, and once that's merged, I'm happy to continue with acceptance.

asogaard commented 1 year ago

Thanks @dfm, that should be sorted now!