JuliaCon / proceedings-review


[REVIEW]: RobustNeuralNetworks.jl: a Package for Machine Learning and Data-Driven Control with Certified Robustness #163

Open editorialbot opened 2 months ago

editorialbot commented 2 months ago

Submitting author: @nic-barbara (Nicholas Barbara)
Repository: https://github.com/acfr/RobustNeuralNetworks.jl
Branch with paper.md (empty if default branch): paper
Version: v0.3.2
Editor: @lucaferranti
Reviewers: @asinghvi17, @pevnak
Archive: Pending

Status


Status badge code:

HTML: <a href="https://proceedings.juliacon.org/papers/3dcae3a583464b727f2d025602a01762"><img src="https://proceedings.juliacon.org/papers/3dcae3a583464b727f2d025602a01762/status.svg"></a>
Markdown: [![status](https://proceedings.juliacon.org/papers/3dcae3a583464b727f2d025602a01762/status.svg)](https://proceedings.juliacon.org/papers/3dcae3a583464b727f2d025602a01762)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@asinghvi17 & @pevnak, your review will be checklist based. Each of you will have a separate checklist that you should update as you carry out your review. First of all, you need to run this command in a separate comment to create your checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @lucaferranti know.

Please start your review when you are able, and be sure to complete it within the next six weeks at the very latest.

Checklists

📝 Checklist for @asinghvi17

📝 Checklist for @pevnak

editorialbot commented 2 months ago

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper source files, type:

@editorialbot generate pdf
editorialbot commented 2 months ago

Software report:

github.com/AlDanial/cloc v 1.90  T=0.15 s (610.0 files/s, 133598.5 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
SVG                             11              0              1          10174
TeX                             16            416            210           3159
Julia                           40            856            841           2113
Markdown                        16            551              0           1591
YAML                             5              2              2            122
TOML                             3              5              0             50
Ruby                             1              8              4             45
-------------------------------------------------------------------------------
SUM:                            92           1838           1058          17254
-------------------------------------------------------------------------------

Commit count by author:

   265  nic-barbara
    72  nicBarbara
    12  CompatHelper Julia
    12  MrstupidJ
    10  Johnny Cheng
     5  Jerome Justin
     4  Eccidio Eliott
     4  Nic Barbara
     2  johnnyCheng09
     2  yuruizhang06
editorialbot commented 2 months ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1109/TAC.2022.3183966 is OK
- 10.1109/CDC51059.2022.9992684 is OK
- 10.1109/ICRA48506.2021.9561814 is OK
- 10.1109/TAC.1971.1099826 is OK
- 10.1109/CVPR.2016.90 is OK
- 10.1137/141000671 is OK
- 10.21105/joss.00602 is OK
- 10.1109/CDC49753.2023.10383269 is OK
- 10.1109/CDC49753.2023.10383704 is OK
- 10.1109/TAC.1976.1101223 is OK
- 10.1109/LCSYS.2021.3050444 is OK
- 10.1109/LCSYS.2022.3184847 is OK
- 10.1109/TAC.2023.3294101 is OK
- 10.23919/ACC50511.2021.9483025 is OK
- 10.23919/ACC53348.2022.9867842 is OK

MISSING DOIs

- No DOI given, and none found for title: Lipschitz constant estimation for 1D convolutional...
- No DOI given, and none found for title: ReinforcementLearning.jl: A Reinforcement Learning...
- No DOI given, and none found for title: MNIST handwritten digit database
- No DOI given, and none found for title: Don’t Unroll Adjoint: Differentiating SSA-Form Pro...
- No DOI given, and none found for title: Direct Parameterization of Lipschitz-Bounded Deep ...
- No DOI given, and none found for title: Contraction Theory for Dynamical Systems
- No DOI given, and none found for title: Adversarial attacks on neural network policies
- No DOI given, and none found for title: Adam: A method for stochastic optimization
- No DOI given, and none found for title: Reinforcement Learning: An Introduction
- No DOI given, and none found for title: LipsNet: A Smooth and Robust Neural Network with A...

INVALID DOIs

- None
editorialbot commented 2 months ago

Paper file info:

📄 Wordcount for paper.tex is 162

🔴 Failed to discover a Statement of need section in paper

editorialbot commented 2 months ago

License info:

✅ License found: MIT License (Valid open source OSI approved license)

editorialbot commented 2 months ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

lucaferranti commented 2 months ago

Hi @asinghvi17 and @pevnak :wave: ,

thank you very much for volunteering as reviewers. I will be the editor for this submission; feel free to ping me if you have any questions. Also make sure to check the reviewer guide.

You can generate your reviewer checklist with

@editorialbot generate my checklist

As you go through the checklist and review the paper, you can either leave comments here or open issues in the linked repository.

asinghvi17 commented 2 months ago

Review checklist for @asinghvi17

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Paper format

Content

pevnak commented 4 weeks ago

Review checklist for @pevnak

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Paper format

Content

pevnak commented 4 weeks ago

I do not know where I can find the paper.tex file. I have looked at https://arxiv.org/pdf/2306.12612, which has the affiliations and references. Most of them have DOIs, but those referencing arXiv naturally don't.

For me, this is a very good contribution. I have tried the package and it works as advertised. I have skim-read the papers describing the implemented methods and love them.

I have not found a statement of need, but the motivation in the papers, documentation, and implemented work is more than enough. Moreover, anyone interested enough in the fundamental principles of NNs (and machine learning) would value this work. I am excited about this.
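To illustrate "tried the package and it works as advertised": a minimal usage sketch, with type and function names taken from the RobustNeuralNetworks.jl documentation around v0.3. Exact signatures may differ between versions, so treat this as a sketch rather than the definitive interface.

```julia
using Random
using RobustNeuralNetworks

rng = Xoshiro(42)
batches = 10
nu, nx, nv, ny = 4, 10, 20, 1   # inputs, states, neurons, outputs
γ = 1                           # target Lipschitz bound

# Parameterise a REN with a certified Lipschitz bound of γ,
# then build a callable model from the parameterisation
ren_ps = LipschitzRENParams{Float64}(nu, nx, nv, ny, γ)
ren = REN(ren_ps)

# One forward step: next state and output for a batch of inputs
x0 = init_states(ren, batches)
u0 = randn(rng, nu, batches)
x1, y1 = ren(x0, u0)
```

The key design point, as advertised by the package, is that the Lipschitz bound holds by construction for any parameter values, so no constraint needs to be enforced during training.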

lucaferranti commented 4 weeks ago

I do not know where I can find the paper.tex file.

@pevnak thank you for highlighting this! In JuliaCon Proceedings, the paper lives in a paper folder, and we allow two options: 1. keep it on the main branch, or 2. keep it on a separate branch. The second option (which is the case here) is particularly popular when the submission is a package and the authors want to keep the main branch minimal. Long story short, if you go to the repository, there is a paper branch which contains paper/paper.tex.

However, the authors' names and affiliations should actually not be in paper.tex. They are in a separate header.tex, which is automatically generated from the paper.yml (the bot needs a paper.yml). That checkbox is hence inaccurate. Thank you very much for bringing this to my attention; I'll fix it in the checklist template! For that checkbox, it is enough to check that the final paper PDF has a correct list of authors and affiliations.

lucaferranti commented 4 weeks ago

I have not found a statement of need, but the motivation in the papers, documentation, and implemented work is more than enough.

Yet another thing I need to fix on the checklist template :sweat_smile: . The important thing is that the need for the presented work is clearly conveyed in the paper. A section with the verbatim title "Statement of Need" is not required.

nic-barbara commented 4 weeks ago

Thanks for the review @pevnak, much appreciated! I've taken care of the small bug in the documentation you pointed out too.

@lucaferranti regarding DOIs, as far as I'm aware the citations with missing DOIs don't actually have a DOI that I can add in. I've included one wherever possible.

lucaferranti commented 4 weeks ago

@editorialbot check references

editorialbot commented 4 weeks ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1109/TAC.2022.3183966 is OK
- 10.1109/CDC51059.2022.9992684 is OK
- 10.1109/ICRA48506.2021.9561814 is OK
- 10.1109/TAC.1971.1099826 is OK
- 10.1109/CVPR.2016.90 is OK
- 10.1137/141000671 is OK
- 10.21105/joss.00602 is OK
- 10.1109/CDC49753.2023.10383269 is OK
- 10.1109/CDC49753.2023.10383704 is OK
- 10.1109/TAC.1976.1101223 is OK
- 10.1109/LCSYS.2021.3050444 is OK
- 10.1109/LCSYS.2022.3184847 is OK
- 10.1109/TAC.2023.3294101 is OK
- 10.23919/ACC50511.2021.9483025 is OK
- 10.23919/ACC53348.2022.9867842 is OK

MISSING DOIs

- No DOI given, and none found for title: Lipschitz constant estimation for 1D convolutional...
- No DOI given, and none found for title: ReinforcementLearning.jl: A Reinforcement Learning...
- No DOI given, and none found for title: MNIST handwritten digit database
- No DOI given, and none found for title: Don’t Unroll Adjoint: Differentiating SSA-Form Pro...
- No DOI given, and none found for title: Direct Parameterization of Lipschitz-Bounded Deep ...
- No DOI given, and none found for title: Contraction Theory for Dynamical Systems
- No DOI given, and none found for title: Adversarial attacks on neural network policies
- No DOI given, and none found for title: Adam: A method for stochastic optimization
- No DOI given, and none found for title: Reinforcement Learning: An Introduction
- No DOI given, and none found for title: LipsNet: A Smooth and Robust Neural Network with A...

INVALID DOIs

- None
lucaferranti commented 4 weeks ago

@pevnak @nic-barbara For the DOIs, the rule of thumb is "DOIs should be included whenever possible". You can use editorialbot to check the references' DOI status (as I just did). The summary can be interpreted as follows: "OK" DOIs are verified, "MISSING" DOIs are suggestions that need manual verification, and "INVALID" DOIs must be fixed.
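For reference, adding a DOI to a reference means adding a `doi` field to the corresponding BibTeX entry. A sketch using the Julia paper (one of the entries the bot already reports as OK):

```bibtex
@article{bezanson2017julia,
  title   = {Julia: A Fresh Approach to Numerical Computing},
  author  = {Bezanson, Jeff and Edelman, Alan and Karpinski, Stefan and Shah, Viral B.},
  journal = {SIAM Review},
  volume  = {59},
  number  = {1},
  pages   = {65--98},
  year    = {2017},
  doi     = {10.1137/141000671}
}
```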

pevnak commented 3 weeks ago

I think the paper is missing a Statement of Need section. I am not sure it is needed, as the need is clear from the introduction and examples. Also, I have not seen similar methods implemented elsewhere.

nic-barbara commented 1 week ago

Hi @lucaferranti, @pevnak, @asinghvi17, just wondering if there's anything I can do to expedite this review process? As far as I'm aware there's nothing I need to change so far: it seems that an explicit statement of need section is not required according to @lucaferranti, and the papers with missing DOIs don't actually have DOIs. Is this correct?

lucaferranti commented 1 week ago

Hi @asinghvi17 :wave: ,

could you update us on how it is going with the review? Do you have a time estimate?

lucaferranti commented 1 week ago

it seems that an explicit statement of need section is not required according to @lucaferranti, and the papers with missing DOIs don't actually have DOIs. Is this correct?

It is important for the paper to clearly convey its motivation, but this can be done e.g. as part of the introduction or other parts of the paper. A section called "Statement of Need" is not required; this has recently been updated in the checklist.

the papers with missing DOIs don't actually have DOIs

correct, the check is more of an FYI guideline and not assertive. If you have checked that those papers don't have a DOI, then it's OK. In a lot of cases the bot is smart enough to find and suggest a potential DOI; since it doesn't find any in this case, it is likely that they don't have one.