editorialbot closed this review 2 years ago.
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
```
github.com/AlDanial/cloc v 1.88  T=0.20 s (284.5 files/s, 108029.8 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
XML                             15            801            445           8751
Python                          34            808            833           4478
HTML                             2            167              0           3404
Jupyter Notebook                 1              0           1216            443
TeX                              1             17              1            129
reStructuredText                 2             41             18             53
Markdown                         1              8              0             27
TOML                             1              0              0              3
-------------------------------------------------------------------------------
SUM:                            57           1842           2513          17288
-------------------------------------------------------------------------------
```
gitinspector failed to run statistical information for the repository
Wordcount for paper.md is 418
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.21105/joss.00396 is OK
- 10.2514/6.2002-4482 is OK
- 10.7717/peerj-cs.103 is OK
- 10.2514/6.2019-2901 is OK
- 10.2514/6.2020-1012 is OK
- 10.1007/s40295-019-00191-2 is OK
- 10.2514/6.2021-0762 is OK
- 10.2514/6.2021-0764 is OK
- 10.5281/zenodo.3940699 is OK
- 10.21105/joss.01745 is OK
MISSING DOIs
- None
INVALID DOIs
- None
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Thank you, @athulpg007 and @aliaksei135, for agreeing to be reviewers. Please read the first couple of comments in this thread and create your checklist. You can read the reviewer guidelines here. Also, you can browse the closed "REVIEW" issues on the "joss-reviews" repository to get some ideas on how to complete the reviews. Good luck!
@ixjlyons In the paper, could you briefly describe the problem setup (just stating the problem you are trying to solve in a few lines) for at least one of the sixteen examples you have and show an illustrative result (such as a plot showing result from a simulation, as well as any data you have for comparison) to demonstrate the application of this package?
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Hi @athulpg007, thanks for your comments. I've added a diagram and wording to help convey the simulation setup in general, rather than giving detailed treatment of one of the test cases. The example scripts themselves each have doc strings describing the physical system being simulated and some have comments for understanding API usage. Does this help with clarity?
Hi @ixjlyons, I have completed the first review of this work. Here are my comments:
It turns out all the test cases in this package are already well documented in this NASA report. I suggest adding this to the "NESC Test Cases" section of the README file, listing the titles of all 16 test cases used, and providing a link to the report. This will greatly help readers understand the kinds of problems that can be solved with this package and give them a well-documented list of use cases both in the package and in the NASA report. Optionally, you could also include the report PDF in the repository so users have it offline when they clone the repo.
I suggest adding an "Example" section in the paper with the following simple example.
Problem: (from page 62 of the report)

```
============== ===========================================
Verifies       Inertial coupling [dynamic damping model]
Gravitation    J2
Geodesy        WGS-84 rotating
Atmosphere     US 1976 STD
Winds          still air
Vehicle        Dragless rotating brick with aero damping
Notes          Drag coefficient set to zero
============== ===========================================
```
```python
from simupy.block_diagram import BlockDiagram
import simupy_flight
import numpy as np

from nesc_testcase_helper import plot_nesc_comparisons, int_opts, benchmark
from nesc_testcase_helper import ft_per_m, kg_per_slug

planet = simupy_flight.Planet(
    gravity=simupy_flight.earth_J2_gravity,
    winds=simupy_flight.get_constant_winds(),
    atmosphere=simupy_flight.atmosphere_1976,
    planetodetics=simupy_flight.Planetodetic(
        a=simupy_flight.earth_equitorial_radius,
        omega_p=simupy_flight.earth_rotation_rate,
        f=simupy_flight.earth_f,
    ),
)

Ixx = 0.001894220 * kg_per_slug / (ft_per_m**2)  # slug-ft2
Iyy = 0.006211019 * kg_per_slug / (ft_per_m**2)  # slug-ft2
Izz = 0.007194665 * kg_per_slug / (ft_per_m**2)  # slug-ft2
Ixy = 0.0 * kg_per_slug / (ft_per_m**2)  # slug-ft2
Iyz = 0.0 * kg_per_slug / (ft_per_m**2)  # slug-ft2
Izx = 0.0 * kg_per_slug / (ft_per_m**2)  # slug-ft2
m = 0.155404754 * kg_per_slug  # slug

x = 0.0
y = 0.0
z = 0.0

# %%
# Configure the constant dynamic damping
S_A = 0.22222 / (ft_per_m**2)
b_l = 1 / (3 * ft_per_m)
c_l = 2 / (3 * ft_per_m)
a_l = b_l

aero_model = simupy_flight.get_constant_aero(Cp_b=-1.0, Cq_b=-1.0, Cr_b=-1.0)
vehicle = simupy_flight.Vehicle(
    base_aero_coeffs=aero_model,
    m=m,
    I_xx=Ixx,
    I_yy=Iyy,
    I_zz=Izz,
    I_xy=Ixy,
    I_yz=Iyz,
    I_xz=Izx,
    x_com=x,
    y_com=y,
    z_com=z,
    x_mrc=x,
    y_mrc=y,
    z_mrc=z,
    S_A=S_A,
    a_l=a_l,
    b_l=b_l,
    c_l=c_l,
    d_l=0.0,
)

BD = BlockDiagram(planet, vehicle)
BD.connect(planet, vehicle, inputs=np.arange(planet.dim_output))
BD.connect(vehicle, planet, inputs=np.arange(vehicle.dim_output))

lat_ic = 0.0 * np.pi / 180
long_ic = 0.0 * np.pi / 180
h_ic = 30_000 / ft_per_m
V_N_ic = 0.0
V_E_ic = 0.0
V_D_ic = 0.0
psi_ic = 0.0 * np.pi / 180
theta_ic = 0.0 * np.pi / 180
phi_ic = 0.0 * np.pi / 180
omega_X_ic = 10.0 * np.pi / 180
omega_Y_ic = 20.0 * np.pi / 180
omega_Z_ic = 30.0 * np.pi / 180

planet.initial_condition = planet.ic_from_planetodetic(
    long_ic, lat_ic, h_ic, V_N_ic, V_E_ic, V_D_ic, psi_ic, theta_ic, phi_ic
)
planet.initial_condition[-3:] = omega_X_ic, omega_Y_ic, omega_Z_ic

with benchmark() as b:
    res = BD.simulate(30, integrator_options=int_opts)

plot_nesc_comparisons(res, "03")
```
Results and Discussion: (add a brief discussion, for example, the effect of aero damping causing the body rates to go to zero.)
I ran almost all the test cases supplied, except some which required more than a few minutes. All the cases I ran exited normally. It appears that `matplotlib`, `pandas`, and `ndsplines` are needed for these examples. Consider adding them to the list of `install_requires` in `setup.py`.
In the README, the link to SimuPy's DynamicalSystem interface is broken (extra '>'). Please correct this.
Please consider adding an API reference for this package using Sphinx or something similar. It would be nice to have a dedicated Read the Docs site for this package with the API reference. If not, the Sphinx-generated documentation can be included in a docs folder in the repository.
I would like to know if there are any other tools out there which you are aware of that offer similar functionality. If so, how is this work different compared to existing tools?
Let me know if you have any questions.
@athulpg007: thanks for your review.
Your suggestions have been implemented here: https://github.com/nasa/simupy-flight/pull/6
- It turns out all the test cases in this package are already well documented in this NASA report. I suggest adding this to the "NESC Test Cases" section of the README file, listing the titles of all 16 test cases used, and providing a link to the report. This will greatly help readers understand the kinds of problems that can be solved with this package and give them a well-documented list of use cases both in the package and in the NASA report. Optionally, you could also include the report PDF in the repository so users have it offline when they clone the repo.
We had been thinking about making a documentation page to enumerate the examples and show the output, so we've put that here: https://nasa.github.io/simupy-flight/nesc_test_cases/index.html
A link to the Appendix of the Tech Memo describing the test cases was added to the README as well.
- I suggest adding an "Example" section in the paper with the following simple example. [snip]
Our understanding of the JOSS format is that the paper is intended to be somewhat of a minimal citable description of the software and the motivation for writing it (see the last paragraph of "What should my paper contain") -- sample code and outputs seem to be more in the realm of documentation or a full software paper. Does having the example gallery in the documentation help address your concern? @prashjha any thoughts here?
- I ran almost all the test cases supplied, except some which required more than a few minutes. All the cases I ran exited normally. It appears that `matplotlib`, `pandas`, and `ndsplines` are needed for these examples. Consider adding them to the list of `install_requires` in `setup.py`.
We did intentionally include only strict dependencies for the library in `install_requires` in case a user doesn't need the others. There is, however, an "extra" for running the test cases (e.g. `pip install .[test]`). This was probably not clear in the README previously, so the instructions for installing those dependencies have been moved to just before the instructions for running the test cases.
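As an illustration of the split being described here, a hypothetical sketch of how strict dependencies and a test extra can be declared (the package names come from the review comments, but this layout is an assumption, not the project's actual `setup.py`):

```python
# Hypothetical sketch of separating strict runtime dependencies from
# test-only ones via an "extras" group. Names are illustrative.
install_requires = ["numpy", "scipy"]  # always installed with the library

extras_require = {
    # Installed on demand with: pip install .[test]
    "test": ["matplotlib", "pandas", "ndsplines"],
}

# The core install stays lean: test-only packages appear only under
# the "test" extra, never in install_requires.
for pkg in extras_require["test"]:
    assert pkg not in install_requires
```

In a real `setup.py`, these two mappings would be passed as the `install_requires` and `extras_require` arguments to `setuptools.setup()`.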
- In readme, the SimuPy's DynamicalSystem interface link is broken. (extra '>'). Please correct this.
Fixed, thanks.
- Please consider adding an API reference for this package using sphinx or something similar. It would be nice to have a dedicated readthedocs for this package with the API reference. If not, the documentation from sphinx can be included it in a docs folder in the repository.
Done. Like I said, we had been thinking about doing this, and your review was helpful for motivation.
- I would like to know if there are any other tools out there which you are aware of that offer similar functionality. If so, how is this work different compared to existing tools?
The other tools used for the NESC report are described in Appendix B.6. The only one that is available publicly is JSBSim. SimuPy Flight is somewhat leaner and takes advantage of existing libraries for ODE integration, atmospherics, etc.
Hello @ixjlyons, I have gone through the bulk of the testing code.
I have run the test cases and have had a few failures and things to note during the testing:

- Cases 5, 7, 8, and 13p1 have failed.
- Cases 15 and 16 do not indicate whether they have passed or not.
- I can see that the test case runs in the documentation have the same errors, but an explicit note that the "missing_*" and `RuntimeWarning` messages are expected would be good. I assume the `RuntimeWarning`s are caused by zero drag being enforced.
- Is there any appetite to automate the rest of the test runs? Not a blocker for the review, but it would make sense in the long term. It seems only case 1 is being run in GitHub Actions.
- The packages `python-libsbml` and `lxml` are required, but do not seem to be installed anywhere.
- In the `derivation` extras package, `jupyter-lab` is not a valid package. You probably meant just `jupyter`.
- In `process_NESC_DaveML.py`, you should use the fully qualified package name to import `parse_daveml`, otherwise it only runs from a single directory:

```python
from parse_daveml import ProcessDaveML
# Should be:
from simupy_flight.parse_daveml import ProcessDaveML
```

- The paths in `process_NESC_DaveML.py` seem to have duplicated the first directory in all paths. I cannot find where these paths would correspond to and suspect this is a mistake.
- Running `process_NESC_DaveML.py` results in an error within the `simupy` code generation module:
```
(sf_review) P:\Dev\PycharmProjects\simupy-flight\NESC_data\All_models>python ..\..\nesc_test_cases\process_NESC_DaveML.py
Skipping header
P:\Dev\PycharmProjects\sf_review\lib\site-packages\simupy_flight\parse_daveml.py:255: UserWarning: trimmedTheta variable already in symbol table!
  warnings.warn(
Completed
Traceback (most recent call last):
  File "..\..\nesc_test_cases\process_NESC_DaveML.py", line 23, in <module>
    ProcessDaveML(filename)
  File "P:\Dev\PycharmProjects\sf_review\lib\site-packages\simupy_flight\parse_daveml.py", line 110, in __init__
    code = printer.codeprint(
  File "P:\Dev\PycharmProjects\sf_review\lib\site-packages\simupy\codegen.py", line 306, in codeprint
    lines.append(self.doprint(*func_arg_expr))
  File "P:\Dev\PycharmProjects\sf_review\lib\site-packages\simupy\codegen.py", line 377, in doprint
    funcbody.append(self._exprrepr(Assignment(lhs, rhs)))
  File "P:\Dev\PycharmProjects\sf_review\lib\site-packages\sympy\printing\codeprinter.py", line 100, in doprint
    lines = self._print(expr).splitlines()
  File "P:\Dev\PycharmProjects\sf_review\lib\site-packages\sympy\printing\printer.py", line 287, in _print
    return getattr(self, printmethod)(expr, **kwargs)
  File "P:\Dev\PycharmProjects\sf_review\lib\site-packages\simupy\codegen.py", line 271, in _print_Assignment
    return super()._print_Assignment(expr)
  File "P:\Dev\PycharmProjects\sf_review\lib\site-packages\simupy\codegen.py", line 201, in _print_Assignment
    return super()._print_Assignment(expr)
  File "P:\Dev\PycharmProjects\sf_review\lib\site-packages\sympy\printing\codeprinter.py", line 327, in _print_Assignment
    rhs_code = self._print(rhs)
  File "P:\Dev\PycharmProjects\sf_review\lib\site-packages\sympy\printing\printer.py", line 287, in _print
    return getattr(self, printmethod)(expr, **kwargs)
  File "P:\Dev\PycharmProjects\sf_review\lib\site-packages\simupy\codegen.py", line 256, in _print_Function
    if module != '':
UnboundLocalError: local variable 'module' referenced before assignment
```
You should consider adding `daveml` as an additional extras package with the dependencies above. There also don't seem to be explicit instructions for testing the DaveML module.
Let me know if you need more input from my side or have any questions.
@ixjlyons Thanks for incorporating my suggestions. The examples at https://nasa.github.io/simupy-flight/nesc_test_cases/index.html and the API section are excellent and very useful.
I recommend that you include an "Examples" section in the paper. It can just contain one line with a link to the examples in the documentation.
I have concluded my review and am happy to accept this submission.
Hi @aliaksei135, thanks for the review. Your comments are addressed below.
I have run the test cases and have had a few failures and things to note during the testing:

- Cases 5, 7, 8, and 13p1 have failed.
- Cases 15 and 16 do not indicate whether they have passed or not.
- I can see that the test case runs in the documentation have the same errors, but an explicit note that the "missing_*" and `RuntimeWarning` messages are expected would be good. I assume the `RuntimeWarning`s are caused by zero drag being enforced.
- Is there any appetite to automate the rest of the test runs? Not a blocker for the review, but it would make sense in the long term. It seems only case 1 is being run in GitHub Actions.
We now test only the position and orientation states to tighter tolerances since these "lowest order" states should capture the effect of every computation in the model while integrating out minor numerical differences. We've updated the continuous integration testing to run all cases (including 15 and 16) across a range of platforms and Python versions.
We also added a note to the README and case 1 to describe why `RuntimeWarning` is being emitted from the generated code.
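To make the comparison approach described above concrete, here is a rough sketch of a tolerance-based check on the "lowest order" states; the array names, shapes, and tolerance are assumptions for illustration, not the project's actual test code:

```python
import numpy as np

# Hypothetical simulated vs. reference trajectories: rows are time steps,
# columns are position (x, y, z) and orientation (psi, theta, phi) states.
reference = np.zeros((5, 6))
simulated = reference + 1e-9  # tiny numerical difference between integrators

def states_match(sim, ref, atol=1e-6):
    """Compare only the low-order states to a tight absolute tolerance.

    Minor integrator differences wash out below the tolerance, while a
    genuine modeling discrepancy would exceed it.
    """
    return np.allclose(sim, ref, atol=atol)

print(states_match(simulated, reference))        # prints True
print(states_match(reference + 1.0, reference))  # prints False
```

Running every case in CI across platforms then reduces to asserting `states_match` for each case's trajectory against stored reference data.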
- The packages `python-libsbml` and `lxml` are required, but do not seem to be installed anywhere.
- In the `derivation` extras package, `jupyter-lab` is not a valid package. You probably meant just `jupyter`.
- You should consider adding `daveml` as an additional extras package with the dependencies above. There also don't seem to be explicit instructions for testing the DaveML module.
Thanks for catching these issues. There's now a `daveml` extras list to install dependencies for the DaveML parsing module. The README has instructions for using it to re-generate the F16 component models, and these generated files now check the model against provided I/O data when they're run.
- In `process_NESC_DaveML.py`, you should use the fully qualified package name to import `parse_daveml`, otherwise it only runs from a single directory.
- The paths in `process_NESC_DaveML.py` seem to have duplicated the first directory in all paths. I cannot find where these paths would correspond to and suspect this is a mistake.
- Running `process_NESC_DaveML.py` results in an error within the `simupy` code generation module.
Thanks for catching these as well - they've been fixed. Part of the resolution involved updating `simupy`, so you'll have to update it to re-run.
Thanks for making the changes @ixjlyons
I've run everything through again and it works well.
I am happy to conclude my review and recommend accepting this submission.
Hi, @aliaksei135, got it, and thank you for your effort.
Hi, @athulpg007, thank you for your effort.
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.21105/joss.00396 is OK
- 10.2514/6.2002-4482 is OK
- 10.7717/peerj-cs.103 is OK
- 10.2514/6.2019-2901 is OK
- 10.2514/6.2020-1012 is OK
- 10.1007/s40295-019-00191-2 is OK
- 10.2514/6.2021-0762 is OK
- 10.2514/6.2021-0764 is OK
- 10.5281/zenodo.3940699 is OK
- 10.21105/joss.01745 is OK
MISSING DOIs
- None
INVALID DOIs
- None
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Hi, @ixjlyons, I am reading the draft one last time. If I have any edits or suggestions, I will let you know soon.
Meanwhile, can you make (if not done already) a tagged release of your code and archive the release using Zenodo or a similar service? Make sure that the title of the Zenodo archive matches the title of this JOSS submission.
@prashjha ok, I pushed tag v0.1.0 and archived it here: https://zenodo.org/record/6789504 (DOI 10.5281/zenodo.6789504)
@editorialbot set https://zenodo.org/record/6789504 as archive
Done! Archive is now https://zenodo.org/record/6789504
@editorialbot set 10.5281/zenodo.6789504 as archive
Done! Archive is now 10.5281/zenodo.6789504
@editorialbot recommend-accept
Attempting dry run of processing paper acceptance...
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.21105/joss.00396 is OK
- 10.2514/6.2002-4482 is OK
- 10.7717/peerj-cs.103 is OK
- 10.2514/6.2019-2901 is OK
- 10.2514/6.2020-1012 is OK
- 10.1007/s40295-019-00191-2 is OK
- 10.2514/6.2021-0762 is OK
- 10.2514/6.2021-0764 is OK
- 10.5281/zenodo.3940699 is OK
- 10.21105/joss.01745 is OK
MISSING DOIs
- None
INVALID DOIs
- None
@ixjlyons, congratulations. I have recommended acceptance to EiC who will make the final decision.
@athulpg007, @aliaksei135, thank you both for your time and efforts in reviewing this submission. It is very much appreciated. :)
:wave: @openjournals/joss-eics, this paper is ready to be accepted and published.
Check final proof :point_right::page_facing_up: Download article
If the paper PDF and the deposit XML files look good in https://github.com/openjournals/joss-papers/pull/3364, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept
@ixjlyons - I'm the AEiC this week, and working on the processing of your submission. It all looks good, except I think that there are some small issue in the bib, as indicated in https://github.com/nasa/simupy-flight/pull/12 - If you can merge these (or let me know what you disagree with), we can continue the processing.
@editorialbot recommend-accept
Attempting dry run of processing paper acceptance...
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.21105/joss.00396 is OK
- 10.2514/6.2002-4482 is OK
- 10.7717/peerj-cs.103 is OK
- 10.2514/6.2019-2901 is OK
- 10.2514/6.2020-1012 is OK
- 10.1007/s40295-019-00191-2 is OK
- 10.2514/6.2021-0762 is OK
- 10.2514/6.2021-0764 is OK
- 10.5281/zenodo.3940699 is OK
- 10.21105/joss.01745 is OK
MISSING DOIs
- None
INVALID DOIs
- None
:wave: @openjournals/joss-eics, this paper is ready to be accepted and published.
Check final proof :point_right::page_facing_up: Download article
If the paper PDF and the deposit XML files look good in https://github.com/openjournals/joss-papers/pull/3367, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept
@editorialbot accept
Doing it live! Attempting automated processing of paper acceptance...
:bird::bird::bird: :point_right: Tweet for this paper :point_left: :bird::bird::bird:
:rotating_light::rotating_light::rotating_light: THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! :rotating_light::rotating_light::rotating_light:
Here's what you must now do:
Any issues? Notify your editorial technical team...
Congratulations to @ixjlyons (Kenneth Lyons) and co-author!!
And thanks to @athulpg007 and @aliaksei135 for reviewing, and to @prashjha for editing! We couldn't do this without you
:tada::tada::tada: Congratulations on your paper acceptance! :tada::tada::tada:
If you would like to include a link to your paper from your README use the following code snippets:
Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.04299/status.svg)](https://doi.org/10.21105/joss.04299)
HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.04299">
<img src="https://joss.theoj.org/papers/10.21105/joss.04299/status.svg" alt="DOI badge" >
</a>
reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.04299/status.svg
:target: https://doi.org/10.21105/joss.04299
This is how it will look in your documentation:
We need your help!
The Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:
Submitting author: @ixjlyons (Kenneth Lyons)
Repository: https://github.com/nasa/simupy-flight
Branch with paper.md (empty if default branch):
Version: 0.0.1
Editor: @prashjha
Reviewers: @athulpg007, @aliaksei135
Archive: 10.5281/zenodo.6789504
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@athulpg007 & @aliaksei135, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @prashjha know.
:sparkles: Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest :sparkles:
Checklists
Checklist for @aliaksei135
Checklist for @athulpg007