Closed (editorialbot closed this issue 5 months ago)
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.88 T=1.10 s (79.0 files/s, 367152.5 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          23            762           1110           2629
reStructuredText                43            999           1917            566
Jupyter Notebook                10              0         395383            478
TeX                              1             21              0            201
Markdown                         3             77              0            155
YAML                             5             19             15            116
DOS Batch                        1              8              1             26
make                             1              4              7              9
-------------------------------------------------------------------------------
SUM:                            87           1890         398433           4180
-------------------------------------------------------------------------------
gitinspector failed to run statistical information for the repository
Wordcount for paper.md is 647
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1016/j.esr.2023.101171 is OK
- 10.1016/j.esd.2023.05.004 is OK
- 10.1016/j.enconman.2023.117223 is OK
- 10.1016/j.segan.2023.101043 is OK
- 10.1016/j.segy.2022.100088 is OK
- 10.1016/j.apenergy.2022.118676 is OK
- 10.1016/j.esd.2021.10.009 is OK
- 10.1016/j.esd.2020.07.002 is OK
- 10.1016/j.energy.2019.04.097 is OK
- 10.1016/j.segan.2023.101120 is OK
- 10.1016/j.joule.2022.05.009 is OK
- 10.3390/app10217445 is OK
- 10.5281/zenodo.10275752 is OK
MISSING DOIs
- 10.24251/hicss.2023.097 may be a valid DOI for title: Sustainable Energy System Planning in Developing Countries: Facilitating Load Profile Generation in Energy System Simulations
INVALID DOIs
- None
📄 Download article proof | 📄 View article proof on GitHub
@FLomb: Could you clarify the individual contributions of the authors?
@editorialbot add @trevorb1 as reviewer
@trevorb1 added to the reviewers list!
@FLomb: Could you clarify the individual contributions of the authors?
Hi @noah80, the contributions are the following (thanks for asking; we didn't know they were requested):
@FabianHofmann, @trevorb1: don't hesitate to reach out if you need help getting started with your reviewer checklist!
Hey @FLomb, very nice project! To get the ball rolling, here is the first round of comments from my side.
- JOSS asks for reproducibility of the figure in the paper (Data Sharing & Reproducibility). Is the code online somewhere? I could not see it in the joss_paper branch of the repo.
- After having followed the installation instructions, which work fine, I had a look at the example at https://github.com/RAMP-project/RAMP?tab=readme-ov-file#building-a-model-with-a-python-script. What I find confusing is that I cannot display the household_1 object. The error message is clear (there are no appliances), but for newbies this might be confusing. Also, empty objects that seem this basic to the whole software should be displayable.
The Readthedocs documentation is great with all its examples. Two minor remarks.
Again, nice collaborative project!
Hi @FabianHofmann,
Thanks for the first round of comments! We will work on those as soon as possible. In the meantime, I am providing below some answers to questions that have a quick solution:
General checks
Documentation
Thanks again for the valuable comments; we'll be in touch soon about the rest.
@AdamRJensen - We wanted to ask how we should address the review comments? Should we commit them directly on the review-branch? Or should we open PR onto the review-branch?
Thanks for editing our paper :)
@AdamRJensen - We wanted to ask how we should address the review comments? Should we commit them directly on the review-branch? Or should we open PR onto the review-branch?
Personally, I would prefer a PR, as this gives the reviewers a good overview of what changes you're making to address their comments (you can tag the reviewers in the relevant PRs). Also, if there are multiple major things, then I would make multiple PRs.
A pleasure to serve as your editor.
Personally, I would prefer a PR, as this gives the reviewers a good overview of what changes you're making to address their comments (you can tag the reviewers in the relevant PRs). Also, if there are multiple major things, then I would make multiple PRs.
Thanks for your suggestion, we will address the comments in PRs then! :)
If I understood the JOSS docs correctly, once the review process is over we create a new tagged release which encompasses the changes made in the context of the review, and this new release will then be associated with the JOSS publication. Is that correct?
If I understood the JOSS docs correctly, once the review process is over we create a new tagged release which encompasses the changes made in the context of the review, and this new release will then be associated with the JOSS publication. Is that correct?
That is correct, it's the final version at the end of the review process that will be associated with the JOSS paper. This way we get all the good reviewer feedback included.
Hi @FLomb; thanks so much for submitting RAMP and suggesting me to review! The collaboration, documentation, and existing use in literature of the RAMP project are super nice! Please find my comments below.
General checks
- Same question as @FabianHofmann; is there code where I can reproduce Figure 1?
Functionality
- In the "Using real calendar days to generate profiles" example, it walks through how to use the CLI to generate a number of profiles. I tried to use the excel file generated from the "Using tabular inputs to build a model" example as an input, but received a matplotlib error for the second plot. The command I ran and the error I received are shown below. I can attach the full traceback if that helps. Not sure if I am just misunderstanding what my input file should be?
$ ramp -i example_excel_usecase.xlsx -n 10
ValueError: x and y must have same first dimension, but have shapes (1440,) and (1,)
2. I couldn't find where package dependencies are listed. When I tried to run tests, I got a ModuleNotFoundError: No module named 'scipy' error. The only place I saw dependencies was in the ramp/__init__.py file, but scipy is not listed. If I pip install scipy into my environment and run pytest tests/, I get a ModuleNotFoundError: No module named 'nbconvert'. I had to pip install nbconvert as well to get tests to run. Clarifying the dependencies for using vs. developing would help!
3. If I run tests (with the extra pip installs), I get one failure:
FAILED tests/test_switch_on.py::TestRandSwitchOnWindow::test_coincidence_normality_on_peak - AssertionError: The 'coincidence' values are not normally distributed.
assert 0.029748765751719475 > 0.05
Documentation
1. In the Quick Start section, appliances are added through the method User.Appliance(...). Later on in the examples section, appliances are added through the method User.add_appliance(...). Based on the API reference, the Appliance(...) method should only be used when working with legacy code? If users are encouraged to use the add_appliance(...) method, updating the docs to reflect this would be good! (The same issue also appears in the "Appliances with multiple cycles" example.)
2. I find it a little confusing how the documentation site contribution guidelines and the CONTRIBUTING.md file in the repository don't match?
3. It may be worth noting in the contributing guidelines that developers should follow Black formatting, and that contributions will be checked for it with your actions (although I do see you have the Black badge on the README.md). Alternatively, having a pre-commit file, or specifying in the contributing guidelines how to install Black in VSCode (or similar) to autolint contributions, would be good!
4. In the contribution guidelines, it may be good to specify what to include in a new issue ticket and PR (or even create templates for these)? For example, for a new issue, do you want to know the OS the user is running, the version of RAMP they are running, etc.?
5. In the contribution guidelines on the doc site, you mention contributors should perform qualitative testing only. In the repository CONTRIBUTING.md file, it mentions to also run tests via pytest. Please clarify what tests contributors should be running.
6. At the bottom of the introduction page, there is a note that says "This project is under active development!". Can you please clarify what active development means? Just want to ensure this does not mean we can't trust the results. (Sorry, I know this one is a little pedantic!)
7. The examples are great at walking the reader through the many different functions of RAMP! However, I think in the first example it would be beneficial to explicitly write out what all the different arguments in the User(...) and add_appliance(...) calls are doing. This can maybe be done through a top-level description like in some of your other examples (as also suggested by @FabianHofmann). At first, especially for the add_appliance(...) method, it wasn't necessarily clear to me what the different arguments did. While I eventually did find the info I was looking for in the Appliance.__init__ API ref, this was not the first place I intuitively thought to look for the information (which was the add_appliance(...) method API ref).
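For readers following along, here is a minimal, illustrative sketch of the add_appliance(...)-based workflow discussed in documentation points 1 and 7 above. The parameter names are assumptions based on a reading of the Quick Start and may not match the current API exactly; the Appliance.__init__ API reference remains the authoritative source.

from ramp import User

# Create a user category: 10 identical households
household = User(user_name="Household", num_users=10)

# Attach an appliance via the recommended method (rather than the legacy User.Appliance(...)):
indoor_bulb = household.add_appliance(
    number=6,        # appliances of this kind owned by each user
    power=7,         # rated power of a single appliance, in W
    num_windows=2,   # number of daily time windows in which it can be used
    func_time=120,   # total functioning time per day, in minutes
    func_cycle=10,   # minimum continuous operating time once switched on, in minutes
)

# Usage windows are given in minutes of the day (0-1440)
indoor_bulb.windows(window_1=[1170, 1440], window_2=[0, 30])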
@FabianHofmann, @trevorb1, thanks for your insightful and helpful comments! We will start addressing them in PRs from the joss-paper branch (https://github.com/RAMP-project/RAMP/tree/joss-paper) :)
3. If I run tests (with the extra pip installs), I get one failure:
@trevorb1 This is actually a known issue; this test fails randomly: https://github.com/RAMP-project/RAMP/issues/99. I will use this review to push myself to investigate it further. If you have good suggestions of methodology/articles about testing stochastic code, they are welcome :)
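One common way to tame this kind of flaky statistical test, sketched below, is to fix the random seed so that the sampled values (and hence the normality check) are reproducible. This is a generic illustration only: simulate_coincidence is a hypothetical stand-in for the actual RAMP switch-on routine, and whether that routine can accept a seeded generator is an assumption.

import numpy as np
from scipy import stats

def simulate_coincidence(rng, n=500):
    # Hypothetical stand-in for the routine exercised by tests/test_switch_on.py;
    # here it simply draws a normally distributed sample from the given generator.
    return rng.normal(loc=0.3, scale=0.05, size=n)

def test_coincidence_normality_fixed_seed():
    # A fixed seed makes the draw deterministic, so a failure points to a code
    # change rather than an unlucky sample landing below the p = 0.05 threshold.
    rng = np.random.default_rng(42)
    sample = simulate_coincidence(rng)
    _, p_value = stats.normaltest(sample)
    assert p_value > 0.05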
- In the "Using real calendar days to generate profiles" example, it walks through how to use the CLI to generate a number of profiles. I tried to use the excel file generated from the "Using tabular inputs to build a model" as an input, but received a matplotlib error for the second plot. The command I ran and the error I received is shown below. I can attach the full trace back if that helps. Not sure if I am just misunderstanding what my input file should be?
@trevorb1 - this is now fixed in https://github.com/RAMP-project/RAMP/pull/126
How should we proceed? You write on the PR that you agree with the proposed changes and then we merge them? I guess it makes sense to link to those PRs at least once in this thread so that the history of the review is all accessible from the joss-review repository.
2. After having followed the installation instructions, which work fine, I had a look at the example at https://github.com/RAMP-project/RAMP?tab=readme-ov-file#building-a-model-with-a-python-script. What I find confusing is that I cannot display the household_1 object. The error message is clear (there are no appliances), but for newbies this might be confusing. Also, empty objects that seem this basic to the whole software should be displayable.
@FabianHofmann - we propose a fix for this in https://github.com/RAMP-project/RAMP/pull/122
Let us know if you would find another way of displaying it more intuitive
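For context, here is a generic sketch of the idea, not necessarily what PR #122 implements: giving the class a __repr__ that copes with an empty appliance list means printing household_1 right after creation no longer fails. The class below is a simplified stand-in, not RAMP's actual User class.

class User:
    # Simplified stand-in for ramp's User class, only to illustrate the repr idea.
    def __init__(self, user_name="", num_users=1):
        self.user_name = user_name
        self.num_users = num_users
        self.appliances = []

    def __repr__(self):
        if not self.appliances:
            # An "empty" user is still printable instead of raising an error.
            return f"User('{self.user_name}', num_users={self.num_users}, no appliances yet)"
        names = ", ".join(app.name for app in self.appliances)
        return f"User('{self.user_name}', num_users={self.num_users}, appliances=[{names}])"

household_1 = User(user_name="Household", num_users=10)
print(household_1)  # User('Household', num_users=10, no appliances yet)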
JOSS asks for reproducibility of the figure in the paper (Data Sharing & Reproducibility). Is the code online somewhere? I could not see it in the joss_paper branch of the repo.
Same question as @FabianHofmann; is there code where I can reproduce Figure 1?
Thank you both for pointing this out; we had somehow missed this requirement. We will address this issue shortly.
2. Your reference "Generating high-resolution multi-energy load profiles for remote areas with an open-source stochastic model" is a 2019 paper in Energy describing the first version of RAMP and validating its functionality, I believe? If so, can you please describe (at a high level) how this paper is different from that one? This doesn't necessarily need to go in the paper; it's just for my reference, clarifying the differences. (Side note: I know other packages like pyam are published in different journals - ORE and JOSS in that case - so I'm not trying to attack this submission or anything! Just trying to understand the differences.)
Sure, happy to provide further information about that here. In the 2019 paper, we introduced the model from a 'scientific' perspective (i.e., what are the rationale, the math, and the algorithms behind it), discussed its merits over existing alternative demand simulation approaches, and validated it against metered data. Nonetheless, from the software point of view, the code back then was just a single Python file that, despite being made openly available on GitHub, did not meet any of the standards of reproducible scientific software, nor was it intended for use beyond the scope of the kind of application proposed in the paper. In other words, in the original paper, the concept and the 'science' behind the model were peer-reviewed, but the software itself was not, and was not meant to be.
Over the years, and thanks to the collaboration of more institutions that joined the project, RAMP has grown into more professional, documented and tested software, including more functionalities and opportunities for customisation and for broad application. Having achieved this degree of software quality, with this paper we set out to let the software itself undergo a peer-review process. In doing so, we follow in the footsteps of many other projects that were initially published as supporting code for scientific publications in a rough, sparsely documented form and only later developed into more professional software published in JOSS (for example, Calliope: https://doi.org/10.21105/joss.00825).
How should we proceed? You write on the PR that you agree with the proposed changes and then we merge them?
@Bachibouzouk, sounds great! Your fix solved the problem I described. However, I ran into another issue completing the rest of the CLI examples. I left a comment in https://github.com/RAMP-project/RAMP/pull/126 describing the issue. Thanks!
Sure, happy to provide further information about that here.
Thank you so much for the detailed description, @FLomb! This fully answers my question, as I am now clear on the differences behind the Energy and JOSS publications of RAMP. It is really great to see so much collaboration behind this project!
2. I couldn't find where package dependencies are listed. When I tried to run tests, I got a ModuleNotFoundError: No module named 'scipy' error. The only place I saw dependencies was in the ramp/__init__.py file, but scipy is not listed. If I pip install scipy into my environment and run pytest tests/, I get a ModuleNotFoundError: No module named 'nbconvert'. I had to pip install nbconvert as well to get tests to run. Clarifying the dependencies for using vs. developing would help!
@trevorb1 - I address this point in https://github.com/RAMP-project/RAMP/pull/129, specifically on 9ccd839e6f0edfe7bb00d14a51301ebf4fcbc439
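For illustration, one common way to separate runtime from development dependencies in Python packaging is sketched below with setuptools. The package and dependency names are only assumptions for the sake of the example; PR #129 defines what RAMP actually declares.

from setuptools import setup, find_packages

setup(
    name="rampdemand",
    packages=find_packages(),
    # Dependencies needed to *use* the package
    install_requires=["numpy", "pandas", "matplotlib", "scipy"],
    # Extra dependencies needed only to *develop* and test it,
    # installed with something like `pip install rampdemand[dev]`
    extras_require={"dev": ["pytest", "nbconvert"]},
)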
@trevorb1 This is actually a known issue; this test fails randomly: RAMP-project/RAMP#99. I will use this review to push myself to investigate it further. If you have good suggestions of methodology/articles about testing stochastic code, they are welcome :)
@trevorb1 - this should now be addressed in https://github.com/RAMP-project/RAMP/pull/129, specifically on https://github.com/RAMP-project/RAMP/pull/129/commits/b4e728a4d82804a1009b0dde4dadabae9246ae77
@Bachibouzouk; thanks for the updates! From my perspective, points 2 and 3 under my functionality comments have now been addressed.
- The readme proposes to create a Python environment with python=3.8. This Python version will soon be outdated. Does the software support newer Python versions? If yes, I would suggest updating the Python version in the README.
@FabianHofmann - https://github.com/RAMP-project/RAMP/pull/123 bumps the python version used in RAMP to 3.10
5. In the contribution guidelines on the doc site, you mention contributors should perform qualitative testing only. In the repository CONTRIBUTING.md file, it mentions to also run tests via pytest. Please clarify what tests contributors should be running.
@trevorb1 - This has been addressed in https://github.com/RAMP-project/RAMP/pull/129 more specifically in https://github.com/RAMP-project/RAMP/pull/129/commits/9ccd839e6f0edfe7bb00d14a51301ebf4fcbc439
2. After having followed the installation instructions, which work fine, I had a look at the example at https://github.com/RAMP-project/RAMP?tab=readme-ov-file#building-a-model-with-a-python-script. What I find confusing is that I cannot display the household_1 object. The error message is clear (there are no appliances), but for newbies this might be confusing. Also, empty objects that seem this basic to the whole software should be displayable.
@FabianHofmann - This issue should be addressed within https://github.com/RAMP-project/RAMP/pull/122
2. the navigation on the left sidebar (with the partial TOC) is a bit counter-intuitive, I find. I am getting lost easily without knowing where I am. If you agree, I would suggest increasing the scope of the sidebar's TOC.
@FabianHofmann - this is addressed in https://github.com/RAMP-project/RAMP/pull/137; you can see the built documentation in the PR checks. Here is the direct link as well: https://rampdemand-rtd--137.org.readthedocs.build/en/137/intro.html
2. I find it a little confusing how the documentation site contribution guidelines and the CONTRIBUTING.md file in the repository don't match?
@trevorb1 - this should now be fixed in https://github.com/RAMP-project/RAMP/pull/138
4. In the contribution guidelines, it may be good to specify what to include in a new issue ticket and PR (or even create templates for these)? For example, for a new issue, do you want to know the OS the user is running, the version of RAMP they are running, etc.?
@trevorb1 - this has been added in https://github.com/RAMP-project/RAMP/pull/135
- There are some examples which do not have an intro at the beginning, like the fixed flat-appliance example. Such intros are, however, very valuable for understanding the context.
@FabianHofmann - This should have been addressed in https://github.com/RAMP-project/RAMP/pull/124
3. What is the test coverage of the project? Since there are not that many, it would be good to check that minimum coverage is realised.
@FabianHofmann - The test coverage has been measured for the first time and stands at 75.745%; see https://github.com/RAMP-project/RAMP/pull/136
1. In the Quick Start section, appliances are added through the method User.Appliance(...). Later on in the examples section, appliances are added through the method User.add_appliance(...). Based on the API reference, the Appliance(...) method should only be used when working with legacy code? If users are encouraged to use the add_appliance(...) method, updating the docs to reflect this would be good! (The same issue also appears in the "Appliances with multiple cycles" example.)
7. The examples are great at walking the reader through the many different functions of RAMP! However, I think in the first example it would be beneficial to explicitly write out what all the different arguments in the User(...) and add_appliance(...) calls are doing. This can maybe be done through a top-level description like in some of your other examples (as also suggested by @FabianHofmann). At first, especially for the add_appliance(...) method, it wasn't necessarily clear to me what the different arguments did. While I eventually did find the info I was looking for in the Appliance.__init__ API ref, this was not the first place I intuitively thought to look for the information (which was the add_appliance(...) method API ref).
@trevorb1 - https://github.com/RAMP-project/RAMP/pull/142 aims to address your comments; I added a docstring to add_appliance() and I provided the kwargs in the example files (the ones under ramp/examples).
Hi @Bachibouzouk; apologies for the delay in getting back to you!
- I find it a little confusing how the documentation site contribution guidelines and the CONTRIBUTING.md file in the repository don't match? @trevorb1 - this should now be fixed in https://github.com/RAMP-project/RAMP/pull/138
Thank you for unifying the contribution guidelines; looks great to me!
- In the contribution guidelines, it may be good to specify what to include in a new issue ticket and PR (or even create templates for these)? For example, for a new issue, do you want to know the OS the user is running, the version of RAMP they are running, etc.? @trevorb1 - this has been added in https://github.com/RAMP-project/RAMP/pull/135
Thank you for adding the Issue and PR templates; they look great to me!
In the Quick Start section, appliances are added through the method User.Appliance(...). Later on in the examples section, appliances are added through the method User.add_appliance(...). Based on the API reference, the Appliance(...) method should only be used when working with legacy code? If users are encouraged to use the add_appliance(...) method, updating the docs to reflect this would be good! (The same issue also appears in the "Appliances with multiple cycles" example.)
The examples are great at walking the reader through the many different functions of RAMP! However, I think in the first example it would be beneficial to explicitly write out what all the different arguments in the User(...) and add_appliance(...) calls are doing. This can maybe be done through a top-level description like in some of your other examples (as also suggested by @FabianHofmann). At first, especially for the add_appliance(...) method, it wasn't necessarily clear to me what the different arguments did. While I eventually did find the info I was looking for in the Appliance.__init__ API ref, this was not the first place I intuitively thought to look for the information (which was the add_appliance(...) method API ref).
@trevorb1 - https://github.com/RAMP-project/RAMP/pull/142 aims to address your comments, I added a docstring to add_appliance() and I provided the kwargs in the example files (the ones under ramp/examples)
Thank you for adding the API ref under add_appliance() and updating Appliance to add_appliance in the examples. Looks great to me!
From my perspective, documentation points 1, 2, 4 and 7 have now all been fully addressed. Thanks!
2. JOSS asks for reproducibility of the figure in the paper (Data Sharing & Reproducibility). Is the code online somewhere? I could not see it in the joss_paper branch of the repo.
- Same question as @FabianHofmann; is there code where I can reproduce Figure 1?
@FabianHofmann, @trevorb1 - this is addressed in https://github.com/RAMP-project/RAMP/pull/140. We introduced a seed for the random number generator in https://github.com/RAMP-project/RAMP/pull/128 so that we can make sure the figures are 100% reproducible.
3. It may be worth noting in the contributing guidelines that developers should follow Black formatting, and that contributions will be checked for it with your actions (although I do see you have the Black badge on the README.md). Alternatively, having a pre-commit file, or specifying in the contributing guidelines how to install Black in VSCode (or similar) to autolint contributions, would be good!
@trevorb1 - this is now addressed in https://github.com/RAMP-project/RAMP/pull/143
@trevorb1 @FabianHofmann, I'd appreciate a quick clarification on one point: when you ask for a comparison with "similar software/tools", do you mean other open tools or even non-open ones? Thanks.
At the bottom of the introduction page, there is a note that says "This project is under active development!". Can you please clarify what active development means? Just want to ensure this does not mean we can't trust the results. (Sorry, I know this one is a little pedantic!)
@trevorb1, thanks for pointing out that "active development" may not be self-explanatory for everyone. What we mean by it is that the project is actively maintained (e.g., if anyone opens an issue, this is taken into consideration and dealt with based on priorities) and continuously developed with new, enhanced features. In other words, while the core functionalities are stable and reliable, new features and improvements pop up regularly thanks to the contribution of the core development team as well as occasional external contributors. We saw the term "active development" being regularly used in various other repositories with this meaning, as opposed to projects "not being actively developed" anymore. We could perhaps make it more explicit with the following rewording:
"This project is actively maintained and developed. This means that while we provide stable and reliable software releases, we keep developing new features and improvements for upcoming, upgraded versions of the software."
@AdamRJensen - is it still possible to amend the list of authors of the JOSS paper? If yes, there are two people we would like to add. One of them contributed to refactoring the code which led to the version of RAMP we are submitting to JOSS, and the other one fixed several bugs and opened interesting discussions in issues quite recently. Let me know if this would be ok on your side and I will provide their information.
this is addressed in https://github.com/RAMP-project/RAMP/pull/140. We introduced a seed for the random number generator in https://github.com/RAMP-project/RAMP/pull/128 so that we can make sure the figures are 100% reproducible.
Thanks, @Bachibouzouk! Adding the seed for the RNG looks great. My first point on general checks is considered complete.
this is now addressed in https://github.com/RAMP-project/RAMP/pull/143
Thanks @Bachibouzouk for adding in formatting checks! My third point on documentation is considered complete.
I'd appreciate a quick clarification on one point: when you ask for a comparison with "similar software/tools", do you mean other open tools or even non-open ones? Thanks.
@FLomb From my perspective, I think it would depend a little on the state of the field. Since I am not that familiar with the state of the art for stochastic load generators, from the paper I am looking for something that describes how RAMP fits into the current literature. If there are similar tools, but only proprietary ones, then mentioning this would be helpful. If instead there are similar tools that are open, mentioning how RAMP is different from them is needed. I'm not necessarily looking for a huge comparison table with every similar tool; rather, just what other main tools exist for stochastic load generation and what literature/software gap RAMP fills.
I get the feeling you allude to it with the sentences in the paper:
In fact, the software is designed to require only a basic understanding of the expected user activity patterns and owned appliances as inputs, to be provided in tabular (.xlsx) format. Then, it leverages stochasticity (using the random package) to make up for the lack of more detailed information and to account for the unpredictability of human behaviour.
Just expanding this to explicitly say what the software gap is. For example, if another tool similar to RAMP requires a proprietary input format that limits its use case, this would be good to mention. Or if a different tool is limited to a certain sector or end-use, this would be good to note.
Please let me know if this answers your question, or if more clarification is needed!
thanks for pointing out that "active development" may not be self-explanatory for everyone. What we mean by it is that the project is actively maintained (e.g., if anyone opens an issue, this is taken into consideration and dealt with based on priorities) and continuously developed with new, enhanced features. In other words, while the core functionalities are stable and reliable, new features and improvements pop up regularly thanks to the contribution of the core development team as well as occasional external contributors. We saw the term "active development" being regularly used in various other repositories with this meaning, as opposed to projects "not being actively developed" anymore. We could perhaps make it more explicit with the following rewording:
"This project is actively maintained and developed. This means that while we provide stable and reliable software releases, we keep developing new features and improvements for upcoming, upgraded versions of the software."
Thank you for the clarification, @FLomb! Your point of "We saw the term 'active development' being regularly used in various other repositories with this meaning" is totally fair. I prefer your suggested rewording, as I think it is clearer; however, this is a little subjective. Therefore, now that I have extra clarification, I will consider my point 6 on documentation complete, regardless of whether you update the wording or not. :)
@AdamRJensen @trevorb1, I summarized below which comments from @trevorb1 are already addressed:
Thanks, @Bachibouzouk! Adding the seed for the RNG looks great. My first point on general checks is considered complete.
Thank you so much for the detailed description, @FLomb! This fully answers my question
All comments have been addressed
@Bachibouzouk; thanks for the updates! From my perspective, points 2 and 3 under my functionality comments have now been addressed.
With https://github.com/RAMP-project/RAMP/pull/126#issuecomment-2046507167 all comments have been addressed
From my perspective, documentation points 1, 2, 4 and 7 have now all been fully addressed. Thanks!
Thanks @Bachibouzouk for adding in formatting checks! My third point on documentation is considered complete.
Therefore, now that I have extra clarification, I will consider my point 6 on documentation complete, regardless of whether you update the wording or not. :)
With https://github.com/RAMP-project/RAMP/pull/129#issuecomment-2050345429 all comments have been addressed
Points 1 and 2 still need to be addressed, work in progress here :)
@AdamRJensen - is it still possible to amend the list of authors of the JOSS paper? If yes, there are two people we would like to add. One of them contributed to refactoring the code which led to the version of RAMP we are submitting to JOSS, and the other one fixed several bugs and opened interesting discussions in issues quite recently. Let me know if this would be ok on your side and I will provide their information.
@Bachibouzouk It is technically possible, but I'd like to ensure that they qualify for co-authorship. To avoid the danger of false authorships, please explain the contributions of the two potential additional authors and tag them here, and why they weren't listed as authors from the start.
@AdamRJensen I will tag and list the two additional co-authors below:
@dhungelgd did a large chunk of the refactoring of the ramp v0.3.1 code; he opened a few issues and tested that the refactored code was providing the same behavior as the original code. At the time he worked, as a Master's student, on the Reiner Lemoine Institut organisation's fork: https://github.com/rl-institut/RAMP.
@JW-Kraft used RAMP to simulate productive uses and submitted pull requests to address problems that were preventing the use of RAMP for productive-use cases. He also documented some issues, fixed some himself, and worked on the RAMP documentation.
@Bachibouzouk could you link to a list of their PRs/issues to RAMP? An explanation as to why they weren't included from the beginning would also be nice.
@Bachibouzouk could you link to a list of their PRs/issues to RAMP? An explanation as to why they weren't included from the beginning would also be nice.
@dhungelgd wasn't included in the author list because at the time he worked on RAMP (2022) he was a Master's student at RLI, and I was the person who then created the PRs towards the official RAMP repo. He left RLI before being able to be present at many RAMP-dev meetings and was therefore not known by the other RAMP developers. When we submitted the paper in November 2023, @FLomb made the list of authors, and it did not strike me at the time that @dhungelgd was not listed. It is only when I recently consulted the list of contributors (https://github.com/RAMP-project/RAMP/graphs/contributors) that I realized we had not included @dhungelgd in the authors list.
https://github.com/RAMP-project/RAMP/pull/36/commits
https://github.com/rl-institut/RAMP/issues/15 https://github.com/rl-institut/RAMP/issues/12 https://github.com/rl-institut/RAMP/issues/10
@JW-Kraft wasn't included in the author list because when we submitted he hadn't yet contributed: https://github.com/RAMP-project/RAMP/pull/132 https://github.com/RAMP-project/RAMP/pull/128 https://github.com/RAMP-project/RAMP/pull/116 https://github.com/RAMP-project/RAMP/pull/111
https://github.com/RAMP-project/RAMP/issues/127 https://github.com/RAMP-project/RAMP/issues/80 https://github.com/RAMP-project/RAMP/issues/78 https://github.com/RAMP-project/RAMP/issues/79
@Bachibouzouk Thanks for the clarification, it seems reasonable to me. Go ahead and add the two additional authors to the paper.
Submitting author: @FLomb (Francesco Lombardi)
Repository: https://github.com/RAMP-project/RAMP
Branch with paper.md (empty if default branch): joss-paper
Version: 0.5.2
Editor: @AdamRJensen
Reviewers: @FabianHofmann, @trevorb1
Archive: 10.5281/zenodo.11526597
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@noah80 & @FabianHofmann, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @AdamRJensen know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
Checklist for @noah80
Checklist for @FabianHofmann
Checklist for @trevorb1