Closed by editorialbot 1 year ago
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1109/tii.2014.2349359 is OK
- 10.1109/phm.2008.4711422 is OK
- 10.1016/j.ress.2017.11.021 is OK
- 10.3233/atde210095 is OK
- 10.1007/s10489-021-03004-y is OK
- 10.1109/rams.2015.7105079 is OK
- 10.48550/arXiv.1907.12207 is OK
- 10.3390/math9233137 is OK
- 10.1109/icit.2019.8754956 is OK
- 10.1016/j.asoc.2020.106113 is OK
- 10.1016/j.conengprac.2021.104969 is OK
MISSING DOIs
- None
INVALID DOIs
- None
Software report:
github.com/AlDanial/cloc v 1.88 T=0.18 s (768.8 files/s, 120737.2 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          98           3700           2870          10254
Jupyter Notebook                 6              0           3658            549
TeX                              2             68              0            382
Markdown                        30            185              0            305
YAML                             5             37             16            235
TOML                             1              5              0             37
-------------------------------------------------------------------------------
SUM:                           142           3995           6544          11762
-------------------------------------------------------------------------------
gitinspector failed to run statistical information for the repository
Wordcount for paper.md is 779
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@AnnikaStein, @ulf1, @Athene-ai Thanks for agreeing to review this submission! This is the review thread for the paper. All of our communications will happen here from now on. :+1:
As you can see above, you each should use the command @editorialbot generate my checklist to create your review checklist. @editorialbot commands need to be the first thing in a new comment.
As you go over the submission, please check any items that you feel have been satisfied (and if you leave notes on each item, that's even better). There are also links to the JOSS reviewer guidelines. I find it particularly helpful to also use the JOSS review criteria and review checklist docs as supplements/guides to the reviewer checklist @editorialbot will make for you.
The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention openjournals/joss-reviews#5294
so that a link is created to this Issue thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.
We aim for reviews to be completed within about 4 weeks. Please let me know if any of you require more time (that's perfectly okay). We can also use @editorialbot to set automatic reminders if you know you'll be away for a known period of time.
Please feel free to ping me (@matthewfeickert) if you have any questions/concerns.
Issue created here: https://github.com/lucianolorenti/ceruleo/issues/25 Fixed in: https://github.com/lucianolorenti/ceruleo/pull/26
ceruleo.dataset.catalog.PHMDataset2018: the demo dataset (5 GB) is provided via Google Drive, which is not an appropriate place to host it. What is the best practice that JOSS prefers?
CONTRIBUTING.md guidelines are missing; e.g. see https://github.com/fzi-forschungszentrum-informatik/TSInterpret#-contributing
Hello @lucianolorenti, I had problems installing the package: https://github.com/openjournals/joss-reviews/issues/5294#issuecomment-1484532861
:wave: @lucianolorenti, @AnnikaStein, @ulf1, @Athene-ai Just checking in on things. It seems that the review is ongoing, which is good and that there are GitHub Issues being opened given the discussions. :+1:
As the review has been going for 3 weeks at this point I'll have @editorialbot give us reminders in 2 weeks to follow up on anything outstanding.
@editorialbot remind @AnnikaStein in 2 weeks
Reminder set for @AnnikaStein in 2 weeks
@editorialbot remind @ulf1 in 2 weeks
Reminder set for @ulf1 in 2 weeks
@editorialbot remind @Athene-ai in 2 weeks
Reminder set for @Athene-ai in 2 weeks
@matthewfeickert I have just made my review
@Athene-ai At the moment the "Reproducibility" check in your review is left blank. Can you please add a comment to the checklist about why, and if there is an issue with the state of reproducibility of the submission open a GitHub Issue on https://github.com/lucianolorenti/ceruleo for it?
@ulf1 You've added helpful comments and notes to your review https://github.com/openjournals/joss-reviews/issues/5294#issuecomment-1484532861, which is exactly what we hope reviewers will do. :+1: Some of your notes, though, point out problems or typos in the submission. Can you please translate all of those problems into GitHub Issues on https://github.com/lucianolorenti/ceruleo, like you did with https://github.com/lucianolorenti/ceruleo/issues/23 (maybe prefixing the titles with [JOSS Review] to help @lucianolorenti differentiate them)?
Just done
:wave: @AnnikaStein, please update us on how your review is going (this is an automated reminder).
:wave: @ulf1, please update us on how your review is going (this is an automated reminder).
:wave: @Athene-ai, please update us on how your review is going (this is an automated reminder).
Hello @lucianolorenti there are two open issues in my review https://github.com/openjournals/joss-reviews/issues/5294#issuecomment-1484532861
Hi! thank you. I will take a look
Hi @ulf1 I wrote here some comments about the open issues
Regarding the PHMDataset2018: I am currently using the original folder from the challenge: https://phmsociety.org/conference/annual-conference-of-the-phm-society/annual-conference-of-the-prognostics-and-health-management-society-2018-b/phm-data-challenge-6/ I'm not entirely happy about using Google Drive. Any suggestions on where else it could be stored? The dataset owners should probably also be contacted if the data is to be stored somewhere else.
Regarding the contribution: there was a CONTRIBUTING.md, but it was not linked in the README, so I added a brief comment in the README with a link to the guidelines. With respect to the CLA, I prefer not to use one.
Please let me know if this is enough in its current state.
While there are numerous repositories with published and unpublished models for remaining useful life regression and other PdM tasks, as well as multiple libraries for feature extraction from time-series data, it is hard to find libraries that combine both aspects: focusing on predictive maintenance while also including tools for model comparison in this context.
To the best of my knowledge, the closest libraries in spirit are the prog_models and prog_algs libraries from NASA (https://github.com/nasa/prog_models), which are intended to be used together. These libraries have a broader scope, as they allow simulation, but they don't provide as many mechanisms for feature extraction from time-series data. They include LSTM models and Dynamic Mode Decomposition wrapped in their own library-specific classes, to be used, I think, with the third component: the prog_server library. CeRULEo, in contrast, focuses on a data-driven approach using industrial data, where a clear simulation model may not be available or is expensive to develop. One of the guiding principles of the library is to avoid adding extra classes wrapping the models themselves, making them easily deployable in any production environment. The idea is to be as model-library-agnostic as possible.
The CeRULEo library originated from the need to iterate easily on predictive maintenance models, particularly for estimating remaining useful life (RUL), during collaborations with different companies. Our ultimate goal is to develop production-ready models. However, existing libraries did not meet our specific requirements, which included feature extraction focused on run-to-failure cycles and model evaluation within the context of predictive maintenance.
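To illustrate the kind of run-to-failure preprocessing described above, here is a generic sketch in plain pandas (an illustration of the concept only, not CeRULEo's actual API; the column names are made up for the example):

```python
import pandas as pd

def add_rul_target(df: pd.DataFrame, cycle_col: str = "cycle", time_col: str = "t") -> pd.DataFrame:
    """Add a remaining-useful-life target column: within each
    run-to-failure cycle, the RUL at each sample is the time left
    until that cycle's last (failure) sample."""
    out = df.copy()
    end_of_cycle = out.groupby(cycle_col)[time_col].transform("max")
    out["RUL"] = end_of_cycle - out[time_col]
    return out

# Two run-to-failure cycles of different lengths
df = pd.DataFrame({
    "cycle":  [0, 0, 0, 1, 1],
    "t":      [0, 1, 2, 0, 1],
    "sensor": [0.1, 0.3, 0.9, 0.2, 0.8],
})
print(add_rul_target(df)["RUL"].tolist())  # [2, 1, 0, 1, 0]
```

The RUL label counts down to zero at the end of each cycle, which is the supervised target that downstream regressors (from any model library) are trained against.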
Do you think I should add another paragraph to paper.md about these libraries?
Hi All, just following up on the state of the review.
I believe you are done and your review recommendation (given no comments of any type) is to recommend publication?
Yes, I recommend publication
Hi All, just following up on the state of the review.
- @AnnikaStein, your checklist is complete and your issue lucianolorenti/ceruleo#25 ([JOSS Review] Comments on the software paper) was resolved, so is your review complete, and if so, what is your review recommendation?
- @Athene-ai, I believe you are done and your review recommendation (given no comments of any type) is to recommend publication?
- @ulf1, can you update us on the state of things given your review checklist, the Issues you've opened (lucianolorenti/ceruleo#23, "Installation failed and dependencies have no versions", now resolved), and #5294 (comment)?
Hi Matthew,
indeed, my review is complete and I recommend this submission for publication.
- Regarding the State of the field:
While there are numerous repositories with published and unpublished models for remaining useful life regression and other PdM tasks, as well as multiple libraries for feature extraction from time-series data, it is hard to find libraries that combine both aspects: focusing on predictive maintenance while also including tools for model comparison in this context.
To the best of my knowledge, the closest libraries in spirit are the prog_models and prog_algs libraries from NASA (https://github.com/nasa/prog_models), which are intended to be used together. These libraries have a broader scope, as they allow simulation, but they don't provide as many mechanisms for feature extraction from time-series data. They include LSTM models and Dynamic Mode Decomposition wrapped in their own library-specific classes, to be used, I think, with the third component: the prog_server library. CeRULEo, in contrast, focuses on a data-driven approach using industrial data, where a clear simulation model may not be available or is expensive to develop. One of the guiding principles of the library is to avoid adding extra classes wrapping the models themselves, making them easily deployable in any production environment. The idea is to be as model-library-agnostic as possible.
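As a concrete example of the PdM-specific model comparison mentioned above, one widely used option is the asymmetric scoring function popularized by the 2008 PHM data challenge. The sketch below is a generic illustration of that metric, not CeRULEo's API:

```python
import numpy as np

def phm08_score(y_true, y_pred) -> float:
    """Asymmetric scoring function from the 2008 PHM data challenge:
    overestimating RUL (risking a too-late maintenance action) is
    penalized more heavily than underestimating it."""
    d = np.asarray(y_pred, dtype=float) - np.asarray(y_true, dtype=float)
    return float(np.sum(np.where(d < 0, np.exp(-d / 13.0) - 1.0, np.exp(d / 10.0) - 1.0)))

# Overestimating RUL by 10 units costs more than underestimating by 10
print(phm08_score([50.0], [60.0]) > phm08_score([50.0], [40.0]))  # True
```

Asymmetric metrics like this are what distinguish PdM model comparison from generic regression evaluation, since the operational cost of a late prediction differs from that of an early one.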
Do you think I should add another paragraph to paper.md about these libraries?
@matthewfeickert Should the NASA software be cited in the paper?
With respect to the CLA, I prefer not to use one.
There is no need to ask for a CLA.
Do you think I should add another paragraph to paper.md about these libraries? ... Should the NASA software be cited in the paper?
I will leave it up to the authors what they feel as necessary to include in the paper as long as they are not excluding previous work in the area. However, given that this is a question that has come up I would suggest that some (short!) further clarifying discussion be added to clarify the position/scope of the library in the ecosystem.
@lucianolorenti as I just read a chunk of the paper to answer the above question, I'll give you a heads-up that in its current state https://github.com/lucianolorenti/ceruleo/blob/02600e88a24f1b5ed982fe362f7664285b525870/paper/paper.md?plain=1#L32 "Industry 4.0" is jargon that will need to be updated.
Ok.
Assuming that the changes are made in the paper, I recommend the software for publication.
Assuming that the changes are made in the paper, I recommend the software for publication.
Great! @ulf1 @lucianolorenti can one of you please open up a corresponding Issue for the remaining edits on https://github.com/lucianolorenti/ceruleo/issues so that this is tracked through both the Issue tracker and this review?
Here is the issue lucianolorenti/ceruleo#29
I started working on the changes to the paper, I will try to finish ASAP. This week has been a bit complicated
Hi! I updated the article in the sota_libraries branch, addressing the issues raised.
You can read the PDF in the following action result.
Please let me know what you think.
@matthewfeickert @ulf1 Hi! Sorry to bother you, but I wanted to give you a heads-up: I've made some updates addressing the issues raised a few weeks ago. Just thought I'd let you know in case you missed them.
Yes, I recommend publication.
@lucianolorenti @matthewfeickert
@editorialbot remove @Athene-ai from reviewers
@Athene-ai removed from the reviewers list!
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1109/tii.2014.2349359 is OK
- 10.1109/phm.2008.4711422 is OK
- 10.1016/j.ress.2017.11.021 is OK
- 10.3233/atde210095 is OK
- 10.1007/s10489-021-03004-y is OK
- 10.1109/rams.2015.7105079 is OK
- 10.48550/arXiv.1907.12207 is OK
- 10.3390/math9233137 is OK
- 10.1109/icit.2019.8754956 is OK
- 10.1016/j.asoc.2020.106113 is OK
- 10.1016/j.conengprac.2021.104969 is OK
MISSING DOIs
- None
INVALID DOIs
- None
@lucianolorenti Apologies for the delay, I've been travelling for work for the last several weeks, but I think this submission is very close to publication. Can you please revise the jargon I mentioned in https://github.com/openjournals/joss-reviews/issues/5294#issuecomment-1560521972 ?
After that, all that's left to do before publication is to ensure that there is a long-term public archive of the code that was reviewed. In this case the code was v2.3.0, though there has been some development between then and the current HEAD.
We'd suggest depositing the code either with Zenodo or with figshare to get an archive with a DOI. If you use Zenodo there is an (optional) GitHub integration that can create a Zenodo archive for you anytime you make a GitHub release of your code.
If you'd like to make a new release of CeRULEo to trigger the uploader that's fine, and we can have @editorialbot update the version listed in the review. You're also welcome to not do that and just upload the state of the repository at v2.0.4. Either way, we'll just need you to share the resulting DOI of the archive created in this thread.
Let me know if you have any questions. :+1:
@AnnikaStein @ulf1 Thank you very much for your reviews! I think they have both helped this submission reach a more improved state before publication.
Hi @matthewfeickert , don't worry. I made some changes regarding the jargon in https://github.com/openjournals/joss-reviews/issues/5294#issuecomment-1585797582
Let me know if that is what you meant
@lucianolorenti Apologies for the delay; I've been traveling for work for the last 3 weeks and have been slow in returning to things.
I made some changes regarding the jargon in https://github.com/openjournals/joss-reviews/issues/5294#issuecomment-1585797582
Thanks. Your sota_libraries branch clarifies things. From there:
In Industry 5.0, the industrial machines produce a large amount of data which can be used to predict an asset’s life [@khan2023changes]. RUL estimation uses prediction techniques to forecast a machine's future performance based on historical data, enabling early identification of potential failures and prompt pre-failure interventions.
still seems a little jargony, but I'll accept it given the citation of Khan. :+1: Along the lines of citations, though, I'm noticing that many of the references you give are missing the DOIs listed on their journal pages. A short example:
@article{christ2018time,
title={Time series feature extraction on basis of scalable hypothesis tests (tsfresh--a python package)},
author={Christ, Maximilian and Braun, Nils and Neuffer, Julius and Kempa-Liehr, Andreas W},
journal={Neurocomputing},
volume={307},
pages={72--77},
year={2018},
publisher={Elsevier}
}
...
@article{van2022predictive,
title={Predictive maintenance for industry 5.0: behavioural inquiries from a work system perspective},
author={van Oudenhoven, Bas and Van de Calseyde, Philippe and Basten, Rob and Demerouti, Evangelia},
journal={International Journal of Production Research},
pages={1--20},
year={2022},
publisher={Taylor \& Francis}
}
@article{khan2023changes,
title={Changes and improvements in Industry 5.0: A strategic approach to overcome the challenges of Industry 4.0},
author={Khan, Moin and Haleem, Abid and Javaid, Mohd},
journal={Green Technologies and Sustainability},
volume={1},
number={2},
pages={100020},
year={2023},
publisher={Elsevier}
}
all have DOIs posted. Can you please make sure that DOIs are included for all references that provide them?
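For instance, the first entry would gain a doi field along these lines (the DOI value shown is what I believe the journal page lists, but please verify it before committing):

```bibtex
@article{christ2018time,
  title     = {Time series feature extraction on basis of scalable hypothesis tests (tsfresh -- a python package)},
  author    = {Christ, Maximilian and Braun, Nils and Neuffer, Julius and Kempa-Liehr, Andreas W},
  journal   = {Neurocomputing},
  volume    = {307},
  pages     = {72--77},
  year      = {2018},
  publisher = {Elsevier},
  doi       = {10.1016/j.neucom.2018.03.067}
}
```

The same pattern applies to the other entries: add a doi field with the identifier from each paper's journal page.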
@editorialbot commands
Hello @lucianolorenti, here are the things you can ask me to do:
# List all available commands
@editorialbot commands
# Get a list of all editors's GitHub handles
@editorialbot list editors
# Check the references of the paper for missing DOIs
@editorialbot check references
# Perform checks on the repository
@editorialbot check repository
# Adds a checklist for the reviewer using this command
@editorialbot generate my checklist
# Set a value for branch
@editorialbot set joss-paper as branch
# Generates the pdf paper
@editorialbot generate pdf
# Generates a LaTeX preprint file
@editorialbot generate preprint
# Get a link to the complete list of reviewers
@editorialbot list reviewers
@editorialbot check references
Submitting author: @lucianolorenti (Luciano Rolando Lorenti) Repository: https://github.com/lucianolorenti/ceruleo Branch with paper.md (empty if default branch): Version: v2.0.5 Editor: @matthewfeickert Reviewers: @AnnikaStein, @ulf1 Archive: 10.5281/zenodo.8187300
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@AnnikaStein & @ulf1 & @Athene-ai, your review will be checklist-based. Each of you will have a separate checklist that you should update when carrying out your review. First of all, you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @matthewfeickert know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @AnnikaStein
📝 Checklist for @Athene-ai
📝 Checklist for @ulf1