whedon closed this issue 2 years ago
Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @danhey, @benjaminpope it looks like you're currently assigned to review this paper :tada:.
:warning: JOSS reduced service mode :warning:
Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.
:star: Important :star:
If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that with GitHub's default behaviour you will receive notifications (emails) for all reviews 😿
To fix this do the following two things:
For a list of things I can do to help you, just type:
@whedon commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@whedon generate pdf
Software report (experimental):
github.com/AlDanial/cloc v 1.88 T=0.06 s (535.1 files/s, 76350.6 lines/s)
-------------------------------------------------------------------------------
Language          files   blank   comment   code
-------------------------------------------------------------------------------
Python               10     483      1036   1812
TeX                   1      21         0    402
reStructuredText     13     207       192    311
Markdown              3      30         0    121
YAML                  3       0         0     34
DOS Batch             1       8         1     26
make                  1       4         6      9
TOML                  1       0         0      6
-------------------------------------------------------------------------------
SUM:                 33     753      1235   2721
-------------------------------------------------------------------------------
Statistical information for the repository 'affe02802b3915c638d92384' was
gathered on 2021/06/04.
The following historical commit information, by author, was found:
Author            Commits   Insertions   Deletions   % of changes
Ashley Chontos        129        21985       19881          93.97
Maryum Sayeed          10          444         241           1.54
Pavadol Yamsiri         1         1233         261           3.35
danxhuber               9          344         165           1.14
Below are the number of rows from each author that have survived and are still
intact in the current revision:
Author            Rows   Stability   Age   % in comments
Ashley Chontos    3162        14.4   1.2            9.84
Maryum Sayeed      169        38.1   0.0            0.59
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1103/RevModPhys.93.015001 is OK
- 10.1051/0004-6361/201322068 is OK
- 10.3847/1538-3881/aabc4f is OK
- 10.1038/nature12419 is OK
- 10.1088/0067-0049/210/1/1 is OK
- 10.1051/0004-6361/201424181 is OK
- 10.1111/j.1365-2966.2009.16030.x is OK
- 10.1051/0004-6361/201015185 is OK
- 10.1088/0004-637X/743/2/143 is OK
- 10.1051/0004-6361/200913266 is OK
- 10.1109/MCSE.2007.55 is OK
- 10.1051/0004-6361/200912944 is OK
- 10.3847/1538-3881/abcd39 is OK
- 10.1038/s41586-020-2649-2 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.3847/1538-4365/aa97df is OK
- 10.1111/j.1365-2966.2011.18968.x is OK
- 10.1093/mnrasl/sly123 is OK
MISSING DOIs
- 10.1017/cbo9781139333696.004 may be a valid DOI for title: Solar-like oscillations: An observational perspective
- 10.1553/cia160s74 may be a valid DOI for title: Automated extraction of oscillation parameters for Kepler observations of solar-type stars
INVALID DOIs
- None
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
👋 @danhey @benjaminpope Thank you so much for agreeing to review! You can find the article in the comment box above ⬆️ , and the checklist and software repository linked in the first comment box on this issue. I think you're good to go -- please let me know if you need anything else.
Again, JOSS is an open review process and we encourage communication between the reviewers, the submitting author, and the editor. So please feel free to ask questions at any time, I'm always around.
Thank you so much @benjaminpope for providing this extremely helpful review so quickly! We also acknowledge the importance of reproducing the paper figure but have been stuck on the best way to implement this. We were wondering if JOSS has a recommended data-hosting website, something analogous to CDS for data releases associated with astronomy papers. Providing the power spectra to reproduce this figure would be about ~300 MB of data and therefore probably too large to provide directly on our GitHub repo.
Hi all, this has been a pleasure to review! It's a great piece of software, and I have no doubt it will be extremely useful for asteroseismology. Overall, I believe pySYD satisfies the JOSS criteria for publication after a few points have been addressed.
I agree with all of Ben's comments, especially regarding the distinction between a command-line vs. API tool. It appears to me that the functionality is there for using pySYD from say, a Jupyter notebook, but this has not been documented.
I will try to structure my review in the same format as Ben's.
- `tqdm` appears to be a requirement that is not listed in pySYD's dependencies.
- Regarding the `pysyd.utils.stitch_data()` function for gapped data: perhaps a warning should also be thrown when this function is used? This could be fixed in one step: provide an example for a TESS star that has a large data gap in it!

> Providing the power spectra to reproduce this figure would be about ~300 MB of data and therefore probably too large to provide directly on our GitHub repo.
Only a suggestion, but would it be easier to instead write a small utility function in pySYD that calculates the power spectrum given an input light curve? This would also be useful for the general functionality of PySYD, allowing people to supply a light curve. It would also benefit reproducibility: I have seen a lot of different power spectra normalizations in my time ...
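A utility like the one Dan describes could start from something as simple as the numpy-only sketch below. The function name, units, and normalization here are illustrative assumptions, and it assumes an evenly sampled series — real Kepler/TESS light curves are gapped and unevenly sampled, so a Lomb-Scargle periodogram (e.g. `astropy.timeseries.LombScargle`) would be the more robust choice in practice:

```python
import numpy as np

def compute_power_spectrum(time, flux):
    """Sketch: power spectrum of an evenly sampled light curve.

    time : array of days; flux : relative flux (dimensionless).
    Returns frequency in muHz and power spectral density in ppm^2/muHz.
    """
    dt = np.median(np.diff(time)) * 86400.0          # cadence in seconds
    flux_ppm = 1e6 * (flux / np.median(flux) - 1.0)  # convert to ppm
    n = len(flux_ppm)
    freq = np.fft.rfftfreq(n, d=dt) * 1e6            # Hz -> muHz
    amp = np.abs(np.fft.rfft(flux_ppm)) * 2.0 / n    # single-sided amplitude, ppm
    # one common PSD normalization: amplitude^2 per frequency bin width
    psd = amp**2 / (freq[1] - freq[0])
    return freq[1:], psd[1:]                          # drop the zero frequency
```

The normalization here (single-sided amplitude squared per bin width, in ppm²/µHz) is just one of the many conventions Dan alludes to; whichever one pySYD adopts, documenting it explicitly would help reproducibility.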
Agree with all of @danhey's comments.
Re @ashleychontos' question about data hosting - there are a few ways to do this... you could just upload a static .ipynb that implements the required calculations, and describe where the input data come from - I imagine these are programmatically pulled from MAST or KASOC/TASOC. Or you could do it all with git-lfs. Finally, many universities have large, permanent data-storage solutions - e.g. UQ does - you could upload the relevant datasets there and link to those.
@ashleychontos Agree with @benjaminpope. You can provide code that queries and downloads the data and then makes the figure from that downloaded data. Then it is the user's responsibility to store the data (for ~300 MB, not too big an ask), but all they need to do to reproduce the figure is run the provided code.
:wave: @danhey, please update us on how your review is going (this is an automated reminder).
:wave: @benjaminpope, please update us on how your review is going (this is an automated reminder).
@mbobra apologies in advance, since this is all of our (the authors') first time preparing anything like this. First and foremost, what is the most appropriate way to address and comment on either/both of the reviews (similar to a resubmission with AAS)? Below are comments from the reviews that we could use some clarification on before proceeding.
If the above test is not sufficient, we have discussed two other alternatives and would like to hear any comments or suggestions (@danhey included). The first is providing enough examples that cover all optional arguments to make sure no error is produced when they are all called (additionally, a plot could be produced to show the before and after when those arguments are used). The second idea was to have a counter that runs through all functions and if successful, asserts that the number returned equals the number of functions in the pipeline. This would also account for optional functions that are not required for the software to run successfully.
@ashleychontos thanks! So as a gold-standard example, @dfm did this for the exoplanet review: https://github.com/openjournals/joss-reviews/issues/3285#issuecomment-859043010, in which he links to some GitHub issues that resolve each of the reviews as a checklist. So you might like to create a checklist in which you respond point by point to each of the reviews, or create a separate issue for each point, or something like that.
If the examples in the documentation match the output verbatim, that will be good! It will also be good to have richer output - for example, figures (autosaved rather than click-through) and terminal output, so that we can test that it reproduces the examples exactly. (And a GitHub Action to test one or more examples, even with just a checksum, would meet the continuous-testing requirement.)
Agree with @benjaminpope. I also think a good unit test for GitHub Actions would be to simply run pySYD on a test star for which numax and dnu are well known, and assert that the output of the pipeline is np.isclose() to what you expect. This means that if you make some changes to the code, you will instantly see if the new output is significantly different, and the test will fail.
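A regression test of the kind Dan describes might be sketched as follows — note that `run_pipeline`, the target star, and the reference values below are placeholders, not the actual pySYD API; the real test would invoke the pipeline end-to-end and compare against values taken from a trusted run:

```python
import numpy as np

def run_pipeline(star_id):
    # Stand-in for running the pipeline end-to-end on a test star;
    # in reality this would call the package's own entry point.
    return {"numax": 3090.0, "dnu": 135.1}

def test_known_star():
    result = run_pipeline("KIC 9139163")
    # reference values recorded from a trusted earlier run
    assert np.isclose(result["numax"], 3090.0, rtol=0.01)
    assert np.isclose(result["dnu"], 135.1, rtol=0.01)
```

Run under pytest in CI, this fails loudly whenever a code change shifts numax or dnu by more than the chosen tolerance.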
Thank you both @benjaminpope and @danhey again for such quick and helpful responses! I will follow Ben's suggestion and open up a separate pull request for each review and update those accordingly.
> First and foremost, what is the most appropriate way to address and comment on either/both of the reviews (similar to a resubmission with AAS)?
👋 @ashleychontos It looks like you already got your answer -- but in any case, I will provide a rather vague response: as long as the reviewers feel (1) you have addressed their concerns and (2) the software adheres to the JOSS guidelines, I will accept the submission!
@mbobra - can you provide an update about this submission? It seems stuck...
I am going to pause the review. This gives @ashleychontos time to improve their code and also gives reviewers a break from keeping up with this review thread.
@ashleychontos There's nothing wrong (or negative) with pausing a review. If you know you'll be done by a certain time, we can keep it paused. If you're not sure how long all this will take (or if you no longer want to pursue it), you can withdraw this submission and resubmit at a later date. There's nothing wrong with that and it won't impact your next submission.
Thank you for clarifying @mbobra - the primary three developers for pySYD are tied up at the moment (which includes myself, currently busy with postdoc applications), hence the delay. I think we still want to pursue it and also agree with all the reviewers' comments, but a likely time scale for when I can revisit this is ~2 months. Is that an OK amount of time to be paused?
In my opinion, this is getting close to the far edge of acceptable, but if the reviewers are willing to wait and then take another look at that time, we can accept it.
Hi all, I am happy to wait until the authors are ready. No rush here!
Fine by me!
——————————————— Dr Benjamin Pope (he/him) Lecturer in Astrophysics University of Queensland benjaminpope.github.io
Thanks @danhey and @benjaminpope!
@whedon remind @ashleychontos @mbobra @danielskatz in 2 months
Reminder set for @ashleychontos @mbobra @danielskatz in 2 months
@ashleychontos How is it going? Do you think this submission is ready to undergo the rest of the review process?
If not, I suggest you withdraw the submission and submit again. The review process is not really a long-term holding place, and we're at the 8-month mark here so it is time to wrap up or withdraw. However, there's absolutely no issue whatsoever with re-submitting -- and if you already have willing reviewers at the ready, the editor will be more than happy to re-assign them to the review. And I'm also happy to act as the editor again! You can request me when you re-submit. So basically you lose nothing and gain the time you need to polish this up. Please let me know what you think!
@whedon remind @ashleychontos @mbobra @danielskatz in 2 weeks
Reminder set for @ashleychontos @mbobra @danielskatz in 2 weeks
@danhey @benjaminpope The submitting author agreed to wrap this up in two weeks. Thank you for all your efforts thus far and I apologize for taking up so much of your volunteer time. Are you two still available to review? Please let me know!
No problem at all!
Certainly!
Apologies for the delay, and thank you so much @mbobra @danhey @benjaminpope for your patience! Job season pretty much consumed the last 4 months of my life. As Monica said, though, I will be ramping this back up to push it over the finish line in the next couple of weeks. Thanks again!
👋 @ashleychontos We'll be hitting the two-week mark tomorrow -- will the code be ready to review at that time?
@danielskatz This submission has been in review for 10 months -- since June 4, 2021. The submitting authors don't appear to be quite ready yet -- in October 2021, they asked for two more months; this month, they asked for a couple more weeks (which have since passed). However, the reviewers and I are happy to continue reviewing and editing this submission. Can you advise on how we should proceed here?
Edit: ping @openjournals/joss-eics
Hi @mbobra - thank you very much for staying on top of this. We are running a bit behind on the resubmission, but the other main developer and I are meeting on Friday to set a more definitive, realistic schedule for getting this completed. Of course I understand if you need to take action in the meantime, but I wanted to let you know that we do plan to finish this and will have a firmer idea of when on Friday.
No worries @ashleychontos. We can continue here. Yes I think it is a good idea to come up with a realistic schedule out of respect for the reviewers, who do this on their volunteer time and have been hanging around here for 10 months!
Great, thank you @mbobra!
@MaryumSayeed and I just finished chatting about the action items for pySYD and both agreed that 2 weeks seems like a reasonable amount of time to finish up the remaining tasks (also paging @danxhuber). However, she and I will reconvene in a week to make sure we are still on track. I have a general question (that could affect the timeline) -- does the JOSS submission depend on any current repo "issues"?
Again, many thanks all (@benjaminpope @danhey) for being incredibly patient! We are actively working on addressing the original comments/suggestions/etc. now :)
Thanks for the update, @ashleychontos! To answer your question -- it depends.
The JOSS reviewers will go through the checklist above to determine whether pySYD satisfies the review criteria. If pySYD does not satisfy some given criterion, they will open an issue on the repo. Those issues must be resolved before pySYD passes review. However, you absolutely do not need to resolve other issues on the repo that are unrelated to the review criteria. Does that answer the question?
👋 @ashleychontos How is it going? Please let me know if you need any help!
Aloha @mbobra, thanks for checking in - we have made a lot of progress and are on track to fully resubmit this Friday! Thanks again for your patience :)
Without further ado!
@mbobra, @benjaminpope, @danhey,
Thank you all for volunteering your time to review the software, and we appreciate your patience. Since both reviewers followed similar formats, we decided to group comments together by section, with the hope that this tells a more cohesive story. Since considerable time has passed since our original submission, we have included the reviewers' comments for reference.
Response: We understand the confusion re: the distinction between being a command-line tool vs. something more modular (using the API as a guide) and have therefore clarified this better in both the documentation and the manuscript. pySYD was initially developed as a command-line tool but became more modular organically over time. We have therefore added some Jupyter notebooks to demonstrate this usage as well. I will address the pySYD vs. SYD comments in more detail in the last section, where other specific comments and questions were made regarding the comparison.
Response: The tqdm package was an oversight, and we have now added it to the relevant locations (i.e., the documentation and GitHub repo) to avoid any future confusion. An optional flag already existed for putting data from setup into an arbitrary directory, and we have now implemented an even more efficient way to do this via utils._fix_init(new_path). So in the event that you want to change it, run the setup feature with the -f (or fix) flag first with your desired path — if None is provided, it will locate the install directory and use that as the new default path.
Response: The comments made here are great points but beyond the scope of the project at this juncture, although automated testing is definitely a long-term goal of the package. The features currently in place do test the core functionality of the software by exactly reproducing values and uncertainties (to the hundredth decimal place) for the global parameters numax and dnu for our three example stars, and hence should suffice according to JOSS standards. We understand that this was partly due to confusion over the examples in our original submission, but we have since addressed this by implementing and saving seeds for every random process, ensuring reproducible results.
Response: We have added community guidelines (using GitHub’s recommended standards as an example) to the GitHub repo and the documentation. While we are fairly certain that the Python version (>=3) was specified and enforced in the pip install (via setup.py), we have added it to the requirements.txt and explicitly state its dependency in the documentation.
Response: Our apologies for the confusion surrounding the examples and expected outputs. As I mentioned before, pySYD now automatically implements and saves seeds for any locally processed stars that undergo any random process, in order to prevent this from happening in the future. The goal is to eventually include all the "verbose output" in a single config file (and do away with the long command-line output) — a config file that will contain any information needed to reproduce the results (e.g., something similar to what radvel does with the configparser function). That will likely take care of a majority of the confusion mentioned in this section. Finally, with regard to the new problem of TESS data gaps, the software will process a star the same way irrespective of the instrument (i.e., it will not use the stitching function), but of course any power spectrum computed from a time series with large gaps will take longer — which we currently provide a warning for. In addition, we also print a warning if a person opts to use the data-stitching function.
Response: We have addressed all minor text comments in the manuscript. Regarding the comparison between SYD and pySYD — pySYD was initially meant to be a direct 1:1 translation, but because it comes with many enhancements, the answers will not match exactly. We improved the comparison by re-running the sample with parameters set more closely to those used in the SYD analyses and found much better agreement, with no significant offset in the median values. We also added a folder to the GitHub page that contains all the relevant information to recreate the comparison, including i) the list of targets, ii) a shell script of all the executed commands on the pySYD end, iii) results from each of the two pipelines, and iv) scripts to regenerate the figure. Finally, we also provide instructions for how to access and process the data — the light curves are all publicly available on KASOC.
Of course there is always more that can be done, but we have finally reached a point we are confident about and therefore encourage you all to take a look at the new version (v5.9.4) and get back to us if you have any questions!
Cheers,
@ashleychontos, @danxhuber, @MaryumSayeed & @pavyamsiri
@whedon generate pdf
My name is now @editorialbot
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Dear @ashleychontos,
Thanks for the update.
I like the new Jupyter notebook demos and explanation in the body of the paper.
Before I delve into the review - the paper as rendered above doesn't contain any figures, though they are referred to in the text. There is a broken hyperref at line 78 ("(section )"), and that entire section seems incomplete.
Perhaps the latest version of the paper has not yet been uploaded?
As another comment, re unit tests: it is a requirement of JOSS that you have automated testing. I regret that we can't say it's beyond the scope of the paper - I have to be able to tick the box for JOSS. In your comment above, I don't know which features you're referring to when you say you have tests implemented - on the repo, I see plenty of unit tests in the tests folder but no GitHub Action to automate them (which would hopefully be easy to add!).
Apologies,
Ben
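A minimal GitHub Actions workflow to automate the existing tests could look like the sketch below; the file path, Python version, and install/test commands are assumptions about the repository layout, not its actual configuration:

```yaml
# .github/workflows/tests.yml -- minimal sketch
name: tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.10"
      - run: pip install -e . pytest
      - run: pytest tests/
```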
Thanks @benjaminpope --
I was going to say that I just double-checked the PDF @editorialbot generated, and it is definitely an old, unfinished version. The paper lives (and compiles perfectly fine) in the source repo, but I guess it has been so long since the original submission that I thought that was how JOSS papers were generated. Are there instructions for uploading a resubmission to JOSS that I've somehow missed?
@mbobra checking on this
@benjaminpope regarding the automated testing -- I visited the JOSS documentation here and it says:
What we have right now satisfies the latter of the two, which is what our original response above was referring to
(*Apologies — that sent early by accident.) We understand that this is not the ideal scenario, which is also discussed in more detail here. As you alluded to, though, it is something we are currently working on, but it will take some time to build up and implement the automated tests at the level that we would like. The scripts in the tests folder were merely copied from another public repository to be used as a guide in the future and are not currently relevant to the pySYD package.
> In your comment above, I don't know which features you're referring to when you say you have tests implemented.
Again, sorry for the confusion. The example on this page in the documentation provides identical figures and reproducible answers (to the hundredth decimal place for numax and dnu) so that the functionality of the software can be verified.
Submitting author: @ashleychontos (Ashley Chontos)
Repository: https://github.com/ashleychontos/pySYD
Branch with paper.md (empty if default branch): master
Version: v6.10.0
Editor: @mbobra
Reviewers: @danhey, @benjaminpope
Archive: 10.5281/zenodo.7301604
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@danhey & @benjaminpope, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @mbobra know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Review checklist for @danhey
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
Review checklist for @benjaminpope
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper