Closed: editorialbot closed this 1 year ago
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.88 T=0.35 s (754.7 files/s, 180996.2 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
JSON                            10              0              0          22644
TypeScript                     122           2797           6447          11967
Python                          37           1783           3358           6104
Sass                            33            512            534           2347
HTML                            35            206            658           1967
SQL                              1              0              0            448
Markdown                         4            120              0            255
YAML                             2              5              4             75
XML                              6              0              0             69
TeX                              1              5              0             66
Bourne Shell                     3              4              0             50
JavaScript                       1              1              6             43
PowerShell                       2              4              0             32
Dockerfile                       2             20              7             30
TOML                             1              6              0             17
SVG                              1              0              0              4
-------------------------------------------------------------------------------
SUM:                           261           5463          11014          46118
-------------------------------------------------------------------------------
gitinspector failed to run statistical information for the repository
Wordcount for paper.md is 1173
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1080/0142159X.2020.1755030 is OK
- 10.1191/1478088706qp063oa is OK
- 10.1136/bmj.320.7227.114 is OK
- 10.1123/jtpe.2017-0084 is OK
MISSING DOIs
- None
INVALID DOIs
- None
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@muctadir I finished an initial review, creating issues in the LaMa repository. If you could have a look at those issues, please. Once they are resolved I will proceed with the rest of the review checklist. Thanks!
Looking at the repository commit history, I don't see clear indications that the submitting author made "major" contributions to the software implementation itself - it looks like contributions were primarily made to a prior version of this application before it was rewritten? Comments on this would be appreciated!
@muctadir - please feel free to respond to the comments and suggestions in this thread. Thanks!
@kinow thanks for your comments. We noticed the issues that you opened and are currently working to resolve them.
@luxaritas Your observation is indeed correct. In terms of commits, my contributions are limited. For example, I improved some implementations, as you can see in PR#1. However, I contributed significantly in at least two major ways. The first is the development of the prior version (as you already mentioned), which includes concepts such as collaborative labeling, conflict resolution, theming, and so on. These features were replicated in LaMa, although the implementation was done separately. Secondly, I closely supervised the development and influenced all the major decisions, for example the choice of technologies, the move to a service-based architecture, and the UI design. The newly added features and the changes in architecture and design are based on our research on the older version of the tool. Please let me know if my explanation is sufficient.
@muctadir That explanation is sufficient to me - thank you!
@kinow we resolved all the issues that you created except one. I am wondering if it would be possible to move on with the review as we resolve the remaining one.
Hi @muctadir. Excellent. I will have another look and review the issues resolved, updating the checklist appropriately, and also see if I can continue working on the other review points. Thanks!
Hi @muctadir
I've made good progress on the review today. Here's what's pending in my review:
Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
Pending, https://github.com/muctadir/lama/issues/11
State of the field: Do the authors describe how this software compares to other commonly-used packages?
Pending https://github.com/muctadir/lama/issues/7
Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?
The preceding item, about the state of the field, could change the rest of the article, including the references. So I will wait for that one before completing these :+1:
That's me for now, ping me again in the issues or here if you have any updates, please.
Thank you! -Bruno
@muctadir - how are the revisions going? Have you had a chance to update the software to implement the reviewers' suggestions?
Thanks!
@fboehm @kinow we investigated and identified the problem with the automated tests. Shortly, we will update the repo with the necessary changes. Sorry for the delayed responses - last week was carnival week in our region and I was on leave for half of it.
Thanks so much, @muctadir! I'm looking forward to seeing the updates. Please feel free to continue the discussion in this thread. Thanks again!
Sorry for the delay in review. Overall, this looks great! I've submitted a handful of issues around UI/UX improvements - these aren't intended as blockers to the JOSS review, but are things I hope may be useful to you that I noticed while reviewing.
The following are the items that I feel need resolution:
Additionally, one thing I'm not sure about is the "Example usage" requirement. There are screenshots and such showing generally how to use the application, but I don't think they quite show how "to solve real-world analysis problems". Maybe it would be possible to provide the ability to pre-load synthetic data that shows a realistic use case (not just "lorem ipsum" text)? I haven't submitted an issue for this as I'm not fully clear on what JOSS requires, but I certainly can if that's helpful.
Hi @fboehm
@luxaritas has left a comment at https://github.com/muctadir/lama/issues/20. This is about the previous tools LaMa is based upon and how the new version is different. I replied to the comment. However, both of us think that you might want to provide some input there. Would it be possible for you to take a look?
Thanks, Hossain.
@muctadir - I've left a comment on the issue thread, https://github.com/muctadir/lama/issues/20. Please remind me how you currently address the predecessors to LaMa in the paper.md file. My inclination is to include at least a couple sentences to explain the inspirations for LaMa and how LaMa differs from its predecessors. Does that seem reasonable?
@muctadir - how is the revisions process going? Do you have any questions on how to proceed? Thanks again!
@kinow and @luxaritas - how are the reviews going? Is there anything that I can help with at this time?
Hi, @fboehm , see this comment, please: https://github.com/muctadir/lama/issues/7#issuecomment-1474165593
Thank you!
Hi @fboehm - my comment above shows pretty well the status from my side - there are still two more issues noted as open that I feel are relevant to address. If you could provide some clarity on the requirements around example usage as mentioned there, that could be helpful.
@muctadir - please feel free to look at the reviewers' checklists above. It seems that there may be a few small edits needed before we can complete the reviews. Please let me know if you have any questions about how to proceed. Thanks again!
@luxaritas - Thanks also for your thoughtful comment:
Additionally, one thing I'm not sure about is the "Example usage" requirement. There are screenshots and such showing generally how to use the application, but I don't think they quite show how "to solve real-world analysis problems". Maybe it would be possible to provide the ability to pre-load synthetic data that shows a realistic use case (not just "lorem ipsum" text)? I haven't submitted an issue for this as I'm not fully clear on what JOSS requires, but I certainly can if that's helpful.
I think your suggestion would definitely improve the product. @muctadir - is it possible to implement this suggestion?
Thanks again!
@fboehm @kinow @luxaritas we have addressed all the issues mentioned in https://github.com/openjournals/joss-reviews/issues/5135#issuecomment-1455393725. I believe, the review process can now move forward.
As for the "example usage" pointed out by @fboehm and @luxaritas, to enable this we need to publish some real data with LaMa. However, the data we have at hand are under privacy protected and can not be made publicly available. Furthermore, I also made a quick search to find some open-source thematic analysis/interview data, which did you yield anything useful. Therefore, I am afraid that implementing the suggestion of the "example usage" won't be possible this time around. However, we think this is a very good suggestion and will keep this in mind as a possible enhancement.
@fboehm can we get the latest Markdown source of the paper to be re-generated as PDF, please? https://github.com/muctadir/lama/blob/main/paper/paper.md
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@kinow @muctadir @luxaritas - please find the updated pdf in the link above.
Thank you @fboehm !
@fboehm I have re-read the paper now, looked at the code, went through the review checklist, checked everything off and finished my review.
The last item was for automated tests. They were added/updated after this issue in the review: https://github.com/muctadir/lama/issues/11
The automated tests are not executed in a continuous integration environment, so other devs & users must run them manually - that's what I did.
Frontend tests:
Backend tests:
And for the record, the git commit I reviewed and tested:
(venv) (base) kinow@ranma:~/Development/python/workspace/lama$ git log -n 1
commit 95b4fc6434304952eaa2a85f4d9a25487333d5a1 (HEAD -> main, upstream/main, upstream/HEAD)
Author: Hossain Muctadir <h.m.muctadir@tue.nl>
Date: Tue Apr 4 08:13:21 2023 +0200
adjust statement of need
Cheers -Bruno
@kinow - Thanks for your thorough review! I hope that we can work together again at a future date.
@muctadir - have you had a chance to resolve all of the issues suggested by @luxaritas? Thanks again!
Thanks for your thorough review! I hope that we can work together again at a future date.
Thank you for the great job as editor, and looking forward to working together again in other papers, @fboehm :wave: :bow:
Sorry for the delay - this is looking great.
@muctadir
As for the "example usage" pointed out by @fboehm and @luxaritas, to enable this we need to publish some real data with LaMa. However, the data we have at hand are under privacy protected and can not be made publicly available. Furthermore, I also made a quick search to find some open-source thematic analysis/interview data, which did you yield anything useful. Therefore, I am afraid that implementing the suggestion of the "example usage" won't be possible this time around. However, we think this is a very good suggestion and will keep this in mind as a possible enhancement.
Would it be possible to come up with some very simplistic synthetic data to replace the content in the screenshots? I'm imagining something like a project with a topic of restaurant reviews, and artifacts like "I loved the food" labeled as positive and on the topic of food, and "The staff were inattentive" labeled as negative and on the topic of service. That way it's just a little bit more grounded in a concrete use case
@muctadir - have you had a chance to resolve all of the issues suggested by @luxaritas? Thanks again!
Hi @fboehm, I believe we resolved all of the blocking issues indicated by @luxaritas in his comment, https://github.com/openjournals/joss-reviews/issues/5135#issuecomment-1455393725. There are some minor unresolved issues; however, I understand that they are not blocking for the review process?
Would it be possible to come up with some very simplistic synthetic data to replace the content in the screenshots?
@luxaritas we are looking into it. Thanks.
Yeah the other issues I filed were potential application improvements not related to the review
Thanks, @muctadir and @luxaritas. Once @muctadir has a chance to implement the changes suggested above, and quoted here, we can complete the review process.
Would it be possible to come up with some very simplistic synthetic data to replace the content in the screenshots? I'm imagining something like a project with a topic of restaurant reviews, and artifacts like "I loved the food" labeled as positive and on the topic of food, and "The staff were inattentive" labeled as negative and on the topic of service. That way it's just a little bit more grounded in a concrete use case
@muctadir - do you have any questions about how to proceed? Thanks again!
@fboehm we are working on implementing the suggestion. I hope to update the repo and notify you shortly. Thanks.
Excellent! Thanks, @muctadir ! Please let me know if you encounter difficulties or have questions. Thanks again!
Hi, @muctadir - how are the revisions going? Please let me know if you have any questions or difficulties. Thanks again!
Hi @fboehm. Thanks for checking. We are still looking into this. It is taking longer than expected as some other tasks took precedence. However, we hope to update you shortly. Sorry for the delay.
Hi @fboehm and @luxaritas, we updated our repo to address your comment.
Looks great, thank you @muctadir!
Checklist is officially all signed off by me
Thank you, @muctadir, for the latest revisions. @luxaritas - Thanks so much for the excellent and thorough review!
@muctadir - now that the reviewers have approved the submission, there are a few last steps before publication. I'll generate a checklist for them in this comment thread below.
@editorialbot commands
Submitting author: @muctadir (Hossain Muhammad Muctadir)
Repository: https://github.com/muctadir/lama
Branch with paper.md (empty if default branch):
Version: v1.0.0
Editor: @fboehm
Reviewers: @kinow, @luxaritas
Archive: 10.5281/zenodo.7866236
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@kinow & @luxaritas, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @fboehm know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @kinow
📝 Checklist for @luxaritas