editorialbot opened 2 months ago
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
✅ OK DOIs
- 10.1145/3343031.3350535 is OK
- 10.5281/ZENODO.11543564 is OK
- 10.1109/I2C2.2017.8321819 is OK
- 10.1007/s11042-022-12100-1 is OK
- 10.48550/ARXIV.2401.01454 is OK
- 10.48550/ARXIV.2304.02643 is OK
🟡 SKIP DOIs
- None
❌ MISSING DOIs
- None
❌ INVALID DOIs
- None
Software report:
github.com/AlDanial/cloc v 1.90  T=0.14 s (1067.4 files/s, 242558.1 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
JSON                             3              0              0          18828
JSX                             45            360             64           6152
JavaScript                      76            512            189           5362
Python                           8            303            138           1605
Markdown                         7            204              0            745
YAML                             4             14              4             93
TeX                              1              7              0             81
HTML                             1              3             22             21
TOML                             1              0              0             20
Dockerfile                       2             14              0             18
CSS                              1              0              0              6
CSV                              4              0              0              4
-------------------------------------------------------------------------------
SUM:                           153           1417            417          32935
-------------------------------------------------------------------------------
Commit count by author:
293 sumn2u
192 seveibar
71 Severin Ibarluzea
47 semantic-release-bot
13 Oleh Yasenytsky
11 Suman Kunwar
10 snyk-bot
7 Henry LIANG
7 Tamay Eser Uysal
6 Emiliano Castellano
5 DQ4443
5 sreevardhanreddi
3 Mews
3 Mykyta Holubakha
3 OmG2011
3 dependabot[bot]
2 Josep de Cid
2 Katsuhisa Yuasa
2 Mohammed Eldadah
2 linyers
1 HoangHN
1 Hummer12007
1 Joey Figaro
1 Puskuruk
1 Shahidul Islam Majumder
1 ThibautGeriz
1 beru
1 harith-hacky03
Paper file info:
📄 Wordcount for paper.md is 1018
✅ The paper includes a Statement of need section
License info:
✅ License found: MIT License (Valid open source OSI approved license)
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Hi @sumn2u,
I’ve taken a close look at your tool, and I want to commend you on the impressive work and extensive documentation. I have gone through the reviewer’s checklist and the Review criteria, and I noticed a few minor points that I would appreciate some clarification on:
Annotation project options: At the start of an annotation project, users can choose between “Image classification” and “Image segmentation.” I’d like to point out that “Image classification” typically refers to assigning a single label to an entire image, while the functionality of placing bounding boxes, circles, or polygons is more accurately described as “Object detection.” Would it be possible to revise this terminology to better reflect this distinction?
User experience for annotating multiple images: In scenarios where users want to annotate multiple images, particularly in bulk (potentially thousands), implementing shortcut keys could significantly enhance the user experience. During my testing, I found that the only way to move to the next image was by clicking on it in the list. Did I overlook any existing shortcuts, or is there a plan to introduce this feature in future updates?
Comparison with similar tools: In the manuscript, you mention other similar software tools such as Label Studio, VGG, COCO Annotator, Super Annotate, and CVAT. However, I found it challenging to pinpoint the main differences between your tool and these others. Is the primary distinguishing factor that your tool is open-source and community-driven? It might be helpful to elaborate on this in the manuscript to highlight what sets your tool apart.
Thank you for your time, and I look forward to your responses!
Hi @PetervanLunteren, thank you for your review and for your kind words! I truly appreciate your thoughtful feedback on both the tool and its documentation. Here are my answers to your questions:
- Annotation project options: At the start of an annotation project, users can choose between “Image classification” and “Image segmentation.” I’d like to point out that “Image classification” typically refers to assigning a single label to an entire image, while the functionality of placing bounding boxes, circles, or polygons is more accurately described as “Object detection.” Would it be possible to revise this terminology to better reflect this distinction?
It makes sense to update the terminology, as it better conveys the intended functionality. I’ve made the necessary changes and updated the repository, documentation, and manuscript accordingly. It looks like this now.
- User experience for annotating multiple images: In scenarios where users want to annotate multiple images, particularly in bulk (potentially thousands), implementing shortcut keys could significantly enhance the user experience. During my testing, I found that the only way to move to the next image was by clicking on it in the list. Did I overlook any existing shortcuts, or is there a plan to introduce this feature in future updates?
We don’t have this feature yet, but we plan to add it in the near future.
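For illustration, keyboard navigation could be wired into the React client with a small hook along these lines (the hook name, props, and callback below are hypothetical, not part of Annotate-Lab's current code):

```jsx
// Hypothetical sketch: arrow-key navigation between images in the React client.
// Names here are illustrative only, not Annotate-Lab's actual API.
import { useEffect } from "react";

export function useImageNavigationShortcuts({ currentIndex, imageCount, onSelectImage }) {
  useEffect(() => {
    const handleKeyDown = (event) => {
      // Ignore keystrokes while the user is typing in a form field.
      if (["INPUT", "TEXTAREA"].includes(event.target.tagName)) return;

      if (event.key === "ArrowRight" && currentIndex < imageCount - 1) {
        onSelectImage(currentIndex + 1); // next image
      } else if (event.key === "ArrowLeft" && currentIndex > 0) {
        onSelectImage(currentIndex - 1); // previous image
      }
    };

    window.addEventListener("keydown", handleKeyDown);
    return () => window.removeEventListener("keydown", handleKeyDown);
  }, [currentIndex, imageCount, onSelectImage]);
}
```

Bound to the left/right arrow keys like this, annotators could step through a large batch of images without reaching for the mouse.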
- Comparison with similar tools: In the manuscript, you mention other similar software tools such as Label Studio, VGG, COCO Annotator, Super Annotate, and CVAT. However, I found it challenging to pinpoint the main differences between your tool and these others. Is the primary distinguishing factor that your tool is open-source and community-driven? It might be helpful to elaborate on this in the manuscript to highlight what sets your tool apart.
Yes, our tool is indeed open-source and community-driven. Moreover, our client-server architecture enhances flexibility and scalability, which distinguishes us from other annotation tools. I have added a section to the manuscript to provide a clearer comparison and to emphasize the unique features of our tool.
Please let me know if anything is unclear.
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@sumn2u Thank you for the quick response! In my opinion, with the revisions and answers, the submission meets the acceptance criteria.
Thanks a lot for your great review @PetervanLunteren! :pray:
And thank you @sumn2u for your responsiveness :+1:
@jpcbertoldo should be able to start his own review pretty soon.
Hi @boisgera, do you happen to know when the next review will begin? I was under the impression that the entire review process would take about seven weeks.
Hi @sumn2u, I apologize for the delay; I had some other things to prioritize, but I will be able to deal with this in a few days. So sorry for taking so long, @boisgera!
Hi @jpcbertoldo, when you have a moment, would you be able to review this? Thanks!
doing it tomorrow!
Thank you, @jpcbertoldo, for the thorough review. @boisgera, I believe we're now ready to move on to the next step.
Submitting author: @sumn2u (Suman Kunwar)
Repository: https://github.com/sumn2u/annotate-lab
Branch with paper.md (empty if default branch): paper
Version: v2.0.0
Editor: @boisgera
Reviewers: @jpcbertoldo, @PetervanLunteren
Archive: Pending
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@jpcbertoldo & @PetervanLunteren, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create your checklist:
@editorialbot generate my checklist
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @boisgera know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @PetervanLunteren
📝 Checklist for @jpcbertoldo