Closed by editorialbot 9 months ago
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.88 T=0.08 s (437.7 files/s, 89553.8 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          19           1370            477           3942
Markdown                         8            192              0            497
TeX                              1             20              0            160
Bourne Shell                     4             27              1             47
YAML                             1              1              0             18
-------------------------------------------------------------------------------
SUM:                            33           1610            478           4664
-------------------------------------------------------------------------------
gitinspector failed to run statistical information for the repository
Wordcount for paper.md is 1052
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1109/TIFS.2018.2871749 is OK
- 10.1007/11552055_12 is OK
- 10.1186/1687-417X-2014-1 is OK
- 10.5281/zenodo.4655945 is OK
- 10.1145/3335203.3335738 is OK
- 10.1109/tdsc.2022.3154967 is OK
- 10.1109/icip.2014.7025854 is OK
- 10.1007/3-540-45496-9_2 is OK
- 10.1109/WIFS49906.2020.9360897 is OK
MISSING DOIs
- 10.29327/226091 may be a valid DOI for title: Aletheia
INVALID DOIs
- None
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
👋🏼 @daniellerch, @YassineYousfi, @ragibson, this is the review thread for the paper. All of our communications will happen here from now on.
As a reviewer, the first step is to create a checklist for your review by entering
@editorialbot generate my checklist
at the top of a new comment in this thread.
There are additional guidelines in the first comment of this issue.
Please don't hesitate to ping me (@mstimberg) if you have any questions/concerns.
(Apologies for originally pinging the wrong Daniel in this comment)
Finally getting around to this.
Just a quick note on the list of authors -- it seems the primary author @daniellerch wrote the software package and both authors collaborated on a few of the detection techniques used by the package (e.g., https://dl.acm.org/doi/10.1145/3335203.3335738 and https://ieeexplore.ieee.org/document/9722958), so the author list looks good to me.
There are a handful of other contributors on the GitHub repository, but their contributions are all much more minor.
A note on the data sharing point -- the paper basically contains no original data since the examples are generic and I was able to effectively run them with my own local images.
Ditto on reproducibility of the examples in the paper.
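For context, this is roughly the kind of invocation I used. Treat the details as assumptions on my part: the `auto` subcommand name is taken from the project's README examples, and the image directory is a placeholder for my own local files, not part of the repository.

```bash
# Run Aletheia's automated detection over a directory of local images.
# 'auto' comes from the README's automated-tools example; 'my_images/' is a
# placeholder path standing in for my own test images.
./aletheia.py auto my_images/
```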
I would note that I ran into a few opaque errors in the simulators, but they look like research-grade code sourced from other authors/institutions (hence the requirement to accept secondary licenses and download the code from https://github.com/daniellerch/aletheia-external-resources). In these cases, the issues are external to the authors' work.
@daniellerch That said, I would like to see the secondary repository explicitly linked in the paper and/or READMEs of the package rather than just in aletheialib/utils.py.
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@ragibson Many thanks for your review and comments so far. @YassineYousfi, did you have any chance to look at the software/paper yet?
@mstimberg not yet, but I will check it out and review it this weekend!
Some notes to improve the submission:
The target audience for the software is not clearly stated in the documentation. I am assuming this software is geared towards researchers and enthusiasts.
How were the machine learning models trained? Very little detail is provided in https://github.com/daniellerch/aletheia/blob/master/aletheia-models/README.md. Note that some open-source steganalysis training scripts exist (https://github.com/DDELab/deepsteganalysis) but weren't cited. Can they be used to contribute more or better models?
DDE implementations are commonly used by researchers and should be cited: https://dde.binghamton.edu/download/
:wave: @daniellerch I've seen that you have addressed and closed all the issues directly opened against your GitHub repository. Would you say that all the issues raised by the reviewers in the comments above have been addressed from your side, or are you still working on it (updating the paper, for example)?
Thanks for checking in @mstimberg. I'm currently making some requested updates to the documentation and paper. Will keep you posted on the progress.
@YassineYousfi, thank you for your feedback. I have updated the README to better define the target audience. In addition, I've added more details about how Aletheia's machine learning models were trained; there is a comprehensive article on this topic. Regarding DDELab's software, while it isn't bundled directly with Aletheia due to license incompatibilities, Aletheia does use it, downloading it from an external repository. You can find the citation and acknowledgments there. Please refer to the 'External resources' section in the README.
Hi @mstimberg, I've finished updating the software and the documentation, thanks to the reviewers' helpful comments. Appreciate your patience and guidance. All set for a final review now.
Great, thanks @daniellerch. @ragibson and @YassineYousfi: do the updates address all your concerns/suggestions?
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Checking off a few more items from my checklist:
Otherwise,
:wave: Hi everyone, and a Happy New Year :sparkles: ! I hope you are all doing well and had a good break (assuming you took one, of course).
@daniellerch could you give us an update with respect to the remaining issues in @ragibson's comment above? @YassineYousfi do the changes mentioned in @daniellerch's earlier comment address all your concerns?
Hello everyone, and Happy New Year.
The latest changes I've made include adding the community guidelines.
On another note, if @ragibson is happy to accept the program's built-in help and the examples in the documentation in place of automated tests, I think the only thing left is the issue of references to other steganalysis software. As far as I know, there are tools that implement some specific techniques, but no complete tool aimed at a more general user that allows for comprehensive steganalysis using machine learning.
Of the tools that implement different steganalysis techniques, almost all come from the Digital Data Embedding Laboratory (https://dde.binghamton.edu/download/). Some of these are used by Aletheia and are cited at the time of download. There is also information about this in the "External resources" section. Of course, if you believe that these should also be cited in another part of the documentation, or in the paper, please let me know.
So, I think I have nothing pending. Please inform me if otherwise.
Thanks @daniellerch @mstimberg, the changes address all my concerns. I updated my checklist.
Thanks @daniellerch @mstimberg @YassineYousfi! It looks good to me now too 👍
Thank you very much @YassineYousfi @daniellerch for getting back to me, and thanks for your reviews in the first place, obviously.
@daniellerch since both reviewers have recommended the paper for acceptance, I will now proceed with the final checks before the final publishing steps. Could you please take care of the author tasks in the checklist below :point_down:
@editorialbot set <DOI here> as archive
@editorialbot set <version here> as version
@editorialbot generate pdf
@editorialbot check references and ask author(s) to update as needed
@editorialbot recommend-accept
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Hi again @daniellerch, in addition to the points mentioned in the author checklist above, please have a look at the PR I just opened (daniellerch/aletheia#33). I made a few minor edits (mostly related to the formatting of the references and missing DOIs). Please merge it if you agree with all changes :pray:
@mstimberg Just made the merge. Thanks for the contributions!
@editorialbot set v0.3 as version
Done! version is now v0.3
@editorialbot set 10.5281/zenodo.10497963 as archive
Done! archive is now 10.5281/zenodo.10497963
Thanks for the updates @daniellerch. Regarding the Zenodo archive for v0.3, could you add the second author as an author there as well? If I understand correctly, they did not commit code directly, but supervised the development? In general, we prefer to have the same list of authors for the JOSS paper and the Zenodo archive, but if you have a strong reason to not have them be the same, please let me know! Also, please:
Note that you can make all these changes manually on Zenodo, without releasing a new version
I think you used the automatic GitHub-Zenodo integration – if you want future releases to have the complete metadata automatically, you can use a .zenodo.json file: https://developers.zenodo.org/#harvesting-with-multiple-filters
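For illustration, a minimal sketch of what such a file could look like. The field names (title, upload_type, creators, keywords) are standard Zenodo deposit metadata, but the values below are placeholders and would need to match the paper's actual author and affiliation information.

```json
{
  "title": "Aletheia",
  "upload_type": "software",
  "creators": [
    {
      "name": "Lerch-Hostalot, Daniel",
      "affiliation": "Institution 1, Barcelona, Spain"
    },
    {
      "name": "Second Author (placeholder)",
      "affiliation": "Institution 2, Barcelona, Spain"
    }
  ],
  "keywords": ["steganography", "steganalysis", "machine learning"]
}
```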
@mstimberg The second author didn't directly contribute code, yet their involvement has been recognized, and they are now included in the Zenodo archive. I believe everything is now in order. Thank you!
@daniellerch Many thanks for the changes. From your changes on Zenodo, I just realized that the affiliations concern three separate institutions (even though I assume they are all co-located). Could you update the paper in that regard, i.e. list them as ¹²³ as in the Zenodo archive (stating "Barcelona, Spain" each time – assuming this is correct, of course)? Thanks!
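For reference, this is roughly how three numbered affiliations are expressed in the paper.md front matter. The institution names and the author-to-affiliation mapping below are placeholders; only the ¹²³ numbering scheme is the point here.

```yaml
authors:
  - name: Daniel Lerch-Hostalot
    affiliation: 1            # placeholder mapping
  - name: Second Author       # placeholder name
    affiliation: "2, 3"
affiliations:
  - name: First Institution, Barcelona, Spain
    index: 1
  - name: Second Institution, Barcelona, Spain
    index: 2
  - name: Third Institution, Barcelona, Spain
    index: 3
```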
@editorialbot generate pdf
@mstimberg Thank you for pointing this out. The changes have been made to the paper, listing the affiliations as ¹²³ and stating 'Barcelona, Spain' for each, as per your suggestion. Thanks again!
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1109/TIFS.2018.2871749 is OK
- 10.1145/1288869.1288872 is OK
- 10.1109/TIFS.2014.2312817 is OK
- 10.1007/11552055_12 is OK
- 10.1186/1687-417X-2014-1 is OK
- 10.1145/2482513.2482965 is OK
- 10.1145/3335203.3335738 is OK
- 10.1109/tdsc.2022.3154967 is OK
- 10.1109/icip.2014.7025854 is OK
- 10.1007/3-540-45496-9_2 is OK
- 10.1109/WIFS49906.2020.9360897 is OK
MISSING DOIs
- None
INVALID DOIs
- None
@editorialbot recommend-accept
All looking good from my side, handing things over to the topic editor for the final steps. Thanks again to everyone involved!
Attempting dry run of processing paper acceptance...
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1109/TIFS.2018.2871749 is OK
- 10.1145/1288869.1288872 is OK
- 10.1109/TIFS.2014.2312817 is OK
- 10.1007/11552055_12 is OK
- 10.1186/1687-417X-2014-1 is OK
- 10.1145/2482513.2482965 is OK
- 10.1145/3335203.3335738 is OK
- 10.1109/tdsc.2022.3154967 is OK
- 10.1109/icip.2014.7025854 is OK
- 10.1007/3-540-45496-9_2 is OK
- 10.1109/WIFS49906.2020.9360897 is OK
MISSING DOIs
- None
INVALID DOIs
- None
Submitting author: @daniellerch (Daniel Lerch-Hostalot)
Repository: https://github.com/daniellerch/aletheia
Branch with paper.md (empty if default branch): master
Version: v0.3
Editor: @mstimberg
Reviewers: @YassineYousfi, @ragibson
Archive: 10.5281/zenodo.10497963
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@YassineYousfi & @ragibson, your review will be checklist-based. Each of you will have a separate checklist that you should update when carrying out your review. First of all, you need to run this command in a separate comment to create the checklist:
@editorialbot generate my checklist
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @mstimberg know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @ragibson
📝 Checklist for @YassineYousfi