Closed: editorialbot closed this issue 3 months ago.
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.88 T=0.01 s (567.8 files/s, 42961.2 lines/s)
-------------------------------------------------------------------------------
Language files blank comment code
-------------------------------------------------------------------------------
Markdown 1 14 0 101
TeX 1 13 0 76
YAML 1 1 4 18
-------------------------------------------------------------------------------
SUM: 3 28 4 195
-------------------------------------------------------------------------------
gitinspector failed to run statistical information for the repository
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1088/2632-2153/ab42ec is OK
- 10.1021/acs.jpca.0c10731 is OK
- 10.1038/s42005-020-0317-3 is OK
- 10.5281/zenodo.7402665 is OK
- 10.1021/acsnano.8b02208 is OK
MISSING DOIs
- None
INVALID DOIs
- None
Wordcount for paper.md is 1074
Download article proof | View article proof on GitHub
@ziatdinovmax, @jagar2 Thanks for agreeing to review. I am forwarding information from the author below. Please note a new V3 branch in the repository.
Hi all,
@zhubonan thanks for sticking with this and finding reviewers, it is much appreciated! @jagar2 @ziatdinovmax, thanks for agreeing to review!!
Just a heads up... I have been doing some work on Scanbot over the past few months since it was submitted. It now has a GUI, which should make things easier when testing - this is available on the V3 branch. All core functionality is basically the same.
Also, given that STM time is generally precious, I've created ways to test Scanbot's functionality without needing an STM by introducing a demo mode (along with the Nanonis V5 simulator). For example, when tracking the motion of the STM tip in demo mode, the camera feed is replaced by a recording of the tip moving. The other main example is automated tip shaping: when it is initiated, STM images of previously acquired tip imprints are loaded from a pickled file instead of using images from the simulator.
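As a rough illustration of how the demo-mode substitution works - the names below are illustrative only, not Scanbot's actual functions - the idea is simply to branch on a demo flag and load pickled data instead of querying the instrument:

```python
import pickle
from pathlib import Path

DEMO_MODE = True  # hypothetical flag; Scanbot's actual switch may differ

def get_tip_imprint_image(stm, demo_dir=Path("demo_data")):
    """Return an STM image of a tip imprint.

    In demo mode, a previously acquired imprint is loaded from a pickled
    file; otherwise the (simulated) STM is asked for a fresh scan.
    """
    if DEMO_MODE:
        with open(demo_dir / "tip_imprints.pkl", "rb") as f:
            imprints = pickle.load(f)   # e.g. a list of 2D arrays
        return imprints[0]
    return stm.acquire_image()          # hypothetical call to the real interface
```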
The files for testing can be downloaded from this link. Just save and extract it in Scanbot's root directory.
If you have any questions, let me know!
Thanks!!
Hi @ziatdinovmax, @jagar2 how is your review going?
Hi @ziatdinovmax, @jagar2, I am wondering if you have started the review. I know it is holiday season at the moment 🎄, but it would be great if you could let me know when you think the review can be finished. Thanks a lot!
@zhubonan - my apologies for the delay; I was in the middle of relocating. I will try to complete my review in the next few weeks.
@ziatdinovmax Thanks for the update and happy new year! @jagar2 Have you started the review?
Any update from your side @ziatdinovmax @jagar2? Please let me know. Thanks
@zhubonan I'm working through the checklist.
@ceds92, can you please add unit tests and at least brief function documentation?
Great! Thanks for the heads-up.
Yeah, I'll upload it today
@ziatdinovmax, please see attached. Let me know if there's anything else.
Edit: You can also find the Nanonis V5 simulator here if you need it
Hi @ziatdinovmax, just wondering if the document I uploaded contains everything you asked for or if there's any additional information you wanted?
@ceds92 I was expecting something more along the lines of continuous integration workflows. Do I understand correctly that testing of the scanbot requires a connection with an actual nanonis controller?
@ziatdinovmax
That's correct, testing Scanbot requires a TCP connection with the Nanonis controller software (or the Nanonis simulator). Note that you do not need any Nanonis hardware if you're using the simulator.
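As a quick sanity check before launching Scanbot, you can confirm the (simulated) controller is reachable with a bare TCP connection test; the port below is just an example value and should match whatever is configured in the Nanonis TCP programming interface:

```python
import socket

HOST = "127.0.0.1"   # machine running Nanonis / the Nanonis simulator
PORT = 6501          # example only; use the port configured in your TCP interface

try:
    with socket.create_connection((HOST, PORT), timeout=5):
        print(f"Nanonis TCP interface reachable on {HOST}:{PORT}")
except OSError as err:
    print(f"Could not reach {HOST}:{PORT}: {err}")
```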
Apologies, I'm not super familiar with CI, and based on what I've just read it will take me a while to fully implement it for Scanbot. I have, however, finished development of the web application (V4 branch), which can be launched from a .exe; this would simplify things a lot since no installation is required. If you would consider testing this using the Nanonis simulator, I can upload a document detailing the specific tests related to the claims made in the JOSS submission. I think it would take around 30-45 minutes to run those specific tests manually.
If implementing CI is something you require, I'll attempt to do so but may not have time for a couple of weeks.
Thanks for your time.
@ceds92 I think testing manually with a simulator is reasonable in this case, but I need some clarification from JOSS editors on this since the reviewer's checklist asks specifically for automated tests. cc: @zhubonan
Manual steps for testing are OK, although automated tests are strongly encouraged. @ziatdinovmax
However, JOSS does require that the manual test steps be included in the documentation. @ceds92
One more thing: you don't have to have tests for the "full" functionality - people usually write unit tests to ensure code correctness. These tests usually target individual functions, which may not involve external software at all. For interaction with external software, one way to get around this is to use mocks in the code, e.g. simulated responses from the external software.
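As a concrete (hypothetical) illustration - the function and method names below are made up, not taken from Scanbot - a unit test can replace the object that talks to Nanonis with a mock that returns canned responses:

```python
from unittest.mock import MagicMock

# Hypothetical function under test: it asks the controller for the current
# bias and decides whether it is safe to start a scan.
def safe_to_scan(controller, max_bias=2.0):
    return abs(controller.get_bias()) <= max_bias

def test_safe_to_scan_with_mocked_controller():
    controller = MagicMock()
    controller.get_bias.return_value = 1.5   # canned response, no TCP needed
    assert safe_to_scan(controller)

    controller.get_bias.return_value = 5.0
    assert not safe_to_scan(controller)
```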
Ok, thanks for the clarification @zhubonan. I will update the documentation to reflect the manual testing steps with the simulator. I will release it by the end of the week and update @ziatdinovmax when it's ready.
Thanks for your patience.
I've updated the documentation with the manual test steps @ziatdinovmax. Everything needed for testing can be found here: https://new-horizons-spm.github.io/scanbot/web-app-test/
Let me know if there's anything else 😃
Another update from my end... I've started implementing the automated tests and should be done by the end of the week. I'll keep you posted.
That is great! @ziatdinovmax Please carry on reviewing.
In the meantime, I will try to find another reviewer since @jagar2 has not responded in this issue.
@editorialbot add @ianhi as reviewer
@ianhi added to the reviewers list!
Hi @ianhi thanks again for agreeing to review. When you are ready, please run the following command to generate your checklist:
@editorialbot generate my checklist
Hi @zhubonan and @ziatdinovmax,
This week I started implementing the automated tests. So far this includes installation and about 50% of the web app server endpoints. I didn't get to spend as much time on it as I'd hoped for this week but plan to implement the remaining unit tests next week.
Anyway, in doing this I had to refine the installation procedure, so this is just a heads-up about that. The documentation has the most recent installation steps.
Hi @zhubonan and @ziatdinovmax,
I have completed the automated tests. Please let me know if there is anything else.
Sorry all for my delay here. I had an unexpected deadline, and then my wife unfortunately suffered a concussion so I did not have the time I expected to start the review process.
@editorialbot generate my checklist
@ianhi I'm really sorry to hear that, hope all is well! No problem at all for the delay
Thanks! It is generally advised to have tests in a dedicated directory that is separate from the main package code, e.g.
mypackage/
    __init__.py
    module1.py
    module2.py
tests/
    test_module1.py
    test_module2.py
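For instance, a minimal tests/test_module1.py (contents illustrative only, not Scanbot's actual tests) could look like the following, and pytest run from the repository root will discover it automatically:

```python
# tests/test_module1.py  (illustrative only; real Scanbot tests will differ)
from mypackage import module1

def test_module1_importable():
    # Smoke test: the module imports and exposes a name.
    assert hasattr(module1, "__name__")
```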
Other than that all looks good.
Oops, apologies @ziatdinovmax, I just saw your comment - not sure how I missed it. I have moved the test script as you have suggested, thanks for that!
@ceds92 - Thanks! One minor remaining boilerplate task from the JOSS checklist is "clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support".
Could you please add these to ReadMe and/or Documentation?
Sure, I can do this tomorrow and update you when it's done
Hey @ziatdinovmax, I have updated the documentation to address this. If there's anything else, let me know. (apologies for the delay, I have been very busy in the lab)
@ceds92 - thanks!
@zhubonan - I have completed the review and I can recommend this paper for publication.
@editorialbot add @KoenImdea as reviewer
@KoenImdea added to the reviewers list!
Hi @KoenImdea, the review happens here.
If you make a comment with the following content:
@editorialbot generate my checklist
A review checklist will be generated for your use. Thanks a lot. I am hoping to have the review done within two weeks, if possible. As I mentioned, we already have one reviewer who has completed the review, so hopefully testing the software will be a smooth ride. If you have any problems, feel free to ask here.
Hi @ianhi, to generate the checklist please leave a comment containing
@editorialbot generate my checklist
only. I don't think the editorial bot will be able to pick it up if the command is embedded inside a comment.
Hi @ceds92 (and @zhubonan),
Question: I have been reading back through this thread, where it seems that V3 was proposed for review. However, this submission has been ongoing for a while now, and right now the default branch in the repository is V4. So, which branch should I review?
Hi @KoenImdea, thanks so much for agreeing to review!
Yeah, this has been going on for a long time and I have made many updates before the review actually started. Please review the latest version of the V4 branch.
Thanks!
Hi @ceds92 ,
Thanks for the quick reply! In that case, would you mind adding a license to the V4 branch? It seems that it got lost when creating the new branch. Cheers.
Oops, fixed!
@KoenImdea, in case it's not obvious in the history of this thread, I should also mention that to test the claims in the paper you can follow the steps in this section of the documentation using the nanonis simulator.
Edit: sorry, this section
Hi. Thank you for your patience, I was ill these last two days. As a heads-up, I'll be visiting my home country next week and will be offline from late Tuesday afternoon.
Anyway, back to the task at hand. As I have some experience with Scanning Tunneling Microscopy (STM), I feel more qualified to give feedback on that part than on the actual code. Having gone over the paper and the documentation, I can see the need that the authors highlight for a system that is both:
At the moment, Scanbot certainly seems to be actively maintained and developed, which can be a double-edged sword. For instance, the documentation is starting to claim in several places that you can do nc-AFM, specifically z-dependent nc-AFM and nc-AFM registration, but I have not managed to locate these features so far. For the rest, the README on github.com is short; users are pointed to a nice web guide.
So I checked the installation guide there and picked the pip version. I did not install Zulip, nor did I use Google Firebase. On my PC I have Nanonis Mimea V5e R11796, which I run in Demo mode. Note: this is not the same version as the simulator from SPECS. I had (known) issues with the simulator when running sparse sampling routines; all TCP commands should be the same, however.
For the installation:
For the configuration:
Data Acquisition: Unfortunately, this did not seem to work on my machine. This is what appears on the terminal during a test Survey (see attachment). If I go to the configuration immediately after testing the survey, the configuration is gone (see attachment). As there are no really useful errors printed to the terminal, I'm a bit unsure of what went wrong.
Tip shaping: Well, I do not have a camera connected to my computer, and the Demo mode did not end well...
Overall, I think the program is a great and welcome initiative. The documentation on the website looks nice. It has a few minor inconsistencies, and some parameters could use a bit more detailed explanation, but I am very pleased with the overall quality of said documentation. Unfortunately, at the moment the program is not doing what it should on my computer. The automated tip recovery on a second sample is a nice feature. As many samples are still measured just on a noble metal surface, I recommend leaving room to implement the technique of Wang which you cite. This recommendation is not related to whether your manuscript should be published; it is a way of me seeing possible use in a lab at my institute. The fact that I am thinking of how to use it means that I like it. However, first I would like to figure out why it is not working right now... For this, more (and more verbose) pytest/unit tests would be really helpful, or some kind of debug flag to print more output.
Hi @KoenImdea,
Thanks so much for your detailed feedback! I am sorry and I hope that this bug has not wasted too much of your time.
Please see attached for my responses to your comments. I guess the main points are:
1) I was able to reproduce the errors you're encountering; they all stem from a case where autosave is not turned on and the Z channel is not selected. I have just pushed a fix for this, so both are now enabled automatically (a rough sketch of the kind of check involved is below). I have not released this fix to PyPI just yet, so please reinstall from source to test.
2) I will update all of the comments relating to documentation at some point next week
Thanks again for looking into this!
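For reference, here is a rough sketch of the kind of guard the fix adds; the method names are hypothetical and not Scanbot's actual API:

```python
# Hypothetical sketch of the defensive check: before a survey starts, make
# sure autosave is on and the Z channel is recorded, rather than failing
# silently later. Method names are illustrative only.
def prepare_scan(scan_interface):
    if not scan_interface.autosave_enabled():
        scan_interface.enable_autosave()
    if "Z" not in scan_interface.recorded_channels():
        scan_interface.record_channel("Z")
```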
Submitting author: @ceds92 (Julian Ceddia)
Repository: https://github.com/New-Horizons-SPM/scanbot
Branch with paper.md (empty if default branch): joss
Version: 4.5.1
Editor: @zhubonan
Reviewers: @ziatdinovmax, @KoenImdea
Archive: 10.5281/zenodo.12669343
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@ziatdinovmax & @jagar2, your review will be checklist-based. Each of you will have a separate checklist that you should update when carrying out your review. First of all, you need to run this command in a separate comment to create the checklist:
@editorialbot generate my checklist
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @zhubonan know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @ziatdinovmax
📝 Checklist for @KoenImdea