editorialbot opened 2 months ago
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.90  T=0.02 s (1603.8 files/s, 190658.8 lines/s)
-------------------------------------------------------------------------------
Language        files    blank    comment    code
-------------------------------------------------------------------------------
R                  20      494        640    1413
Markdown            8      178          0     587
YAML                4       26          9     160
Rmd                 1      112        195     125
TeX                 1       14          0      89
-------------------------------------------------------------------------------
SUM:               34      824        844    2374
-------------------------------------------------------------------------------
Commit count by author:
139 Hause Lin
9 Tawab Safi
Paper file info:
📄 Wordcount for paper.md is 1189
✅ The paper includes a Statement of need section
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
✅ OK DOIs
- 10.48550/arXiv.2408.11707 is OK
- 10.1002/widm.1531 is OK
- 10.48550/arXiv.2404.07654 is OK
- 10.48550/arXiv.2408.05933 is OK
- 10.48550/arXiv.2403.12082 is OK
- 10.48550/arXiv.2408.11847 is OK
🟡 SKIP DOIs
- No DOI given, and none found for title: Enhancing propaganda detection with open source la...
❌ MISSING DOIs
- None
❌ INVALID DOIs
- None
License info:
🟡 License found: Other
(Check here for OSI approval)
👋 @hauselin, @elenlefoll, and @KennethEnevoldsen - This is the review thread for the paper. All of our communications will happen here from now on.
Please read the "Reviewer instructions & questions" in the first comment above.
Both reviewers have checklists at the top of this thread (in that first comment) with the JOSS requirements. As you go over the submission, please check any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.
The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention https://github.com/openjournals/joss-reviews/issues/7211 so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.
We aim for the review process to be completed within about 4-6 weeks but please make a start well ahead of this as JOSS reviews are by their nature iterative and any early feedback you may be able to provide to the author will be very helpful in meeting this schedule.
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Thanks @KennethEnevoldsen for your review/comments! I've made changes to the repo/doc to address your comments, which definitely clarified things a lot. Let me know if they've addressed your comments (see responses below). Your other comments relate to the paper itself, which I'll address later.
- There seem to be two license files. I believe these can be combined to one
The R community has workflows and package structures that produce multiple license files, and these usually aren't combined into one; ollamar closely follows those conventions. For example, see the multiple license files in ggplot2 and dplyr, two of the most widely used R libraries.
- Substantial scholarly effort: I believe this is a borderline case. This package is quite young and provides a wrapper around ollama. I believe this in itself is quite valuable, but I am unsure if there is workflow in place to ensure maintainability and compatibility. This could e.g. be ensured using a scheduled test. I would love a section added to the paper on this. I would suggest in favor of the tutorials which are best kept in the documentation (as it will become outdated).
Glad you think it's valuable! There are several workflows in place. First, the package already has GitHub continuous integration and deployment, so it is tested on macOS/Linux/Windows whenever the repository changes (there are many test cases, which also run on every update). Second, because it's hosted on CRAN, the same tests run regularly on CRAN's servers to ensure maintainability and compatibility (see the regular test results here; note that on 2024-09-10 a few CRAN servers were down, so some Linux tests failed and the results page might not load). Note also that for a library to be hosted on CRAN, it has to satisfy many strict requirements regarding maintainability and compatibility (otherwise, CRAN will notify the author and remove the package).
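For readers unfamiliar with this kind of setup: a scheduled cross-platform R CMD check in GitHub Actions can be sketched roughly as below. This is an illustration only, not ollamar's actual workflow file; the file name and cron schedule are assumptions.

```yaml
# Hypothetical .github/workflows/scheduled-check.yaml (sketch, not the real config)
on:
  push:
  schedule:
    - cron: "0 6 * * 1"   # also run weekly, to catch breakage from upstream changes

jobs:
  R-CMD-check:
    strategy:
      matrix:
        os: [macos-latest, windows-latest, ubuntu-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: r-lib/actions/setup-r@v2
      - uses: r-lib/actions/setup-r-dependencies@v2
        with:
          extra-packages: any::rcmdcheck
      - uses: r-lib/actions/check-r-package@v2   # runs R CMD check, including the test suite
```

The `schedule` trigger is what catches compatibility drift even when the repository itself is unchanged.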
- Documentation: I believe the readability of the documentation can be improved. For instance, there is a header called "Notes", which seems like it should be reformatted. Fold out menus could also be used to allow for ease of navigation.
- Community guidelines: can't find any community guidelines
I've restructured the site so the home page focuses on installation and basic usage (also added a table of contents on the right). I've also added a Get started page that uses foldout menus for ease of navigation. The old "Notes" section no longer exists; it has been integrated into the rest of the documentation. There's a new Community section in the right sidebar that links to the contributing guide and code of conduct.
- Installation: I have created an issue here: Broken link in the readme hauselin/ollama-r#25
I've updated the installation instructions so they are clearer, especially for different OS.
thanks for the quick fixes!
"The R community has workflows/package structures that produce multiple licenses that usually aren't combined into one"
"Second, because it's hosted on R's CRAN, the same tests are also being run regularly on CRAN's servers to ensure maintainability and compatibility"
Thanks for the clarification. It has been a while since I did packages in R and they were only for internal projects so didn't know CRAN regularly ran tests (def. nice to know).
Just to clear up any worries I have: what happens if the Ollama community pushes an update with breaking changes? As I understand it, this would require an update from you. It might be ideal to add information about compatible versions so users can resolve compatibility issues.
I've made changes to the repo/doc to address your comments, which definitely clarified things a lot
I totally agree, makes navigation noticeably easier. Once the updates for the paper are in I will do a full run-through of code examples in the docs and run the tests.
Optional:
Just to clear up any worries I have: what happens if the Ollama community pushes an update with breaking changes? As I understand it, this would require an update from you. It might be ideal to add information about compatible versions so users can resolve compatibility issues.
I've added the versions that have been tested to the updated README (https://github.com/hauselin/ollama-r/blob/4fca9c0546b45e7ea998e600e8112de17e028340/README.md?plain=1#L32C1-L36C16). Let me know if this is good, @KennethEnevoldsen. Ollama should be relatively stable (86k stars and almost 7k forks), so it's unlikely they'll introduce breaking changes. But if they do, the official Python and JS libraries (and the hundreds of apps/tools already built on top of Ollama) would break too. ollamar's design philosophy is similar to those two libraries': it is modular and follows good software design practices, so it should not be difficult to update.
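One lightweight way for users to self-diagnose compatibility issues is to query the Ollama server's documented GET /api/version endpoint and compare the result against the versions listed in the README. A sketch in R using httr2; the helper function name is made up for illustration and assumes a local server on the default port:

```r
library(httr2)

# Hypothetical helper: report the version of the locally running Ollama server.
# Ollama's REST API exposes GET /api/version, which returns JSON such as
# {"version": "0.3.9"}.
ollama_version <- function(base_url = "http://localhost:11434") {
  resp <- request(base_url) |>
    req_url_path("/api/version") |>
    req_perform()
  resp_body_json(resp)$version
}

# Usage (requires a running Ollama server):
# ollama_version()   # e.g. "0.3.9"
```

A user hitting unexpected errors could then check whether their server version falls within the range the README reports as tested.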
Regarding your two optional comments:
To increase the visibility of the package, it might be worth checking whether the ollama folks want to add your library to their list of libraries. I understand if you would rather do this after the review.
It actually already is in their list of libraries (but in a different section). I think the section you referred to is for their official libraries (other libraries are listed lower down on the same page).
I note that you do not use the citation.cff file for citations. Would recommend adding it, but it is def. up to you.
There is a citation file here (again, it's located in this directory because I'm following R package development conventions). It is picked up by GitHub: if you go to the main page, you can see "Cite this repository" in the right sidebar. If I add a citation.cff, the automated tests flag it and leave this note: Found the following CITATION file in a non-standard place: CITATION.cff Most likely 'inst/CITATION' should be used instead.
👋 @hauselin, @elenlefoll, and @KennethEnevoldsen - just checking in to see how things are going with this review. Could you each post a short update here?
Also, @elenlefoll I don't see that you have created your checklist yet. Are you still able to conduct this review?
Thanks!
I am waiting for the updated article draft, made clear here:
Once the updates for the paper are in I will do a full run-through of code examples in the docs and run the tests.
Notably the missing state-of-the-field and design considerations, as these allow me to evaluate whether the code lives up to the intent.
However, I understand that @hauselin was waiting for the second review before making too many changes.
Yes @crvernon I've already revised the codebase based on @KennethEnevoldsen's suggestions. What's left are changes to the paper itself, and waiting for the second review before revising the paper makes more sense to me (but @crvernon, if you think it makes sense for me to revise the paper at this point too, let me know).
A couple of minor comments, having now read the paper:
"Locally deployed LLMs offer advantages in terms of data privacy, security, and customization, making them an attractive option for many users (Chan et al., 2024; Liu et al., 2024; Lytvyn, 2024; Shostack, 2024)" --> I was surprised not to see reproducibility mentioned anywhere in the paper as an advantage of locally deployed LLMs.
"To use Ollama, you must first download the model you want to use from https://ollama.com/li-53 brary." --> The wording of this sentence could be improved to make clear that the pull() function automatically downloads models from this website.
"ollamar fills a critical gap in the R ecosystem by providing a native interface to run locally deployed LLMs" --> I personally feel that this sentence is somewhat misleading since other R libraries do exist to run LLMs locally via R. Only rollama is mentioned a couple of sentences later.
I have now checked off most boxes of my review and only have a few concerns that largely overlap with Reviewer 1:
Substantial scholarly effort: The package offers many useful functions that are well documented but my understanding is that it is exclusively a wrapper around ollama and I am therefore unsure whether JOSS considers this to correspond to a sufficiently substantial scholarly effort. This might be a question for the editor(s) (@crvernon?) to clarify.
State of the field: The statement of need is about accessing locally deployed LLMs in R and gives the impression that only one other R package currently exists for this task. The package mentioned is also a wrapper for ollama, and it is not clear from the paper how the current package differs from it.
References: These will need to be updated once the state-of-the-field section has been fleshed out.
Everything else I'm happy with!
Hi @elenlefoll - Concerning the substantial scholarly effort question you raised: I let this submission go through pre-screening on the grounds of "...makes addressing research challenges significantly better (e.g., faster, easier, simpler)."
Thanks for clarifying, @crvernon. I'll address both reviewers' concerns in the next day or two. Thanks again, everyone!
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Hi @KennethEnevoldsen and @elenlefoll, thanks again for reviewing the library and paper! I've revised the paper and addressed all your comments. Briefly, I removed the example usage sections and added two new sections, "state of the field" and "design," which focus on addressing both of your sets of comments. Summary of your comments:
@KennethEnevoldsen
@elenlefoll
Thanks for the update @hauselin. I have gone over the revised version and updated my checklist given the changes.
I think the current comparisons cover what I would expect, and the state of the field reasonably outlines alternatives and their comparison with the current work. I appreciate the new section on design. I am more than happy to recommend this paper for acceptance.
Thanks again for taking the time/effort to review it (and for the pull requests), @KennethEnevoldsen!
Fixed a few typos in the paper.
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Submitting author: @hauselin (Hause Lin)
Repository: https://github.com/hauselin/ollama-r
Branch with paper.md (empty if default branch): joss
Version: v1.2.0.9000
Editor: @crvernon
Reviewers: @KennethEnevoldsen, @elenlefoll
Archive: Pending
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@KennethEnevoldsen & @elenlefoll, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @crvernon know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @KennethEnevoldsen
📝 Checklist for @elenlefoll