Closed: editorialbot closed this issue 6 months ago
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.88 T=0.14 s (853.9 files/s, 71243.5 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          75            736           1042           2736
Markdown                        12           1080              0           1667
JSON                             3              0              0            415
YAML                             8             44             20            318
TeX                              1              2              0            194
reStructuredText                13            151            134            173
TOML                             1             10              4            163
Jupyter Notebook                 2              0            668             82
make                             1              8              0             31
-------------------------------------------------------------------------------
SUM:                           116           2031           1868           5779
-------------------------------------------------------------------------------
gitinspector failed to run statistical information for the repository
Wordcount for paper.md is 1040
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.18653/v1/2020.acl-main.442 is OK
- 10.18653/v1/D19-1670 is OK
- 10.5281/zenodo.1212303 is OK
- 10.18653/v1/2021.naacl-demos.6 is OK
- 10.18653/v1/2023.latechclfl-1.13 is OK
- 10.3115/v1/D14-1162 is OK
MISSING DOIs
- 10.21437/interspeech.2019-2680 may be a valid DOI for title: SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition
- 10.1007/978-3-030-57321-8_21 may be a valid DOI for title: Improving short text classification through global augmentation methods
- 10.18653/v1/2020.emnlp-demos.16 may be a valid DOI for title: TextAttack: A Framework for Adversarial Attacks, Data Augmentation, and Adversarial Training in NLP
INVALID DOIs
- None
@sap218, @wdduncan This is the review thread for the paper. All of our communications will happen here from now on.
Please read the "Reviewer instructions & questions" in the first comment above. Please create your checklist by typing:
@editorialbot generate my checklist
As you go over the submission, please check any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.
The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention https://github.com/openjournals/joss-reviews/issues/6370 so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.
We aim for the review process to be completed within about 4-6 weeks, but please make a start well ahead of this: JOSS reviews are by their nature iterative, and any early feedback you may be able to provide to the author will be very helpful in meeting this schedule.
Hi all! Happy to have started this review - looking great, and I've successfully run the "Simple Example" from the repo.
Whilst I am looking into the code, I have some initial comments about the Paper:
Some little things I wanted to mention - I appreciate that the author acknowledges the contributors & the documentation is very good and thorough!
Happy to check these things off when addressed - please let me know if you have any questions (hope it all makes sense). Will continue with the review this week! :)
Thanks for the suggestions @sap218 I have fixed 1, 2, 3, 6.
re. 4) it is a summary so some degree of duplication is expected. If you feel it is a big problem I will reformulate the sentence, but I don't see it as problematic.
re. 5) I believe the summary should include both the relevant context as well as the package. If you have specific information you would like added to the summary, I would be more than happy to rewrite it. The statement of need is structured such that existing augmentation libraries are contrasted with augmenty; I believe this is the best way to highlight both the need for the package and what it tries to solve and what it does not.
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Hi @KennethEnevoldsen thank you for addressing my comments! I think the paper flows much better and I appreciate your comments on 4 & 5 - I've ticked off these points from my review checklist :)
One minor last thing: on line 67, a ref needs fixing (pennington-etal-2014-glove)
I've started reviewing. So far here is what I've done using python 3.11.8 and pip version 24.0. I am running code on a MacBook Pro with an Apple M1 Max processor and 64GB of RAM; OSX version Sonoma 14.3.1.
1) I created a virtual environment with python -m venv, and pip install augmenty worked. However, when I run pip install augmenty[all], it fails with zsh: no matches found: augmenty[all]. Any ideas why?
2) make test fails with the error make: *** [test] Abort trap: 6. Perhaps a memory error?
3) It should be clearer that the spacy models need to be downloaded; e.g. I had to run python -m spacy download en_core_web_sm to get en_core_web_sm. Also, I am not an expert using spacy, but to get your github code to run, I had to load en_core_web_sm using:
import en_core_web_sm
nlp = en_core_web_sm.load()
4) When I run
for doc in augmenty.docs(docs, augmenter=entity_augmenter, nlp=nlp):
    print(doc)
it fails with the error NameError: name 'docs' is not defined.
Am I using the correct version of python?
Hi @sap218
1) The pyproject.toml specifies all the required dependencies. The pyproject.toml is a more recent alternative to setup.py and setup.cfg (I believe it is the recommended format)
2) I agree the table could be better. Currently, there is sadly not a lot of support for tables in the sphinx ecosystem. I will see if I can make it clearer though.
Hi @wdduncan
1) When running on zsh, I believe you need to write pip install "augmenty[all]" (I will change the documentation to match)
2) To run make test, you will have to install the package using make install, which installs the development dependencies as well. I have tried to make this clearer in the contributing.md.
3) I agree that it should be clear that it needs to be downloaded. I have added a comment:
import spacy
import augmenty
nlp = spacy.load("en_core_web_md")
# if not installed run: python -m spacy download en_core_web_md
docs = nlp.pipe(["Augmenty is a great tool for text augmentation"])
entity_augmenter = augmenty.load(
    "ents_replace_v1", ent_dict={"ORG": [["spaCy"], ["spaCy", "Universe"]]}, level=1
)
for doc in augmenty.docs(docs, augmenter=entity_augmenter, nlp=nlp):
print(doc)
3.1) You shouldn't need to use this syntax. However, that seems to be a spacy issue. If you only have issues with it when installing augmenty let me know. (I suspect it might be spacy assuming something about your paths)
4) I believe you need to run the following line:
docs = nlp.pipe(["Augmenty is a great tool for text augmentation"])
If this step failed it could suggest an error in your spacy pipeline installation.
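A quick aside on point 4: spaCy's nlp.pipe returns a lazy generator, so docs must be assigned before the loop, and it is exhausted after a single pass. Here is a minimal plain-Python sketch of that behavior (fake_pipe is a toy stand-in for the pipeline, not spaCy itself):

```python
# Toy stand-in for nlp.pipe: a generator that yields "processed" texts lazily.
def fake_pipe(texts):
    for text in texts:
        yield text.upper()  # placeholder for spaCy's actual processing

docs = fake_pipe(["Augmenty is a great tool for text augmentation"])
print(list(docs))  # one full pass over the generator
print(list(docs))  # [] -- the generator is exhausted after the first pass
```

The same applies to the real nlp.pipe: if you want to iterate over the documents more than once, re-create docs (or materialize it with list(...)) first.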
Hi @KennethEnevoldsen
I should be done with my review by the end of this week/next week (functionality and automated tests are left)
:wave: @wdduncan – it looks like you've not started your review yet. Do you think you might be able to complete it in the next couple of weeks?
@editorialbot generate my checklist
@arfon I ran the @editorialbot generate my checklist command. See above. Where is the checklist generated?
@KennethEnevoldsen I ran the command pip install "augmenty[all]". This worked fine.
When I run
nlp = spacy.load("en_core_web_md")
docs = nlp.pipe(["Augmenty is a great tool for text augmentation"])
entity_augmenter = augmenty.load("ents_replace_v1", ent_dict = {"ORG": [["spaCy"], ["spaCy", "Universe"]]}, level=1)
for doc in augmenty.docs(docs, augmenter=entity_augmenter, nlp=nlp):
print(doc)
it returns
Augmenty is a great tool for text augmentation
Should it return:
spaCy Universe is a great tool for text augmentation.
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@KennethEnevoldsen I have tested functionality and all seems good! I can sign off the checklist item Software Paper: References once the reference on line 67 [pennington] is fixed 😄 Thank you for your patience!
Just a note @KennethEnevoldsen: I encountered a small setuptools problem w/ ubuntu (https://github.com/pypa/setuptools/issues/3269) when trying to run the tests (the fix was here: https://pip.pypa.io/en/stable/installation/#get-pip-py). This isn't a problem w/ your package at all, but I wanted to let you know in case you wanted to update the documentation...
BUT, I did run into a test issue: when I ran python -m pip install -e ".[tests]", it didn't include dacy, so I couldn't run the tests. To resolve this, I had to also run python -m pip install -e ".[da]" to finally be able to run python -m pytest. The documentation should reflect this, or there should be a fix to run the tests without dacy.
Thanks for the feedback @sap218 and @wdduncan. I will get back to this at the beginning of next week due to holidays.
@sap218 thanks for letting me know. Sounds like a very tricky problem; that is good to know about!
BUT, I did run into a test issue: when I ran python -m pip install -e ".[tests]", it didn't include dacy, so I couldn't run the tests. To resolve this, I had to also run python -m pip install -e ".[da]" to finally be able to run python -m pytest. The documentation should reflect this, or there should be a fix to run the tests without dacy.
I have fixed that. It is actually stated in the contributing guidelines (but wrongly stated in the FAQ). I have removed the FAQ section on running tests so there is no duplicate information.
@wdduncan thanks for catching the mistake. This seems to be caused by the fact that "ORG" is replaced with "GPE" (geopolitical entity) in the latest spaCy models. You should be able to fix it using:
nlp = spacy.load("en_core_web_md")
docs = nlp.pipe(["Augmenty is a great tool for text augmentation"])
# note the change in the following line:
entity_augmenter = augmenty.load("ents_replace_v1", ent_dict = {"GPE": [["spaCy"], ["spaCy", "Universe"]]}, level=1)
for doc in augmenty.docs(docs, augmenter=entity_augmenter, nlp=nlp):
print(doc)
# spaCy Universe is a great tool for text augmentation.
I have updated the documentation to match.
Hi @KennethEnevoldsen
It is actually stated in the contributing guidelines (but wrongly stated in the FAQ)
Can you link this in the FAQ or include it in the installation instructions in the README? I think it's quite difficult for users to find the test commands.
--
Sorry to weigh in on this issue, but ORG worked for me, and in fact the change to GPE doesn't seem to work:
import augmenty
import spacy
nlp = spacy.load("en_core_web_sm")
ent_dict = {
    "ORG": [["spaCy"], ["spaCy", "Universe"]],
    "GPE": [["London"], ["Berlin"]],
    "LOC": [["Paris"], ["Sydney"]],
}

for x in range(3):  # testing numerous outputs
    docs = nlp.pipe([
        "Augmenty is a great tool for text augmentation",
        "In New York, there are many tall buildings.",
    ])
    entity_augmenter = augmenty.load("ents_replace_v1", ent_dict=ent_dict, level=1)
    for doc in augmenty.docs(docs, augmenter=entity_augmenter, nlp=nlp):
        print(doc)
I get the output(s):
spaCy Universe is a great tool for text augmentation
In London, there are many tall buildings.
spaCy is a great tool for text augmentation
In Berlin, there are many tall buildings.
spaCy is a great tool for text augmentation
In Berlin, there are many tall buildings.
Which is great! Please let me know if I have misunderstood something, but I don't understand why the change from ORG to GPE would be expected to work. If I use GPE instead of ORG, I get the following outputs:
ent_dict = {
    "GPE": [["spaCy"], ["spaCy", "Universe"]],
    # "GPE": [["London"], ["Berlin"]],
}
....
Augmenty is a great tool for text augmentation
In spaCy, there are many tall buildings.
Augmenty is a great tool for text augmentation
In spaCy, there are many tall buildings.
Augmenty is a great tool for text augmentation
In spaCy Universe, there are many tall buildings.
Also, interestingly, GPE works in place of the NER tag LOC (location), but I'm not sure if this is intentional?
@sap218 I have added a link to the table in the readme to the CONTRIBUTING.md file
Oh right, so the issue might depend heavily on which version of the spaCy pipeline is being used (one of us might be using a newer or an older pipeline). As the first example is important, I have added a few more details. Hopefully these will allow the user to better understand what is going on. Feel free to say if it doesn't help.
import augmenty
import spacy
nlp = spacy.load("en_core_web_md")
# if not installed run: python -m spacy download en_core_web_md
doc = nlp("Augmenty is a great tool for text augmentation")
# check that the pipeline detects the entities (this is done by spaCy and is not 100% accurate)
print(doc.ents)
# (Augmenty,) is detected as an entity
doc.ents[0].label_
# 'GPE'. Depending on the model, the label might be different (e.g. 'ORG')
entity_augmenter = augmenty.load(
    "ents_replace_v1",
    ent_dict={"GPE": [["spaCy"], ["spaCy", "Universe"]]},  # use the label detected above
    level=1,
)
for augmented_doc in augmenty.docs([doc], augmenter=entity_augmenter, nlp=nlp):
print(augmented_doc)
Also, interestingly, GPE works in place of the NER tag LOC (location), but I'm not sure if this is intentional?
My guess is that New York is considered a GPE and not a location. This depends on the annotation guidelines used for the dataset; however, generally a location has no agency and e.g. can't impose rules ("New York [GPE] imposes a new rule requiring citizens to ..."), while a GPE can have agency. Sometimes it is also annotated depending on the context, i.e. "I was in New York [LOC] during the week" (no agency).
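The label-matching behavior discussed above can be illustrated without spaCy at all. The sketch below is a toy re-implementation of the idea behind ents_replace_v1 (not augmenty's actual code): replacements are keyed by entity label, so a span whose label has no entry in the dictionary is silently left unchanged, which is exactly the no-op observed when the dictionary key and the model's predicted label disagree.

```python
import random

def replace_entities(tokens, entities, ent_dict, seed=0):
    """Toy label-keyed entity replacement (illustrative only).

    tokens:   list of words
    entities: list of (start, end, label) spans over tokens
    ent_dict: label -> list of candidate replacement token sequences
    """
    rng = random.Random(seed)
    out, i = [], 0
    for start, end, label in sorted(entities):
        out.extend(tokens[i:start])
        if label in ent_dict:
            # label matches a key: swap the span for a random candidate
            out.extend(rng.choice(ent_dict[label]))
        else:
            # label has no entry: keep the span unchanged (the "no-op" case)
            out.extend(tokens[start:end])
        i = end
    out.extend(tokens[i:])
    return out

tokens = "Augmenty is a great tool".split()
ents = [(0, 1, "GPE")]  # pretend the model tagged "Augmenty" as GPE

print(replace_entities(tokens, ents, {"ORG": [["spaCy"]]}))
# ['Augmenty', 'is', 'a', 'great', 'tool'] -- key mismatch, nothing happens
print(replace_entities(tokens, ents, {"GPE": [["spaCy", "Universe"]]}))
# ['spaCy', 'Universe', 'is', 'a', 'great', 'tool']
```

This is why printing doc.ents and each span's label_ before building the ent_dict (as in the updated example) is the reliable way to pick the right key for whichever pipeline version you have installed.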
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Hi @KennethEnevoldsen @arfon Happy that my comments/questions have been addressed and so I've completed my checklist! 🎉
@arfon I ran the @editorialbot generate my checklist command. See https://github.com/openjournals/joss-reviews/issues/6370#issuecomment-2018565334. Where is the checklist generated?
Weird, it should have edited this comment, replacing it with the checklist. @wdduncan – would you mind trying again please?
Now it worked :)
@arfon I'm finished with my review.
@arfon, it seems like everything is ready here. Is anything missing from my side?
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@KennethEnevoldsen – could you please merge this PR? https://github.com/KennethEnevoldsen/augmenty/pull/224
It also looks like we're very close to being done here. I will circle back here next week, but in the meantime, please give your own paper a final read to check for any potential typos etc.
After that, could you make a new release of this software that includes the changes that have resulted from this review? Then, please make an archive of the software in Zenodo/figshare/another service and update this thread with the DOI of the archive. For the Zenodo/figshare archive, please make sure that:
Thanks, @arfon, I believe all of the things are now done:
@editorialbot set 10.5281/zenodo.11002423 as archive
Done! archive is now 10.5281/zenodo.11002423
@editorialbot recommend-accept
Attempting dry run of processing paper acceptance...
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.18653/v1/2020.acl-main.442 is OK
- 10.18653/v1/D19-1670 is OK
- 10.5281/zenodo.1212303 is OK
- 10.18653/v1/2021.naacl-demos.6 is OK
- 10.18653/v1/2023.latechclfl-1.13 is OK
- 10.3115/v1/D14-1162 is OK
MISSING DOIs
- No DOI given, and none found for title: The effectiveness of data augmentation in image cl...
- 10.21437/interspeech.2019-2680 may be a valid DOI for title: SpecAugment: A Simple Data Augmentation Method for...
- No DOI given, and none found for title: ScandEval: A Benchmark for Scandinavian Natural La...
- No DOI given, and none found for title: hetpandya/textgenie
- 10.1007/978-3-030-57321-8_21 may be a valid DOI for title: Improving short text classification through global...
- 10.18653/v1/2020.emnlp-demos.16 may be a valid DOI for title: TextAttack: A Framework for Adversarial Attacks, D...
- No DOI given, and none found for title: Natural language processing with Python. O’Reilly ...
- No DOI given, and none found for title: WordNet: A Lexical Database for English
- No DOI given, and none found for title: DaDebias/genda-lens
- No DOI given, and none found for title: DaCy: A Unified Framework for Danish NLP
INVALID DOIs
- None
:wave: @openjournals/dsais-eics, this paper is ready to be accepted and published.
Check final proof :point_right::page_facing_up: Download article
If the paper PDF and the deposit XML files look good in https://github.com/openjournals/joss-papers/pull/5264, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept
The proof looks good on my end
Submitting author: <!--author-handle-->@KennethEnevoldsen<!--end-author-handle--> (Kenneth C. Enevoldsen) Repository: https://github.com/KennethEnevoldsen/augmenty Branch with paper.md (empty if default branch): Version: joss Editor: <!--editor-->@arfon<!--end-editor--> Reviewers: @sap218, @wdduncan Archive: 10.5281/zenodo.11002422
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@sap218 & @wdduncan, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @arfon know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @sap218
📝 Checklist for @wdduncan