Closed editorialbot closed 1 year ago
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.88 T=0.07 s (1285.6 files/s, 81991.0 lines/s)
-------------------------------------------------------------------------------
Language files blank comment code
-------------------------------------------------------------------------------
Python 39 538 1246 2257
reStructuredText 38 344 139 403
TeX 1 11 0 120
JSON 1 0 0 106
Markdown 2 64 0 105
YAML 2 1 4 28
DOS Batch 1 8 1 26
make 1 4 7 9
-------------------------------------------------------------------------------
SUM: 85 970 1397 3054
-------------------------------------------------------------------------------
gitinspector failed to run statistical information for the repository
Wordcount for paper.md is 1008
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1109/tit.2011.2158885 is OK
- 10.1007/s10618-015-0435-9 is OK
- 10.1109/ita.2013.6502991 is OK
- 10.1109/icdm.2012.136 is OK
- 10.1007/978-3-031-79285-4 is OK
- 10.1016/j.softx.2022.100988 is OK
- 10.1016/j.softx.2021.100675 is OK
- 10.21105/joss.01731 is OK
- 10.5281/zenodo.4456181 is OK
- 10.18637/jss.v084.i08 is OK
- 10.1007/s11222-007-9033-z is OK
MISSING DOIs
- None
INVALID DOIs
- None
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@sara-02 and @zoometh - Thanks for agreeing to review this submission. This is the review thread for the paper. All of our communications will happen here from now on.
As you can see above, you each should use the command @editorialbot generate my checklist to create your review checklist. @editorialbot commands need to be the first thing in a new comment.
As you go over the submission, please check any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.
The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention openjournals/joss-reviews#4894
so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.
We aim for reviews to be completed within about 2-4 weeks. Please let me know if either of you require some more time. We can also use editorialbot (our bot) to set automatic reminders if you know you'll be away for a known period of time.
Please feel free to ping me (@danielskatz) if you have any questions/concerns.
(And to remind everyone, including myself: @zoometh will start their review in a week or so and @sara-02 will start their review in mid November)
Hello @danielskatz: I noticed that the version is listed as 0.0.1. Would it be possible to set this to 0.0.3? A few updates were made since the initial submission; these changes are minor, but are listed for transparency in our release log.
Also @sara-02 and @zoometh, if it helps at all regarding the code coverage statement, I included a reference visual in the contributor guide.
Thank you All!
@editorialbot set 0.0.3 as version
Done! version is now 0.0.3
👋 @zoometh - any update on your review? It would be helpful if you can at least create your checklist using the command @editorialbot generate my checklist. @editorialbot commands need to be the first thing in a new comment.
I'm a bit overwhelmed these days. I'll start this weekend
Hello @sara-02 and @zoometh - thank you for working on this review during an especially busy season.
I wanted to check in to see if there is anything I can do to facilitate the process. For instance, if it's helpful and appropriate, I can confirm a few of the "general checks" (e.g. whether we conduct animal research).
Thank you for your time!
@lucasmccabe - thanks, but as an author, you shouldn't be prompting the reviewers (even if done in a friendly and helpful way), while as editor, I probably should in this case.
👋 @sara-02 - is there any update on your review? It would be helpful if you can at least create your checklist using the command @editorialbot generate my checklist. @editorialbot commands need to be the first thing in a new comment.
👋 @zoometh - How is your review coming?
> @lucasmccabe - thanks, but as an author, you shouldn't be prompting the reviewers (even if done in a friendly and helpful way), while as editor, I probably should in this case.
Understood - my apologies!
👋 @zoometh - How is your review coming?
Slowly, due to a large amount of work. I'll continue this end of the week
@danielskatz @lucasmccabe I am done with reviews from my side, the library loaded smoothly, and the sample code also worked. I have just one minor comment for the documentation. Rest is all good from my side. The tests are also running smoothly.
Thanks @sara-02 !
Thank you for your review @sara-02! I have updated both the README and the narrative documentation to reflect your recommendation.
I'm on my way to actually do my review, sorry for the delay
👋 @lucasmccabe
In general: when generating random graphs (e.g. nx.fast_gnp_random_graph(100, 0.25)), it is useful to set a pseudo-random number seed (e.g. with random.seed()) so that the user can reproduce the results shown on the tutorial page.
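A minimal, standard-library-only sketch of the reviewer's point (NetworkX itself is not required to see the effect): re-seeding a pseudo-random generator reproduces the same sequence, which is what lets a reader match a tutorial's documented output.

```python
import random

# Re-seeding the generator reproduces the same pseudo-random sequence,
# so documented example output can be matched exactly by the reader.
random.seed(42)
first_run = [random.random() for _ in range(5)]

random.seed(42)
second_run = [random.random() for _ in range(5)]

print(first_run == second_run)  # True: identical sequences after re-seeding
```

The same principle applies to seeded graph generators such as nx.fast_gnp_random_graph, which accept a seed argument directly.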
FutureWarning: laplacian_matrix will return a scipy.sparse array instead of a matrix in Networkx 3.0
I and G are not instantiated. The following code snippets are not reproducible:
result = cosasi.single_source.netsleuth(I, G)
and
result = cosasi.multiple_source.netsleuth(I, G)
m = cosasi.utils.estimators.number_sources(I=I, number_sources=None, return_source_subgraphs=False, number_sources_method="eigengap")
gives:
<class 'networkx.utils.decorators.argmap'> compilation 12:4: FutureWarning: normalized_laplacian_matrix will return a scipy.sparse array instead of a matrix in Networkx 3.0.
/home/archesadmin/env/lib/python3.8/site-packages/cosasi/utils/estimators.py:25: FutureWarning: adjacency_matrix will return a scipy.sparse array instead of a matrix in Networkx 3.0.
A = nx.adjacency_matrix(I)
/home/archesadmin/env/lib/python3.8/site-packages/sklearn/manifold/_spectral_embedding.py:259: UserWarning: Graph is not fully connected, spectral embedding may not work as expected.
warnings.warn(
Same warnings for the following code snippets...
The already_run_simulation variable is not instantiated.
Hi @zoometh, thank you for the careful review. You've raised several important points.
I think I've addressed your comments completely in my updates - please let me know if I've missed anything.
Reproducibility of tutorial pages (e.g. Source Inference, When the Number of Sources is Unknown, Automatic Benchmarking) -
This is now addressed in the most recent documentation update. I went through the tutorial and ensured that all variable references are self-contained in that page. I also added command line outputs where reasonable so that users may check that their results line up.
Setting a random seed:
This is addressed in two ways:
In the tutorial, random graph generation is now seeded. NetworkX uses both random and np.random for number generation, so I seed with both.
There is also stochasticity in the diffusion processes - both in infection initialization and spread. I updated the library to expose a seed parameter in the contagion simulator; this is reflected in package version 0.0.4 and throughout the documentation.
In the subsequent section (Seeding Confirmation), I run a few checks to confirm that things are properly seeded in the updated version.
Warnings:
These are mostly future warnings coming directly from our dependencies (e.g. the FutureWarning about laplacian_matrix quoted above). They should not pose a risk to users, since dependency versions are pinned in the requirements file.
Nonetheless, these warnings will be a hassle to users, so I have addressed them in package version 0.0.4. Using this updated version, I went through the tutorial again and experienced no warnings, errors, etc.
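For context, a general-purpose Python pattern for keeping a known, harmless dependency warning away from users is to filter its category; this is an illustrative sketch of the mechanism, not necessarily how cosasi 0.0.4 implements it.

```python
import warnings

# With an "ignore" filter for FutureWarning in place, a simulated
# dependency warning is dropped before it ever reaches the user.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("ignore", FutureWarning)
    warnings.warn(
        "laplacian_matrix will return a scipy.sparse array", FutureWarning
    )

print(len(caught))  # 0: the warning was filtered out
```

Filtering narrowly by category (and, where possible, by module) avoids hiding unrelated warnings that users may genuinely need to see.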
Imports, seeds, etc. used in this section:
import networkx as nx, cosasi, random, numpy as np
seed = 42
random.seed(seed)
np.random.seed(seed)
Double-check seeding of random graph generation:
>>> G = nx.fast_gnp_random_graph(200, 0.15, seed=seed)
>>> H = nx.fast_gnp_random_graph(200, 0.15, seed=seed)
>>> G.edges == H.edges
True
Double-check seeding of diffusion simulation:
>>> contagion1 = cosasi.StaticNetworkContagion(G=G, model="si", infection_rate=0.01, fraction_infected=0.05, seed=seed)
>>> contagion1.forward(steps=10)
>>> contagion2 = cosasi.StaticNetworkContagion(G=G, model="si", infection_rate=0.01, fraction_infected=0.05, seed=seed)
>>> contagion2.forward(steps=10)
>>>
>>> contagion1.get_source() == contagion2.get_source()
True
>>> contagion1.get_infected_subgraph(step=5).edges == contagion2.get_infected_subgraph(step=5).edges
True
Hello @danielskatz: Would it be possible to set the version to 0.0.4? To address some of @zoometh's comments, I packaged an updated version.
These changes are listed for transparency in our release log.
@editorialbot set 0.0.4 as version
Done! version is now 0.0.4
Thanks @lucasmccabe I will finish my review this weekend
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@lucasmccabe: there is a recent open-source software package dealing with graph diffusion and source detection: https://arxiv.org/abs/2206.12327, whose source code and data are provided here: https://github.com/triplej0079/SLVAE. This reference is missing from the article's references.
@zoometh Yes, I became aware of this paper recently, as well, and will update mine to cite it this weekend.
That said, SLVAE fills a different niche than cosasi - SLVAE is an implementation of a single autoencoder-based strategy, whereas cosasi is a more general framework.
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1145/3534678.3539288 is OK
- 10.1109/tit.2011.2158885 is OK
- 10.1007/s10618-015-0435-9 is OK
- 10.1109/ita.2013.6502991 is OK
- 10.1109/icdm.2012.136 is OK
- 10.1007/978-3-031-79285-4 is OK
- 10.1016/j.softx.2022.100988 is OK
- 10.1016/j.softx.2021.100675 is OK
- 10.21105/joss.01731 is OK
- 10.5281/zenodo.4456181 is OK
- 10.18637/jss.v084.i08 is OK
- 10.1007/s11222-007-9033-z is OK
MISSING DOIs
- None
INVALID DOIs
- None
@zoometh The paper has been updated to cite SL-VAE. Thank you!
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
I have completed my review checklist. Everything is on track now. Well done @lucasmccabe!
@lucasmccabe - At this point could you:
- make a tagged release of the software, and list the version tag here
- create an archive of the reviewed software (e.g. on Zenodo), check that the archive metadata matches the paper, and post the archive DOI here
I can then move forward with acceptance of the submission, which will include a careful proofread.
@zoometh @sara-02 Thank you both for reviewing this paper! I appreciate your time and guidance.
@danielskatz Thank you for providing these next steps. A release with tag cosasi-0.0.4 is available here.
As for Zenodo archiving, I will request that my organization enable third-party access to the Zenodo application. I will follow up when this is done.
@danielskatz - A new release with tag v0.0.4-joss is available here. Please let me know if I've missed anything!
@editorialbot set v0.0.4-joss as version
Done! version is now v0.0.4-joss
@editorialbot set 10.5281/zenodo.7430558 as archive
Done! Archive is now 10.5281/zenodo.7430558
@editorialbot recommend-accept
I'll proofread this next
Submitting author: @lucasmccabe (Lucas McCabe)
Repository: https://github.com/lmiconsulting/cosasi/
Branch with paper.md (empty if default branch): joss
Version: v0.0.4-joss
Editor: @danielskatz
Reviewers: @sara-02, @zoometh
Archive: 10.5281/zenodo.7430558
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@sara-02 & @zoometh, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @danielskatz know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @zoometh
📝 Checklist for @sara-02