Closed by editorialbot 3 months ago
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.90 T=0.03 s (1227.2 files/s, 108476.5 lines/s)
-------------------------------------------------------------------------------
Language files blank comment code
-------------------------------------------------------------------------------
Julia 17 256 546 1302
XML 14 0 7 690
Markdown 4 146 0 437
YAML 4 3 4 95
TeX 1 9 0 90
TOML 1 2 0 37
-------------------------------------------------------------------------------
SUM: 41 416 557 2651
-------------------------------------------------------------------------------
Commit count by author:
379 Bittens
18 Max Bittens
1 baxmittens
Paper file info:
📄 Wordcount for paper.md is 806
✅ The paper includes a Statement of need section
License info:
✅ License found: MIT License (Valid open source OSI approved license)
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1137/100786356 is OK
- 10.1137/060663660 is OK
- 10.1016/j.jcp.2009.01.006 is OK
- 10.48550/arXiv.1509.01462 is OK
- 10.21105/joss.04300 is OK
- 10.1109/38.865875 is OK
- 10.21105/joss.05003 is OK
MISSING DOIs
- No DOI given, and none found for title: Quadrature and interpolation formulas for tensor p...
- 10.1007/s12665-012-1546-x may be a valid DOI for title: OpenGeoSys: an open-source initiative for numerica...
INVALID DOIs
- None
@baxmittens, @ziyiyin97, @dannys4, This is the review thread for the paper. All of our communications will happen here from now on.
Please read the "Reviewer instructions & questions" in the first comment above.
For @ziyiyin97 and @dannys4 - Both reviewers have checklists at the top of this thread (in that first comment) with the JOSS requirements. As you go over the submission, please check any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.
As you are probably already aware, The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention https://github.com/openjournals/joss-reviews/issues/6725 so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.
We aim for the review process to be completed within about 4-6 weeks but please make a start well ahead of this as JOSS reviews are by their nature iterative and any early feedback you may be able to provide to the author will be very helpful in meeting this schedule.
Thanks in advance and let me know if you have any questions!!
@baxmittens, in the meantime see comment above. Looks like a DOI is missing for a reference. Could you get that corrected? Thanks!
@dannys4 - When you get a chance, could you fill out this form here: https://reviewers.joss.theoj.org/reviewers/new? This would convey to the JOSS editors your willingness to review for JOSS in the future, based on your interests. Let me know what you think. Thanks!
@kanishkan91 - the form should be filled out. Thanks!
The software paper largely looks great and very clear; there are just a few (admittedly narrow) nitpicks I have:
- Distributions.jl should probably be properly cited in the references since it is referenced in the work (citation here)
- Sobol' should have the apostrophe at the end for proper method attribution.
Less a comment and more of a question for @kanishkan91: the paper states "special attention was paid to implement allocation-free in-place variants of all necessary math operators for all output datatypes such as `VTUFile` or `XDMF3File`". Is this a "performance claim" as suggested in the checklist and, if so, how rigorously should I try to check this?
On the examples side, I was having trouble getting the "plug and play" provided example to work.
- A minor detail is that the docs for `altered_StochasticOGSModelParams.xml` do not match the actual file in the test repo, and I wasn't sure if I was supposed to change anything to make the mean and variance match the documented example.
- I also wasn't sure if I was supposed to be able to generate the plots provided in the documentation (and if so, how).
- It only started working for me once I used the version of `DistributedSparseGrids` on the `main` branch. It seems like it was known that `PlotlyJS` has some issues, and this was preventing my functionality (Julia 1.10.3). This may be above my paygrade as a reviewer, but there are definitely `compat`s missing for the dependencies that would prevent some of these issues.
- Once I got that figured out, it looks like there's some pathing issue going on. I get to `start!(ogsuqasg)` and then I get a bunch of output like
julia> start!(ogsuqasg)
[ Info: Starting 65 simulatoin calls
From worker 2: 1_1_1_1Progress: 100%|███████████████████████████████████████████████████████████████████████████████████████████████| Time: 0:00:00
From worker 3: 1_2_1_1
From worker 2: ogs call @[0.0, 0.0]
From worker 3: ogs call @[0.0, -1.0]
...
ERROR: TaskFailedException
nested task error: On worker 2:
IOError: could not spawn `/path/to/ogs/bin/ogs -o ./Res/1_1_1_1 ./Res/1_1_1_1/point_heat_source_2D.prj`: no such file or directory (ENOENT)
Stacktrace:
[1] _spawn_primitive
@ ./process.jl:128
[2] #784
@ ./process.jl:139 [inlined]
[3] setup_stdios
@ ./process.jl:223
[4] _spawn
@ ./process.jl:138 [inlined]
[5] _spawn
@ ./process.jl:166
[6] #run#798
@ ./process.jl:479
[7] run
@ ./process.jl:477 [inlined]
[8] fun
@ ~/.julia/dev/OpenGeoSysUncertaintyQuantification/test/ex1/user_functions.jl:46
[9] fun
@ ~/.julia/dev/OpenGeoSysUncertaintyQuantification/test/ex1/user_functions.jl:41
[10] #invokelatest#2
@ ./essentials.jl:892
[11] invokelatest
@ ./essentials.jl:889
[12] #110
@ ~/.julia/juliaup/julia-1.10.3+0.aarch64.apple.darwin14/share/julia/stdlib/v1.10/Distributed/src/process_messages.jl:287
[13] run_work_thunk
@ ~/.julia/juliaup/julia-1.10.3+0.aarch64.apple.darwin14/share/julia/stdlib/v1.10/Distributed/src/process_messages.jl:70
[14] #109
@ ~/.julia/juliaup/julia-1.10.3+0.aarch64.apple.darwin14/share/julia/stdlib/v1.10/Distributed/src/process_messages.jl:287
Stacktrace:
[1] remotecall_fetch(::Function, ::Distributed.Worker, ::StaticArraysCore.SVector{…}, ::Vararg{…}; kwargs::@Kwargs{})
@ Distributed ~/.julia/juliaup/julia-1.10.3+0.aarch64.apple.darwin14/share/julia/stdlib/v1.10/Distributed/src/remotecall.jl:465
[2] remotecall_fetch(::Function, ::Distributed.Worker, ::StaticArraysCore.SVector{2, Float64}, ::Vararg{Any})
@ Distributed ~/.julia/juliaup/julia-1.10.3+0.aarch64.apple.darwin14/share/julia/stdlib/v1.10/Distributed/src/remotecall.jl:454
[3] remotecall_fetch(::Function, ::Int64, ::StaticArraysCore.SVector{2, Float64}, ::Vararg{Any}; kwargs::@Kwargs{})
@ Distributed ~/.julia/juliaup/julia-1.10.3+0.aarch64.apple.darwin14/share/julia/stdlib/v1.10/Distributed/src/remotecall.jl:492
[4] remotecall_pool(::Function, ::Function, ::Distributed.WorkerPool, ::StaticArraysCore.SVector{…}, ::Vararg{…}; kwargs::@Kwargs{})
@ Distributed ~/.julia/juliaup/julia-1.10.3+0.aarch64.apple.darwin14/share/julia/stdlib/v1.10/Distributed/src/workerpool.jl:126
[5] remotecall_pool
@ ~/.julia/juliaup/julia-1.10.3+0.aarch64.apple.darwin14/share/julia/stdlib/v1.10/Distributed/src/workerpool.jl:123 [inlined]
[6] remotecall_fetch
@ ~/.julia/juliaup/julia-1.10.3+0.aarch64.apple.darwin14/share/julia/stdlib/v1.10/Distributed/src/workerpool.jl:232 [inlined]
[7] (::DistributedSparseGrids.var"#41#42"{…})()
@ DistributedSparseGrids ~/.julia/dev/DistributedSparseGrids/src/DistributedSparseGrids.jl:353
...and 64 more exceptions.
Stacktrace:
[1] sync_end(c::Channel{Any})
@ Base ./task.jl:448
[2] macro expansion
@ ./task.jl:480 [inlined]
[3] distributed_fvals!(asg::AdaptiveHierarchicalSparseGrid{…}, cpts::Vector{…}, fun::typeof(fun), worker_ids::Vector{…})
@ DistributedSparseGrids ~/.julia/dev/DistributedSparseGrids/src/DistributedSparseGrids.jl:344
[4] distributed_init_weights_inplace_ops!(asg::AdaptiveHierarchicalSparseGrid{…}, cpts::Vector{…}, fun::typeof(fun), worker_ids::Vector{…})
@ DistributedSparseGrids ~/.julia/dev/DistributedSparseGrids/src/DistributedSparseGrids.jl:423
[5] macro expansion
@ ./timing.jl:279 [inlined]
[6] start!(ogsuqasg::OGSUQASG, refinetohypercube::Bool)
@ OpenGeoSysUncertaintyQuantification ~/.julia/dev/OpenGeoSysUncertaintyQuantification/src/OpenGeoSysUncertaintyQuantification.jl:487
[7] start!(ogsuqasg::OGSUQASG)
@ OpenGeoSysUncertaintyQuantification ~/.julia/dev/OpenGeoSysUncertaintyQuantification/src/OpenGeoSysUncertaintyQuantification.jl:473
[8] top-level scope
@ REPL[18]:1
Some type information was truncated. Use show(err)
to see complete types.
where I get
shell> ls Res/ 1_1_1_1 1_5_1_10 2_1_3_1 2_4_1_2 3_1_4_1 4_1_2_1 4_2_6_3 5_1_6_1 1_2_1_1 1_5_1_12 2_2_1_1 2_4_1_4 3_2_2_1 4_1_4_1 4_2_8_1 5_1_8_1 1_2_1_3 1_5_1_14 2_2_1_3 2_4_1_6 3_2_2_3 4_1_6_1 4_2_8_3 1_3_1_2 1_5_1_16 2_2_3_1 2_4_1_8 3_2_4_1 4_1_8_1 5_1_10_1 1_3_1_4 1_5_1_2 2_2_3_3 2_4_3_2 3_2_4_3 4_2_2_1 5_1_12_1 1_4_1_2 1_5_1_4 2_3_1_2 2_4_3_4 3_3_2_2 4_2_2_3 5_1_14_1 1_4_1_4 1_5_1_6 2_3_1_4 2_4_3_6 3_3_2_4 4_2_4_1 5_1_16_1 1_4_1_6 1_5_1_8 2_3_3_2 2_4_3_8 3_3_4_2 4_2_4_3 5_1_2_1 1_4_1_8 2_1_1_1 2_3_3_4 3_1_2_1 3_3_4_4 4_2_6_1 5_1_4_1
For the record, I installed `OpenGeoSysUncertaintyQuantification` via `Pkg.dev(...)`, so this is on the main branch.
The software paper largely looks great and very clear; there are just a few (admittedly narrow) nitpicks I have:
- Distributions.jl should probably be properly cited in the references since it is referenced in the work (citation here)
- Sobol' should have the apostrophe at the end for proper method attribution.
Less a comment and more of a question for @kanishkan91: the paper states "special attention was paid to implement allocation-free in-place variants of all necessary math operators for all output datatypes such as `VTUFile` or `XDMF3File`". Is this a "performance claim" as suggested in the checklist and, if so, how rigorously should I try to check this?
Hello @dannys4,
thank you for reviewing this software; I appreciate it. I will go through your comments and alter my code appropriately. Once I have fixed things accordingly, I will write here again.
I added the citation and apostrophe you've suggested.
The performance claim can be tested like this:
import AltInplaceOpsInterface: add!
import LinearAlgebra: mul!
using BenchmarkTools
using XDMFFileHandler # provides XDMF3File
xdmf = XDMF3File("./Res/1_1_1_1/PointHeatSource_quarter_002_2nd.xdmf");
tmp = deepcopy(xdmf);
@btime $xdmf + $xdmf;
# 878.125 μs (1472 allocations: 3.13 MiB)
@btime add!($xdmf, $tmp)
# 164.500 μs (0 allocations: 0 bytes)
@btime $xdmf * $xdmf;
# 599.958 μs (1472 allocations: 3.13 MiB)
@btime mul!($xdmf, $tmp, $tmp)
# 60.792 μs (0 allocations: 0 bytes)
At the moment, I use a nasty thing called AltInplaceOpsInterface.jl for the in-place operations, since Julia does not seem to support broadcasting for nested data types and may never do so; see e.g. https://discourse.julialang.org/t/efficient-weighted-sum-for-arbitrary-data-types/113992
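To make the broadcasting limitation concrete, here is a minimal, self-contained sketch (the `MyFile` type is hypothetical and merely stands in for a nested output type such as `XDMF3File`): broadcast fusion does not reach into a struct holding nested arrays, so an explicit `add!` that mutates the leaf arrays in place is defined instead.

```julia
# Hypothetical sketch of the in-place-operator idea; MyFile stands in
# for a nested output type such as XDMF3File.
struct MyFile
    fields::Dict{String, Vector{Float64}}
end

# Allocating variant: builds a new Dict and new vectors on every call.
Base.:+(a::MyFile, b::MyFile) =
    MyFile(Dict(k => a.fields[k] .+ b.fields[k] for k in keys(a.fields)))

# In-place variant: mutates `a` and allocates nothing.
function add!(a::MyFile, b::MyFile)
    for (k, v) in a.fields
        v .+= b.fields[k]   # broadcasting works here, on the leaf arrays
    end
    return a
end

a = MyFile(Dict("temperature" => [1.0, 2.0]))
b = MyFile(Dict("temperature" => [0.5, 0.5]))
add!(a, b)
a.fields["temperature"]  # now [1.5, 2.5]
```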
@dannys4 , @baxmittens looks like you guys are communicating through the issues, which is great!
@dannys4 On your question above: if a specific claim is made in the paper regarding processing for a specific data type, there may be two things to check.
But again, it seems like @baxmittens has responded to that already!
@dannys4
- A minor detail is that the docs for `altered_StochasticOGSModelParams.xml` do not match the actual file in the test repo and I wasn't sure if I was supposed to change anything to make the mean and variance match the documented example.
I fixed that
- I also wasn't sure if I was supposed to be able to generate the plots provided in the documentation (and if so, how).
I updated `./test/ex1/start.jl`. In lines 22-30, you can find
# import Pkg
# Pkg.add("PlotlyJS")
# Pkg.add("DistributedSparseGridsPlotting")
using PlotlyJS
using DistributedSparseGridsPlotting
display(PlotlyJS.plot([PlotlyJS.scatter3d(ogsuqasg.asg), surface_inplace_ops(ogsuqasg.asg, 20, x->maximum(x["temperature_interpolated"]))], PlotlyJS.Layout(title="ASG-Surrogate response function ASG(x) - max. temp")))
display(PlotlyJS.plot([PlotlyJS.scatter3d(asg_expval), surface_inplace_ops(asg_expval, 50, x->maximum(x["temperature_interpolated"]))], PlotlyJS.Layout(title="ASG(x)*pdf(x) - max. temp")))
display(PlotlyJS.plot([PlotlyJS.scatter3d(asg_varval), surface_inplace_ops(asg_varval, 50, x->maximum(x["temperature_interpolated"]))], PlotlyJS.Layout(title="(ASG(x)-𝔼(x))^2*pdf(x) - max. temp")))
- It only started working for me once I used the version of `DistributedSparseGrids` on the `main` branch. It seems like it was known that `PlotlyJS` has some issues, and this was preventing my functionality (Julia 1.10.3). This may be above my paygrade as a reviewer, but there are definitely `compat`s missing for the dependencies that would prevent some of these issues.
I agree. Recently, PlotlyJS stopped working (at least for me) when it is included on remote workers. I threw PlotlyJS out, since I plan to switch to Makie and, in addition, I don't like having these heavy graphics libraries as a dependency anyway. I issued a new version; this should be fixed, I hope.
- Once I got that figured out, it looks like there's some pathing issue going on. I get to `start!(ogsuqasg)` and then I get a bunch of output like
You have to change the `simcall` from `/path/to/ogs/bin/ogs` to the path of your OGS installation in the `altered_StochasticOGSModelParams.xml`.
https://www.opengeosys.org/docs/userguide/basics/introduction/
Installing OGS can be a bit cumbersome. I can provide some help if you need it.
Thanks for the quick response!
@dannys4
Thanks for the quick response!
No problem. It's me who has to say thank you!
- I'm not seeing that the file is generated/changed from what's in the GitHub repo. In particular, I'm getting that this file/line is unchanged, so it tries to call OGS on your system when doing the example
- Sorry I totally overlooked the path to the binary. I've fixed this and now the only issue seems to be the one above, at least for the time being (I'll try it from scratch again once that gets addressed)
With that example, I wanted to emphasize that the helper functions can generate XML templates that can be altered accordingly if the user wants to. I figured that this was a rather bad idea.
I altered the three files generate_stoch_params_file.jl, generate_stoch_model.jl, and start.jl. But you can also just run run_ex1.jl, which calls the three files, consecutively.
I deleted this whole `altered_` nonsense since it only caused confusion (sorry about that).
The needed changes are now implemented in code in the three files above and the results are stored in StochasticParameters.xml, StochasticOGSModelParams.xml, and SampleMethodParams.xml.
I updated the documentation, too. See Usage.
The only things to note are that you still have to alter the `simcall` in generate_stoch_model.jl (or have the ogs binary in your `PATH`), and that if you want to create the plots, you have to install `PlotlyJS` and `DistributedSparseGridsPlotting.jl`:
import Pkg
Pkg.add("PlotlyJS")
Pkg.add("DistributedSparseGridsPlotting")
I hope this will fix things now. If it still does not run, I will take care of it as soon as possible. But for me it does work on different systems.
You already have the dev version, so a git pull in the directory should be sufficient. If we are through here, I will fix the compats and issue new versions of `OpenGeoSysUncertaintyQuantification` and all the underlying packages.
Anyway, thank you for your patience and efforts.
@dannys4 , @ziyiyin97 and @baxmittens - Looks like this review is moving forward. Could you give me a short update as to where things are at your end? No rush obviously! Just wanted to check in. Thanks everyone!
Hi @kanishkan91
I'm looking forward to receiving more input from the reviewers, but as you said, there's no need to rush.
Greetz max
Hi, thanks for the follow up. I will take a closer look this week.
I'm trying to finish up the rest of this review as I'm able, but I feel like I'm pretty close. A few points of note:
@baxmittens : I updated the documentation, too.
The new usage example looks and feels much smoother overall, and I think it will work well. The only point of criticism I have is that at one point you require `DistributedSparseGridsPlotting` in `start.jl` instead of `DistributedSparseGrids` (or you use the right one), and don't import the right keywords for `surface_inplace_ops`, which causes an error. Otherwise, once I fix this, it seems to at least work on `main`.
The last two points of the review are on documentation/testing:
- There's plenty of documentation of how to write these `xdmf` files to memory, but I couldn't figure out from these docs how to actually load things back in. If I want to serialize the mean (or the sparse grid!) I get from this and use it in future computations, I feel like this would be not just helpful, but vital.
- I'm a little bit queasy on the testing side of things; there are a lot of keywords exported and very few called in the tests. Additionally, since one of them depends on the now-destroyed `ex1/altered...` files, the tests don't seem to run as written on `main`, and it doesn't look like the GitHub workflow automatically runs these tests. I'm not a stickler on the last point (since running the tests is very easy in Julia), but they should probably be adjusted and, if possible, expanded to include the functions that wouldn't be tested in the direct dependencies (e.g., ignoring the integration and interpolation presumably tested in `DistributedSparseGrids`).

Once these get addressed and the paper is rerendered, I'll test on the general registry version, reread, and consider the review done.
As a remark that in no way influences my reviewing, I might suggest that you look into working with extensions in Julia as a way to work around the problem with plotting dependencies; it's not a perfect solution, but it's the best solution I know of for handling these issues.
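For reference, the extension mechanism mentioned above (available since Julia 1.9) amounts to declaring the plotting package as a weak dependency. A hypothetical sketch of the relevant Project.toml entries (the extension name is made up, and the UUID must be copied from the General registry):

```toml
[weakdeps]
# placeholder UUID; copy the real one for PlotlyJS from the General registry
PlotlyJS = "00000000-0000-0000-0000-000000000000"

[extensions]
# ext/OGSUQPlotlyExt.jl is compiled and loaded only when the user
# has PlotlyJS installed and loads it alongside this package
OGSUQPlotlyExt = "PlotlyJS"
```

The plotting code would then live in a hypothetical `ext/OGSUQPlotlyExt.jl` module, so `PlotlyJS` never becomes a hard dependency of the package itself.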
Hey @dannys4,
thank you again for your thorough review. I will work on fixing things as soon as possible, which will be around the middle of next week, I think, as I am traveling for a long weekend.
- There's plenty of documentation of how to write these `xdmf` files to memory, but I couldn't figure out from these docs how to actually load things back in. If I want to serialize the mean (or the sparse grid!) I get from this and use it in future computations, I feel like this would be not just helpful, but vital.
I can read xdmf files with
xdmf = XDMFFileHandler.XDMF3File("path/to/xdmf.xdmf")
I will add a description. The use of xdmf instead of vtu is fairly new in OpenGeoSys. I am afraid I don't really get the rest of your point: I don't see why I should want to serialize the mean or the sparse grid. Let me try to explain how it is meant to work.
The first time `start.jl` is run, the `Res` folder gets populated, according to the experimental design of a sparse grid or some other method, with individual realizations of OGS projects at their specific locations in the state space. OGS is then called for each of them in a further step, and the postprocessing results are generated and saved in the individual folders alongside the project data. Stochastic postprocessing can be conducted once all samples are generated and loaded into memory.
The next time `start.jl` is run, OGS will not be called again for each realization (unless it is parameterized differently in the `user_functions.jl` file); only the postprocessing results will be loaded into memory, which is fairly fast. So, if I need a result, I can load it from an xdmf file; if I need the sparse grid in memory for generating more post-processing results, I just run `start.jl` again.
Does that make sense? Or did I get you wrong?
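The run-once behavior described above can be sketched roughly like this (the file and function names are hypothetical, not the actual implementation): the expensive solver call is skipped whenever a realization's result file already exists on disk.

```julia
# Hypothetical sketch of the caching behavior described above.
calls = Ref(0)  # counts how often the "solver" actually runs

# stands in for the ogs simcall; writes a result file into resdir
fake_ogs(resdir) = (mkpath(resdir); write(joinpath(resdir, "result.txt"), "42"); calls[] += 1)

function result_for(resdir::AbstractString, runner)
    resultfile = joinpath(resdir, "result.txt")
    isfile(resultfile) || runner(resdir)    # solver only on a cache miss
    return read(resultfile, String)         # fast path on every later run
end

dir = joinpath(mktempdir(), "1_1_1_1")
r1 = result_for(dir, fake_ogs)  # first run: solver is called
r2 = result_for(dir, fake_ogs)  # second run: loaded from disk
```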
- I'm a little bit queasy on the testing side of things; there are a lot of keywords exported and very few called in the tests. Additionally, since one of them depends on the now-destroyed `ex1/altered...` files, the tests don't seem to run as written on `main`, and it doesn't look like the GitHub workflow automatically runs these tests. I'm not a stickler on the last point (since running the tests is very easy in Julia), but they should probably be adjusted and, if possible, expanded to include the functions that wouldn't be tested in the direct dependencies (e.g., ignoring the integration and interpolation presumably tested in `DistributedSparseGrids`).
I did not plan to run the examples in the tests, since this would require an OGS binary. I will think about how to resolve this issue.
As a remark that in no way influences my reviewing, I might suggest that you look into working with extensions in Julia as a way to work around the problem with plotting dependencies; it's not a perfect solution, but it's the best solution I know of for handling these issues.
Thank you for your suggestion. I will have a look into it. Maybe this fixes things for me.
I will add a description. The use of xdmf instead of vtu is fairly new in OpenGeoSys.
Sounds great!
Let me try to explain how it is meant to be.
I see. I think I was a little confused about the end-use and this explains it very well.
I did not plan to run the examples in the tests, since this would require an OGS binary. I will think about how to resolve this issue.
I don't really see a good solution here; I don't want to get too involved in the nitty-gritty software design, but is there any way you can extract the binary interface such that you can test that it has the correct behavior with output from a dummy OGS process (i.e., record OGS output from a "reference" session that you know works and somehow use that to test the behavior of how OGSUQ.jl responds)? I understand if this ends up not being viable, it's just the main point of contention when finishing the checklist.
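One way to realize this "dummy OGS" idea without an OGS installation might look like the following (a sketch only; all file names here are made up): record reference output once, then point the test's simcall at a tiny script that copies it instead of simulating.

```shell
# Sketch: a mock `ogs` binary for tests (all paths hypothetical).
# 1. Pre-recorded reference output, standing in for a real OGS run.
mkdir -p reference_output
echo "recorded xdmf payload" > reference_output/result.xdmf

# 2. The mock binary mimics the call `ogs -o <outdir> <project.prj>`
#    but only copies the recorded output into <outdir>.
cat > mock_ogs.sh <<'EOF'
#!/bin/sh
# naive argument handling: $1 = -o, $2 = <outdir>, $3 = <project.prj>
outdir="$2"
prj="$3"
mkdir -p "$outdir"
cp reference_output/*.xdmf "$outdir"/
echo "mock ogs finished: $prj -> $outdir"
EOF
chmod +x mock_ogs.sh

# 3. Tests would set the simcall to ./mock_ogs.sh and run as usual.
./mock_ogs.sh -o Res/1_1_1_1 point_heat_source_2D.prj
```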
Hi @dannys4
I think I finally fixed this issue.
You can now install the newest version of OpenGeoSysUncertaintyQuantification:
]add OpenGeoSysUncertaintyQuantification@0.1.4
(don't forget to free other packages like DistributedSparseGrids, like I did...)
I added an `OpenGeoSysUncertaintyQuantification.install_ogs()` function, which runs a shell script that installs OGS via pip3 (see the OGS docs) in your `test` folder. However, you have to have a Python version < 3.12 installed (i.e., linked as `python3` and `pip3`, which are called by the script).
If you run
]test OpenGeoSysUncertaintyQuantification
the example, now located in `test/Examples/ASG_Point_Heat_Source`, is copied to a temp folder in `test` and a reduced sampling with fewer collocation points is performed. I included a GitHub Action which runs `runtests.jl` on each push and pull request. This works just fine.
You can also navigate to `test/Examples/ASG_Point_Heat_Source` and run
include("run_ex1.jl")
to run the full example. If you want to generate the plots, you have to uncomment the last 10 lines or so in `start.jl`, and you have to have `PlotlyJS` and `DistributedSparseGridsPlotting` installed.
I altered the documentation accordingly. I tested this on two different machines; it worked both times.
to run the full example. If you want to generate the plots, you have to uncomment the last 10 lines or so in `start.jl`, and you have to have `PlotlyJS` and `DistributedSparseGridsPlotting` installed.
Before running the full example, it is beneficial to increase the number of workers (`n_workers` in `generate_stoch_model.jl`) to the number of cores of your machine.
@baxmittens , @ziyiyin97 and @dannys4 Just wanted to check in again since it seems there has been a lot of action on this! It seems the review is progressing well. Thank you all for your efforts!
@ziyiyin97 I saw that you already posted comments on the paper and are working on the code comments. Is that correct?
@dannys4 It seems your checklist is almost complete except for the items regarding automated tests and documentation? Is there something you need from the author regarding the same?
@baxmittens Once you complete addressing all comments, could you post here and tag me and the reviewers?
Yes, I am working on the code now and will be done in 1 or 2 days. Sorry for the delay.
@ziyiyin97 No worries at all!
@baxmittens @ziyiyin97 and @dannys4 - Thanks for keeping the review going. I went through all comments and checklists thus far, and it seems there are only two issues remaining:
From both @ziyiyin97 and @dannys4 - Regarding functionality: it seems, based on the discussion, that all functionality claims are addressed, though correct me if I'm wrong.
From @dannys4 - The automated tests: I think this is addressed as part of the responses to @ziyiyin97's comments, but I could be wrong.
To the reviewers- Once you are both satisfied with all responses and finish up your checklists, I can take a read through the paper and the code and indicate to @baxmittens regarding the next steps.
Thank you once again for this fast and insightful review! Do let me know if you have any questions.
Just finished everything on my checklist. Good work from the authors.
Just finished my checklist as well. The tests work when I download it from the general registry and the examples also seem to work (I had to use the low-res version; thanks for including that!). The new examples do a great job of laying out exactly what needs to happen and (I think) will greatly benefit users. @kanishkan91 Let me know if anything else needs to get addressed on my end; otherwise, it looks great and I look forward to the paper.
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1137/100786356 is OK
- 10.1137/060663660 is OK
- 10.1016/j.jcp.2009.01.006 is OK
- 10.48550/arXiv.1509.01462 is OK
- 10.21105/joss.04300 is OK
- 10.1109/38.865875 is OK
- 10.21105/joss.05003 is OK
- 10.5281/zenodo.2647458 is OK
MISSING DOIs
- No DOI given, and none found for title: Quadrature and interpolation formulas for tensor p...
- 10.1007/s12665-012-1546-x may be a valid DOI for title: OpenGeoSys: an open-source initiative for numerica...
- No DOI given, and none found for title: Preliminary safety analyses in the high-level radi...
- No DOI given, and none found for title: Uncertainties and robustness with regard to the sa...
INVALID DOIs
- None
👋 @baxmittens - I went through the paper one more time and we are almost there! A few cleanups may be necessary. It seems no DOIs could be found for 4 references (see above).
After that is just setting up the archive for your new release.
We want to make sure the archive has the correct metadata that JOSS requires. This includes a title that matches the paper title and a correct author list.
So here is what we have left to do:
[x] Conduct a GitHub release of the current reviewed version of the software and archive the reviewed software in Zenodo or a similar service (e.g., figshare, an institutional repository). Please ensure that the software archive uses the same license as the license you have posted on GitHub.
[x] Check the archival deposit (e.g., in Zenodo) to ensure it has the correct metadata. This includes the title (should match the paper title) and author list (make sure the list matches the people and order in your paper). You may also add the authors' ORCID.
[x] Please respond with the 1) version and 2) DOI of the archived version here
I can then move forward with accepting the submission.
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.48550/arXiv.1509.01462 is OK
- 10.21105/joss.04300 is OK
- 10.21105/joss.05003 is OK
- 10.1007/s12665-012-1546-x is OK
- 10.5281/zenodo.2647458 is OK
- 10.5194/adgeo-56-67-2021 is OK
- 10.1007/s12665-023-11346-8 is OK
MISSING DOIs
- None
INVALID DOIs
- None
Hi @kanishkan91 ,
I've added the DOIs of the citations, released a new version and made a Zenodo archive from that.
https://github.com/baxmittens/OpenGeoSysUncertaintyQuantification.jl/releases/tag/v0.1.6 https://zenodo.org/records/11923517
That should be it, right?
Greetz Max
@editorialbot set v0.1.6 as version
Done! version is now v0.1.6
@editorialbot set 10.5281/zenodo.11923516 as archive
Done! archive is now 10.5281/zenodo.11923516
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@editorialbot recommend-accept
@baxmittens I have recommended this for acceptance now. The AEiC in this submission track will review shortly and if all goes well this will go live soon! Big thank you to @dannys4 and @ziyiyin97 for reviewing! JOSS is volunteer run and relies heavily on researchers such as yourself.
Submitting author: @baxmittens (Maximilian Bittens) Repository: https://github.com/baxmittens/OpenGeoSysUncertaintyQuantification.jl Branch with paper.md (empty if default branch): Version: v0.1.6 Editor: @kanishkan91 Reviewers: @ziyiyin97, @dannys4 Archive: 10.5281/zenodo.11923516
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@ziyiyin97 & @dannys4, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @kanishkan91 know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @ziyiyin97
📝 Checklist for @dannys4