Closed by harpolea 4 years ago
Thanks @harpolea! I'll work on these and keep you updated on their progress.
Hi, just checking in after reviewing this (openjournals/joss-reviews#1774), and I have a couple of questions.

First, to echo the two points already mentioned by @harpolea:

- Has the performance question been addressed? Since the equation is given explicitly, it would be nice to have this substantiated quantitatively with a reprex if possible.
- Again, to echo the state-of-the-field issue: you mention other tools using server-side calculations. Can you mention what some of these tools are and how they compare (performance-wise)?

And in addition:

- I didn't see references in the paper. Can you point these out if I'm just missing them somehow (perhaps they are somewhere else)?
- Regarding automated test cases: although the example notebooks are fairly in depth, they are quite specific to niche use cases. Maybe a more general 'set-up and testing' tutorial could be added in addition to these, to show base functionality with sparse datasets as mentioned in the paper?
Hi, thanks so much for the reviews @harpolea and @KayleighRutherford! I've submitted two pull requests that address some of the comments, which I'll try to explain here. If they look good to you, please let me know and I'll merge them. If they don't address your comments sufficiently, I'll do my best to fix them based on your feedback! @KayleighRutherford I've asked a few questions about your review so I can respond to your comments as best I can.
Do you have any specific examples that demonstrate this [performance]?
I attempted to address this in data-exp-lab/widgyts#47, where I implemented a Jupyter widget that updates based on changes to a view model with yt and compared its timing to widgyts'. The paper now includes a figure showing this trend, and I've also included the notebook with which I performed the analysis in the paper directory if you wish to replicate it.
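For anyone wanting to run a comparison like this themselves, a minimal timing harness is sketched below. The `server_side_update` function is a hypothetical stand-in for a Python-side pipeline that recomputes the image array on every view-model change (it is not the widgyts or yt API, just an illustration of how the per-update cost can be measured):

```python
import time

import numpy as np

def time_updates(update_fn, n_updates=50):
    """Call update_fn n_updates times and return the mean seconds per call."""
    start = time.perf_counter()
    for _ in range(n_updates):
        update_fn()
    return (time.perf_counter() - start) / n_updates

# Stand-in data: a fixed-resolution buffer of the size a widget might display.
data = np.random.random((512, 512))

def server_side_update():
    # Simulate a server-side pipeline: renormalize and requantize the
    # whole image in Python on every colormap/scale change.
    norm = (data - data.min()) / (data.max() - data.min())
    _ = (norm * 255).astype(np.uint8)

mean_s = time_updates(server_side_update)
print(f"mean update time: {mean_s * 1000:.2f} ms")
```

Timing the equivalent interaction in a client-side widget (where the recomputation happens in the browser) against a harness like this is one way to substantiate the efficiency claim quantitatively.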
Could you add something [about the state of the field with other commonly used packages] to the paper?
I think I've addressed this with the updated language and references in data-exp-lab/widgyts#46. I by no means list every package with interactivity, but I do try to touch (without getting too far into the weeds) on the different tools used for interactivity and how other packages use them.
I didn't see references in the paper - can you point these out if I'm just missing them somehow
Hm, I've added a few more references to the paper in the updated draft. There should be at least two in the original paper on the last page of the copy linked by whedon. Or do you mean a different type of reference? I'm happy to add whatever you'd like if they don't address your concerns!
regarding automated test cases: although the example notebooks are fairly in depth, they are quite specific to niche use-cases and maybe a more general 'set-up and testing' type tutorial
Would it help if I linked how to run the test suite in the "installation" part of the documentation? Those steps show that the widgets are working as expected when installed from source. Here's the testing section in the developer guide: https://widgyts.readthedocs.io/en/latest/developer_guide.html#testing. And here's the installation section, which doesn't currently mention testing: https://widgyts.readthedocs.io/en/latest/install.html
Oh also @harpolea we fixed the codecov issue!
@harpolea can you give an update here as to the status of your suggestions? Thanks.
The additions to the paper look great to me, so I'm happy to close this issue.
I have a couple of suggestions for the JOSS paper submission (https://github.com/openjournals/joss-reviews/issues/1774)
> widgyts … allows for more efficient exploration of datasets by shifting the image calculation to the client.

Do you have any specific examples that demonstrate this?