Closed: raphaelquast closed this issue 2 months ago.
Dear @raphaelquast, thank you for submitting to pyOpenSci! I saw that @NickleDave already looked into your pre-submission and it went well 🎉. I am performing the EIC (editor in chief) checks for EOmaps and will update you by tomorrow!
Hey @arianesasso. Just checking in to see if there are any updates on the EOmaps submission yet?
In the meantime I've pushed EOmaps v7.2 to address some reported issues, and since the formal review process has not yet started, I've also updated the "submitted version" above to v7.2 (please let me know in case this is not OK). Thanks!
Hello @raphaelquast, I am really sorry for the delay, there were some unforeseen tasks this week on my side. Thank you for letting me know about the update, I will check it with the other editors but should be okay! Would it also be fine for you to wait till the end of this week for us to start the review formally? Thank you again for your patience 🙏🏼
@arianesasso thanks for the quick response! I fully understand that editor/reviewer activities sometimes have to be put on the waiting list for a few days. Take your time!
Hi @raphaelquast thank you for your continued patience.
We are still looking for an editor with spatial expertise for this review.
In the meantime, here are the initial editor checks. As expected, EOmaps passes with flying colors.
Please check our Python packaging guide for more information on the elements below.
- [x] `import package` works in a standard Python environment.
- [x] `README.md` file with clear explanation of what the package does, instructions on how to install it, and a link to development instructions.
- [x] `CONTRIBUTING.md` file that details how to install and contribute to the package.
- [x] `Code_of_Conduct.md` file.
- [x] `YAML` header of the issue (located at the top of the issue template).

Editor comments: None except nice work 🙂
@NickleDave Thanks! Great to hear that initial checks turned out fine!
Two quick points from my side:
1. Ah, whoops, sorry, I failed to mark the code-of-conduct box; I saw it but missed the checkbox. Fixed now.
2. Since there's no review activity yet, I'd like to push the submitted version up to v7.2.1 to incorporate the latest fixes.
No worries. We will make sure that's clear here. You can feel free to change the metadata in your initial comment above, or I can.
We are getting close to finding an editor. Thanks again for your continued patience, and expect us to reply back early next week.
OK, thanks! Metadata is updated.
> We are getting close to finding an editor. Thanks again for your continued patience, and expect us to reply back early next week.
Awesome! looking forward to getting this started!
Hi @raphaelquast, happy to report that @banesullivan will be the editor for this review, and that he has already started looking for reviewers.
Hey @NickleDave, that's great news! Thanks for the update!
I thought I'd chime in with a quick update so you don't think I've gone dark: I have found one reviewer and I'm still trying to confirm a second reviewer. Hopefully, I'll have an update in the coming days!
@banesullivan Hey, thanks for the info! I fully understand that it can be tough to find reviewers and I'm grateful for the time you invest here!
:wave: Hi @yeelauren and @jhkennedy! Thank you for volunteering to review for pyOpenSci!
Before beginning your review, please fill out our pre-review survey. This helps us improve all aspects of our review and better understand our community. No personal data will be shared from this survey - it will only be used in an aggregated format by our Executive Director to improve our processes and programs.
The following resources will help you complete your review:
Reviewers: @yeelauren @jhkennedy Due date: 2023-12-06
Hi :wave: @raphaelquast! Great work thus far and happy to e-meet you!
Please check off boxes as applicable, and elaborate in comments below. Your review is not limited to these topics, as described in the reviewer guide.
The package includes all the following forms of documentation:
- Metadata, e.g. in a `pyproject.toml` file or elsewhere.

**Readme file requirements:** The package meets the readme requirements below:
The README should include, from top to bottom:
NOTE: If the README has many more badges, you might want to consider using a table for badges: see this example. Such a table should be wider than it is high. (Note that a badge for pyOpenSci peer-review will be provided upon acceptance.)
Reviewers are encouraged to submit suggestions (or pull requests) that will improve the usability of the package as a whole. Package structure should follow general community best-practices. In general please consider whether:
Note: Be sure to check this carefully, as JOSS's submission requirements and scope differ from pyOpenSci's in terms of what types of packages are accepted.
The package contains a `paper.md` matching JOSS's requirements with:
Estimated hours spent reviewing: ~2:45 onboarding, 6 hr reviewing
I really like the number of examples, blog articles and documentation provided. The docstrings within the package are very well done as well! I'll definitely be excited to use this package for future work :)
I have a few questions and some suggestions below, happy to provide them as separate issues to discuss as well:
While I can definitely see the use-case for eomaps, in terms of this review, I think the README and documentation could use a line or two about what the package is and why you might want to use it stated more explicitly.
Since this package is building on cartopy, what were the motivations for creating a separate package versus adding a PR/additional functionality to cartopy directly?
When trying to run tests on my machine (Ubuntu) I received: `qt.qpa.plugin: Could not find the Qt platform plugin "wayland" in ""`
Since the review is due today, I just wanted to check in and say I'm working on it and will try and get it done by Friday, or next week at the worst. Sorry for the delay!
Hey @yeelauren, Thank you very much for your review! Below you can find a detailed response to your comments (associated adjustments in the package will follow as soon as possible).
> I really like the number of examples, blog articles and documentation provided. The docstrings within the package are very well done as well! I'll definitely be excited to use this package for future work :)
Thanks for the kind words! I put a lot of effort into documenting the package extensively, since I think it is extremely important (especially for beginners) to have as much information and as many examples as possible.
> While I can definitely see the use-case for eomaps, in terms of this review, I think the README and documentation could use a line or two about what the package is and why you might want to use it stated more explicitly.
Glad to hear that you feel that the package is useful! I agree that the README can be improved... I'll try to come up with some improvements and report back here as soon as they are implemented!
> Since this package is building on cartopy, what were the motivations for creating a separate package versus adding a PR/additional functionality to cartopy directly?
Cartopy is closely tied to the matplotlib syntax. Since my intention was to provide a package that goes beyond static maps and provides functionalities specifically tuned for interactive geo-data analysis, supercharging Figure or Axes objects with additional methods didn't seem like a good approach.
To be a bit more specific: the interactive features require additional infrastructure (e.g. the callback machinery behind `m.cb.pick...`). For pure cartopy plots there is no need for this additional complexity.

In general, I try my best to port relevant fixes to cartopy, and it might even be good to one day have a unified package that does it all (or simply a clearer split between what goes into cartopy and what goes into EOmaps), but this would require a lot of time and effort. Right now, with no financial support (and ~0.01 contributors), this is out of scope for me. I am open to participating in discussions on the future of the packages. At the moment my personal focus is to find maintainers and contributors (and ideally financial support) for EOmaps, to clean up the codebase, and to make the API stable and easy to use.
> The inclusion of specific instructions in the setup for various IDEs makes me wonder if I missed some steps for my initial set-up using VSCode. I'd argue that a lot of developers and data scientists are users of VSCode, so instructions should be included.
~Yes, I agree. I will get myself a copy of VSCode and see what steps are required to get EOmaps running properly and include it in the docs!~
✓ Implemented! (see here)
> Provide instructions on notebook setup with examples? e.g. use jupyter, quarto, etc. to implement some of the notebook cell tricks in https://eomaps.readthedocs.io/en/latest/FAQ.html#from-0-to-eomaps-a-quickstart-guide
The main reason why Jupyter Notebooks are a bit under-represented in the docs at the moment is that the interactivity is affected by a bug in the interactive jupyter-matplotlib backend `ipympl`. I tried my best to help leverage a solution by investigating problems on the Python side of `ipympl` (see https://github.com/raphaelquast/EOmaps/issues/62) and the associated `webagg` backend (see https://github.com/matplotlib/matplotlib/pull/27160), but so far there is little progress concerning Jupyter notebooks, since the problem requires adaptations in the JavaScript code (and my experience with JS is limited).
Nonetheless, static maps now work just fine (and even interactivity is OK if update triggers are not too fast) so I agree that it might be time to better highlight the usability of EOmaps with Jupyter Notebooks in the docs.
My current idea is to completely refactor the Examples section. I gained some experience with `myst_nb` recently and I was thinking about using it to re-write all examples directly as Jupyter Notebooks and render them to the docs. This will also help introduce a better structure for the examples (e.g. simple vs. advanced, static vs. interactive, ...). I will try to start this process as soon as possible, but it might take some time until a decent number of examples is ready.
> When trying to run tests on my machine (ubuntu) I received this: `qt.qpa.plugin: Could not find the Qt platform plugin "wayland" in ""`
So far I have never encountered this error and unfortunately I cannot reproduce it locally (tested on Linux Mint 21.2 Cinnamon); GitHub Actions runs (currently on Ubuntu 22.04.3) do not seem to be affected either.
After doing some research, it seems to be a more general issue with Qt that affects multiple other packages as well. I'm not sure what I can do to solve this except for pointing you to some associated discussions I found that might provide some (OS-specific) workarounds:
If you feel that there is something that I can do in EOmaps to resolve this, please open an Issue and I'll do my best to help in finding a solution!
> Some errors and deprecation warnings come up while running the test(s) that may need to be addressed in a future version
I had a close look at the warnings reported in the latest test-run (from here):
- [x] cartopy `DownloadWarning`: can safely be ignored. It just indicates that the associated NaturalEarth dataset has never been accessed before and needs to be downloaded (and cached for future use).
- [x] geopandas `UserWarning: Geometry is in a geographic CRS. ...`: can safely be ignored. It just indicates that `geometry.buffer(..)` was used in a geographic CRS and therefore the size of the buffer might vary depending on the location of the data.
- [x] numpy `RuntimeWarning: numpy.ndarray size changed, may indicate binary incompatibility.`: ~~This indicates that the uploaded NetCDF file was created with an earlier numpy version... I'll update the associated test-dataset!~~ (see https://github.com/pyOpenSci/software-submission/issues/138#issuecomment-1854792315)
- [ ] numpy `UserWarning: Warning: 'partition' will ignore the 'mask' of the MaskedArray.`: I have to investigate this.
- [ ] geopandas `FutureWarning: In a future version, "df.iloc[:, i] = newvals" will attempt to set the values inplace instead of always setting a new array. To retain the old behavior, use either "df[df.columns[i]] = newvals" or, if columns are non-unique, "df.isetitem(i, newvals)"`: I have to investigate this.
- [ ] `DeprecationWarning: invalid escape sequence '\*'`: This comes from a docstring. It originates from trying to document what happens with additional keyword-arguments, e.g. `**kwargs`. The problem here is that in reStructuredText and markdown, `**` is also used for bold text, which is why I had to escape it as `\*`. It seems that this is now deprecated and I have to find a better solution. I'm happy about any suggestions for a clean way to do this!

To clarify, I'd need a "reStructuredText- and markdown-safe" version of a docstring like this that can be parsed with `sphinx-autodoc`:
```python
def func(**kwargs):
    """
    A nice function.

    Parameters
    ----------
    **kwargs:
        Keyword-arguments passed to print.
    """
    print(kwargs)
```
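One possible workaround (my suggestion, not something EOmaps has confirmed adopting) is to declare the docstring as a raw string, so the escaped asterisks survive for Sphinx without Python treating `\*` as an invalid escape sequence:

```python
# Suggested workaround (not necessarily what EOmaps ended up using):
# a raw docstring keeps the backslash-escaped asterisks for the
# RST/markdown renderers without triggering Python's
# "invalid escape sequence" DeprecationWarning.
def func(**kwargs):
    r"""
    A nice function.

    Parameters
    ----------
    \*\*kwargs :
        Keyword-arguments passed to print.
    """
    print(kwargs)
```

Since the string is raw, `func.__doc__` contains the literal `\*\*kwargs`, which the RST renderer then shows as `**kwargs` instead of interpreting it as bold markup.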
@yeelauren Basic info on how to use VSCode/VSCodium is now added to the docs!
(currently still in the dev branch for the next release, since there are also changes in the way the docs are built).
You can have a look at the updated content here.
@yeelauren I had a closer look at the numpy RuntimeWarning that appears in the tests...
> numpy `RuntimeWarning: numpy.ndarray size changed, may indicate binary incompatibility.`: This indicates that the uploaded NetCDF file was created with an earlier numpy version... I'll update the associated test-dataset!
This is apparently caused by some unresolved issue regarding the import order of `numpy` and `netCDF4`. Here is the associated (open) issue from `xarray`: https://github.com/pydata/xarray/issues/7259

My solution for now is to try to import `netCDF4` on init, to ensure it is imported prior to `numpy`, to address this warning.
(see https://github.com/raphaelquast/EOmaps/pull/205/commits/da5de8d50ce193bb466b49bdf48e9e1129808a1e and the associated test-run)
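As a rough sketch of the idea (an assumption on my side; the actual EOmaps code lives in the linked commit), such an init-time workaround could attempt the `netCDF4` import early and ignore the case where the package is not installed:

```python
# Hypothetical sketch of importing netCDF4 before numpy-dependent modules,
# to sidestep the binary-incompatibility RuntimeWarning discussed in
# https://github.com/pydata/xarray/issues/7259.
import importlib
import sys


def _import_netcdf4_first():
    """Try to import netCDF4 early; skip silently if it is not installed."""
    try:
        importlib.import_module("netCDF4")
        return True
    except ImportError:
        # netCDF4 is an optional dependency -- nothing to do if missing
        return False


netcdf4_available = _import_netcdf4_first()
```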
Overall fantastic work @raphaelquast!
Please check off boxes as applicable, and elaborate in comments below. Your review is not limited to these topics, as described in the reviewer guide.
The package includes all the following forms of documentation:
- Metadata, e.g. in a `pyproject.toml` file or elsewhere.

**Readme file requirements:** The package meets the readme requirements below:
The README should include, from top to bottom:
NOTE: If the README has many more badges, you might want to consider using a table for badges: see this example. Such a table should be wider than it is high. (Note that a badge for pyOpenSci peer-review will be provided upon acceptance.)
Reviewers are encouraged to submit suggestions (or pull requests) that will improve the usability of the package as a whole. Package structure should follow general community best-practices. In general please consider whether:
Note: Be sure to check this carefully, as JOSS's submission requirements and scope differ from pyOpenSci's in terms of what types of packages are accepted.
The package contains a `paper.md` matching JOSS's requirements with:
Estimated hours spent reviewing: 2 on-boarding; 4-6 playing/using.
Overall, I'm really excited about this package, and it significantly reduces some of the boilerplate for the CartoPy maps I've produced in the past. Coming to the GitHub, I was impressed with the README, releases, and release notes immediately, and I could see many of the "community standards" (e.g., code of conduct, contributing, license, etc.) right up front.
The documentation is fantastic, as well as the examples, and it was overall pretty easy to jump in as a user.
I find myself wishing you were submitting to JOSS so you could expand on the why -- why create the package, why you designed it the way you did, etc., which @yeelauren also hit on. I think JOSS is likely a better place for that than in the README or docs.
Note: I'm reviewing specifically v7.3.2, as that was the latest v7.3 release when I started reviewing in earnest.
- On the repository main page on GitHub, in the "About" (upper-left) there's a documentation link pointing to `github.io`, but the README and the rest of the docs links point to `readthedocs.io` -- I'd change the About to also point to ReadTheDocs.
- It's time to move from `setup.py` to `pyproject.toml`.
- It's not clear what Python versions EOMaps supports:
  - There's no badge on the README, or in the PyPI metadata, about supported versions
  - `python_requires` is not set in `setup.py`
  - `classifiers` in the `setup.py` only list Python 3.7, but I see that EOMaps is only tested on Python 3.8-3.10
  - The conda-forge recipe supports Python >=3.6, so users can install into Python 3.11 and 3.12, but it's not being tested on those versions

  I'd really like to see this cleaned up, with the supported Python versions clearly listed, and all supported versions tested.
*Note: I also have some additional comments related to this when setting up a development environment.
This proved a bit more challenging as there was no top-level `environment.yml` or `requirements.txt` that'd set up a development environment. So I had to go: README -> installation docs -> scroll to bottom to find a small link to where the dev environment was described in the contributing docs. I didn't naturally jump to the contributing guide initially, and the section in the contributing guide also didn't work because:

- it instructs you to create a conda environment from `eomaps.yml`, which does not exist at the top level, and the guide doesn't tell you where it exists in the repository. I ended up running `find . -iname '*.yml'`, which then informed me that it doesn't exist at all and I likely wanted `docs/contribute_env.yml`.
- `docs/contribute_env.yml` at least matches the content of `eomaps.yml` as listed in the contributing guide (in the docs), and had I looked at `contributing.rst` I would have seen that is the environment file being used in the docs:

  ```rst
  Content of ``eomaps.yml``:

  .. literalinclude:: ../docs/contribute_env.yml
  ```
At this point, I was unsure if the Docs were in-sync with the latest released version, but the docs don't list a version number explicitly anywhere. I'd like to at least see that on the front page of the docs and explicitly as a version listed on ReadTheDocs (even if you don't keep old versions around).
When reading through the contributing guide:

- Overall I found it very well done! 👍
- Consider switching to `git switch` over `git checkout`, which is more confusing and does both switching branches and restoring specific revisions.
- You point people to the miniconda installer but then have users install mamba. I'd just point people to miniforge, which includes `conda` and `mamba` from the get go.
- There's nothing on how to run tests. I want to set up my environment and immediately run tests before starting in on changes. I had to go look at `.github/workflows/testMaps.yml` to see how tests should be run.
- WHOAH soo many figure windows get created and deleted! I was not prepared for that. The tests took just long enough I switched away and started on something else when my desktop started exploding with figure windows being created/deleted. Do you need this to happen? If so, you should definitely document how to run tests and warn about this.
- I had 1 test failure from a checkout of version 7.3.2 (full traceback below): `FAILED tests/test_plot_shapes.py::TestPlotShapes::test_contour - AttributeError: 'NoneType' object has no attribute 'dpi'`
- `tests/test_doc_codeblocks.py` hangs between the third and fourth codeblock until you close a figure window, but I couldn't easily tell which codeblock this corresponds to, and it looks like there's no logging of which code block is being run that I could make visible with the `-s` pytest option.

I am not really a fan of how the conda environment files are set up here. Recommendations:

- Provide a single `environment.yml` at the repository root, and:
  - use this for both testing and development. This can be accomplished by setting the python requirement to `python>=3.6` or whatever your minimum supported version is. If you want a preferred development version that's not whatever's the latest conda/mamba resolve, add the version restriction to the dev docs.
  - `rtree` is in `test_env.yml` in the runtime dependencies but not in `contribute_env.yml`. It should be moved to the appropriate section or removed if not needed.
- I would really prefer git not be installed in the environment -- it's much more common to use the system install of git, and this could lead to very unexpected things with mismatched system and environment versions.
- Related to ☝️, I would encourage pre-commit, but not install it into the environment, especially since you also have a pre-commit workflow: `.github/workflows/pre-commit.yml`
- `docs_env.yml` is fine where it is, but I'd just include sphinx-autobuild since you highlight it in the docs.

I hopped over to the FAQs after messing around with the tests to see if anything was addressed there.
- In the "From 0 to EOmaps - a quickstart guide" section:
  - After installing miniconda, you ask users to open an Anaconda Prompt terminal, but that is Windows specific.
  - `matplotlib` is unnecessarily included in the 2nd/3rd TODO
  - Really, the TODO blocks are subsections -- it'd be nice to at least number them.
- In the "Configuring the editor (IDE)" section:
  - I did not have any hanging with interactive plots like mentioned in the PyCharm section, so setting `Qt5Agg` may be unnecessary now (fine to still recommend though)
  - It's PyCharm Professional, not commercial.
  - The latest PyCharm versions don't have "Python Scientific" anymore; disabling plots into the tool window is located at "Settings -> Tools -> Python Plots -> [ ] show plots in tool window"
- I really like the Important Changes between major versions, but:
  - It'd be good to include that in an `UPGRADING.rst` at the top level of the repo, or at least mention it right up front in the README
  - It'd also be good to reverse the order so the changes between the latest version and the last major version are at the top
- It'd be nice if external links opened in a new tab (`target="_blank"`)
- The `add_feature` and `add_wms` methods were hard to find; `add_feature.preset` isn't documented and seems to be a NaturalEarth_features class. In fact, the `add_feature` reference doesn't seem to match at all. I ended up finding `NE_features.json`, reformatting it so it's not on a single line, and looking at what's available there.

Here's the full traceback of the test failure I had on v7.3.2:
`FAILED tests/test_plot_shapes.py::TestPlotShapes::test_contour - AttributeError: 'NoneType' object has no attribute 'dpi'`

Full traceback:
```
______________________________________________________________ TestPlotShapes.test_contour _______________________________________________________________

self = <test_plot_shapes.TestPlotShapes testMethod=test_contour>

    def test_contour(self):
        for data in (self.data_pandas, self.data_1d, self.data_2d):
            m = Maps(ax=221, figsize=(10, 6))
            m.subplots_adjust(left=0.01, right=0.99)
            m.set_data(**data)
            m.set_shape.contour(filled=True)
            m.plot_map()
            cb = m.add_colorbar()
            m1 = m.new_map(ax=222, inherit_data=True)
            m1.set_shape.contour(filled=False)
            m1.plot_map()
            cb1 = m1.add_colorbar()
            m2 = m.new_map(ax=223, inherit_data=True)
            m2.add_feature.preset("ocean", "land")
            m2.set_shape.contour(filled=True)
            m2.plot_map(
                colors=["none", "r", (0, 1, 0, 0.25), "r"],
                hatches=["", "xxxx", "///", "xxxx"],
            )
            m3 = m.new_map(ax=224, inherit_data=True)
            m3.set_shape.ellipses()
            m3.plot_map(alpha=0.25)
            cb3 = m3.add_colorbar()
            m3_1 = m3.new_layer("contours", inherit_data=True)
            m3_1.set_shape.contour(filled=False)
            m3_1.plot_map(linestyles=["--", "-", ":", "-."])
            cb3.indicate_contours(
                contour_map=m3_1,
                add_labels="top",
                exclude_levels=[0, -1],
                label_kwargs=dict(color="r", rotation=90, xytext=(-5, -10)),
            )
            cb3.indicate_contours(
                contour_map=m3_1,
                add_labels="top",
                use_levels=[1],
                label_names=["This one!"],
                label_kwargs=dict(
                    xytext=(-40, 20), zorder=-1, arrowprops={"arrowstyle": "fancy"}
                ),
            )
            arts = m3_1.ax.clabel(m3_1.coll.contour_set)
            for a in arts:
                m3_1.BM.add_bg_artist(a, layer=m3_1.layer)
>           m.show_layer("base", "contours")

tests/test_plot_shapes.py:83:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
eomaps/eomaps.py:3357: in show_layer
    self.BM.update()
eomaps/helpers.py:2692: in update
    self.fetch_bg(show_layer)
eomaps/helpers.py:2154: in fetch_bg
    self._do_fetch_bg(layer, bbox)
eomaps/helpers.py:2078: in _do_fetch_bg
    self._combine_bgs(layer)
eomaps/helpers.py:2017: in _combine_bgs
    self.fetch_bg(l)
eomaps/helpers.py:2154: in fetch_bg
    self._do_fetch_bg(layer, bbox)
eomaps/helpers.py:2124: in _do_fetch_bg
    art.draw(renderer)
eomaps/shapes.py:73: in cb
    returns.append(getattr(c, name)(*args, **kwargs))
../../../../mambaforge/envs/eomaps/lib/python3.10/site-packages/matplotlib/artist.py:72: in draw_wrapper
    return draw(artist, renderer)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <matplotlib.collections.PathCollection object at 0x7f87019c8190>, renderer = <matplotlib.backends.backend_agg.RendererAgg object at 0x7f87029afa90>

    @artist.allow_rasterization
    def draw(self, renderer):
>       self.set_sizes(self._sizes, self.figure.dpi)
E       AttributeError: 'NoneType' object has no attribute 'dpi'

../../../../mambaforge/envs/eomaps/lib/python3.10/site-packages/matplotlib/collections.py:1004: AttributeError
```
Overall, great work on EOmaps, @raphaelquast!! I'm excited to enter the next stage of this submission given the fantastic reviews by @yeelauren and @jhkennedy (thank you both very, very much for making the time!!!).
It looks like there are a few things that need to be addressed, but overall EOmaps appears in good health. I've further opened a few issues in the EOmaps repository directly and linked back here -- some of these issues are just restating what our reviewers pointed out above, but I felt they needed a dedicated issue to track the task.
@raphaelquast, I'd appreciate it if you would likewise track the reviewer tasks here as new issues in EOmaps where you feel appropriate (no need to open an issue for every single comment). Similarly, I'd appreciate it if you would link this issue in any pull requests to EOmaps resulting from this feedback.
Hey @jhkennedy
Thank you very much for your thorough review, your suggestions are highly appreciated! You brought up some important points and I will try my best to implement them as soon as possible!
As suggested by @banesullivan, I'll track associated tasks in separate issues in EOmaps.
Please find below my general response to your comments:
> Overall, I'm really excited about this package, and it significantly reduces some of the boilerplate for the CartoPy maps I've produced in the past. Coming to the GitHub, I was impressed with the README, releases, and release notes immediately, and I could see many of the "community standards" (e.g., code of conduct, contributing, license, etc.) right up front.
> The documentation is fantastic, as well as the examples, and it was overall pretty easy to jump in as a user.
Thanks for the kind words! Nice to hear that you find the package useful and that all the hours I spent on the documentation paid off.
> I find myself wishing you were submitting to JOSS so you could expand on the why -- why create the package, why you designed it the way you did, etc., which @yeelauren also hit on. I think JOSS is likely a better place for that than in the README or docs.
Thanks for mentioning this! On submission I was quite unsure if JOSS would be feasible and then decided to see how the pyOpenSci review would turn out first. @banesullivan Do you think it would be OK to still opt in to the JOSS submission by preparing an accompanying paper? (On submission it said "may be post-review", but I'm not sure if that means submitting directly to JOSS or including the JOSS submission here.)
> Using a copy-left license like GPL means, for me, I will happily use and contribute to EOMaps, but I will be unable to build upon it as I'm obligated to use a permissive use license in my work. This is fine for EOMaps for the most part, I think, as it's more of an end-user package than a library.
I agree that now is a good time to re-think the licensing of the package (and that GPL might not be the best choice after all...). I will give this a bit of thought and report back in the associated issue: https://github.com/raphaelquast/EOmaps/issues/209
> On the repository main page on GitHub, in the "About" (upper-left) there's a documentation link pointing to `github.io`, but the README and rest of the docs links point to `readthedocs.io` -- I'd change the About to also point to ReadTheDocs
Thanks for spotting this! I agree that pointing to `github.io` (which is basically just a copy of the readme) makes little sense if you already found your way to GitHub...
✔ I implemented the changes as suggested!
> It's time to move from `setup.py` to `pyproject.toml`
I agree. I'll implement this as soon as possible!
> It's not clear what Python versions EOMaps supports:
>
> - There's no badge on the README, or in the PyPI metadata, about supported versions
> - `python_requires` is not set in `setup.py`
> - `classifiers` in the `setup.py` only list Python 3.7, but I see that EOMaps is only tested on Python 3.8-3.10
> - The conda-forge recipe supports Python >=3.6, so users can install into Python 3.11 and 3.12, but it's not being tested on those versions
>
> I'd really like to see this cleaned up, with the supported Python versions clearly listed, and all supported versions tested.
Again yes, I fully agree. I will try to clean this up together with the switch to `pyproject.toml`.
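For illustration, a minimal `pyproject.toml` covering both points (a declared `requires-python` plus matching classifiers) could look like the sketch below; all names, version pins, and dependencies here are placeholders, not EOmaps' actual metadata:

```toml
[build-system]
requires = ["setuptools>=64"]
build-backend = "setuptools.build_meta"

[project]
name = "eomaps"
version = "7.3.2"
description = "Interactive maps of geographical datasets"
# single source of truth for supported Python versions
requires-python = ">=3.8"
classifiers = [
    "Programming Language :: Python :: 3.8",
    "Programming Language :: Python :: 3.9",
    "Programming Language :: Python :: 3.10",
]
dependencies = [
    "matplotlib",
    "cartopy",
]
```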
> This proved a bit more challenging as there was no top-level `environment.yml` or `requirements.txt` that'd set up a development environment. So I had to go: README -> installation docs -> scroll to bottom to find a small link to where the dev environment was described in the contributing docs. I didn't naturally jump to the contributing guide initially, and the section in the contributing guide also didn't work because:
>
> - it instructs you to create a conda environment from `eomaps.yml`, which does not exist at the top level and the guide doesn't tell you where it exists in the repository. I ended up running `find . -iname '*.yml'` which then informed me that it doesn't exist at all and I likely wanted `docs/contribute_env.yml`.
> - `docs/contribute_env.yml` at least matches the content of `eomaps.yml` as listed in the contributing guide (in the docs), and had I looked at `contributing.rst` I would have seen that is the environment file being used in the docs (`.. literalinclude:: ../docs/contribute_env.yml`).
Thanks for the detailed report on your experience!
I will try to update the guide to make it more consistent (and I'll include a reference to the contribution-guide in the installation section!)
> At this point, I was unsure if the Docs were in-sync with the latest released version, but the docs don't list a version number explicitly anywhere. I'd like to at least see that on the front page of the docs and explicitly as a version listed on ReadTheDocs (even if you don't keep old versions around).
Again really good suggestion, thanks! I will investigate how to provide a proper versioning for the docs so that one can be sure that the docs are in sync with the used version!
> When reading through the contributing guide:
>
> - Overall I found it very well done! 👍
> - Consider switching to `git switch` over `git checkout`, which is more confusing and does both switching branches and restoring specific revisions.
> - You point people to the miniconda installer but then have users install mamba. I'd just point people to miniforge, which includes `conda` and `mamba` from the get go.
> - There's nothing on how to run tests. I want to set up my environment and immediately run tests before starting in on changes. I had to go look at `.github/workflows/testMaps.yml` to see how tests should be run.
Good points! I'll switch the contribution guide to recommend `git switch`. Also, I was not aware that `miniforge` is now officially merged with (the now discouraged) `mambaforge` and that it now contains `mamba` out of the box! This is certainly a better entry-point for anyone who wants to start from scratch! I'll update the docs accordingly!
> WHOAH soo many figure windows get created and deleted! I was not prepared for that. The tests took just long enough I switched away and started on something else when my desktop started exploding with figure windows being created/deleted. Do you need this to happen? If so, you should definitely document how to run tests and warn about this.
This is due to the fact that the default backend for matplotlib is set to `Qt5Agg`. I will have to investigate which tests require a GUI backend and which don't... (Some tests involve simulated click-events to test callback responses, but most should be OK to run in a non-GUI backend like `agg`.)

Until I find time for this, I'll add a warning to the (upcoming) section on how to run the tests in the contribution guide!
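As a sketch of how such GUI-less runs could be wired up (an assumption on my side, not the actual EOmaps test setup): setting the `MPLBACKEND` environment variable before matplotlib is first imported, e.g. at the top of a `conftest.py`, selects the off-screen `agg` backend so no figure windows appear:

```python
# Hypothetical conftest.py snippet: force matplotlib's off-screen "agg"
# backend for the whole test run so no figure windows pop up.
# Tests that simulate GUI click events would still need a GUI backend
# and could be marked/skipped separately.
import os

# MPLBACKEND is honored by matplotlib when it is first imported,
# so this must run before any test module imports pyplot.
os.environ["MPLBACKEND"] = "agg"
```

The same effect can be achieved per-process with `matplotlib.use("agg")`, as long as it runs before `pyplot` is imported anywhere.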
> I had 1 test failure from a checkout of version 7.3.2 (I'll post the full traceback in a subsequent comment): `FAILED tests/test_plot_shapes.py::TestPlotShapes::test_contour - AttributeError: 'NoneType' object has no attribute 'dpi'`
I will have to investigate what happened here since I cannot reproduce it locally (and it's also not happening on GitHub actions).
> `tests/test_doc_codeblocks.py` hangs between the third and fourth codeblock until you close a figure window, but I couldn't easily tell which codeblock this corresponds to, and it looks like there's no logging of which code block is being run that I could make visible with the `-s` pytest option.

I am aware that testing the code-snippets of the docs is currently implemented in a suboptimal way. (It was implemented quickly to make sure that code-blocks run at least once, after I encountered some outdated snippets...)
Right now, I am in the process of updating the docs to switch to using `myst_nb` and writing chapters directly as Jupyter Notebooks (where all code-blocks are again executed to ensure they run without errors).

The overall framework is already implemented and pending in the PR for EOmaps v7.4. At the moment, only two sections have been ported, and switching all of the docs will take a bit more time. The notebook-tests will properly report the name of the notebook that caused an error.
I am not really a fan of how the conda environment files are set up here. Recommendations:
- Provide a single `environment.yml` at the repository root, and:
  - use this for both testing and development. This can be accomplished by setting the python requirement to `python>=3.6` or whatever your minimum supported version is. If you want a preferred development version that's not whatever's the latest conda/mamba resolve, add the version restriction to the dev docs.
- `rtree` is in `test_env.yml` in the runtime dependencies but not in `contribute_env.yml`. It should be moved to the appropriate section or removed if not needed.
- I would really prefer git not be installed in the environment -- it's much more common to use the system install of git, and a mismatch between system and environment versions could lead to very unexpected things.
- Related to ☝️ I would encourage pre-commit, but not install it into the environment, especially since you also have a pre-commit workflow: `.github/workflows/pre-commit.yml`
- `docs_env.yml` is fine where it is, but I'd just include sphinx-autobuild since you highlight it in the docs
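For illustration, the consolidated setup suggested above might look roughly like this (a minimal sketch only -- the package list and the minimum python version are placeholders, not EOmaps' actual requirements):

```yaml
# environment.yml (sketch) -- one file at the repo root for both development and testing
name: eomaps-dev
channels:
  - conda-forge
dependencies:
  - python>=3.8      # minimum supported version, not a pin
  - matplotlib
  - cartopy
  # test dependencies
  - pytest
```

A preferred development version of python (or any pinned resolve) would then live in the dev docs rather than in the file itself.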
Thank you for all the suggestions, I fully agree and I will implement them as suggested! `rtree` can actually be removed as an explicit dependency since it is currently only required by `geopandas` and not by EOmaps itself.

I hopped over to the FAQs after messing around with the tests to see if anything was addressed there.
- In the "From 0 to EOmaps - a quickstart guide" section:
  - After installing miniconda, you ask users to open an Anaconda Prompt terminal, but that is Windows specific.
  - `matplotlib` is unnecessarily included in the 2nd/3rd TODO
  - Really, the TODO blocks are subsections -- it'd be nice to at least number them.
- In the "Configuring the editor (IDE)" section:
  - I did not have any hanging with interactive plots like mentioned in the PyCharm section, so setting `Qt5Agg` may be unnecessary now (fine to still recommend though)
  - It's PyCharm Professional, not commercial.
  - The latest PyCharm versions don't have "Python Scientific" anymore; disabling plots in the tool window is located at "Settings -> Tools -> Python Plots -> [ ] Show plots in tool window"
- I really like the Important Changes between major versions, but:
  - It'd be good to include that in an `UPGRADING.rst` at the top level of the repo, or at least mention it right up front in the README
  - It'd also be good to reverse the order so the changes between the latest version and the last major version are at the top
Thanks a lot for your thorough look through the docs!
- In the Basics, the layer management figure needs to be bigger or zoomable. If that doesn't happen as an overlay, I'd set it to open in a new tab/window (`target="_blank"`)
- In the data visualization section:
- in your note on data-reprojection about 1D coordinate vectors, you should also mention that these will be broadcast using matrix indexing, not cartesian indexing, as this could lead to the data being transposed in the plot.
- The speed and performance notes might also be worth linking to or putting in the FAQ
- Optional dependencies should be discussed on the installation page
- In the examples section:
- It would be nice to number the example's content to match the example figure numbers and code files
- They also appear out of order in places, e.g., ? -> 3 -> 2.
- The actual examples code being located in the tests directory was unexpected -- I initially looked for it in the docs directory
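To illustrate the point above about 1D coordinate vectors: with matrix ("ij") indexing the first array axis follows x, while with cartesian ("xy") indexing it follows y, so data prepared for one convention shows up transposed under the other (a generic numpy sketch, not EOmaps code):

```python
import numpy as np

x = np.linspace(0, 10, 4)  # e.g. 4 longitude values
y = np.linspace(0, 5, 3)   # e.g. 3 latitude values

# matrix ("ij") indexing: resulting shape is (len(x), len(y)) -> (4, 3)
X_ij, Y_ij = np.meshgrid(x, y, indexing="ij")

# cartesian ("xy") indexing: resulting shape is (len(y), len(x)) -> (3, 4)
X_xy, Y_xy = np.meshgrid(x, y, indexing="xy")

print(X_ij.shape, X_xy.shape)  # (4, 3) (3, 4)

# a data array that matches one convention is the transpose of the other,
# which is exactly how a plot ends up transposed
data = np.arange(12).reshape(len(x), len(y))
assert data.shape == X_ij.shape == X_xy.T.shape
```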
`target="_blank"` is now used as suggested!
- The API reference is hard to navigate:
  - it's not in alphabetical order, or is at least missing headings, and the `add_feature` and `add_wms` methods were hard to find
  - `add_feature.preset` isn't documented and seems to be a NaturalEarth_features class. In fact, the `add_feature` reference doesn't seem to match at all
  - IDE tab completion also doesn't discover NE categories; hard to know what I'm able to access
  - I ended up just opening `NE_features.json`, reformatting it so it's not on a single line, and looking at what's available there.
This is something that bothered me as well for quite some time, but I lack the experience to make it work properly...
The problem is that at the moment attributes are set on initialization of the Maps-object based on the content of `NE_features.json`, which is why they can only be auto-completed in an interactive console on an already initialized `Maps` instance.
I'd appreciate any suggestions on how to improve auto-completion for such dynamically created attributes.
(Note that the same is also true for WebMap services; however, these are only fetched on access and then dynamically set, similar to the NaturalEarth features. This again leads to the problem that the available layers of a service can only be autocompleted in an interactive terminal on a `Maps` instance.)
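For what it's worth, one common pattern for making dynamically created attributes discoverable by completion engines that call `dir()` is to combine lazy `__getattr__` lookup with a custom `__dir__` (a generic sketch with made-up feature names, not EOmaps code -- purely static analyzers would still need generated type stubs):

```python
# hypothetical registry -- in EOmaps this would come from NE_features.json
_FEATURES = {"coastline": "physical", "ocean": "physical", "land": "physical"}

class FeaturePresets:
    """Expose registry keys as lazily resolved, tab-completable attributes."""

    def __getattr__(self, name):
        # only called when normal attribute lookup fails -> lazy access
        try:
            return _FEATURES[name]
        except KeyError:
            raise AttributeError(name) from None

    def __dir__(self):
        # advertise the dynamic names to dir() and to IDE completion
        return sorted(set(super().__dir__()) | set(_FEATURES))

presets = FeaturePresets()
print(presets.ocean)                # physical
print("coastline" in dir(presets))  # True
```

Since the lookup happens in `__getattr__`, nothing has to be fetched or set at init time, yet interactive consoles (and IDEs that evaluate `dir()`) still see the names.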
Overall, great work on EOmaps, @raphaelquast!! I'm excited to enter the next stage of this submission given the fantastic reviews by @yeelauren and @jhkennedy (thank you both very, very much for making the time!!!).
@banesullivan Thank you very much! I'm indeed very excited that EOmaps is ready to continue to the next stage, and I'll start implementing the required changes and suggestions right away!
It looks like there are a few things that need to be addressed but overall EOmaps appears in good health. I've further opened a few issues in the EOmaps repository directly and linked back here -- some of these issues are just restating what our reviewers pointed out above, but I felt needed a dedicated issue to track the task.
Thanks a lot for taking the time to open the issues! I agree that this will make it easier to track progress on the remaining TODOs.
@raphaelquast, I'd appreciate if you would likewise track the reviewer tasks here as new issues in EOmaps where you feel appropriate (no need to open an issue for every single comment). Similarly, I'd appreciate if you would link this issue for any Pull Requests to EOmaps resulting from this feedback.
OK, I will do that!
@jhkennedy I had a look at the error you've encountered in the unittests (https://github.com/pyOpenSci/software-submission/issues/138#issuecomment-1858584307):
This is caused by the use of matplotlib's `ax.clabel` in the test to add labels to a contour-plot. The problem is the following:
Since this issue originates from using a non-EOmaps method, my solution for now is to simply omit this in the tests (and also add a catch for artists that no longer exist in the figure). This is done in: https://github.com/raphaelquast/EOmaps/pull/205/commits/62ce4a8d46a8c74adaaf971f489af6e02e8e64b3
In addition, I opened a dedicated issue to summarize the problems and hopefully implement a solution for adding contour-labels in the future: https://github.com/raphaelquast/EOmaps/issues/218
@banesullivan I hope it is OK for this review to postpone the development of a contour-label feature for now.
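As a generic illustration of such a catch (not the actual EOmaps fix): when an artist is removed via `Artist.remove()`, matplotlib resets its `.axes` reference, which gives a cheap way to skip artists that no longer exist in the figure:

```python
import matplotlib
matplotlib.use("agg")  # non-GUI backend, as used for the tests
import matplotlib.pyplot as plt

def artist_alive(artist):
    # removed axes-level artists lose their .axes reference
    return artist.axes is not None

fig, ax = plt.subplots()
(line,) = ax.plot([0, 1], [0, 1])

print(artist_alive(line))  # True
line.remove()
print(artist_alive(line))  # False
```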
Response to comment from @jhkennedy :
At this point, I was unsure if the Docs were in-sync with the latest released version, but the docs don't list a version number explicitly anywhere. I'd like to at least see that on the front page of the docs and explicitly as a version listed on ReadTheDocs (even if you don't keep old versions around).
I now provide explicit docs versions (for the latest versions of v5.x and v6.x) as well as the most recent version and the dev-docs.
Response to comment from @jhkennedy:
The API reference is hard to navigate
- it's not in alphabetical order, or is at least missing headings, and the add_feature and add_wms methods were hard to find
- add_feature.preset isn't documented and seems to be a NaturalEarth_features class. In fact, the add_feature reference doesn't seem to match at all
- IDE tab completion also doesn't discover NE categories; hard to know what I'm able to access
- I ended up just opening NE_features.json, reformatting it so it's not on a single line, and looking at what's available there.
I implemented major changes to the API docs to reflect the actual public API instead of the (internal) module structure... I hope you'll find them easier to navigate now! You can have a look at the current state here: https://eomaps.readthedocs.io/en/dev/api/reference.html I'm happy about any additional comments on how to improve the docs!
@banesullivan @jhkennedy @yeelauren Once again, thank you very much for all your helpful and productive comments! They really helped to pinpoint aspects of the package that required improvements (which are sometimes hard to spot if you know almost every line of code by heart).
I have now collected all major changes into a pending PR for EOmaps v8.0 EOmaps#205 which should address the majority of comments mentioned here.
Here's an overview of the most relevant changes/fixes relevant for this review:
- A single `environment.yml` containing all relevant dependencies
- `pre-commit` and `git` are no longer included explicitly, but are mentioned in the contribution guide
- `tests/test_env.yml` is kept for now, since the test runs do not require the packages needed to build the docs
- `docs/docs_env.yml` now pins docs-build dependencies and installs only minimal EOmaps requirements
- Optional dependencies are now specified in `pyproject.toml` (`pip install eomaps` now installs only the bare minimum requirements)
- `miniforge` is now recommended as the starting point for the python+conda/mamba setup

Since my personal todos tagged for EOmaps v8.0 are more or less implemented, I wanted to ask how we proceed with the review process, since it would be nice if the release of v8.0 could be aligned with the outcomes of the pyOpenSci review.
@banesullivan In addition, I also wanted to ask if it is feasible to still append a JOSS paper at this stage of the review or if you prefer that this is done post-review to avoid stretching the time for this review.
Hey, @raphaelquast, I want to leave a quick note that I haven't forgotten about this! Again fantastic work addressing all of the feedback!
In the coming days, I will sit down and review all of this to take inventory of anything that may remain and answer your questions about JOSS.
I appreciate your patience!
@banesullivan Thanks, take your time!
@raphaelquast, this has truly come together fantastically! Thank you for diligently working to address all of the feedback our excellent reviewers brought up!
I've read this review thread up and down a few times now... (lots of amazing work happening here!!) and I believe you have addressed all of the feedback raised. That said, I'd like to give @yeelauren and @jhkennedy time to double check and sign off on the review.
@yeelauren and @jhkennedy, to make this process a little easier for you both I've tried to sift through your open review request/comments to track down the state of things. The following are my findings:
@yeelauren's comments:
- Test runs on the `dev` branch show everything passing.
- When running the tests on the `dev` branch, there are still a high number of warnings thrown; however, they do not greatly concern me for the sake of this review at this time. I've raised a new issue in EOmaps https://github.com/raphaelquast/EOmaps/issues/227 to track these and feel comfortable not having those block this review.

@jhkennedy's comments:
- On the `dev` branch, I'm having no issues with the tests.

@raphaelquast, on your question about the JOSS review: I'm leaning towards following up with the JOSS paper post-review at this stage, but I will talk with some others in pyOpenSci to fully assess this... stand by.
@banesullivan Thank you very much for your thorough review and the kind words! I've already started investigating the extensive number of warnings (in raphaelquast/EOmaps#227) and I agree that the README can still be improved... I'll try to come up with an updated README as soon as possible!
I finally found the time to draft an updated README to tackle the last open point from the review (PR: https://github.com/raphaelquast/EOmaps/pull/230).
If any of you find the time, feedback would be highly appreciated! (You can look at it here)
@banesullivan @jhkennedy @yeelauren
Dear all, I wanted to ask if it is possible to get a quick update on the time-schedule for this review.
The reason I'm asking is that I am currently postponing the release of EOmaps v8.0 in the hope that it might be aligned with the outcome of this review. However, since time is flying, I am now facing several upcoming activities (most importantly the EGU General Assembly on 14-19 April and the GeoPython 2024 conference on May 27) where I plan to present all the improvements introduced with EOmaps v8.0.
I would highly appreciate your opinion if it is feasible to arrive at a conclusion for this review by end of March.
If you think this is out of scope, I would release EOmaps v8.0 in the upcoming weeks irrespective of the status of the pyOpenSci review (and we can continue here without a time-constraint).
@banesullivan @raphaelquast Sorry I got slammed the last couple of weeks.
As far as I'm concerned, EOMaps is in fantastic shape, @raphaelquast did an amazing job responding to feedback and updating EOMaps, and I definitely would recommend accepting EOmaps v8.0.
@jhkennedy thank you very much for your kind words and the prompt response!
@raphaelquast I haven't forgotten! I think end of March is doable. I'll review ASAP this weekend :) apologies for the delay !
@raphaelquast amazing work - and happy with your response to the review and the significant work put into v.8.0! I like the added blurb to the read me. Same with @jhkennedy I'd recommend accepting :blush:
@yeelauren Thank you very much for your support! Glad you like the improvements!
@banesullivan Could you provide an update on how we proceed with this review?
Test-warnings (https://github.com/raphaelquast/EOmaps/issues/227) are now down to 17, of which only 1 requires attention.
(The remaining one concerns `matplotlib >= 3.10` and requires a few bigger changes in the way contour-plots are implemented, so I'll not include a fix in v8.0 already.)

Aside from that, I believe all comments have been addressed as requested.
In addition, there are a few nice features that now made it into the v8.0 branch!
- `xarray.Dataset` is now accepted as input for `m.set_data()` (see info here)

The latest release-candidate can be installed with `pip install eomaps[all]==8.0rc2`
@raphaelquast, thank you for your patience! The last few weeks got the best of me between moving and some sickness.
Aside from that, I believe all comments have been addressed as requested.
I believe you're right! I think we're ready to proceed with the acceptance of EOmaps given @jhkennedy and @yeelauren have both recommended accepting! 🎉 🎉
I'm going to change the status of this review and will handle the next steps from here.
I want to close the loop on an earlier conversation about submitting to JOSS: We highly recommend you submit EOmaps to JOSS, but are going to encourage you to go through the standard JOSS submission process as opposed to starting it here. Given acceptance here, I believe there is precedent for fast-tracking the JOSS submission or using this review to augment the JOSS software review. I will happily chime in to help in any way I can during the JOSS submission if you choose to do so, please @-mention me as needed.
Amazing work here, @raphaelquast!!
@raphaelquast, I'd like to tie this review with version 8.0, would you let me know when that has been fully released?
@banesullivan Thank you very much for all your time and efforts! I fully understand that sometimes things get in the way... I hope you're feeling well again and that you had a successful move!
@yeelauren @jhkennedy The same goes for you, of course! Your reviews have been highly valuable and helped me to improve many parts of EOmaps!
🚀 This is amazing news! I'm thrilled that this review has reached a positive outcome and I'm very happy that EOmaps will become a part of PyOpenSci!
I'd like to tie this review with version 8.0, would you let me know when that has been fully released?
EOmaps v8.0 is already waiting in the starting blocks! I'll do a final checkup and try to release it as soon as possible!
I'm going to change the status of this review and will handle the next steps from here.
@banesullivan Please keep me informed on any actions required from my side!
I want to close the loop on an earlier conversation about submitting to JOSS: We highly recommend you submit EOmaps to JOSS, but are going to encourage you to go through the standard JOSS submission process as opposed to starting it here. Given acceptance here, I believe there is precedent for fast-tracking the JOSS submission or using this review to augment the JOSS software review. I will happily chime in to help in any way I can during the JOSS submission if you choose to do so, please @-mention me as needed.
Thank you for following up on this. I agree that this is the best way forward. I'll try to compose a JOSS submission later this year once all the conference contributions etc. are done. I highly appreciate your support on this and I will reach out to you when the time comes!
I'd like to tie this review with version 8.0, would you let me know when that has been fully released?
@banesullivan EOmaps v8 is now released!
🎉 Fantastic! With that, EOmaps v8 has been approved by pyOpenSci! Thank you @raphaelquast for submitting EOmaps and many thanks to @yeelauren and @jhkennedy for reviewing this package! 😸
There are a few things left to do to wrap up this submission. I think you've already activated Zenodo and created a DOI (which I have updated in the meta of the original post at the top of this issue), so I've marked those, but would you please let me know after you have completed the following:
- Add the pyOpenSci peer-review badge to your README: `[![pyOpenSci](https://tinyurl.com/y22nb8up)](https://github.com/pyOpenSci/software-review/issues/issue-number)`.

These are for me:
- Label this issue `6/pyOS-approved` 🚀🚀🚀
- If the package is submitted to JOSS (remove the `6/pyOS-approved` label): once accepted, add the label `9/joss-approved` to the issue. Skip this check if the package is not submitted to JOSS.

If you have any feedback for us about the review process please feel free to share it here. We are always looking to improve our process and documentation in the peer-review-guide.
PyOpenSci review badge has been added to the README
I now completed the post-review survey as well!
Submitting Author: Raphael Quast (@raphaelquast)
All current maintainers: @raphaelquast
Package Name: EOmaps
One-Line Description of Package: EOmaps is a python package to visualize, analyze and compare geographical datasets.
Repository Link: https://github.com/raphaelquast/EOmaps
Version submitted: 7.3
Editor: @banesullivan
Reviewer 1: @yeelauren
Reviewer 2: @jhkennedy
Archive:
Version accepted: 8.0.2
Date accepted (month/day/year): 03/14/2024
Code of Conduct & Commitment to Maintain Package
Description
EOmaps is a python package to visualize, analyze and compare geographical datasets.
It is intended to simplify the process of geospatial data visualization and to provide a straightforward way to turn the maps into interactive widgets for data analysis.
EOmaps is based on `matplotlib` and `cartopy` and extends cartopy's capabilities with the following features:

It is extensively documented, unit-tested, citable via Zenodo, and installable via conda or pip.
(However, using pip is discouraged since dependencies like GDAL and PyProj can be difficult to install, especially on Windows.)
Scope
Please indicate which category or categories. Check out our package scope page to learn more about our scope. (If you are unsure of which category you fit, we suggest you make a pre-submission inquiry):
Domain Specific & Community Partnerships
Community Partnerships
If your package is associated with an existing community please check below:
For all submissions, explain how and why the package falls under the categories you indicated above. In your explanation, please address the following points (briefly, 1-2 sentences for each):
The target audience for EOmaps are scientists and researchers working with geospatial datasets. EOmaps can be used to quickly visualize structured (or unstructured) geospatial datasets, compare multiple datasets with each other, or compare maps to an extensive list of open-access webmap services. Since EOmaps is based on matplotlib, the created maps can also be connected to ordinary matplotlib axes to analyze multi-dimensional (e.g. timeseries) data.
In addition to the interactive capabilities, maps created with EOmaps can be exported as high-resolution images or vector-graphics to create publication-ready plots.
Finally, I believe that EOmaps also has great potential to be used in education to teach about projections, distortions, spatial resolution, rasterization of data, the subtleties of big-data visualization etc.
EOmaps is based on cartopy. While cartopy provides similar functionalities in terms of data visualization, EOmaps greatly extends these capabilities (especially for large datasets), adds multi-layer support, a basic GUI, easy-access to webmaps, and many more features.
There exist other packages that focus on interactive geo-data visualization, but to my knowledge none that focuses on local use in pure python.
@tag the editor you contacted: #137
Technical checks
For details about the pyOpenSci packaging requirements, see our packaging guide. Confirm each of the following by checking the box. This package:
Publication Options
JOSS Checks
- [ ] The package has an **obvious research application** according to JOSS's definition in their [submission requirements][JossSubmissionRequirements]. Be aware that completing the pyOpenSci review process **does not** guarantee acceptance to JOSS. Be sure to read their submission requirements (linked above) if you are interested in submitting to JOSS.
- [ ] The package is not a "minor utility" as defined by JOSS's [submission requirements][JossSubmissionRequirements]: "Minor 'utility' packages, including 'thin' API clients, are not acceptable." pyOpenSci welcomes these packages under "Data Retrieval", but JOSS has slightly different criteria.
- [ ] The package contains a `paper.md` matching [JOSS's requirements][JossPaperRequirements] with a high-level description in the package root or in `inst/`.
- [ ] The package is deposited in a long-term repository with the DOI:

*Note: JOSS accepts our review as theirs. You will NOT need to go through another full review. JOSS will only review your paper.md file. Be sure to link to this pyOpenSci issue when a JOSS issue is opened for your package. Also be sure to tell the JOSS editor that this is a pyOpenSci reviewed package once you reach this step.*

Are you OK with Reviewers Submitting Issues and/or pull requests to your Repo Directly?
This option will allow reviewers to open smaller issues that can then be linked to PR's rather than submitting a more dense text based review. It will also allow you to demonstrate addressing the issue via PR links.
Confirm each of the following by checking the box.
Please fill out our survey
P.S. Have feedback/comments about our review process? Leave a comment here
Editor and Review Templates
The editor template can be found here.
The review template can be found here.