wholmgren opened this issue 6 years ago
Is a section on analysis/linting CI tools such as Stickler, Codacy, LGTM, Coveralls, etc. within the scope of this project? It would be great to see some guidance from people in the scientific Python community on which of these tools are useful and how best to configure them.
An LGTM employee recently made a pull request to enable it on a project I maintain (pvlib-python), and that sent me down a rabbit hole trying to figure out how to choose among these services.
(Also, thanks for this awesome guide.)
Thanks!
I have personally found heuristic-based code analyzers unhelpful and noisy, so I lean against recommending them here. A short link roundup at the bottom of the CI setup section, mentioning that these kinds of tools exist, might be OK, but I'd like to get a range of opinions before merging a contribution like that. Can you think of any projects in the scipy ecosystem that use one of these tools?
I should note that we do use coverage and codecov (which I would put in a different category than the heuristic linters) in the cookiecutter’s Travis config but don’t yet even mention them in the docs. Adding something about that is definitely a good idea.
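For anyone following along, the relevant bits look roughly like the sketch below. This is not the cookiecutter's literal config; it assumes a pytest-based test suite with `pytest-cov`, and `mypackage` is a placeholder name:

```yaml
# Sketch of a .travis.yml that runs tests under coverage and uploads
# the results to Codecov. Assumes a pytest-based suite; "mypackage"
# is a placeholder for the real package name.
language: python
python:
  - "3.6"
  - "3.7"
install:
  - pip install -e .
  - pip install pytest pytest-cov codecov
script:
  # --cov collects coverage data for the named package while the tests run
  - pytest --cov=mypackage
after_success:
  # the codecov uploader finds the .coverage data and sends it to codecov.io
  - codecov
```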
I'm not sure what your definition of the scipy ecosystem encompasses, but here are a few examples from mine:
I'm approaching this as a project maintainer looking for guidance. I don't have the experience with these tools to be able to make this contribution myself.
Edited to add: I only looked at the above projects' READMEs and PRs, so I could have missed something.
I am :+1: on codecov and +0 on lgtm (as the lead mpl dev).
Thanks, @wholmgren. Yes, those are the kind of examples I'm interested in.
Pinging @dopplershift in case he's able to provide feedback on the usefulness of these CI tools for MetPy development.
I've generally found LGTM's alerts to be useful when it flags something. Codacy...meh.
Codecov is a big 👍 for monitoring our code coverage. Stickler is a glorified flake8, so I'm...meh.
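(Worth noting for anyone weighing Stickler: since it is essentially flake8 run against PR diffs, you can reproduce its checks locally with an ordinary flake8 config, e.g. a `[flake8]` section in `setup.cfg`. A minimal sketch; the specific limits and excludes here are illustrative, not recommendations:)

```ini
# setup.cfg -- flake8 reads this section natively, so "pip install flake8"
# plus running "flake8" gives you locally what Stickler comments on in PRs.
[flake8]
max-line-length = 99
exclude = docs,build,.eggs
ignore = E203,W503
```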