Closed: jdonges closed this pull request 6 months ago.
Sure, will do!
Attention: 14 lines in your changes are missing coverage. Please review.

Comparison is base (`3c179b4`) 51.60% compared to head (`4e83d50`) 50.97%.
| Files | Patch % | Lines |
|---|---|---|
| tests/test_core/test_network.py | 50.00% | 12 Missing :warning: |
| src/pyunicorn/core/network.py | 97.59% | 2 Missing :warning: |
:umbrella: View full report in Codecov by Sentry.
Hold on, there was a problem with my VSCode merge editor: it did not show large parts of the changes to `network.py` from 8bca11e and d23d30b, which is why they got lost in 7d703d0. It probably couldn't handle the large diff, and a second try did not resolve the problem. I will therefore re-commit the lost changes to this branch via cherry-picks of the original commits.
Two notes on the above:

- With 7c9776a I removed `Network.nsi_degree_uncorr()` and `Network.nsi_local_clustering_uncorr()`, whose functionality is now directly included in `nsi_degree()`/`nsi_local_clustering()`, aligning them with other n.s.i. methods such as `nsi_indegree()`. I hope I didn't overlook any further purpose of those removed methods?
- When trying to add a respective test, I noticed #206. As of 2d9616c, many `nsi_*` methods' cache decorators were commented out, probably as a temporary workaround for #148. However, there are many more potentially affected cases, so the issue is not fully worked around.
Looks good to me. I've only added some minor commits related to maintenance and CI. The caching issues will be addressed in the dedicated issue mentioned above.
Thank you, @mensch72 and @fkuehlein!
Awesome, thanks for the additions and cleanup @ntfrgl!
Just one thing I don't understand: why did Codecov even include `tests/test_core/test_network.py` in the report now?
m.c. everyone :-)
You're welcome.
I'm not quite sure what the premise of your question is, since you changed that file. Here is some relevant documentation; maybe it can clarify matters:
Right, what I was implying was that Codecov previously did not measure the test files themselves at all, because of course we don't want to start testing tests, and therefore my changing `test_network.py` should not make a difference. This is because the source directory for `pytest-cov` is specified as

https://github.com/pik-copan/pyunicorn/blob/a8d43129cbc8c0898ff714ca62cff709cfabcd25/setup.cfg#L119

Only in that latest coverage report does `test_network.py` now appear as the only test file, and when I run pytest on my machine it appears there, too. I was just wondering whether your changes to the pytest configuration, in combination with the multiple configuration files, might have corrupted that setting in some way? I could not find out how yet ...
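For context, a typical way to scope `pytest-cov` to the package sources and keep test files out of the report is a coverage section like the one below. This is purely an illustrative sketch; the paths and the actual contents of pyunicorn's setup.cfg at the linked line may differ:

```ini
# setup.cfg (hypothetical excerpt, not the project's actual configuration)
[coverage:run]
# measure only the package sources, so tests/ is never reported
source = src/pyunicorn
omit =
    tests/*
```

If a later configuration change passes a different `--cov` target, or a second config file takes precedence, test files can reappear in the report.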
Also, when running the test suite I now get a dtype mismatch from `Network.nsi_betweenness()`, which probably originates from e3446bb; I could not resolve it yet:
```
File pyunicorn/src/pyunicorn/core/network.py:3336, in Network.nsi_betweenness(self, parallelize, **kwargs)
   3334         pool.join()
   3335     else:
-> 3336         betw_w = worker(to_cy(targets, NODE))
   3337     return betw_w / w

File src/pyunicorn/core/_ext/numerics.pyx:412, in pyunicorn.core._ext.numerics._nsi_betweenness()

ValueError: Buffer dtype mismatch, expected 'NODE_t' but got 'signed char'
```
The two arguments expected by `core._ext.numerics._nsi_betweenness()` to be of `NODE_t` type are `flat_neighbors` and `targets`, which are both explicitly cast using `to_cy()`, so I don't really understand why this error occurs. Also, it apparently did not occur on the Travis build. Any idea?
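To illustrate the error mechanism (a generic sketch, not pyunicorn's actual implementation): a Cython typed-memoryview argument validates the buffer dtype of the array it receives, so an array arriving with a narrower dtype such as `int8` ('signed char') raises exactly this `ValueError`. Assuming `NODE_t` maps to a fixed-width NumPy integer and `to_cy()` behaves roughly like `np.ascontiguousarray` with a dtype argument (both assumptions), the cast looks like:

```python
import numpy as np

# Assumption: NODE_t corresponds to a fixed-width integer dtype; the
# real mapping lives in pyunicorn's Cython sources and may differ.
NODE = np.int32

def to_cy(arr, dtype):
    # Hypothetical stand-in for pyunicorn's to_cy(): guarantee a
    # C-contiguous buffer of exactly the dtype the Cython kernel expects.
    return np.ascontiguousarray(arr, dtype=dtype)

# An int8 array passed uncast would trip
# "Buffer dtype mismatch ... got 'signed char'"; casting first
# produces a buffer the typed signature accepts.
targets = np.arange(5, dtype=np.int8)
cast = to_cy(targets, NODE)
assert cast.dtype == NODE and cast.flags["C_CONTIGUOUS"]
```

Note that if a stale compiled extension is loaded, the dtype check baked into the old binary may no longer match the updated Python-side cast, producing this error even though the `to_cy()` calls in the source are correct.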
> I was just wondering whether your changes to the pytest configuration, in combination with the multiple configuration files, might have corrupted that setting in some way?
Ah, I see the problem now, thanks for pointing it out. Let me know if f374ffd behaves as you'd expect.
> Also, when running the test suite I now get a dtype mismatch from `Network.nsi_betweenness()`, which probably originates from e3446bb; I could not resolve it yet.
That commit changed the order of arguments in the Cython function signature, in order to be able to parallelise using just a `partial` wrapper, and thus to avoid a serialisation problem with the `multiprocessing` module from the standard library.
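The serialisation point can be sketched generically (this is not pyunicorn's actual worker, and `kernel` is a hypothetical stand-in): `multiprocessing` pickles the callable it ships to worker processes. A lambda or closure capturing the fixed arguments cannot be pickled, whereas a `functools.partial` over a module-level function can, so reordering the signature so the per-task argument comes last makes a plain `partial` sufficient:

```python
import functools
import pickle

def kernel(flat_neighbors, weights, targets):
    # Stand-in for the Cython kernel; only the argument order matters
    # here: the per-task argument (targets) comes last.
    return sum(targets)

flat_neighbors, weights = [0, 1, 1, 0], [1.0, 2.0]

# A closure over the fixed arguments is NOT picklable:
#   worker = lambda targets: kernel(flat_neighbors, weights, targets)
# pickle.dumps(worker) would raise an error inside Pool.map().

# A partial over a module-level function pickles fine, so e.g.
# multiprocessing.Pool.map(worker, chunks_of_targets) can ship it:
worker = functools.partial(kernel, flat_neighbors, weights)
restored = pickle.loads(pickle.dumps(worker))
assert restored([1, 2, 3]) == 6
```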
> Also, it apparently did not occur on the Travis build. Any idea?
You probably just need to recompile/reinstall locally after that change.
> You probably just need to recompile/reinstall locally after that change.
Ah sure, my bad!
> Let me know if https://github.com/pik-copan/pyunicorn/commit/f374ffda77cb2119fa7b95b4e7a6423b32eaad0e behaves as you'd expect.
It does, thanks!
@fkuehlein can you deal with these conflicts and merge?