Closed · rmarkello closed this 3 years ago
Merging #95 (34834c1) into master (afa3af3) will increase coverage by 3.03%. The diff coverage is 32.78%.
@@            Coverage Diff             @@
##           master      #95      +/-   ##
==========================================
+ Coverage   64.65%   67.69%   +3.03%
==========================================
  Files          23       24       +1
  Lines        1610     1681      +71
==========================================
+ Hits         1041     1138      +97
+ Misses        569      543      -26
| Impacted Files | Coverage Δ | |
|---|---|---|
| netneurotools/plotting.py | 32.82% <11.76%> (+26.24%) | :arrow_up: |
| netneurotools/tests/test_plotting.py | 81.08% <81.08%> (ø) | |
| netneurotools/modularity.py | 59.55% <0.00%> (-4.50%) | :arrow_down: |
Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Powered by Codecov. Last update afa3af3...a903253.
It's a nice enhancement @rmarkello, and sorry for the late reply! Overall I don't see anything that might be problematic, so you can go ahead and merge the branches.

One very minor issue, however, is how you handle vmin and vmax in `plot_fsaverage()` and `plot_fsvertex()`. Basically, I don't think handling vmin and vmax in `plot_fsaverage()` is necessary, since you already handle these two parameters in `plot_fsvertex()` (unless there's a clear reason to do so?). Also, assuming you use `np.percentile(np.nan_to_num())` to avoid NaNs, I would argue that `np.nanpercentile()` would be better: it completely ignores NaNs, which might be what the user wants if they rely on a mask to hide the NaN values in their `vector` array.
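For reference, a minimal sketch of the difference (the sample values and the 2.5th percentile here are made up for illustration, not taken from the actual code):

```python
import numpy as np

# Hypothetical data where NaNs stand in for masked vertices
vector = np.array([1.0, 2.0, 3.0, 4.0, np.nan, np.nan])

# np.nan_to_num() replaces NaNs with 0, so they still count as real
# observations and drag low percentiles toward zero
lo_zeroed = np.percentile(np.nan_to_num(vector), 2.5)  # 0.0

# np.nanpercentile() drops NaNs before computing the percentile
lo_ignored = np.nanpercentile(vector, 2.5)             # 1.075
```

With `nan_to_num()`, a vmin derived this way collapses to 0 whenever enough vertices are masked, which is probably not what the user intended.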
@VinceBaz thanks for this! Sorry for my delay in getting back.
(1) I handle vmin and vmax separately because, if you're using `plot_fsaverage()`, I want the percentiles to be based on the parcellated data rather than the dense data (i.e., I want to avoid skewing the colormap range just because some parcels have more vertices than others). It's not elegant, but I couldn't think of another way to do it... Let me know if that makes sense to you, though.
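To illustrate what I mean, here's a contrived sketch (the parcel values, vertex counts, and the 75th percentile are all made up):

```python
import numpy as np

# Hypothetical map: three parcels, one dominating in vertex count
parcel_vals = np.array([0.0, 5.0, 10.0])            # one value per parcel
labels = np.array([0] * 100 + [1] * 10 + [2] * 10)  # parcel 0 is huge

dense = parcel_vals[labels]  # parcel values projected onto vertices

# Percentiles of the dense map are dominated by the biggest parcel...
vmax_dense = np.percentile(dense, 75)       # 0.0 -- parcel 0 swamps it
# ...while percentiles of the parcellated data weight parcels equally
vmax_parc = np.percentile(parcel_vals, 75)  # 7.5
```

So computing the limits on the parcellated values before projecting to vertices keeps one oversized parcel from dictating the colormap range.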
(2) `np.nanpercentile()` is definitely what I wanted, thank you! 🙏 I will make the relevant change (and fix the failing tests) soon.
Getting those tests to pass was painful, but it's done! 🙌 We are now officially testing (some of) our plotting functions, and adding tests for the others should be much easier in the future :grin:
Gonna go ahead and merge this in, finally
For plotting vertex-level data using pysurfer. Also refactors `plot_fsaverage()` so that, after initial input formatting, it calls `plot_fsvertex()` under the hood.

@VinceBaz, since you've worked on `plot_fsaverage()` before, maybe you could take a look at this (to make sure it makes sense) and ensure I didn't screw up anything too dramatically with my refactoring? If you're not working on anything that would let you test the new `plot_fsaverage()` at the moment, then no worries.
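As a rough sketch of the refactoring pattern described above (the signatures and percentile cutoffs here are simplified stand-ins, not the actual netneurotools API, and the "plotting" step is stubbed out):

```python
import numpy as np

def plot_fsvertex(vector, vmin=None, vmax=None):
    # Hypothetical dense-level plotter: resolves color limits from the
    # vertex-level data when they aren't provided explicitly
    if vmin is None:
        vmin = np.nanpercentile(vector, 2.5)
    if vmax is None:
        vmax = np.nanpercentile(vector, 97.5)
    return vmin, vmax  # stand-in for actually rendering the surface

def plot_fsaverage(parcel_vals, labels, vmin=None, vmax=None):
    # Resolve limits from the *parcellated* data first, so parcels with
    # many vertices don't dominate the colormap range...
    parcel_vals = np.asarray(parcel_vals)
    if vmin is None:
        vmin = np.nanpercentile(parcel_vals, 2.5)
    if vmax is None:
        vmax = np.nanpercentile(parcel_vals, 97.5)
    # ...then project parcel values onto vertices and defer to the
    # dense-level plotter under the hood
    dense = parcel_vals[labels]
    return plot_fsvertex(dense, vmin=vmin, vmax=vmax)
```

The point of the split is that `plot_fsaverage()` only does input formatting and limit resolution, while all actual rendering logic lives in one place.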