handwerkerd opened 3 months ago
Attention: Patch coverage is 88.05970% with 32 lines in your changes missing coverage. Please review.
Project coverage is 89.61%. Comparing base (12aea7f) to head (4223bf9). Report is 1 commit behind head on main.
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
I'd appreciate some reviewers for this: @tsalo @eurunuela @goodalse2019 @n-reddy @dowdlelt . From my end, the only thing left to do is update the ReadTheDocs part of the documentation. I'm holding off on that until a couple of you agree that the overall structure makes sense.
@tsalo I just added type hints to external.py
I'm still a bit of a novice with type hints and it looks like formatting is non-trivially changing across python versions. I tried to follow guidance that's valid for python 3.8 since that's our oldest permitted version. I'll keep working in other files and on your remaining review comments, but let me know if I'm appropriately adding type hints so far.
Update as I was writing this comment: Lots of tests just failed, so the answer to that question is "no". I figured out the issue and all tests are passing now. It seems like numpy v1.24.3, which is currently in my python 3.8 environment, includes `numpy.typing` and not `numpy._typing`, while numpy v1.26.3 in my python 3.11 environment includes `numpy._typing` and not `numpy.typing`. `import numpy.typing as npt` works, but directly referencing `np.typing` does not work across all versions. I suspect there's a way to set up backwards compatibility for this, but I wanted to share what I'm seeing before I switch to childcare stuff.
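For reference, a minimal sketch of the pattern that works across versions (the function here is invented for illustration, not code from external.py):

```python
# Illustrative sketch, not code from external.py: the invented function below
# uses type hints that work on Python 3.8+ by importing the public
# numpy.typing module (np.typing is not reliably importable across versions).
import numpy as np
import numpy.typing as npt


def zscore_columns(data: npt.NDArray) -> npt.NDArray:
    """Z-score each column of a 2D array (invented example)."""
    arr = np.asarray(data, dtype=float)
    return (arr - arr.mean(axis=0)) / arr.std(axis=0)
```

The key point is the aliased import: `npt.NDArray` resolves on both environments above, while `np.typing.NDArray` does not.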
@tsalo Can you explain more what you mean by "It would be great if strings and lines in docstrings were broken on punctuation. This will result in cleaner diffs in the future."
Absolutely. Take a paragraph in the text, like the one below:
This is sentence one. This is
sentence two. This is sentence
three- but it's still going.
Since sentences are a unit within the paragraph, we're more likely to change them than random words. Plus, since almost everything we write is in markdown or rst, the newlines don't impact the rendered text- just the code.
```diff
-This is sentence one. This is
-sentence two. This is sentence
-three- but it's still going.
+This is sentence one. This is a
+new sentence two with other
+changes. This is sentence three-
+but it's still going.
```
The diff on that, where the only consideration is line length, is much more extensive (and harder to interpret for a reviewer) than the diff if we broke up the text on punctuation:
```diff
 This is sentence one.
-This is sentence two.
+This is a new sentence
+two with other changes.
 This is sentence three-
 but it's still going.
```
Attention: Patch coverage is 92.30769% with 21 lines in your changes missing coverage. Please review.
Project coverage is 89.90%. Comparing base (7ca9c27) to head (8d09599).
:exclamation: Current head 8d09599 differs from pull request most recent head 2d45383. Please upload reports for the commit 2d45383 to get more accurate results.
I think I've now addressed all feedback regarding code, and I've started to add documentation to `building_decision_trees.rst`. I still need to make flow charts for the new trees and add them to `included_decision_trees.rst`, but otherwise this PR is (again) ready for review & for people to try it out.
Closes #1009. This is an alternative approach to #1008 and #1021.
If a user provides external regressors, this will calculate a fit to those regressors to include as metrics in the component table and for use in decision trees.
Changes proposed in this pull request:
- `metrics.external`, for calculating metrics from external regressors.
- `task_keep`: If regressors under this label are included, they will be excluded from the full F model and a separate full F model will be calculated using just these regressors. This was a suggestion from @dowdlelt so that it would be possible to identify and conservatively retain components that fit the overall task design.
- `--external`, to pass in a TSV with external regressors.
- Trees prefixed with `demo_`, because they demonstrate how to use the new functionality. They might be similar to trees we'll want to recommend, but validation on actual data (and tweaking) will be easier when this is merged and more people are using it.
- `external_regressor_config`, which is a dictionary with the following keys:
  - `info`: A description of what the config does that's saved.
  - `report`: A description to add to `report.txt`.
  - `calc_stats`: Currently the only option is "F", but this leaves open possibilities for additional options.
  - `detrend`: Will automatically calculate the number of polynomial detrending regressors if true, but can also be a number for the user to specify. If this is false, then it will be set to 0 (mean removal) and log a warning.
  - `f_stats_partial_models` [optional]: A list of titles for the partial models (i.e. `["Motion", "CSF"]`).
  - If `f_stats_partial_models` has model names, each model name is its own key and contains either a list of column labels or a regular expression wildcard. That is, `"Motion": ["^mot_.*$"]` means the Motion partial model will include any external regressor column label that begins with `mot_`, and `"Motion": ["mot_x", "mot_y", "mot_z"]` specifies 3 specific column label names.
  - `task_keep` [optional]: Contents are a regex wildcard or specific names to define task regressors that will not be included in the full model.
- `demo_minimal_external_regressors_motion_task_models.json` uses all the above options, uses the partial models to add a classification_tag (but not change results), and retains components that correlate to the task (R2>0.5) and have kappa>elbow regardless of what rho is.
- `demo_minimal_external_regressors_single_model.json` uses the minimum number of options to run with external regressors.
- Tests use the `matching` parameter to confirm the correct errors are triggered and `caplog.text` to confirm expected warnings or info statements are logged.

To do:
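To make the config shape concrete, here's an illustrative sketch: the key names mirror the list above, but the regressor column labels, descriptions, and the `select_columns` helper are invented for this example, and the regex handling is a simplified stand-in for whatever matching the PR actually implements.

```python
import re

# Hypothetical external_regressor_config, mirroring the keys described above.
# All string values and column labels here are invented examples.
external_regressor_config = {
    "info": "Fits motion external regressors as a partial model",
    "report": "External regressors were fit to each component.",
    "calc_stats": "F",
    "detrend": True,
    "f_stats_partial_models": ["Motion"],
    # Regex wildcard: any column label beginning with "mot_" joins this model.
    "Motion": ["^mot_.*$"],
}

# Invented column labels from a hypothetical external regressors TSV.
columns = ["mot_x", "mot_y", "mot_z", "csf", "task_press_button"]


def select_columns(patterns, columns):
    """Return column labels matched by any listed regex wildcard or exact name."""
    return [c for c in columns if any(re.fullmatch(p, c) for p in patterns)]


print(select_columns(external_regressor_config["Motion"], columns))
# → ['mot_x', 'mot_y', 'mot_z']
```

Exact names like `["mot_x", "mot_y", "mot_z"]` also work through the same helper, since a plain label is a regex that only matches itself.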