ME-ICA / multi-echo-data-analysis

Still a work in progress.
https://me-ica.github.io/multi-echo-data-analysis/
GNU Lesser General Public License v2.1

Bump tedana from 0.0.12 to 23.0.2 #23

Closed: dependabot[bot] closed this pull request 7 months ago

dependabot[bot] commented 7 months ago

Bumps tedana from 0.0.12 to 23.0.2.
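For anyone reviewing this bump locally, a quick way to confirm that the installed tedana version satisfies the new pin is to query the package metadata. This is a minimal sketch: it assumes the `packaging` library is available (it usually ships alongside pip), and the `>=23.0.2` specifier simply mirrors this PR.

```python
# Minimal sketch: check that the installed tedana satisfies the bumped pin.
# The ">=23.0.2" requirement mirrors this PR and is illustrative only.
from importlib.metadata import version

from packaging.specifiers import SpecifierSet

installed = version("tedana")
print(f"Installed tedana: {installed}")

if installed not in SpecifierSet(">=23.0.2"):
    raise SystemExit("tedana is older than 23.0.2; run: pip install -U tedana")
```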

Release notes

Sourced from tedana's releases.

23.0.2

Summary

This release includes many documentation updates, logging of the Python and software versions in tedana_report (#747), a fix for a bug that prevented specifying PCA variance explained from the command line interface (#950), stricter code style rules enforced with pre-commit hooks, code cleanup in several places where we were unnecessarily supporting old versions of Python modules (#998), and updates that allow tedana to run with Python 3.12 (#999).
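One of the fixes called out above (#950) restores the ability to set the PCA variance-explained threshold from the command line. The sketch below is illustrative only: the file names and echo times are placeholders, and it assumes tedana's documented `-d`/`-e`/`--out-dir`/`--tedpca` options, with a float `--tedpca` value interpreted as the fraction of variance to retain.

```python
# Minimal sketch of the CLI path fixed in #950: passing a PCA
# variance-explained threshold to tedana from the command line.
# File paths and echo times are placeholders for real multi-echo data.
import subprocess

cmd = [
    "tedana",
    "-d", "sub-01_echo-1_bold.nii.gz",
          "sub-01_echo-2_bold.nii.gz",
          "sub-01_echo-3_bold.nii.gz",
    "-e", "14.5", "38.5", "62.5",        # echo times in milliseconds
    "--out-dir", "tedana_out",
    "--tedpca", "0.95",                  # retain ~95% of the variance (assumed float behavior)
]
subprocess.run(cmd, check=True)
```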

What's Changed

New Contributors

Full Changelog: https://github.com/ME-ICA/tedana/compare/23.0.1...23.0.2

23.0.1

Release Notes

Most of these changes were made for v23.0.0, but that package did not build for pip, so the descriptive release notes are stored with this version.

This release changes many internal aspects of the code; it should make future improvements easier and, we hope, make it easier for more people to understand their results and to contribute. The denoising results should be identical. Right before releasing this new version, we released version 0.0.13, which is the last version of the older code.

User-facing changes

  • Breaking change: tedana can no longer be used to manually change component classifications. A separate program, ica_reclassify, handles this. This makes it easier for programs like Rica to output a list of component numbers to change and then apply those changes with ica_reclassify. Internally, a large portion of the tedana workflow code was a tangle of conditional statements designed solely so that this functionality could be retained within tedana. Separating out ica_reclassify makes the tedana code more comprehensible and adaptable.
  • Breaking change: No components are classified as ignored. The ignored classification has long confused users. It was intended to identify components with such low variation that it was not worth deciding whether to lose a statistical degree of freedom by rejecting them; they were treated identically to accepted components. Now they are classified as accepted and tagged as Low variance or Borderline Accept. This classification_tag now appears in the html report of the results and in the component table file.
  • Breaking change: In the component table file, classification_tag has replaced rationale. Because the tags use words and more than one tag can be assigned to each component, they are both more informative and more flexible than the older numerical rationale codes (see the component table sketch after this list).
  • It is now possible to select different decision trees for component selection using the --tree option. The default tree is kundu, which should replicate the current outputs. We also include minimal, a simpler tree intended to provide more consistent results across a study; it needs more testing and validation and may still change. Flow charts for these two options are in the documentation (a Python sketch of selecting a tree follows this list).
  • Anyone can create their own decision tree. If one is using metrics that are already calculated, like kappa and rho, and doing greater-than/less-than comparisons, one can build a decision tree with a user-provided json file and the --tree option. More complex calculations might require editing the tedana python code. This change also means that any metric with one value per component can be used in the selection process, which makes it possible to combine the multi-echo metrics used in tedana with other selection metrics, such as correlations with head motion. The documentation includes instructions on building and understanding this component selection process.
  • Additional files are saved that store key internal calculations and record which steps changed the accept vs reject classification of each component. The documentation describes the newly output files and their contents. These include:
    • A registry of all files output by tedana. This allows for multiple file naming methods and means that internal and external programs that want to interact with tedana's outputs only need to load this file (see the registry sketch after this list).
    • A file of all the metrics calculated across components, such as the kappa and rho elbow thresholds
    • A decision tree file which records the exact decision tree that was run on the data and includes metrics calculated and component classifications changed in each step of the process
    • A component status table that summarizes each component's classification at each step of the decision tree
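The tree selection described above can be driven from Python as well as from the command line. The sketch below is a minimal example and assumes that the tedana_workflow entry point accepts a tree keyword mirroring the CLI's --tree option; the file names and echo times are placeholders, not real data.

```python
# Minimal sketch: selecting a decision tree when running tedana from Python.
# Assumes tedana_workflow mirrors the CLI's --tree option; paths are placeholders.
from tedana.workflows import tedana_workflow

tedana_workflow(
    data=[
        "sub-01_echo-1_bold.nii.gz",
        "sub-01_echo-2_bold.nii.gz",
        "sub-01_echo-3_bold.nii.gz",
    ],
    tes=[14.5, 38.5, 62.5],   # echo times in milliseconds
    out_dir="tedana_out",
    tree="kundu",             # default; "minimal" or a custom JSON tree are alternatives
)
```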
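To inspect the new classification_tag values, the component table can be read like any other TSV. This is a minimal sketch: the file name follows tedana's BIDS-style output convention, and the exact file and column names may differ between versions, so check the actual outputs (or the registry) for the names tedana wrote.

```python
# Minimal sketch: filter the component table by the new classification tags.
# File and column names are assumptions based on these release notes.
import pandas as pd

comptable = pd.read_csv("tedana_out/desc-tedana_metrics.tsv", sep="\t")

# Components that would previously have been "ignored" are now accepted
# and tagged, e.g. "Low variance" or "Borderline Accept".
low_variance = comptable[
    (comptable["classification"] == "accepted")
    & comptable["classification_tag"].str.contains("Low variance", na=False)
]
print(low_variance[["classification", "classification_tag"]])
```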
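The output registry mentioned above lets downstream tools locate tedana's outputs without hard-coding file names. The sketch below assumes a JSON registry in the output directory; its exact file name and key structure are assumptions here, so verify them against your own tedana output.

```python
# Minimal sketch: load the output registry and list what tedana wrote.
# The registry file name and its keys are assumptions; check your output
# directory for the exact naming used by your version and convention.
import json
from pathlib import Path

out_dir = Path("tedana_out")
registry = json.loads((out_dir / "desc-tedana_registry.json").read_text())

# The registry maps descriptive keys to the file names tedana actually wrote,
# so tools can look outputs up here instead of guessing names.
for key, value in sorted(registry.items()):
    print(f"{key}: {value}")
```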

... (truncated)

Commits
  • 1c3f93e MNT: Test on 3.11 and 3.12 (#999)
  • a422dc3 MNT: Uncap dependencies (#998)
  • 2cb05ab remove uncessary copy of large data (#995)
  • b139386 Lint codebase with additional style restrictions (#970)
  • b11d254 Make it clearer in CONTRIBUTING.md how devs can make pre-commit work (#985)
  • 3450103 Add prefix to all output files (#963)
  • 133b3b5 Create .pre-commit-config.yaml (#983)
  • efe7cdb Multiple documentation updates (#948)
  • d11f021 Add section to reports that show system info, tedana call and version (#747)
  • f9daaa0 Add pre-commit to automatically fix style issue before pushing commits (#973)
  • Additional commits viewable in compare view


Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency
  • `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
dependabot[bot] commented 7 months ago

The following labels could not be found: maintenance, ignore-for-release.