tsalo opened this issue 3 years ago
If this gets implemented, then we should keep track of the classifications between iterations. That way, if the desired number of BOLD components is not reached within the iteration limit, the iteration with the most BOLD components could be returned.
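Something like this minimal sketch, maybe (all function and parameter names here are hypothetical placeholders, not tedana's actual API):

```python
def best_iteration_classification(components, classify_components,
                                  desired_n_bold, max_iterations):
    """Return the classification from the first iteration that finds
    enough BOLD components, or else from the iteration that found the
    most. Every name here is a hypothetical placeholder, not tedana's
    actual API.
    """
    best_classification, best_n_bold = None, -1
    for iteration in range(max_iterations):
        classification = classify_components(components, iteration)
        n_bold = sum(1 for label in classification if label == "accepted")
        if n_bold >= desired_n_bold:
            # Hit the target; stop early and use this iteration.
            return classification
        if n_bold > best_n_bold:
            # Remember the iteration with the most BOLD components so far.
            best_n_bold, best_classification = n_bold, classification
    # Iteration limit reached: fall back to the best attempt seen.
    return best_classification
```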
Out of curiosity, how important is it that some BOLD components are found? My impression was that it doesn't really matter if the end goal is to denoise---it's more problematic to have no rejected components in that case.
It's kind of a philosophical issue in my opinion. Presumably if you have a living person in the scanner, they will have some BOLD signal, even if it's not of neuronal origin. If you have no BOLD components, it's an indication that something is wrong.
> If this gets implemented, then we should keep track of the classifications between iterations. That way, if the desired number of BOLD components is not reached within the iteration limit, the iteration with the most BOLD components could be returned.
Good point. :+1: to that feature, if the number we come up with is > 1.
> Out of curiosity, how important is it that some BOLD components are found? My impression was that it doesn't really matter if the end goal is to denoise---it's more problematic to have no rejected components in that case.
I agree with Josh, but also: if the denoised data only includes (generally low-variance) ignored components and unmodeled variance, then we're talking about a very small fraction of the overall variance, which is probably just noise.
Good points are made here, but I think tedana should, to some extent, be comfortable with failing to find any BOLD components! I struggle to imagine how this could occur, but as SNR decreases (e.g., with higher resolution), it seems possible that current classification methods will have trouble and we'll end up with frustrated users. To @notZaki's point, the purpose is denoising, right? I want tedana to tell me what is bad in the data, even if it cannot tell me what is good.
I recall Kundu giving a talk showing the high-kappa time series, which of course requires identifying BOLD components, but I think analyses of that data have fallen out of favor.
I'm confused: would high resolution change whether or not we'd expect BOLD-like components? I can see why finer resolution might mess with the decomposition overall (e.g., perhaps things become more spatially discontinuous?), but I'm not sure we've seen anything to indicate that this is happening yet.
It was meant to be more of a "maybe this could happen" example rather than a prognostication; I've updated it for clarity. More to the point, it is difficult to predict what users will do or how methods will develop, and I think preventing users from denoising their data, even though plenty of noise was found, could be frustrating. That said, a warning, perhaps even a loud one, is important. Finding no BOLD components is a concern, but I feel it shouldn't be a show-stopper.
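Concretely, I'm imagining something along these lines (a sketch, not tedana's actual code; a pandas `component_table` with a `classification` column is an assumption here):

```python
import logging

LGR = logging.getLogger("tedana")

def check_bold_components(component_table):
    """Warn loudly, rather than error out, when no components were
    classified as BOLD. Assumes a pandas DataFrame with a
    "classification" column; tedana's real structures may differ.
    """
    n_bold = (component_table["classification"] == "accepted").sum()
    if n_bold == 0:
        LGR.warning(
            "No BOLD (accepted) components were identified. Rejected "
            "components will still be removed during denoising, but "
            "the outputs should be inspected carefully."
        )
    return n_bold
```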
## Summary
In our current code, as reinforced in #663, only one BOLD component is needed to produce valid output. Do we want to increase this number at all?
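For illustration, increasing it would amount to generalizing the current check from `>= 1` to a configurable minimum, roughly like this (a hypothetical sketch; `min_bold` is not an existing tedana option):

```python
def validate_n_bold(n_bold, min_bold=1):
    """Raise if fewer than `min_bold` components were classified as
    BOLD. `min_bold` is a hypothetical parameter; the current behavior
    corresponds to min_bold=1, and per the discussion above this could
    warn instead of raising.
    """
    if n_bold < min_bold:
        raise ValueError(
            f"Only {n_bold} BOLD component(s) found; "
            f"at least {min_bold} required."
        )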
## Next Steps