Closed: kingjr closed this issue 3 years ago
I think it is really clear
it does not cover other linear approaches, e.g. hyperalignment
hyperalignment is a linear or non-linear transform that aligns brain data across subjects in a common feature space. Once you have done this with some data, you are back to encoding / decoding, but now able to predict for new subjects.
forward and backward are used slightly differently in control theory
if you say so.
forward and inverse modeling for M/EEG source modeling may not fit well with the present forward/backward distinction.
M/EEG inverse modeling, when linear, amounts to applying a spatial filter informed by physics. The filter aims to highlight what deviates from zero / from baseline.
sounds great but is the cat French? :)
Can this be closed? :)
I guess so
I think we use this taxonomy nowadays
As discussed off-line, here is a to-be-debated proposal for operational definitions:
Supervised ML analyses of neuroimaging data tend to cluster into two groups: forward generative encoding models versus backward discriminative decoding models. The corresponding terms have thus often been used interchangeably (e.g. Haufe et al. 2014). However, they actually refer to independent properties of any neuroimaging analysis.
Backward and forward indicate whether a model is assessed in the same direction as it was fitted or optimized. For example, a logistic regression fitted to maximize the discriminability of “faces” versus “houses” and scored precisely on that ability is a forward model: it is both fitted and scored in the decoding direction. By contrast, a model fitted to maximize the variance induced by faces and houses, and subsequently assessed on its ability to predict the visual stimulus given brain activity, is a backward encoding model.
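A minimal numpy sketch of this fitted-one-way-scored-the-other situation (all variable names and simulation parameters are hypothetical, for illustration only): a linear model is fitted in the encoding direction (stimulus → brain), then additionally scored in the decoding direction through the pseudoinverse of its weights — a "backward encoding model" in the sense above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated experiment: 200 trials, 1 stimulus feature, 10 sensors.
n, n_sensors = 200, 10
x = rng.normal(size=(n, 1))                  # stimulus (e.g. face=+1 / house=-1 contrast)
W = rng.normal(size=(1, n_sensors))          # true mixing: stimulus -> sensors
y = x @ W + 0.5 * rng.normal(size=(n, n_sensors))  # simulated brain data

# Fit in the ENCODING direction: predict brain activity from the stimulus.
W_hat, *_ = np.linalg.lstsq(x, y, rcond=None)

# Forward use: score the model on what it was fitted for (brain prediction).
encoding_r2 = 1 - ((y - x @ W_hat) ** 2).sum() / ((y - y.mean(0)) ** 2).sum()

# Backward use: score the SAME model in the decoding direction,
# reading the stimulus back out through the pseudoinverse of W_hat.
x_dec = y @ np.linalg.pinv(W_hat)
decoding_r = np.corrcoef(x_dec.ravel(), x.ravel())[0, 1]
```

The point is only that the fitting direction and the scoring direction are two separate choices made on one and the same set of weights.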
Generative and discriminative models are statistical concepts. Generative models estimate the joint distribution of two observables X and Y: P(X, Y). Discriminative models estimate the conditional probability of one observable given the other: P(Y|X). A discriminative model can always be derived from a generative one via Bayes' rule, but not vice versa. Discriminative models therefore make fewer assumptions: they can be more robust for a given task, but are less informative and less general than their generative counterparts.
The present proposal implies that these three dimensions (encoding/decoding, forward/backward, generative/discriminative) are not mutually exclusive. For example, fitting an SVC to predict a stimulus category from a continuous BOLD response amounts to fitting a forward discriminative decoder. Fitting an SVM to predict whether a neuron should spike given the contrasts of a stimulus' pixels amounts to fitting a forward discriminative encoder. For some models, and in particular for univariate analyses, these distinctions can be irrelevant: for example, a simple ordinary least-squares regression (equivalent to LDA and ANOVA in binary cases) will find the same model whether fitted in the encoding or the decoding direction.
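A quick numerical check of the univariate claim (simulation values are arbitrary): once both observables are standardized, the OLS slope of brain ~ stimulus and of stimulus ~ brain is the same number, the Pearson correlation, so the fitting direction does not matter.

```python
import numpy as np

rng = np.random.default_rng(2)

# One stimulus variable, one (noisy) brain response.
x = rng.normal(size=300)
y = 0.8 * x + rng.normal(size=300)

# Standardize both observables.
xs = (x - x.mean()) / x.std()
ys = (y - y.mean()) / y.std()

# Univariate OLS slope in each direction.
b_encode = (xs * ys).mean() / (xs * xs).mean()   # fit brain ~ stimulus
b_decode = (ys * xs).mean() / (ys * ys).mean()   # fit stimulus ~ brain
r = np.corrcoef(x, y)[0, 1]                      # both slopes equal r
```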
Additionally, it is important to highlight that the coding framework is not tied to the mind / body divide. Autoregressive models, which are optimized to predict future brain activity, can be thought of as encoding models whose predictors are past neuronal data (as opposed to current sensory stimulation, or future motor actions).
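As a sketch of that reading (the process and its parameters are invented for illustration): an AR(1) coefficient fitted by least squares is an "encoding model" of the signal at time t whose sole regressor is the brain's own past, not any external variable.

```python
import numpy as np

rng = np.random.default_rng(3)

# An AR(1) process standing in for a neural time series.
n = 1000
z = np.zeros(n)
for t in range(1, n):
    z[t] = 0.9 * z[t - 1] + rng.normal(scale=0.5)

# "Encoding model" whose predictor is PAST brain activity rather than a
# sensory stimulus: least-squares fit of z[t] on z[t-1].
past, present = z[:-1], z[1:]
a_hat = (past * present).sum() / (past * past).sum()
```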
Finally, note that this proposal does not address causality: all of these models capture correlations. One can have an encoding model of sensory stimulation or of motor control, although in the first case the external variable causes brain activity, whereas in the second it is caused by brain activity.
The limits of this proposal
Please comment if you disagree or if you think we should add additional info. I'll update the proposal accordingly until we reach an agreement.