effigies opened this issue 2 years ago
Is this what `--drop-missing` is for? Or is this to ensure that if you do `--drop-missing`, the next level handles the new shape of predictors correctly?

Also, maybe `--drop-missing` should be the default behavior? This seems to trip people up frequently.
> Is this what `--drop-missing` is for? Or is this to ensure that if you do `--drop-missing`, the next level handles the new shape of predictors correctly?

Yes, we need to handle the new shape correctly. Could make it contingent on `--drop-missing`, or do it all the time and warn?
> Also, maybe `--drop-missing` should be the default behavior? This seems to trip people up frequently.
I would worry that this would make us silently ignore errors in the model spec.
I'm still not 100% sure when `--drop-missing` works and when it doesn't, because it always seems to work for me. For example, if one subject is missing a predictor in run 1 but not runs 2-3, it seems to handle the new shape fine.
I think that's a valid worry. Maybe for now we could make it contingent on `--drop-missing`, but also throw a useful error suggesting `--drop-missing` if weird shapes are detected.
When I called this job, I did include `--drop-missing`. Happy to share other details of our dataset if that helps with implementing more options.
### Environment

### Expected Behavior
My understanding of the situation: @sjshim has a dataset with a `demeaned_RT` regressor, but this is only non-NaN if the subject responds at least once during a run. For a run in which no responses occurred, this regressor will be missing from the design matrix, and the corresponding contrast will be missing from the L1 model outputs. When passed to the L2 model, the design matrix might expect 5 input stat maps but only get 4. We should detect this case and remove rows from the design matrix when an expected statistical map is missing.