zeosmar opened this issue 3 years ago
I think it's expecting a list of dictionaries under `Custom`.
@zeosmar, let me know if it works with

```yaml
Custom:
  - file: /config/faces.csv
```

instead of

```yaml
Custom:
  file: /config/faces.csv
```

and I'll update the validator for the next release.
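For context on why the dash matters (a minimal sketch using PyYAML, not C-PAC's actual validator — the loader in use is an assumption): the two spellings parse to different Python structures, so a validator expecting a list of dictionaries rejects the plain mapping.

```python
# Sketch: the two YAML spellings of `Custom` parse differently.
# PyYAML is assumed here; C-PAC's real validator may differ.
import yaml

as_mapping = yaml.safe_load("Custom:\n  file: /config/faces.csv\n")
as_list = yaml.safe_load("Custom:\n- file: /config/faces.csv\n")

print(as_mapping["Custom"])  # {'file': '/config/faces.csv'} — a dict
print(as_list["Custom"])     # [{'file': '/config/faces.csv'}] — a list of dicts
```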
Thanks @shnizzedy! CPAC was able to validate the nuisance regression. Now, I'm getting the following error message:
Because I'm running CPAC for a single subject, I'm using `--n_cpus 4` and `--mem_gb 4` in my docker command. I've already increased the memory to 10 GB, but it still returns the same error. Do you have any suggestions to solve this issue?
`--n_cpus` and `--mem_gb` are both per-participant, so for a single-subject run, those would also be the total limits (and those take precedence over values in the pipeline configuration file).
The 'insufficient resources' error at the beginning is an attempt to reduce the instances of out-of-memory crashes during runs, by letting you adjust up front, but we still need to do some tuning: right now the memory estimates are hard-coded and data-independent, based on common data sizes, so until we make those estimates data-dependent they will be too big for small runs and too small for big runs.
These nodes have estimates > 4 GB:
| node | estimate (GB) |
|---|---|
| ALFF bandpass filter | 13 |
| compcor DetrendPC | 12.5 |
| apply ANTs warp | 10 |
| cosine filter | 8 |
| FLIRT resample | 8 |
| functional to standard composite transform | 8 |
| mask to EPI | 8 |
| reho_map | 6 |
| mean filter | 5 |
| VMHC apply_transform | 5 |
| spatial regression | 4.5 |
Unfortunately, I don't think there's a particularly clean way around this. If your memory limit is soft (it looks like you're running in Docker on Lisa, so I think yours is), or you're pretty confident your data is small enough that the estimates are overkill, you should be able to just set `--mem_gb` to at least the biggest number in the table above that's relevant to your pipeline.
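As a concrete sketch (the image tag, host paths, and positional arguments below are placeholders following the usual BIDS-app convention, not commands from this thread): if ALFF is in your pipeline, 13 GB is the largest relevant estimate, so a single-participant run might look like:

```shell
# Hypothetical invocation — image tag and host paths are placeholders.
# --n_cpus / --mem_gb are per-participant, so for one subject they
# are also the totals.
docker run --rm \
    -v /path/to/bids_dir:/bids_dataset:ro \
    -v /path/to/outputs:/outputs \
    -v /path/to/config:/config:ro \
    fcpindi/c-pac:latest \
    /bids_dataset /outputs participant \
    --n_cpus 4 \
    --mem_gb 13
```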
C-PAC 1.8 includes reporting on observed memory usage, so you / we can compare what was really used to what was estimated / allocated.
Thanks @shnizzedy, I was able to get my pipeline running by following your suggestion. To give you more details on my goal with the custom regressor parameter: I'm basically trying to regress out a task time series composed of two blocks (faces and shapes), and thus I have two time series (faces.csv and shapes.csv). After the pipeline completed, I noticed two issues:

1. The provided custom regressor signal was changed at some point during the run, as shown in the attached figure. Would that be the resulting signal after the bandpass filtering?
2. I'm not sure if I could do this, but I provided two custom regressors, and only the last one was used for the run. There was no error message or any other indication that I can't use more than one, though.
Is there any way I could make this work?
```yaml
# Select which nuisance signal corrections to apply
Regressors:
  - Name: 'defaultNoGsR'
    Motion:
      include delayed: true
      include squared: true
      include delayed squared: true
    aCompCor:
      summary:
        method: DetrendPC
        components: 5
      tissues:
        - WhiteMatter
        - CerebrospinalFluid
      extraction resolution: 2
    PolyOrt:
      degree: 2
    Bandpass:
      bottom frequency: 0.01
      top frequency: 0.1
      method: default
    Custom:
      - file: /config/faces shapes timeseries/80020_faces1.csv
    Custom:
      - file: /config/faces shapes timeseries/80020_shapes1.csv
```
The right window shows the changed custom signal; the original signal is shown on the left.
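One thing worth noting about the config above, separate from anything C-PAC does internally: a YAML mapping can't hold the same key twice, and PyYAML's loader silently keeps only the last occurrence, which on its own would drop the first `Custom:` block before any pipeline code ever sees it (a quick sketch; PyYAML as the loader is an assumption):

```python
# Sketch: duplicate keys in a YAML mapping — PyYAML keeps the last
# one with no warning, so the first Custom block vanishes at parse
# time. File names here are shortened placeholders.
import yaml

doc = """
Custom:
- file: faces.csv
Custom:
- file: shapes.csv
"""
parsed = yaml.safe_load(doc)
print(parsed["Custom"])  # only the shapes.csv entry survives
```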
Hey @zeosmar, sorry for the slow reply.
> The provided custom regressor signal was changed at some point during the run, as shown in the attached figure. Would that be the resulting signal after the bandpass filtering?
I think what you're suspecting is correct. That `*_regressors.1D` file is in the outputs, not in the working directory, right?
> I'm not sure if I could do this, but I provided two custom regressors, and only the last one was used for the run. There was no error message or any other indication that I can't use more than one, though.
It looks like there was a bug where each custom file within a regressor was generating a node with the same name as each other custom file in the same regressor. Can you try
```yaml
# Select which nuisance signal corrections to apply
Regressors:
  - Name: 'defaultNoGsR'
    Motion:
      include delayed: true
      include squared: true
      include delayed squared: true
    aCompCor:
      summary:
        method: DetrendPC
        components: 5
      tissues:
        - WhiteMatter
        - CerebrospinalFluid
      extraction resolution: 2
    PolyOrt:
      degree: 2
    Bandpass:
      bottom frequency: 0.01
      top frequency: 0.1
      method: default
    Custom:
      - file: /config/faces shapes timeseries/80020_faces1.csv
      - file: /config/faces shapes timeseries/80020_shapes1.csv
```
in this image? (Download cpac-docker-image.tar.gz, then run `docker load < cpac-docker-image.tar.gz` to load the image from CircleCI.)
@zeosmar did you ever get this to work?
**Describe the bug**
I'm getting an error message after including a custom regressor in the default config pipeline.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a single-column .csv file
2. Include the custom regressor in the default pipeline
3. Map the csv file in the docker command and run it
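The steps above can be sketched as follows (the regressor values, file name, and mount path are hypothetical placeholders):

```shell
# 1 - Create a single-column CSV file (placeholder values, one per TR)
printf '0\n0\n1\n1\n0\n' > faces.csv

# 2 - Point the pipeline config's Custom entry at /config/faces.csv

# 3 - Map the file's directory into the container when running, e.g.:
#     docker run --rm -v "$PWD":/config fcpindi/c-pac:latest ...
```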
**Screenshots**

**Versions**