Closed · juthzi closed this issue 4 years ago
To my knowledge, it's not possible to downsample while reading the data from an .edf file. Do you run into memory issues even when you read a single .edf file? Otherwise, I'd recommend reading one file, downsampling it, then reading the next, and so on.
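The file-by-file approach above could be sketched roughly as follows. Note that `read_one_edf()`, `downsample_samples()`, and `preprocess()` are placeholders for whatever reader and preprocessing functions your pipeline actually uses (e.g. from edfR and GazeR), not real package calls:

```r
# Hedged sketch: process one .edf file at a time to keep memory use low.
# read_one_edf(), downsample_samples(), and preprocess() are hypothetical
# stand-ins for the reader and processing functions in your pipeline.
process_all <- function(edf_files, read_one_edf, downsample_samples, preprocess) {
  results <- vector("list", length(edf_files))
  for (i in seq_along(edf_files)) {
    dat <- read_one_edf(edf_files[i])   # read a single file
    dat <- downsample_samples(dat)      # shrink it before the heavy steps
    results[[i]] <- preprocess(dat)     # run the expensive functions per file
    rm(dat); gc()                       # free memory before the next file
  }
  do.call(rbind, results)               # concatenate only once, at the end
}
```

Because each file is reduced before the expensive steps run, peak memory stays close to the size of one downsampled file rather than the whole dataset.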
Thanks for getting back to me! The problem is not so much reading in the files, but running the functions that follow, so far: behave_pupil() and baseline_correction_pupil_msg().
I guess I could run these functions on individual files, or on only a couple of files at a time. However, I am reluctant to do this, because it means I have to run the whole pipeline per file and afterwards concatenate the results, which always creates new sources of error (I am not very confident with R or the packages yet). But I guess I have no other choice, unless somebody has a different idea?
I see. In that case I would contact the authors of GazeR, since the bottleneck seems to arise in their package. Perhaps they have already run into this issue themselves and have solutions.
Thank you for the advice, I will do that.
Best wishes, Juliane
Hello everyone,
I have trouble processing my pupil data with some of the GazeR and edfR functions: I run into memory/workspace limit problems.
Therefore, I wanted to downsample the files (they were recorded at 1000 Hz) to 250 Hz for a start. The GazeR manual describes a way to downsample using "downsample_gaze", but for me this step occurs too late in the script, so I run into problems before reaching it. Do you know if there is a way to downsample the .edf files when they are initially parsed/imported into R? Or can I simply use "downsample_gaze" at an earlier point in the script without messing other things up?
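For context, going from 1000 Hz to 250 Hz means collapsing every 4 consecutive samples into one. A minimal base-R sketch of that reduction, assuming simple bin-averaging (downsample_gaze may well use a different method internally; this only illustrates the size reduction):

```r
# Hypothetical sketch: downsample a 1000 Hz trace to 250 Hz by averaging
# non-overlapping bins of `factor` samples. This is an assumption about
# the method, not what gazer's downsample_gaze actually does.
downsample_trace <- function(x, factor = 4) {
  n <- floor(length(x) / factor) * factor   # drop any ragged tail
  # matrix() fills column-wise, so each column holds `factor`
  # consecutive samples; colMeans() averages each bin
  colMeans(matrix(x[seq_len(n)], nrow = factor))
}
```

A one-second trace of 1000 samples thus shrinks to 250 values, which is why downsampling as early as possible eases the memory pressure in later steps.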
Thank you for your advice, Juliane