Closed: TheMnBN closed this issue 5 years ago
We ran the code on a machine with 384 GB of RAM. The load_signals script pre-processes the raw data and then concatenates it all into a single .hickle file for faster training later on. A possible workaround is to save the individual pre-processed 30-second segments to disk instead of concatenating them in memory.
https://github.com/Nano-Neuro-Research-Lab/Seizure-prediction-CNN/blob/caccb5347ebde59110cfc0560dc3472741cb61dc/utils/load_signals.py#L42
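Not from the repo, just a minimal sketch of that workaround, assuming each pre-processed segment is a NumPy array (as produced in `load_signals.py`); the `save_segments`/`iter_segments` helpers and the file layout are hypothetical:

```python
import os
import numpy as np

def save_segments(segments, out_dir):
    """Write each pre-processed 30-s segment to its own .npy file
    instead of holding everything in RAM for one big concatenation."""
    os.makedirs(out_dir, exist_ok=True)
    # `segments` can be a generator, so only one segment is in memory at a time
    for i, seg in enumerate(segments):
        np.save(os.path.join(out_dir, 'segment_%06d.npy' % i), seg)

def iter_segments(out_dir):
    """Yield segments one at a time, so training never needs the
    whole dataset loaded at once."""
    for name in sorted(os.listdir(out_dir)):
        if name.endswith('.npy'):
            yield np.load(os.path.join(out_dir, name))
```

Peak memory then stays around the size of a single segment rather than the whole dataset, at the cost of more (smaller) disk reads during training.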
My system has 32 GB of physical memory, and RAM utilisation always hits close to 100% when pre-processing the data. Do you have any insight into what is causing this issue? (Or maybe it's intended to run on systems with more memory?)