Another possibility is that the MATLAB VMs are becoming too big (because of the many states), so there is no space left to write the (de)serialize file. Remember that there are 16+1 MATLAB instances and 32 GB total per node. I tested the same run as above (but now with parameterSamplesAreGiven = false; I don't know why that was set like that), using 24 hours of data. That ran fine; memory use was about 4.6 GB of the 32 GB.
Another quick test with 10 days of data showed a rapid rise in memory use. The highest I saw in htop was 24 GB, but afterward a couple of processes no longer seemed to be doing anything, so they probably crashed due to out-of-memory errors.
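To check this hypothesis more precisely than eyeballing htop, the resident memory of all MATLAB instances on the node can be summed. This is a hedged sketch: the process command name matching `MATLAB` and the 32 GB node limit are assumptions from the description above.

```shell
# Sum resident set size (RSS) of all MATLAB processes, reported in GB,
# to see how close the 16+1 instances together come to the 32 GB node limit.
# Assumption: the process command name contains "MATLAB".
ps -eo comm=,rss= | awk '$1 ~ /MATLAB/ {sum += $2} END {printf "total MATLAB RSS: %.1f GB\n", sum/1024/1024}'
```

Running this periodically (e.g. under `watch`) during the 10-day run would show whether the total climbs toward the node limit before processes start dying.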
This is perhaps related to the size of the ramdisk versus that of the file in RAM: 412 MB (for what is supposed to be a small file).
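One way to rule the ramdisk in or out is to compare its free space against the size the serialize file needs. A minimal sketch, assuming the ramdisk is mounted at /dev/shm and using the 412 MB figure observed above:

```shell
# Check whether the serialize file would fit on the ramdisk before writing it.
# Assumptions: ramdisk mount point is /dev/shm; required size is the 412 MB observed.
RAMDISK=/dev/shm
NEED=$((412 * 1024 * 1024))                            # bytes the file needs
AVAIL=$(df -B1 --output=avail "$RAMDISK" | tail -n 1)  # bytes free on the ramdisk
if [ "$AVAIL" -lt "$NEED" ]; then
    echo "ramdisk too small: $AVAIL bytes free, need $NEED"
else
    echo "ok: $AVAIL bytes free on $RAMDISK"
fi
```

If the available bytes are below the required size, the (de)serialize step would fail for lack of ramdisk space rather than because of the MATLAB VMs themselves.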
Here is the error that comes after this.
Note this was for a mmsoda reset run with these settings: