I'm getting the following error only when running many (n = 100) of essentially the same job (just random permutations); running just a few of the permutation jobs produces no such error:
The fact that the error occurs only when a large number of jobs are run makes me think the memory error happens during the final aggregation of the results, but the R objects returned by each job should be relatively small.
Also, the error doesn't make much sense to me, given that a qstat call shouldn't take up much memory, especially on a server with 1 TB of RAM. To be clear, each SGE job I submit is allocated 80 GB, and the jobs need less than half of that (I'm just making sure there's enough memory); submitting just a few of the exact same jobs doesn't trigger this "out of memory" error.
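For context, the jobs are submitted roughly like this (a minimal sketch assuming the future.batchtools SGE backend; the template file name and the "memory" resource key are assumptions that depend on the cluster's batchtools template):

```r
library(future.batchtools)

# Sketch: each future becomes one SGE job. The entries in `resources`
# are filled into the brew template, so the key names (e.g. "memory")
# must match whatever the template file expects -- assumed here.
plan(batchtools_sge,
     template  = "sge.tmpl",            # assumed template file name
     resources = list(memory = "80G"))  # ~80 GB per job, as described above
```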
For now, I'm simply splitting my future_map() jobs into batches of 10. These batches finish without any errors, unlike when I run all 100 jobs at once.
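The workaround, sketched below (param_list and run_permutation are hypothetical stand-ins for my actual permutation inputs and per-job function):

```r
library(furrr)  # provides future_map()

# Hypothetical stand-ins for the real permutation inputs and job function
param_list <- as.list(seq_len(100))
run_permutation <- function(seed) { set.seed(seed); mean(rnorm(1000)) }

# Split the 100 jobs into batches of 10 and run each batch separately,
# so that only 10 SGE jobs are submitted/polled at a time
batches <- split(param_list, ceiling(seq_along(param_list) / 10))

results <- list()
for (b in seq_along(batches)) {
  results[[b]] <- future_map(batches[[b]], run_permutation)
}

# Flatten the per-batch lists back into a single list of 100 results
results <- unlist(results, recursive = FALSE)
```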
sessionInfo: