If you run a parallel model once, it will precompute the likelihoods. If you run it again, the workers don't seem to take any memory; I'm just judging from the processes in htop. On the first run they each grabbed ~3 GB, but now they're sitting at ~30 MB, and CPU usage is 0 for all of them.
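One mechanism consistent with this behavior (a minimal sketch, not necessarily what the library actually does): if the first run writes the precomputed likelihoods to disk and later runs memory-map that cache, each worker's resident memory stays tiny even though it can still read the full array. The cache path and function names below are hypothetical stand-ins.

```python
import os
import numpy as np
from multiprocessing import Pool

CACHE_PATH = "likelihood_cache.npy"  # hypothetical cache file

def precompute_likelihoods(n_points: int) -> np.ndarray:
    """Expensive first-run computation (toy stand-in for the real model)."""
    grid = np.linspace(-5.0, 5.0, n_points)
    return np.exp(-0.5 * grid**2)  # toy Gaussian likelihoods

def load_or_compute(n_points: int) -> np.ndarray:
    if os.path.exists(CACHE_PATH):
        # mmap_mode="r" maps the file into the address space instead of
        # copying it into RAM, so on reruns each worker's resident memory
        # stays small even for a multi-GB cache.
        return np.load(CACHE_PATH, mmap_mode="r")
    # First run: each worker materializes the full array in its own RAM
    # (which would match the ~3 GB-per-worker observation). A real
    # implementation would also guard against concurrent writes here.
    like = precompute_likelihoods(n_points)
    np.save(CACHE_PATH, like)
    return like

def worker_task(idx: int) -> float:
    like = load_or_compute(1_000_000)
    return float(like[idx])

if __name__ == "__main__":
    with Pool(4) as pool:
        print(pool.map(worker_task, [0, 250_000, 500_000]))
```

On a second invocation the workers only touch the memory-mapped pages they actually index, so htop would report near-zero RSS and idle CPU, exactly the pattern described above.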