ParticularMiner closed this issue 3 years ago
Hi @ParticularMiner, did you use the -r option? By default, procgov applies the limits only to the selected process. The -r option also affects all of its child processes.
Hi @lowleveldesign
Thanks for the advice. Using the "-r" option did not change the behavior of the process.
@lowleveldesign
Does the "-r" option give the same limit to all child processes in such a way that the
[total memory reserved for all child processes plus the parent process] = ["-m" value]?
Or does it rather set the maximum limit for each individual child process to the given "-m" value?
It sets the limit for each individual process. The -r option should work for direct children. Maybe it's more complicated in your case. Please record a procmon trace when the process launches, zip it and attach it to this ticket. To ensure no sensitive data leaks (it shouldn't), set a password on the .zip file and send the password by email (ssolnica at gmail).
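In other words, with something like this (app.exe is just a placeholder):
procgov -r -m 1G app.exe
app.exe and each of its children get their own 1 GB cap; it is not 1 GB shared by the whole tree.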
@lowleveldesign
Please find herewith attached the zip file requested:
The password will be sent to you as specified.
Thanks.
OK, so you apply the limit to the last process in the process tree (21268):
Instead, you should be using one of the parent processes, for example, 20924. Please note that the limit you set will apply only to newly created processes. So if you attach to an existing process which has children, procgov won't affect the already running child processes.
Can you start jupyter under procgov? That should put a limit on all the Python processes it launches. For example:
procgov -r -m 1G jupyter lab
@lowleveldesign
OK. That explains a lot. I'll try that and let you know what happens. But now I'm thinking it would be better to do away with jupyter entirely and run the python script directly under procgov, since jupyter runs python for all kinds of other purposes.
Many thanks for sparing time to look into this!
Hi @ParticularMiner , I'm closing this ticket. Please reopen it if you discover that child propagation does not work as expected.
Hi @lowleveldesign
Thanks again for your help.
Child propagation indeed works as intended, just not as I had wanted.
I had wanted my process to behave as though it were running on a machine with a limited amount of RAM. This implies that even though each child process may be assigned the same limit, the total memory used by all child processes together cannot exceed the given amount, which is not what procgov does, as far as I understand it. Please correct me if I'm wrong.
Anyway, I ended up using virtual machines — multiple instances of Windows Subsystem for Linux (WSL) — whose total memory I could configure. The only problem there is that I need to wait an unspecified amount of time (> 2 minutes) for each virtual machine to shut down before starting up the next one.
Perhaps there's a better way, but I'm only now beginning to learn about these things.
Hi @ParticularMiner
Thanks for the explanation. I found that the job API allows setting a total committed-memory limit for all the processes within a job. I'm reopening this ticket and I will add an option to set this limit in procgov. We will see if it solves the problem.
@lowleveldesign
Sounds promising. It would be great to have a solution that works fully on Windows.
@ParticularMiner, I updated the 2.9 release. Please try the --maxjobmem option. I hope it will work in your scenario.
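For example, something like this should cap the whole Jupyter process tree at 1 GB of committed memory (untested in your setup):
procgov -r --maxjobmem 1G jupyter lab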
@lowleveldesign
Thanks!
I'm currently testing it. So far it looks really good. I'll let you know the result as soon as possible.
@lowleveldesign
Looks like it worked. There are differences between the WSL approach and the procgov approach, but perhaps those are due to the way Linux and Windows manage memory differently.
Are you by any chance familiar with the Python package psutil? It is able to report the virtual memory usage (vms) of a process. I used it to monitor my running process. But the values of vms on Windows differ widely from those on Linux. Would you have expected this?
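For context, this is roughly the monitoring loop I used (a sketch; the PID is a placeholder):

```python
import time

import psutil

proc = psutil.Process(12345)  # placeholder PID of the process under test

try:
    while True:
        mem = proc.memory_info()
        # psutil documents vms as PagefileUsage (committed memory) on
        # Windows but as VmSize (total virtual size) on Linux, which may
        # explain why the two report such different numbers.
        print(f"vms={mem.vms / 2**20:.0f} MiB  rss={mem.rss / 2**20:.0f} MiB")
        time.sleep(1)
except psutil.NoSuchProcess:
    pass  # the monitored process has exited
```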
@lowleveldesign
So I just took a look at the Microsoft Docs on JobMemoryLimit (which I see you used in your code) and it appears to set a limit on virtual memory rather than physical RAM:
JobMemoryLimit: If the LimitFlags member of the JOBOBJECT_BASIC_LIMIT_INFORMATION structure specifies the JOB_OBJECT_LIMIT_JOB_MEMORY value, this member specifies the limit for the virtual memory that can be committed for the job. Otherwise, this member is ignored.
Since I am testing the behavior of a job on a machine with limited physical RAM but an indefinite virtual memory size, I think this parameter does not fully capture what I want to do.
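The distinction is easy to see with psutil (a minimal sketch; the 512 MiB figure is arbitrary). JobMemoryLimit would count the allocation in the first step, even though physical RAM is only consumed in the second:

```python
import mmap
import os

import psutil

proc = psutil.Process(os.getpid())

def report(label):
    mem = proc.memory_info()
    print(f"{label}: vms={mem.vms // 2**20} MiB, rss={mem.rss // 2**20} MiB")

report("start")

# Step 1: commit 512 MiB of anonymous memory. On Windows this raises the
# committed charge (vms) immediately, while physical usage barely moves.
buf = mmap.mmap(-1, 512 * 2**20)
report("after commit")

# Step 2: touch every page so the OS must back it with physical RAM;
# only now does the working set (rss) grow as well.
for offset in range(0, len(buf), 4096):
    buf[offset] = 1
report("after touching pages")
```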
I’m now going to read further to check if there’s a similar parameter for physical RAM.
@ParticularMiner yeah, it's the limit of committed memory. That's why I wrote that we need to check how Python will behave with this limit set. The way to limit a process's physical memory usage is by using the min and max working set limits (the --minws and --maxws options). Unfortunately, I haven't found a way to set those limits at the job level, so you may only limit individual processes.
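For example (the sizes are arbitrary; notepad is just a placeholder):
procgov --minws 1M --maxws 100M notepad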
@lowleveldesign
I agree with you.
At this link I found JOB_OBJECT_LIMIT_WORKINGSET, MinimumWorkingSetSize, and MaximumWorkingSetSize, which allow setting the working set size for a job and all processes associated with it, but they do not cause all processes associated with the job to be limited to a job-wide sum of their working set sizes (the way the committed-memory limit does).
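For anyone curious, this is roughly what setting those fields looks like from Python via ctypes (a minimal, Windows-only sketch based on my reading of the docs; the sizes are arbitrary):

```python
import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.CreateJobObjectW.restype = wintypes.HANDLE

JobObjectBasicLimitInformation = 2     # JOBOBJECTINFOCLASS value
JOB_OBJECT_LIMIT_WORKINGSET = 0x00000001

class JOBOBJECT_BASIC_LIMIT_INFORMATION(ctypes.Structure):
    _fields_ = [
        ("PerProcessUserTimeLimit", wintypes.LARGE_INTEGER),
        ("PerJobUserTimeLimit", wintypes.LARGE_INTEGER),
        ("LimitFlags", wintypes.DWORD),
        ("MinimumWorkingSetSize", ctypes.c_size_t),
        ("MaximumWorkingSetSize", ctypes.c_size_t),
        ("ActiveProcessLimit", wintypes.DWORD),
        ("Affinity", ctypes.c_size_t),
        ("PriorityClass", wintypes.DWORD),
        ("SchedulingClass", wintypes.DWORD),
    ]

job = kernel32.CreateJobObjectW(None, None)

info = JOBOBJECT_BASIC_LIMIT_INFORMATION()
info.LimitFlags = JOB_OBJECT_LIMIT_WORKINGSET
info.MinimumWorkingSetSize = 1 * 2**20    # 1 MiB
info.MaximumWorkingSetSize = 256 * 2**20  # applies to EACH process in the job

if not kernel32.SetInformationJobObject(
    job, JobObjectBasicLimitInformation,
    ctypes.byref(info), ctypes.sizeof(info),
):
    raise ctypes.WinError(ctypes.get_last_error())
# Processes assigned to `job` (via AssignProcessToJobObject) each get this
# working-set cap individually; there is no job-wide sum to enforce.
```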
It's strange that this is unavailable (or undocumented). Linux cgroups, on the other hand, readily offer such functionality.
Maybe there is some undocumented API for this functionality and maybe one day I will find time to research it 🙂 But for now, I'm closing this issue as there is nothing more I can do.
@lowleveldesign
Many thanks for your help and interest!
I need maxjobcpurate! My computer will shut down if the CPU gets too hot.
Hi @lowleveldesign
I have tried to limit the committed memory of a Python process which runs code that calls a multithreaded C++ program, thus spawning multiple child processes.
So far, the memory limit does not seem to affect the runtime behavior of the C++ code at all, which makes me wonder whether the child processes have their own memory limits. Would this assertion be correct?