Closed: erikgraa closed this issue 3 years ago
This is somewhat embarrassing and the issue can be rejected/closed.
It seems my issue pertained to a function that I called within the first function's scriptblock.
In the second function, which also uses PoshRSJob, I started the jobs without a batch id, and afterwards I ran `Get-RSJob | Remove-RSJob`. This seems to have messed everything up. I'm now starting the jobs with a random batch id, and it seems to be working.
Are there any tips for optimizing `Throttle`, by the way?
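For what it's worth, a common starting point (an assumption on my part, not anything PoshRSJob documents) is to size `Throttle` relative to the local core count and raise it for I/O-bound work; `$objects` here is a placeholder for the real pipeline input:

```powershell
# Hypothetical sketch: size Throttle relative to the local core count.
# CPU-bound scriptblocks: roughly one runspace per core.
# I/O-bound work (e.g. network connections): often tolerates more.
$cores    = [Environment]::ProcessorCount
$throttle = $cores * 2   # assumption: mostly I/O-bound scriptblocks

$objects | Start-RSJob -Throttle $throttle -ScriptBlock {
    param($item)
    # ... work on $item ...
}
```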
Hm, seems I actually still have some trouble.
It seems to only emerge after enough `Start-RSJob` calls have been made in a PowerShell session. The process balloons to over 1500 MB and 20% CPU even after all jobs have been removed.
Is there a way to clear a session of all runspaces and run a garbage collection? Perhaps as part of `Remove-RSJob`, or as a new cmdlet, `Reset-RSJobRunspace` or some such?
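Something like the following is what I have in mind. Note that `Reset-RSJobRunspace` does not exist in PoshRSJob today; this is only a sketch approximating the requested behaviour with existing cmdlets plus an explicit .NET garbage collection:

```powershell
# Hypothetical cleanup pass: remove finished jobs, then force a
# .NET garbage collection so the runspace memory is actually reclaimed.
Get-RSJob | Where-Object State -eq 'Completed' | Remove-RSJob
[GC]::Collect()
[GC]::WaitForPendingFinalizers()
```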
EDIT: It seems calling `Import-Module PoshRSJob -Force` before retrying, in a session that has seen a lot of activity, disposes of the runspace pools. Would it be possible to expose that functionality from PoshRSJob.psm1 as a cmdlet? This does, however, still give me errors: I need to start a new PowerShell session before the command works again, and even then only for a few runs.
To update on this: with 200 objects in the pipeline, I get 2-3 successful attempts before results turn faulty and I need to start a fresh PowerShell instance. Many of the scriptblocks/instances will complain about not finding modules, while others load them fine.
Maybe it could be alleviated by foreach-ing over the pipeline, e.g. only calling `Start-RSJob` on 50 items at a time, but it feels like it shouldn't have to be like this.
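The batching idea would look roughly like this. This is a sketch, not tested against the bug; `$items` stands in for the real pipeline input and the batch size of 50 is arbitrary:

```powershell
# Sketch: queue the pipeline 50 items at a time instead of all at once,
# cleaning up and collecting garbage between batches.
$batchSize = 50
for ($i = 0; $i -lt $items.Count; $i += $batchSize) {
    $end   = [Math]::Min($i + $batchSize, $items.Count) - 1
    $batch = $items[$i..$end]

    $batch | Start-RSJob -ScriptBlock { param($x) <# work on $x #> } |
        Wait-RSJob | Receive-RSJob

    Get-RSJob | Remove-RSJob
    [GC]::Collect()
}
```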
The scriptblock deals with opening connections to various network components to test different credentials and producing a report.
It quickly climbs to 99% CPU and 50% memory.
Do you want to request a feature or report a bug?
Feature
What is the current behavior?
No garbage collection
If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem
I run out of memory or CPU when starting a large number of jobs.
By using the solution in #187 along with a `[gc]::Collect()` call at the end of the `do {} while` loop, my troubles are somewhat alleviated.
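The rough shape of that workaround is below. The polling interval and the exact receive/remove steps are my assumptions around the approach in #187, not a verbatim copy of it:

```powershell
# Poll for completed jobs, drain and remove them, and collect garbage
# on each pass so memory is reclaimed while jobs are still running.
do {
    $completed = Get-RSJob -State Completed
    $completed | Receive-RSJob
    $completed | Remove-RSJob
    [GC]::Collect()
    Start-Sleep -Milliseconds 250
} while (Get-RSJob)
```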
My problem manifests as new jobs being unable to load modules or use functions, reporting that they are not found, but it appears there are simply no resources left to load them.
What is the expected behavior?
Garbage collection as jobs are finishing so as to not run out of resources
Which versions of PowerShell and which OS are affected by this issue? Did this work in previous versions of our scripts?
PowerShell 4.0, Windows Server 2012 R2