Closed EmmaJaneBonestell closed 1 year ago
Thank you for reporting this.
I have updated `bytecode` to include the fix in a149848488abdd82956793a9149eefc3282b3bcd. However, I cannot make a release, because PyPI does not seem to allow me to push a release version that depends on a git version of a dependency. Nevertheless, for local testing, it should be fine if you directly install the Pynguin version from GitHub. As soon as there is a new `bytecode` release, I'll make a Pynguin release, too.
While this did fix the need to vendor `bytecode`, attempting to run Pynguin on the `bottle` module still results in multiple instances of Pynguin.
I agree; I just checked this with the latest Pynguin on `bottle`. However, while I am not entirely sure (and I currently do not have the time to debug this to the end), I doubt that the issue is easily fixable in Pynguin.
What I saw from running it is that a sub-process running Pynguin again is spawned from the initial Pynguin process. Pynguin does not have any code that spawns sub-processes on its own; it only uses threads to better control test-case execution.
Thus, I assumed `bottle.py` is the one that causes the sub-process. And there is, at least, a line that could cause a sub-process to be spawned: https://github.com/bottlepy/bottle/blob/99341ff3791b2e7e705d7373e71937e9018eb081/bottle.py#L3667
From what I see in Pynguin's logs, the `run` function, which this code is part of, is executed. Hence, I assume there is a chance of reaching this line with parameters that basically re-start Pynguin in a sub-process.
Please note that there is a similar issue (#41) when trying to generate tests for a `flask` app, which might be the same underlying problem (since both `flask` and `bottle` are WSGI frameworks).
For now, I do not know of an easy and quick fix for the issue. Unfortunately, I am busy with other things, which prevents me from digging deeper into the problem.
System info: Pynguin version '0.32.0.dev'; Ubuntu 22.04 with Python 3.10.6, or, when inside Docker, Debian Bullseye with Python 3.10.10. The bug occurs in either environment.
After a test case times out (with the error message mentioned here: https://github.com/se2p/pynguin/issues/29), Pynguin spawns another instance of itself, running the exact same command line and starting from the beginning. The previous instance is not terminated, and attempting to terminate either process kills the other(s).
Besides the obvious issue of possibly overwriting Pynguin's output/generated files, this also sometimes occurs repeatedly, spawning enough processes to take up all available RAM and throttle the CPU. Unfortunately, I could not reproduce it when running under a debugger capable of dealing with threads (PyCharm), so I have no helpful backtrace to provide, but here's an example of the output with a single '--verbose'.
At which point I had begun interrupting the process.
Incidentally, as seen in the log above, any file that imports urllib appears to run into two errors that do not halt the program. These are the only logging ERRORs that occur. It does not appear to be related, but I'll put it here just in case:
Nothing else in the debug level log appears relevant, but I can provide it if wanted.
Unfortunately, a bug in the `bytecode` module prevented it from handling EXTENDED_ARG NOPs in certain situations, which bottle generates. Backporting the fix was declined by the `bytecode` maintainers, so this means you will either have to:

A) Update Pynguin to be compatible with the latest `bytecode` version, for which a PR fix I sent was accepted. I don't think this fix is released on PyPI yet, but it could easily be installed from source.

B) Install a locally modified version of 0.13.0 with the fix shown here: https://github.com/EmmaJaneBonestell/bytecode/blob/54a1af74f33dfa323d920540fc8bab8e18b7e64c/bytecode/concrete.py#L371 , or just install from my fork/branch directly:

pip install git+https://github.com/EmmaJaneBonestell/bytecode.git@v0.13
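For readers unfamiliar with EXTENDED_ARG: it is a prefix instruction CPython emits whenever an opcode's argument no longer fits in a single byte. The snippet below only illustrates how such prefixes arise (it does not reproduce the `bytecode`-library bug itself); the 300 assignments are an arbitrary example chosen to push constant indices above 255.

```python
import dis

# 300 distinct float constants force LOAD_CONST arguments above 255,
# so some instructions need an EXTENDED_ARG prefix.
source = "\n".join(f"v{i} = {i}.5" for i in range(300))
code = compile(source, "<demo>", "exec")

opnames = [ins.opname for ins in dis.get_instructions(code)]
has_extended_arg = "EXTENDED_ARG" in opnames
print(has_extended_arg)
```

Any tool that rewrites bytecode (as Pynguin does for coverage instrumentation via `bytecode`) has to handle these prefixes correctly, which is where the bug surfaced.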
After that, no special commands are required to reproduce. e.g.:
This normally happens around 30 iterations in. It may take fewer, as it did in the above log, or nearly 200. It has never managed to reach the 600-second default timeout or full coverage without the bug occurring.