Closed: jenisys closed this issue 12 years ago
Seems to be related to: http://stackoverflow.com/questions/2080660/python-multiprocessing
Globals do not seem to be shared with forked processes (on some platforms, at least). A workaround should be to re-import the needed script in the forked UserGroup process. A preliminary check that the script import works should still be done in the parent process to catch script errors early on.
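A rough sketch of that workaround, assuming the usual multi-mechanize layout with user scripts in a test_scripts/ directory; the helper names here (load_script, user_group_worker) are illustrative, not the project's actual API:

```python
import multiprocessing
import os
import sys


def load_script(project_dir, script_file):
    # Illustrative helper: import the user script as a normal module
    # instead of relying on globals set up by the parent process.
    scripts_dir = os.path.join(project_dir, 'test_scripts')
    if scripts_dir not in sys.path:
        sys.path.insert(0, scripts_dir)
    return __import__(os.path.splitext(script_file)[0])


def user_group_worker(project_dir, script_file):
    # Runs in the child process: re-import the script here, because on
    # Windows the child does not inherit the parent's globals.
    script = load_script(project_dir, script_file)
    script.Transaction().run()


if __name__ == '__main__':
    # Preliminary check in the parent: fail early if the script does
    # not import or does not define a Transaction class.
    module = load_script('my_project', 'v_user.py')
    if not hasattr(module, 'Transaction'):
        raise SystemExit('script does not define a Transaction class')
    proc = multiprocessing.Process(
        target=user_group_worker, args=('my_project', 'v_user.py'))
    proc.start()
    proc.join()
```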
any reason this hasn't been merged yet?
just saw this. I don't have Windows to test on at all, but will review the code and merge. thanks!
fixed in latest trunk
Multi-mechanize currently does not work under Windows when
multimech-run MY_PROJECT
is used. The same project runs fine under UNIX/Mac OS X. The problem appears in the Agent.run() method where the script's Transaction should be created. The real problem seems to be related to the multiprocessing package under Windows and/or the way exec()/eval() is used. The imported scripts (modules), which are imported in the multimech.core.init() part of the parent process, are no longer present in the globals of the forked UserGroup process that tries to create and start the Agent threads.
Also note that the exec()/eval() logic mentioned above worked fine in the past with multi-mechanize 1.010, Python 2.5, and a manually installed multiprocessing package on the Windows platform. It no longer works with the built-in multiprocessing package when Python 2.6 is used.
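For illustration, a minimal stand-alone reproduction of that difference (not multi-mechanize code): on fork-based platforms the child inherits the class created by exec() in the parent, while on Windows the child is a fresh interpreter that re-imports the main module, so the class is missing there.

```python
# windows_globals_repro.py -- illustrative only.
# On UNIX/Mac OS X the worker prints 'transaction ran', because the forked
# child inherits the parent's memory. On Windows the child re-imports this
# module in a fresh interpreter, and the exec() below (guarded by __main__)
# never runs there, so MyTransaction is missing from its globals.
import multiprocessing

SCRIPT_SOURCE = """
class MyTransaction(object):
    def run(self):
        print('transaction ran')
"""


def worker():
    try:
        globals()['MyTransaction']().run()
    except KeyError:
        print('MyTransaction is not in globals() of the child process')


if __name__ == '__main__':
    # Parent-only setup, comparable to exec()'ing the user scripts into the
    # parent's globals during initialization.
    exec(SCRIPT_SOURCE, globals())
    p = multiprocessing.Process(target=worker)
    p.start()
    p.join()
```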
NOTE: The exec()/eval() semantics currently in use should (IMHO) be replaced with import logic in a script_loader module; a rough sketch follows.
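Something along these lines, assuming the scripts live in the project's test_scripts/ directory; this module does not exist in multi-mechanize, it only sketches the idea:

```python
# script_loader.py -- hypothetical module sketching the import-based approach.
import imp
import os


def load_script(project_dir, script_file):
    """Import a user script as a regular module instead of exec()'ing
    its source into the caller's globals."""
    script_path = os.path.join(project_dir, 'test_scripts', script_file)
    module_name = os.path.splitext(script_file)[0]
    return imp.load_source(module_name, script_path)


def check_script(project_dir, script_file):
    """Early check for the parent process: the script must import cleanly
    and define a Transaction class."""
    module = load_script(project_dir, script_file)
    if not hasattr(module, 'Transaction'):
        raise ImportError("%s does not define a Transaction class" % script_file)
    return module
```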
But this alone does not solve the problem described above (I tried it).

VERSION-INFO: