**Open** · dybber opened 3 years ago
From this program:
```python
from agents import *

a = Agent()

def setup(model):
    model.add_agent(a)

def step(model):
    a.forward()

mymodel = SimpleModel("Basic model", 60, 60, setup, step)
run(mymodel)
```
At its core, this can be simplified to having a module that launches a process on import.
Here's a small demo library (`demolib.py`):
```python
from multiprocessing import Process
import time

def start():
    while True:
        print("Hello world")
        time.sleep(1)

process = Process(target=start)
process.start()
```
If we launch Python and import this library, it crashes with the same error:
```
Python 3.8.7 (default, Dec 30 2020, 10:13:09)
[Clang 11.0.0 (clang-1100.0.33.17)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import demolib
>>> Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/local/opt/python@3.8/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "/usr/local/opt/python@3.8/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
  File "/Users/dpr964/Development/educational/AgentsPy/demolib.py", line 10, in <module>
    process.start()
  File "/usr/local/opt/python@3.8/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/usr/local/opt/python@3.8/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/usr/local/opt/python@3.8/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py", line 284, in _Popen
    return Popen(process_obj)
  File "/usr/local/opt/python@3.8/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/usr/local/opt/python@3.8/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/usr/local/opt/python@3.8/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 42, in _launch
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "/usr/local/opt/python@3.8/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/spawn.py", line 154, in get_preparation_data
    _check_not_importing_main()
  File "/usr/local/opt/python@3.8/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/spawn.py", line 134, in _check_not_importing_main
    raise RuntimeError('''
RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.
```
The solution is to wait, and only launch the Process when the first agent or model is created.
It seems that, even if we fix this problem, our users would still have to write their programs like this:
```python
from agents import *

if __name__ == "__main__":
    a = Agent()
    a.forward(10)
```
See for instance: https://stackoverflow.com/questions/24374288/where-to-put-freeze-support-in-a-python-script
I think that if we are to succeed in this, we should launch the UI as a subprocess rather than using `multiprocessing`. However, this complicates another part: we can no longer use `multiprocessing.Queue`s for message passing between model and UI.
Instead, I will investigate whether we can replace the current use of `multiprocessing.Queue`s with PyZMQ message queues. I've found this introduction to how to use them, which I will go through to see if I can get something working for us: https://core.ac.uk/download/pdf/230921415.pdf
After investigating this more thoroughly, my conclusion is that we can't do this for the 0.8 milestone next week.
Moving to Queues/PyZMQ to send messages was probably not the right choice; instead we should have used some Remote Procedure Call library, such as xmlrpc, jsonrpc, or zerorpc (see: https://stackoverflow.com/questions/1879971/what-is-the-current-choice-for-doing-rpc-in-python)
The problem is that we effectively have to build our own RPC implementation, passing messages through ZMQ, to send commands back and forth between the two parts of the implementation. If we had instead used one of the RPC libraries, much of this would have been easier.
Rolling our own would be a waste of time and produce some huge spaghetti code, in my opinion. With the Queues we could rely on being able to send raw Python values, but with ZMQ we have to serialize/deserialize on both ends, and it will just be ugly if we don't use some external library for it.
To avoid too many dependencies, we should perhaps go for xmlrpc, as it's part of the Python standard library.
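A sketch of what the xmlrpc route could look like (the `move_agent` function and its arguments are invented for illustration; in our case the server would live in the UI process rather than a thread): one side registers functions, the other calls them like ordinary Python functions, and xmlrpc handles serialization for us.

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Hypothetical command the UI side could expose.
def move_agent(agent_id, distance):
    return {"agent": agent_id, "moved": distance}

# Port 0 lets the OS pick a free port; read it back from the server.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False, allow_none=True)
server.register_function(move_agent)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The model side calls it like a normal function; arguments and the
# returned dict are marshalled to XML-RPC types automatically.
proxy = ServerProxy(f"http://127.0.0.1:{port}", allow_none=True)
result = proxy.move_agent(7, 10)
print(result)
```

No hand-written dispatch or serialization layer is needed, which is the main argument against rolling our own on top of ZMQ.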
Here's my simple implementation of a PyQt window running in a subprocess, recorded here in case we need it at some point: https://gist.github.com/dybber/853844449c1366fa797c0984a88c5a0c