Closed: jacquelinegarrahan closed this issue 3 years ago
The root issue you are encountering is that the C/C++ libraries behind P4P cannot safely be used in a child process after a fork(). One option you have found: import P4P in the child process. Another might be to use multiprocessing with the spawn or forkserver start method. E.g. from your first example, you might try:
...
import multiprocessing
multiprocessing.set_start_method('forkserver') # <<< insert
from p4p.nt import NTScalar
...
I haven't tested this, but by my reading of the documentation this should create the "forkserver" child process before P4P and the associated EPICS libraries are loaded.
What particular libraries are of concern? The pyepics implementation of the fork-safe CAProcess clears the context and I'm wondering if there might be a similar approach here.
On second thought, this is kind of silly and I'll be moving to spawn. Thanks for your time @mdavidsaver
I've started https://github.com/epics-base/epics-base/issues/211 to hopefully help Google help others find this. CAProcess specifically has come up before and, IMO, it is an incomplete solution when combined with the fork method.
I'm working on a server that monitors some PVs, performs some model computation, and serves another set of PVs based on the computation results. This p4p server runs inside of a subprocess, and I have found that the client tools break, sometimes leading to network connection crashes if allowed to run too long. I've also seen that if the p4p imports happen after the subprocess has been started, this is no longer an issue.
I've put together a minimal example and have been able to reproduce this in a Python 3.8 environment with the latest conda-forge p4p build (3.5.3). The PV to be monitored is served with softIocPVA -d demo.db, and is extremely minimal:

For testing, I'm performing puts to this PV using the p4p client tools:
Below is the broken server:
Moving the p4p imports into run() fixes all client problems:
Could this be a blocking issue with the context?