Closed: Xiren-Hitachi closed this issue 3 years ago.
Can you list your Python and DSS Python versions? In your example, is that an unmodified IEEE13 test case? If so, I could try to reproduce the issue.
> Then I noticed that `dss.DSS` is basically an instance of class `dss.dss_capi_gr.IDSS.IDSS`. So I modified my code to instantiate multiple DSS engines (as the following code snippet) to see if they could work independently.
The current version is a per-process singleton at the Pascal side, so `dummy.Pool` won't work at all. Some changes should be merged in the coming month to allow creating multiple instances, but I'm working on other features right now.
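To see why a thread-based pool cannot help here, consider this minimal sketch. The `ENGINE` dict is a hypothetical stand-in for the per-process engine singleton, not the real DSS object:

```python
from multiprocessing.dummy import Pool  # thread-based pool, same API as Pool

# Hypothetical stand-in for the per-process DSS engine singleton
ENGINE = {"active_circuit": None}

def load_circuit(name):
    # All dummy.Pool workers are threads inside the same process,
    # so they all mutate the one shared ENGINE object
    ENGINE["active_circuit"] = name
    return id(ENGINE)

with Pool(4) as pool:
    ids = pool.map(load_circuit, ["a", "b", "c", "d"])

# Every worker saw the exact same object, so parallel runs interfere
all_shared = len(set(ids)) == 1
```

With real processes instead of threads, each worker gets its own copy of the singleton, which is why `multiprocessing.Pool` works while `dummy.Pool` does not.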
On Linux, you may need to change the start method with `multiprocessing.set_start_method('spawn')`, but that depends on your code.
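If changing the global start method is too invasive, a per-pool context does the same job. A minimal sketch:

```python
import multiprocessing as mp

# 'spawn' starts each worker as a fresh interpreter, so each process
# imports dss anew and gets its own engine state; 'fork' (the Linux
# default) clones the parent, including already-initialized library state.
ctx = mp.get_context("spawn")

# Use the context's Pool instead of mp.Pool so only this pool is affected,
# e.g. with ctx.Pool(4) as pool: results = pool.starmap(run, args)
method = ctx.get_start_method()
```

Note that `spawn` requires everything passed to workers to be picklable and the entry point to be guarded by `if __name__ == '__main__':`.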
We use the built-in multiprocessing, Dask Distributed, and other tools all the time. Unless you need some raw pointers, you don't need to mess with the API internals at all.
See also (ODD.py uses the engine from DSS Python):
> In your example, is that an unmodified IEEE13 test case? If so, I could try to reproduce the issue.
Well, I gave it a try anyway. I guess you modified the system, since it seems to expect only 3 transformers. The more obvious problems are not related to DSS Python.
You need to define `dss_file` globally (not only in the main process), or pass it as a parameter. Use `pool.starmap` instead, or move the lambda to module level so it doesn't require pickling. Something like this should work:
```python
import os
from multiprocessing import Pool

import numpy as np

dss_file = 'circuit/IEEE13/IEEE13Nodeckt.dss'
dss_file = os.path.abspath(dss_file)

if __name__ == '__main__':
    num_parallel_runs = 4
    load_settings = np.random.uniform(low=0.5, high=1.0, size=(num_parallel_runs, 30))
    oltc_settings = np.random.uniform(low=0.95, high=1.05, size=(num_parallel_runs, 3))
    cap_settings = np.random.randint(2, size=(num_parallel_runs, 2))
    pool = Pool(num_parallel_runs)
    # run is the module-level worker function from the original script
    results = pool.starmap(run, zip(load_settings, oltc_settings, cap_settings))
```
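`pool.starmap` unpacks each tuple produced by `zip` into positional arguments of `run`, exactly like `itertools.starmap`. A small self-contained illustration, with a hypothetical `run_case` in place of `run` and toy lists in place of the NumPy arrays:

```python
from itertools import starmap

def run_case(load, oltc, cap):
    # Hypothetical stand-in for run(): combines one row of each setting
    return sum(load) + sum(oltc) + sum(cap)

load_settings = [[0.5, 0.6], [0.7, 0.8]]
oltc_settings = [[1.0], [0.95]]
cap_settings = [[1, 0], [0, 1]]

# zip pairs row i of each list; starmap unpacks each triple into run_case
results = list(starmap(run_case, zip(load_settings, oltc_settings, cap_settings)))
# results[0] == 0.5 + 0.6 + 1.0 + 1 + 0 == 3.1
```

`pool.starmap` behaves the same way, except the calls are distributed across worker processes.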
As a general note, this is slow and convoluted:
```python
dssCircuit.SetActiveClass('Transformer')
xfm_names = dssCircuit.ActiveClass.AllNames
for i in range(len(xfm_names)):
    xfm = xfm_names[i]
    dssCircuit.SetActiveElement(xfm)
    dssCircuit.ActiveElement.Properties('tap').Val = str(oltc_setting[i])
```
That style of code is required only when you don't have access to the properties of the class.
You could use the dedicated `Transformers` interface instead:
```python
xfm_names = dssCircuit.Transformers.AllNames
for i, xfm in enumerate(xfm_names):
    dssCircuit.Transformers.Name = xfm
    dssCircuit.Transformers.Tap = oltc_setting[i]
```
In fact, even simpler would be:
```python
for i, xfm in enumerate(dssCircuit.Transformers):
    xfm.Tap = oltc_setting[i]
```
(the usual OpenDSS limitations still apply -- i.e. only one transformer or cktelement can be activated for API usage at a time)
In other scenarios, for performance, you could also use `dssCircuit.Transformers.idx` to select the transformer by index. The official OpenDSS has that for some components, but we extended it to all of them.
Thank you @PMeira ! I've tried `starmap` without using `dummy.Pool` and the `lambda` function. It solves the issue perfectly! Also, thank you so much for pointing out the slow & convoluted implementation. I'll definitely modify my code.
PS: just so you know...
> Can you list your Python and DSS Python versions?
Python version: 3.8.5; DSS Python version: 0.10.7.
> In your example, is that an unmodified IEEE13 test case? If so, I could try to reproduce the issue.
Actually, this was written by my teammates. Anyway, the code simply runs power flow simulations given different settings for loads, OLTCs, and capacitors, so I think this is easily reproducible with any power flow circuit file and any feeder model.
> It solves the issue perfectly!
That's great! I'll close the issue then.
> So I think this is easily reproducible based on any power flow circuit file and on any feeder model.
Thanks for sharing it. It certainly helped to identify the issue.
I'm using `multiprocessing` to enable parallelization in Python. However, it seems that the DSS Python bindings do not support launching multiple OpenDSS engines in the backend. I really hope you can help me work it out. The following is the detailed code:
Running the above code will almost always cause an error. Sometimes I get the error
`dss._cffi_api_util.DSSException: (8888, 'There is no active circuit! Create a circuit and retry.')`,
and sometimes I get other types of errors. Only occasionally does the program execute without error.
Apparently parallel simulation runs interfere with one another.
Then I noticed that `dss.DSS` is basically an instance of the class `dss.dss_capi_gr.IDSS.IDSS`. So I modified my code to instantiate multiple `DSS` engines (as in the following code snippet) to see if they could work independently. However, unfortunately this makes no difference at all. It seems to me that `dss.DSS` does not work in a conventional object-oriented way -- in one Python program, there is only one engine running in the backend even if we instantiate multiple `DSS` objects.

I also tried using `pool = Pool(num_parallel_runs)` instead of `pool = dummy.Pool(num_parallel_runs)`. I also tried creating multiple processes/threads explicitly and calling `start()` and `join()` instead of using a pool. It turns out there's no difference - the above issue has nothing to do with how I implement the multiprocessing. Any solution/workaround?