xiaoqiangwang opened this issue 3 years ago
With AcquireTime=0.000 and AcquirePeriod=0.000, ADSimDetector generates images as fast as possible, which competes with NDPluginZMQ for CPU resources. Without a hardware device, one way to get an (almost) constant image creation rate is to vary AcquirePeriod.
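One way to vary AcquirePeriod toward a constant rate is a simple feedback step. The following is a hypothetical sketch (not part of ADSimDetector): a pure-Python helper that, given the measured frame rate (e.g. from ArrayRate_RBV), nudges the period toward the target rate; in a real session the result would be written back with `caput 13SIM1:cam1:AcquirePeriod`.

```python
# Hypothetical proportional controller for AcquirePeriod; the function
# name and gain are illustrative assumptions, not an ADSimDetector API.

def next_period(current_period, measured_rate, target_rate, gain=0.5):
    """Return an adjusted AcquirePeriod in seconds.

    current_period: currently set period (s); 0 means free-running.
    measured_rate:  frames/s actually achieved (e.g. ArrayRate_RBV).
    target_rate:    desired frames/s.
    gain:           fraction of the correction applied per step.
    """
    ideal = 1.0 / target_rate
    if current_period <= 0:
        # Free-running: start from the period that would yield the target.
        return ideal
    # Difference between the ideal period and the period actually achieved.
    error = ideal - 1.0 / measured_rate
    return max(0.0, current_period + gain * error)


# Example: aiming for 1000 fps while only 800 fps is achieved,
# so the requested period is shortened.
p = next_period(current_period=0.001, measured_rate=800.0, target_rate=1000.0)
print(p)
```

Repeating this step each time ArrayRate_RBV updates would converge on a period that holds the target rate, within what the machine can sustain.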
In the following tests, 100000 images were generated.
First test: with one client connected, 13SIM1:cam1:ArrayRate_RBV is ~1000. About 24% of the images did not reach the client.
13SIM1:cam1:ArrayCounter_RBV 100000
13SIM1:ZMQ1:ArrayCounter_RBV 85173
13SIM1:ZMQ1:DroppedArrays_RBV 14827
Received by client: 76388
Second test: with one client connected, 13SIM1:cam1:ArrayRate_RBV is ~1000. About 27% of the images did not reach the client.
13SIM1:cam1:ArrayCounter_RBV 100000
13SIM1:ZMQ1:ArrayCounter_RBV 73129
13SIM1:ZMQ1:DroppedArrays_RBV 26871
Received by client: 73129
Commit 9216694 introduces two improvements to hopefully address part of the performance problems.
Here are the results of repeating the 2nd test:
13SIM1:cam1:ArrayCounter_RBV 100000
13SIM1:ZMQ1:ArrayCounter_RBV 100000
13SIM1:ZMQ1:DroppedArrays_RBV 0
13SIM1:ZMQ1:DroppedOutputArrays_RBV 0
Received by client: 94073
13SIM1:cam1:ArrayCounter_RBV 100000
13SIM1:ZMQ1:ArrayCounter_RBV 100000
13SIM1:ZMQ1:DroppedArrays_RBV 0
13SIM1:ZMQ1:DroppedOutputArrays_RBV 2899
Received by client: 97101
The frame creation rate still varies a lot but averages above 1000 fps.
Thanks for the update. According to your test, ADZMQ should be quite robust for my scenario (100 fps, 1 MB image size). I am using the following settings:
caput 13SIM1:cam1:ImageMode Multiple
caput 13SIM1:cam1:NumImages 10000
caput 13SIM1:cam1:AcquirePeriod 0.01
caput 13SIM1:cam1:AcquireTime 0
Then I run SimDetector + ADZMQ inside a docker container on my laptop, and run zmq_client.py on the same laptop. I noticed one thing: when ADZMQ has received all the images, it sets 13SIM1:cam1:Acquire to 0 and the RAM used by the docker container starts to decrease; at that point the zmq_client stops receiving images, which seems to cause the image loss. So I was wondering: is this caused by the send buffer size of 0MQ (ZMQ_SNDHWM)? My understanding is that even if ADZMQ stops working, the images it has already handed to 0MQ should still be buffered in RAM, so the client should still be able to receive all of them?
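For reference, a minimal stand-in for zmq_client.py can be sketched with pyzmq, using a fake publisher in place of the IOC. The endpoint, port number, and two-part message layout here are illustrative assumptions; NDPluginZMQ's actual wire format (typically a JSON header plus the frame data) may differ.

```python
# SUB-side counter with a fake IOC publisher; port and message layout
# are assumptions for illustration only.
import threading
import time

import zmq

ENDPOINT = "tcp://127.0.0.1:5599"    # hypothetical port, not the IOC's


def fake_ioc(n_frames):
    """Publish n_frames messages, standing in for SimDetector + ADZMQ."""
    pub = zmq.Context.instance().socket(zmq.PUB)
    pub.bind(ENDPOINT)
    time.sleep(0.5)                  # give the subscriber time to join
    for i in range(n_frames):
        pub.send_multipart([b"header", b"frame-%d" % i])
    time.sleep(0.2)
    pub.close()


def count_messages(timeout_ms=1000):
    """Count messages until the stream goes quiet for timeout_ms."""
    sub = zmq.Context.instance().socket(zmq.SUB)
    sub.setsockopt(zmq.SUBSCRIBE, b"")   # no topic filter
    sub.connect(ENDPOINT)
    count = 0
    while sub.poll(timeout_ms):          # stop after timeout_ms of silence
        sub.recv_multipart()
        count += 1
    sub.close()
    return count


t = threading.Thread(target=fake_ioc, args=(10,))
t.start()
received = count_messages()
t.join()
print(received)
```

Comparing such a count against ArrayCounter_RBV is one way to localize where frames are lost.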
ZMQ_SNDHWM is 1000 by default.
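The high-water mark is per connected peer and can be raised if more frames should queue in RAM; it has to be set before bind()/connect() to affect new connections. A sketch with pyzmq (the value 10000 is an arbitrary example):

```python
import zmq

ctx = zmq.Context.instance()
pub = ctx.socket(zmq.PUB)

# Default send high-water mark is 1000 messages per subscriber.
default_hwm = pub.getsockopt(zmq.SNDHWM)

# Allow up to 10000 queued messages per subscriber before dropping.
pub.setsockopt(zmq.SNDHWM, 10000)
new_hwm = pub.getsockopt(zmq.SNDHWM)

print(default_hwm, new_hwm)
pub.close()
```

Note that a larger HWM only buffers more; once the publishing process stops, messages still queued on its side are not delivered after its sockets are closed.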
If the loss were because NDPluginZMQ is slow, 13SIM1:ZMQ1:DroppedArrays_RBV would be non-zero.
Also, I don't know what network speed limit docker imposes.
Here is an attempt to establish a baseline of NDPluginZMQ (version <= 1.1) performance.
Environment
Hardware: i5-3427U CPU @ 1.80GHz, 8GB RAM
OS: macOS 10.15, clang 12
asyn: 4.39
ADCore: 3.9
ADSimDetector: 2.10
Results
The baseline: when 13SIM1:cam1:ArrayCallbacks is "Disable", 13SIM1:cam1:ArrayRate_RBV is ~3400. This is the maximum image creation rate.
PUB/SUB
When no clients are connected, ZeroMQ discards all messages; 13SIM1:cam1:ArrayRate_RBV is ~2000 and there are no dropped arrays (13SIM1:ZMQ1:DroppedArrays_RBV = 0).
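This matches PUB socket semantics: with no subscriber connected, send() returns immediately and the message is silently discarded, so the plugin never blocks. A small self-contained check with pyzmq (the port is a hypothetical choice):

```python
import zmq

ctx = zmq.Context.instance()
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://127.0.0.1:5600")     # hypothetical port

# With no subscriber connected, every send() completes immediately;
# messages are dropped rather than queued, and nothing blocks.
sent = 0
for i in range(1000):
    pub.send(b"frame")
    sent += 1

print(sent)
pub.close()
```

This is why the no-client PUB/SUB case shows no dropped arrays on the plugin side: the drops happen silently inside ZeroMQ.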
13SIM1:ZMQ1:BlockingCallbacks = Yes: 13SIM1:cam1:ArrayRate_RBV is ~800. All images reached the client.
13SIM1:ZMQ1:BlockingCallbacks = No: with one client connected, 13SIM1:cam1:ArrayRate_RBV is ~1000, and 37% of the images did not reach the client. Note the additional loss inside ZeroMQ itself (76003 - 63329 = 12674).
PUSH/PULL
When no clients are connected, ZeroMQ blocks the process. All images are dropped.
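This is the documented PUSH behavior: with no PULL peer connected, send() has nowhere to route the message and blocks. A self-contained demonstration with pyzmq, using ZMQ_SNDTIMEO so the blocking shows up as a timeout instead of a hang (the port is a hypothetical choice):

```python
import zmq

ctx = zmq.Context.instance()
push = ctx.socket(zmq.PUSH)
push.setsockopt(zmq.SNDTIMEO, 100)   # fail after 100 ms instead of forever
push.bind("tcp://127.0.0.1:5601")    # hypothetical port

# With no PULL peer connected, send() would block indefinitely;
# with SNDTIMEO set it raises zmq.Again after the timeout.
blocked = False
try:
    push.send(b"frame")
except zmq.Again:
    blocked = True

print(blocked)
push.close()
```

Setting a send timeout (or using zmq.DONTWAIT) is one way a plugin could drop frames gracefully instead of stalling the whole process.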
13SIM1:ZMQ1:BlockingCallbacks = Yes: 13SIM1:cam1:ArrayRate_RBV is ~800. All images reached the client.
13SIM1:ZMQ1:BlockingCallbacks = No: with one client connected, 13SIM1:cam1:ArrayRate_RBV is ~1000, and 30% of the images did not reach the client.