Closed CJ-Wright closed 5 years ago
Proof of concept (running in one process)
```python
import bluesky.plans as bp
import matplotlib.pyplot as plt
from bluesky import RunEngine
from bluesky.callbacks.best_effort import BestEffortCallback
from bluesky.utils import install_kicker
from ophyd.sim import hw
from xpdan.callbacks import RunRouter

install_kicker()

hw = hw()
RE = RunEngine()

# Capture the raw (name, doc) pairs emitted by two separate scans.
L = []
LL = []
tok = RE.subscribe(lambda *x: L.append(x))
RE(bp.scan([hw.ab_det], hw.motor, 0, 10, 10))
RE.unsubscribe(tok)
tok = RE.subscribe(lambda *x: LL.append(x))
RE(bp.scan([hw.det_with_conf], hw.motor, 0, 10, 10))
RE.unsubscribe(tok)

# Replay the captured documents, interleaved, through a RunRouter that
# spins up a fresh BestEffortCallback per run; the router splays the
# mixed document streams back out to the correct callback.
rr = RunRouter([lambda x: BestEffortCallback(overplot=False)])
for nd1, nd2 in zip(L, LL):
    rr(*nd1)
    rr(*nd2)
    plt.pause(.1)
for nd1, nd2 in zip(L, LL):
    rr(*nd1)
    plt.pause(.1)
plt.show()
```
We need to build a shim for LiveWaterfall that is able to inspect the data stream and pick out the things it wants. The x values would go into the start document's `['hints']['dimensions']`. Note that this means we can merge the I(Q) and I(tth) streams together; LiveWaterfall will need to be able to produce multiple plots, one per independent-variable x dependent-variable pair.
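A minimal sketch of what that shim could look like. The class name `WaterfallShim` and the field names in the example documents are hypothetical; the only thing taken from the real event model is the shape of `start['hints']['dimensions']`, a list of `([fields], stream_name)` pairs naming the independent variables:

```python
from collections import defaultdict


class WaterfallShim:
    """Hypothetical shim: read the independent variables out of the start
    document's hints, then sort each event's remaining data keys into
    per-variable (x, y) series that a LiveWaterfall-style plotter could
    render as independent plots."""

    def __init__(self):
        self.x_fields = []
        self.series = defaultdict(list)  # dependent field -> [(x, y), ...]

    def __call__(self, name, doc):
        # Dispatch on document name ('start', 'descriptor', 'event', ...).
        getattr(self, name, lambda d: None)(doc)

    def start(self, doc):
        dims = doc.get('hints', {}).get('dimensions', [])
        self.x_fields = [f for fields, _stream in dims for f in fields]

    def event(self, doc):
        xs = [doc['data'][f] for f in self.x_fields if f in doc['data']]
        if not xs:
            return
        x = xs[0]
        for key, val in doc['data'].items():
            if key not in self.x_fields:
                # Every dependent variable gets its own series, so merged
                # I(Q) and I(tth) streams still yield separate plots.
                self.series[key].append((x, val))


# Feed it hand-made documents to show the sorting behavior.
shim = WaterfallShim()
shim('start', {'hints': {'dimensions': [(['motor'], 'primary')]}})
shim('event', {'data': {'motor': 1.0, 'iq': 42.0}})
shim('event', {'data': {'motor': 2.0, 'iq': 40.0}})
print(shim.series['iq'])  # → [(1.0, 42.0), (2.0, 40.0)]
```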
reopened for live waterfall
We might consider investigating other serialization approaches since shipping images around is slow.
Live waterfall is in
It would be nice to run a visualization server separate from the data processing so that the visualization things don't get in the way and vice versa.
Expected Behavior
A single visualization entity eats all the documents we ship out of the RunEngine and Data analysis pipeline.
Current Behavior
The visualization is run in the same process as the data processing, with each stream of data having its own visualization class.
Possible Solution
A combination of `RunRouter`, `BestEffortCallback`, `LiveImage`, and `LiveWaterfall` might give us all we need. `RunRouter` can take in multiple scans at once (interleaved if need be) and properly splay out the data into the correct callbacks. If we put all the visualization tools onto the `RunRouter`, we can send documents from the raw data and the analyzed data at the same time via ZMQ. This removes the issue of the analysis event loop interacting with the visualization event loop (i.e., "I can't see my data until the analysis pipeline stops hogging the processing power"). We might consider vendorizing `BestEffortCallback` so we can add `LiveImage` and `LiveWaterfall` to the mix (so we don't need to do this explicitly).
Context
It seems that visualization can make the data processing rather clunky (especially when running over ssh), so the ability to run the two as separate things would be good. This is especially valuable because the server will try to visualize whatever data we dish out (within reason). For instance, if we have 20 QOIs running and want to add a new one, it doesn't matter: we ship the processed, packaged data off to the visualization server and everything just works.