Expected Behavior
When running a grid scan the live plot (RasteredImage) should update at a consistent rate dictated by the rate of events coming from the RunEngine.
Current Behavior
If you run a scan like the one sketched below, then after a few hundred points plotting starts to slow down. I think this is because every time _transform is called, it iterates through all of the points received so far and reshapes the array into an image. Because that array grows in length as the grid_scan continues, the number of iterations in this loop, and with it the execution time per point, increases with every event.
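For reference, a scan of the kind described might be started like this (the detector and motor names are placeholders and RE is the usual RunEngine instance, not details taken from our setup; only the large number of points and the snaked fast axis matter):

from bluesky.plans import grid_scan

# ~10,000 points with a snaked fast axis, so _transform is called thousands of times
RE(grid_scan([det], motor_y, -5, 5, 100, motor_x, -5, 5, 100, snake_axes=True))

The _transform method in question currently looks like this: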
def _transform(self, run, field):
    image_data = numpy.ones(self._shape) * numpy.nan
    result = call_or_eval({"data": field}, run, self.needs_streams, self.namespace)
    data = result["data"]
    snaking = run.metadata["start"]["snaking"]
    for index in range(len(data)):
        pos = list(numpy.unravel_index(index, self._shape))
        if snaking[1] and (pos[0] % 2):
            pos[1] = self._shape[1] - pos[1] - 1
        pos = tuple(pos)
        image_data[pos] = data[index]
    return {"array": image_data}
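Because every event rebuilds the whole image so far, this loop runs 1 + 2 + ... + N times over a scan of N points; for example, on a 100 x 100 grid that is roughly 5 x 10^7 iterations in total, which is why the per-point cost keeps climbing.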
Possible Solution
In _transform, keep a cache of the image in the RasteredImage class and only overwrite the latest point.
Initialise the cache in __init__
self._image_cache = {"array": numpy.array([])}
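For example, the attribute could be added alongside the existing setup in __init__ (the surrounding signature below is illustrative, not the actual RasteredImage signature; the only new line is the cache):

def __init__(self, field, shape, *args, **kwargs):
    # illustrative signature; only the last line is new
    super().__init__(*args, **kwargs)
    self._field = field
    self._shape = shape
    self._image_cache = {"array": numpy.array([])}  # cached rastered image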
Then use that cache in _transform:
def _transform(self, run, field):
    """
    Read the data for a specific field from a given run.
    Reshape it from a 1D array to an image, snaking if required.
    Keeps a cache of the image to reduce the work done in future calls.
    """
    result = call_or_eval({"data": field}, run, self.needs_streams, self.namespace)
    # Read the data from the databroker or from the bluesky_live stream.
    data = result["data"].values
    snaking = run.metadata["start"]["snaking"]
    # If this is not the first point, use the cached image and only update the latest value.
    if self._image_cache["array"].size > 0 and len(data) > 0:
        index = len(data) - 1
        pos = list(numpy.unravel_index(index, self._shape))
        if snaking[1] and (pos[0] % 2):
            pos[1] = self._shape[1] - pos[1] - 1
        pos = tuple(pos)
        # Overwrite the value for this position in the cached image.
        self._image_cache["array"][pos] = data[index]
    # Else this is the first point, or a new complete run from the databroker.
    else:
        image_data = numpy.ones(self._shape) * numpy.nan  # start out with an array of NaN
        for index in range(len(data)):
            pos = list(numpy.unravel_index(index, self._shape))
            if snaking[1] and (pos[0] % 2):
                pos[1] = self._shape[1] - pos[1] - 1
            pos = tuple(pos)
            image_data[pos] = data[index]
        # Save a cache of the image to reduce transform time in future calls.
        self._image_cache["array"] = image_data
    # Return the cached image.
    return {"array": self._image_cache["array"]}
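To illustrate why the cache helps, here is a small self-contained sketch (independent of bluesky-widgets; the grid shape, snaking pattern, and point count are arbitrary) comparing the cost of rebuilding the whole image on every event against overwriting a single cached pixel:

import time
import numpy

shape = (100, 100)       # arbitrary grid shape for this illustration
snaking = (False, True)  # snake the fast axis, as a grid_scan typically would

def rebuild(data):
    # Rebuild the whole image from scratch (current behaviour).
    image = numpy.ones(shape) * numpy.nan
    for index in range(len(data)):
        pos = list(numpy.unravel_index(index, shape))
        if snaking[1] and (pos[0] % 2):
            pos[1] = shape[1] - pos[1] - 1
        image[tuple(pos)] = data[index]
    return image

def update_latest(image, data):
    # Overwrite only the newest point in a cached image (proposed behaviour).
    index = len(data) - 1
    pos = list(numpy.unravel_index(index, shape))
    if snaking[1] and (pos[0] % 2):
        pos[1] = shape[1] - pos[1] - 1
    image[tuple(pos)] = data[index]
    return image

data = []
cached = numpy.ones(shape) * numpy.nan
t_rebuild = t_cached = 0.0
for value in range(2000):  # simulate 2000 events arriving one at a time
    data.append(float(value))
    t0 = time.perf_counter()
    rebuild(data)
    t1 = time.perf_counter()
    update_latest(cached, data)
    t2 = time.perf_counter()
    t_rebuild += t1 - t0
    t_cached += t2 - t1
print(f"full rebuild: {t_rebuild:.3f} s total, cached update: {t_cached:.3f} s total")

The rebuild time grows with every simulated event while the cached update stays flat, which matches the slowdown described above.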
Context
Grid scans are used to map our sample and find where it sits on the sample holder. The time taken for each motor move, combined with the size of the sample holder, means that each of these grid_scans can take a while. As a result, live feedback is crucial for judging whether the scan is performing OK and aborting it if needed. We would like to run this in the bluesky-widgets based GUI; at the moment we have to do it in IPython and use the bec (BestEffortCallback).