bluesky / bluesky

experiment orchestration and data acquisition
https://blueskyproject.io/bluesky/
BSD 3-Clause "New" or "Revised" License

Add an est_time plan tool #713

Open danielballan opened 7 years ago

danielballan commented 7 years ago

eta_time(scan([det], motor, 1, 10, 10)) would be cool. This would be a simulator that uses whatever it can find out about the hardware to estimate how long a plan would take to run and, say, return a number of seconds. It would not touch any hardware.
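As a rough illustration of the idea, a plan-time estimator could consume a plan's message stream without ever touching hardware and sum a per-command time guess. This is a hypothetical sketch (not bluesky's real `Msg` class or any agreed interface), using a toy plan and blunt fixed per-command times:

```python
from collections import namedtuple

# Hypothetical stand-in for bluesky's Msg; a real est_time would consume
# the actual messages yielded by a plan like scan([det], motor, 1, 10, 10).
Msg = namedtuple("Msg", ["command", "obj"])

# Blunt per-command time assumptions in seconds (the "zeroth-order"
# approach: every move, trigger, and read takes a fixed time).
DEFAULT_TIMES = {"set": 1.0, "trigger": 0.5, "read": 0.1}

def est_time(plan, times=DEFAULT_TIMES):
    """Walk a plan's messages without touching hardware and sum a
    crude time estimate for each recognized command."""
    total = 0.0
    for msg in plan:
        total += times.get(msg.command, 0.0)
    return total

def toy_scan(n_points):
    # Toy stand-in for a step scan: move, trigger, read per point.
    for _ in range(n_points):
        yield Msg("set", "motor")
        yield Msg("trigger", "det")
        yield Msg("read", "det")

print(est_time(toy_scan(10)))  # ~16.0 seconds
```

Because the estimator only iterates over messages, it consumes the generator exactly the way a simulator would, with no side effects on hardware.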

awalter-bnl commented 6 years ago

This is an old one, but I want to add my encouragement. This would be really useful for us. It would, of course, probably assume a fixed time for motor moves, which won't be very accurate, but it would be a great start.

danielballan commented 6 years ago

Can you help us brainstorm what we would need to know from the hardware, to first order? In my mind, a "zeroth-order solution" would make blunt assumptions like "All motors take 1 second to move, period." [EDIT: I'm not really convinced that's worth rolling out -- I think people would get the impression that the feature is wildly inaccurate and never use it again.] For first-order, I imagine we would need:

Maybe we should develop software loops for empirically measuring these things and storing the relevant data in a cached file somewhere.
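One possible shape for that empirical-measurement loop, sketched here with an assumed JSON cache file and illustrative function names (none of this is existing bluesky API):

```python
import json
import os
import time

CACHE_FILE = "timing_cache.json"  # hypothetical cache location

def record_timing(name, action, cache_file=CACHE_FILE):
    """Time one call to `action`, append the measurement to a
    per-device history in a JSON cache, and return the duration."""
    start = time.monotonic()
    action()
    elapsed = time.monotonic() - start
    cache = {}
    if os.path.exists(cache_file):
        with open(cache_file) as f:
            cache = json.load(f)
    cache.setdefault(name, []).append(elapsed)
    with open(cache_file, "w") as f:
        json.dump(cache, f)
    return elapsed

def estimate(name, default=1.0, cache_file=CACHE_FILE):
    """Mean of cached measurements for `name`, or a blunt default
    (the 'all moves take 1 second' fallback) when nothing is cached."""
    if os.path.exists(cache_file):
        with open(cache_file) as f:
            history = json.load(f).get(name, [])
        if history:
            return sum(history) / len(history)
    return default
```

A cache like this would let the estimate improve over time as real moves and counts are observed, rather than relying on fixed assumptions forever.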

awalter-bnl commented 6 years ago

I agree that getting an accurate measure of the time taken for motor moves is difficult. For the detectors, almost all have an 'acquisition period' variable, and this could be mapped to a consistently named attribute in the setup file; if the variable is not present, an estimate could be hard-wired.

For the motors, a standard motor record from a Delta Tau controller already has a 'motor velocity' attribute (in units/s), which can be used to give a time estimate for a move of x units. Motors without a motor record may need a hard-wired estimate, like the detectors without an acquisition time in the setup.
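The velocity-based move estimate described above reduces to simple arithmetic: distance over velocity, plus any settle time, with a hard-wired fallback when no velocity is known. A minimal sketch (the parameter names are illustrative, not ophyd attributes):

```python
def est_move_time(distance, velocity, settle_time=0.0, fallback=1.0):
    """Estimate seconds for a move of `distance` units.

    Uses a motor-record-style velocity (units/s) plus settle time;
    falls back to a hard-wired guess for motors with no usable
    velocity, as suggested above for non-motor-record motors.
    """
    if velocity is None or velocity <= 0:
        return fallback
    return abs(distance) / velocity + settle_time

print(est_move_time(5.0, velocity=2.5, settle_time=0.2))  # 5.0/2.5 + 0.2 = 2.2
```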

awalter-bnl commented 6 years ago

Just thought I'd add that I am happy to discuss this in person if it helps; just put something on my calendar.

danielballan commented 6 years ago

We can't prioritize this until after our "1.0" deadline (a week from Wednesday) but I think we can draft something usable not long after that.

awalter-bnl commented 6 years ago

So I will wait to make much progress on this until Dan is back. But this morning I did some testing at SIX to see whether we could get an accurate ETA for a 'move' and for a 'count'. I think it is possible, but a little more work needs to be done.

For 'move': using motor.velocity and motor.settle_time along with the distance to move runs into an issue: there is an undocumented 'settle time' built into the motion that depends on the travel distance. If I determine this extra 'settle' time empirically and add it to the ETA, then for all moves of that distance I get an ETA accurate to within 1% (longer moves have reduced error). I tested several travel distances (each with a different extra settle time) and about 10 motors (each with their own extra settle time values), and it does seem accurate to within 1%. This extra settle time is clearly observed in CSS (watching the move indicator), so it is not associated with any bluesky/ophyd overheads. I will talk to John Sinsheimer and see whether this extra settle time can be determined a priori or not.

For 'count': using det.acquire_time on its own gives a pretty poor indication (~10% error). The remaining error is overhead from count (a fixed value independent of the number of points, ~3.6 s, detector independent) and from the individual reads (a fixed value per point, ~0.275 s, detector independent). The first is probably associated with the stage and unstage, while the second is associated with the time lag between requesting a read and processing the LivePlot/LiveTable update. I can see from the CSS page that these values are not related to the detector acquisition time, and they do not vary with acquisition time. My thought is that we could become more accurate over time by calculating, and storing, these time lags after each stop document (or similar).
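Plugging the empirical numbers above into a formula, the count estimate would be a fixed per-run overhead plus a per-point cost of acquisition time plus read lag. A sketch using those measured values as defaults (the function name and constants are illustrative, and the overheads are the beamline-specific measurements quoted above):

```python
# Empirical constants from the SIX tests described above
# (both reported as detector independent):
STAGE_OVERHEAD = 3.6       # fixed cost per count, likely stage/unstage
PER_READ_OVERHEAD = 0.275  # per-point read / LiveTable update lag

def est_count_time(acquire_time, num_points=1,
                   fixed_overhead=STAGE_OVERHEAD,
                   per_read=PER_READ_OVERHEAD):
    """Estimate seconds for a count of `num_points` acquisitions,
    adding the empirically measured fixed and per-point overheads."""
    return fixed_overhead + num_points * (acquire_time + per_read)

print(est_count_time(1.0, num_points=10))  # 3.6 + 10 * 1.275 = ~16.35
```

Storing updated values of `fixed_overhead` and `per_read` after each stop document, as proposed above, would let the defaults track the actual deployment rather than these one-off measurements.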

What are people's thoughts?

awalter-bnl commented 6 years ago

I have an update on the motor issue above regarding the extra settle time related to the distance moved. After speaking with John Sinsheimer, it appears this was a setup mistake in the Delta Tau. After John fixed the issue I did more testing, and the settle time and velocity now do a good job of estimating the time (< 1% error). There are a few outliers, where the PV velocity differs from the measured velocity, resulting in 1-50% time errors, but these are brought under control by using the measured velocity. The final outlier case is the virtual motor axes, where it is impossible to estimate the move time accurately without knowing the motion equations relating real and virtual motions. So I think the solution here is to take the PV velocity as a starting point, but to measure, and average, a velocity after each move of an axis, which will make the estimating tool more accurate as time goes on.
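The "start from the PV velocity, refine with measured velocities" idea above could look something like this sketch. The class and method names are purely illustrative, not ophyd API:

```python
class VelocityEstimator:
    """Start from the PV velocity, then refine the estimate with a
    running average of velocities measured after each real move."""

    def __init__(self, pv_velocity):
        self.estimate = pv_velocity  # seed from the PV value
        self._n = 0                  # number of measured moves folded in

    def update(self, distance, elapsed):
        """Fold in one measured move: velocity = |distance| / time."""
        measured = abs(distance) / elapsed
        self._n += 1
        if self._n == 1:
            # First real measurement replaces the PV seed entirely,
            # handling the outlier case where the PV velocity is wrong.
            self.estimate = measured
        else:
            # Incremental running average over measured moves.
            self.estimate += (measured - self.estimate) / self._n

    def est_move_time(self, distance):
        return abs(distance) / self.estimate
```

For example, seeding with a PV velocity of 2.0 units/s and then observing moves of 10 units in 4.0 s and 5.0 s gives measured velocities of 2.5 and 2.0, an averaged estimate of 2.25 units/s, and an ETA of 4.0 s for a 9-unit move.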

awalter-bnl commented 6 years ago

So a quick update on this: the summarize_plan analogue is written and undergoing testing. We are working on the lower-level stuff.

awalter-bnl commented 6 years ago

A preliminary set of code to do this is presented in PR #1048 and NSLS-II/ophyd PR #567.

danielballan commented 5 years ago

We plan to do this, but the feature is too complex to be designed via GitHub Issues. We will write up a Bluesky Enhancement Proposal to lay out the planned design.