DiamondLightSource / blueapi

Mechanism to expose some ophyd device methods directly? #576

Open dperl-dls opened 4 months ago

dperl-dls commented 4 months ago

As a GUI developer, I am interested in at least three main kinds of interaction between an experimental GUI and the hardware:

  1. Running an experiment, where a form is used to specify all the parameters and a button is used to trigger it
  2. Making small "adjustments" like "move sample x by 10 um", "rotate the sample 45 degrees", or setting the backlight brightness on a slider... that is, "live" control of the beamline hardware
  3. Getting a passive update of some value, like an OAV image, the current position, etc.

Is there an agreed-upon way of handling 2 and 3?

For 2, you could: (a) run small plans built from plan stubs through the run engine, or (b) interact directly with EPICS.

For 3, a plan is probably really not an option, so it's either watching PVs directly or subscribing to ophyd signals.

Would appreciate any comments from @callumforrester @DominicOram @stan-dot @DiamondJoseph and anyone else relevant that you can think of. We would like to be in a position where we can start doing some preliminary prototyping of web GUIs fairly soon, so it would be nice to have some kind of agreement on what we want and don't want to support, even if we don't fully implement it yet.

stan-dot commented 3 months ago

There is a variety of low-level plans in bluesky's plan stubs for this job.

Those are runnable from the Swagger GUI (e.g. the i22 instance).
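
For concreteness, a minimal sketch of running one of those stubs as a plan, using a simulated motor from ophyd.sim as a stand-in for a real beamline device (the wrapper name is illustrative, not an existing blueapi plan):

from bluesky import RunEngine
from bluesky.plan_stubs import mv
from ophyd.sim import motor as sample_x  # simulated stand-in device

RE = RunEngine()

def move_sample_x(position: float):
    """Absolute move exposed as a plan, so it runs through the RunEngine."""
    yield from mv(sample_x, position)

RE(move_sample_x(10.0))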

Adding another execution environment with tracing etc. would be a huge effort. "Moving more than one thing at a time" is contentious, and I am not sure what the broader outlook on that is for the future.

OTOH - what is the science use case for 2 and 3? If something needs adjustment, it can either be run before the experiment plan or included in it.

dperl-dls commented 3 months ago

It is IMO important that whatever new GUI solution we present is not a regression from GDA; it should support at least the same workflows and features.

As an example of scenario 2, a scientist may wish to move a sample around to visually identify a region of interest before performing an experiment at that location, increase or decrease lighting to make such identification easier, adjust the flow rates of a cryostream to find a level where ice doesn't form on the sample, or make myriad other small manipulations. As far as 3 is concerned, it is simply necessary to display some information about the current beamline state, since there is no sensible way to decide on actions without knowing some of this.

I'm not sure what you mean by "execution environment", but devices should be able to be pulled from the context just as easily as plans.

Indeed, things like the plan stubs are what I meant by 2a. However, this needs some thought. If you set, e.g., the energy, this might take a long time to complete. If this is processed through the run engine, that means that all other features are unavailable while the energy change is processing - that's probably not desired, and it is certainly a regression from current behaviour, where you can start the energy change and then manipulate the sample while waiting for it to complete.
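
For reference, within a single plan bluesky can already express this kind of overlap; a minimal sketch, using simulated stand-ins for the energy and sample axes:

from bluesky import RunEngine
from bluesky.plan_stubs import abs_set, mv, wait
from ophyd.sim import motor1 as energy, motor2 as sample_x  # simulated stand-ins

RE = RunEngine()

def energy_change_with_manipulation():
    # start the slow energy move without waiting for it to complete
    yield from abs_set(energy, 12.7, group="energy")
    # manipulate the sample while the energy change is in flight
    yield from mv(sample_x, 1.0)
    # block until the energy move finishes
    yield from wait(group="energy")

RE(energy_change_with_manipulation())

The difficulty described above is doing the same across separate user actions, where each button press would be its own RE(...) call.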

DominicOram commented 3 months ago

interact directly with EPICS

I think for 90% of the use cases in 2 (certainly for all the ones you've described) this is literally just poking a PV, so I think it should be as thin as possible over the top of that; I don't think an ophyd device is even necessary for most use cases. @coretl could we just pull in parts of the technical UI for this?

Get a passive update of some value, like an OAV image, the current position, etc.

Again, the technical UI components should be able to handle some of these too, where we're just looking directly at PVs.

I'm very keen we don't reinvent the wheel and end up with what we currently have, a DAQ GUI where we've reimplemented large parts of things that already exist in the technical GUI. I actually think in the ideal scenario we would have a WYSIWYG system like Phoebus where, alongside widgets for interacting directly with PVs (like the technical UI), there are widgets for interacting with ophyd devices or with plans. We can then get to a point where scientists are able to modify UI elements themselves to some basic degree, without huge amounts of coding.

dperl-dls commented 3 months ago

If we have some standard way of interacting with PVs that might be fine for many cases, but in some cases we will still have to work at least at the level of ophyd devices (energy, a zoom level which needs to change brightness...), and there are still advantages to doing so - only having to keep/change the PV in one place, for example.

stan-dot commented 3 months ago

@dperl-dls scenario 2 could be done with a series of set_absolute or set_relative plans, and from the GUI perspective we can treat those differently from the big plans. Blueapi is just the API; at the moment GUI prototyping is in the squid repo.
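
A nudge exposed that way might look like the following sketch (the nudge wrapper is illustrative; the set_absolute/set_relative stubs in dls-bluesky-core are the real equivalents):

from typing import Any, Generator

from bluesky.plan_stubs import rel_set
from bluesky.utils import Msg

def nudge(movable: Any, delta: float) -> Generator[Msg, Any, None]:
    """Relative "nudge" of a single movable, small enough to bind to a GUI button."""
    yield from rel_set(movable, delta, wait=True)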

3 is part of the technical GUI, right? We can create reusable React components and import them into both the technical UI and the DAQ UI.

I am operating under the assumption that devices are instantiated when blueapi starts up, but that any methods callable on them which change their state need to go through the RunEngine.

whatever new GUI solution we present is not a regression from GDA, it should support at least the same workflows and features

a regression from current behaviour, where you can start the energy change and then manipulate the sample while waiting for it to complete

Current behaviour includes possible user-action paths which are difficult to support and which may have simpler alternatives. GDA is overengineered; the guarantee of features need only hold in the science sense, while in the UX sense simpler is generally better.

The more possible user paths an API exposes, the more it resembles a glucose maze for a slime mould: user habits will populate them and then expect us to support them. https://www.hyrumslaw.com/

A few well-supported user paths are better than many ways to do one thing - both for the scientists and for the support engineers (us).

Specifically for the energy change plan - I am not sure how much time is saved through such parallel manipulation. If this happens often, both could run inside one plan. If it is rare, maybe it is fine to wait 30 seconds once a day.

By execution environment for plans I mean the RunEngine, which runs the function that returns a Message generator. This environment takes care of logging and other low-level details. We would need some other kind of wrapper around an asyncio event loop to run the coroutines that execute device methods. This new lightweight wrapper would require some volume of work to develop and maintain. It might turn out to be easy, but I haven't checked.
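
To make the shape of that wrapper concrete, a minimal sketch (the class and its API are assumptions for illustration, not an existing blueapi component), assuming device methods are coroutines:

import asyncio
from threading import Thread

class DeviceCallRunner:
    """Hypothetical wrapper owning a background asyncio loop for device coroutines."""

    def __init__(self) -> None:
        self._loop = asyncio.new_event_loop()
        Thread(target=self._loop.run_forever, daemon=True).start()

    def call(self, coro, timeout=None):
        # submit a device coroutine from any thread and block until it
        # completes, bypassing the RunEngine entirely
        future = asyncio.run_coroutine_threadsafe(coro, self._loop)
        return future.result(timeout)

# runner = DeviceCallRunner()
# runner.call(sample_x.set(10.0))  # assuming sample_x.set() is a coroutine method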

dperl-dls commented 3 months ago

To look at it the other way: the behaviour currently offered by GDA is what has been created in response to 20 years of user demands. In the absence of contradicting information, that interface defines everything we actually know about "science features".

The phenomenon described in your link does not apply to this scenario.

Specifically for the energy change plan - I am not sure how much time is saved through such parallel manipulation. If this happens often, both could run inside one plan. If it is rare, maybe it is fine to wait 30 seconds once a day.

Perhaps I still have not been clear enough about what is needed here. Case 2 describes "live" control of beamline hardware: press an arrow and a motor moves; scroll on the OAV and the sample rotates; etc. Currently it's not possible to inject new messages into a running plan, as far as I'm aware, and building a mechanism to incorporate user input like that sounds far more complicated to develop and maintain than executing device methods.

DiamondJoseph commented 3 months ago

I agree that 3 is probably part of the technical UI [from coniql as part of The Graph, with a standard set of components for watching particular important signals?]. I think this is limited to 10 Hz, which should be enough for human reactions?

Is 2 in the context of "between scans" or "during scans"? Between scans, yes, we can expose the existing stubs fairly easily: it just needs annotating them with the correct return type and deciding where those stubs live (if annotations are added to bluesky.plan_stubs, that may be an argument for which should be exposed, e.g. "between scans" should only ever mv, not set without waiting).

Moving other signals during a scan is a more complex use case that we need to get right. Probably only some subset of the stubs should be allowed during other scans.

dperl-dls commented 3 months ago

"Between scans" is the important part for sure - I don't know of any reason why we would want to do it during scans

Components from the technical GUI cover most of 3, but there are probably cases where we want to look at ophyd signals which don't directly correspond to a PV.

DominicOram commented 3 months ago

Yes, it would be "between scans", but if we're using plan stubs then it would still have to be "during plans". I need to be able to press a button to move the detector and press another to move the sample whilst the detector is moving.

DiamondJoseph commented 3 months ago

https://github.com/DiamondLightSource/dls-bluesky-core/blob/main/src/dls_bluesky_core/stubs/wrapped.py

Here are the stubs we currently expose on i22 and the test beamlines. Personally I'd like to see the two set methods and wait removed (if you want to set an axis moving and then run a scan, it should be part of your plan, as it's a particular experimental behaviour that needs to be repeatable).

Are there any additional pre-existing plan stubs that you think should be included? https://github.com/bluesky/bluesky/blob/main/src/bluesky/plan_stubs.py

I think the annotated subset of stubs that we support should either be in dodal or blueapi.

For running stubs during the execution of a plan I'm going to defer to https://github.com/bluesky/bluesky/issues/1652 with the following assumptions and notes:

  1. Nudging a motor should be possible during [some segment of] a plan
  2. The same UI component should be usable to nudge a motor before and during a scan
  3. A plan in which a motor can be nudged should record and document that the motor was nudged

Here "motor" and "nudged" are actually "Signal" and "Set".

DiamondJoseph commented 3 months ago

I need to be able to press a button to move the detector and press another to move the sample whilst the detector is moving.

Is this just for optimising moving between beamline states?

dperl-dls commented 3 months ago

I'm not sure https://github.com/bluesky/bluesky/issues/1652 really covers this? That ticket describes plans which can branch off into sub-plans, but I don't think it means you will be able to do

RE(plan_1())
RE(plan_2())

and have them run in parallel, no?

stan-dot commented 3 months ago

There are not "sufficiently many users" - only roughly 100.

A user is different for each experiment, and each kind of scientific experiment can be approached differently.

We still have many divergent ways people use GDA, and 3 different implementations of the same feature.

DiamondJoseph commented 3 months ago

Here's my thinking as a pseudo plan: interrupts is an adaptive scan that listens for incoming stub requests, and adjustable_plan is the part of the plan that may (but does not require) manual adjustment.

# Pseudocode: run_sub, api.await_stub_requests and manual_stub are
# hypothetical names for an API that does not exist yet.
def my_plan(*args, **kwargs):
    yield from prepare_plan()
    # launch the interrupt listener alongside the adjustable section
    sub_status = yield from run_sub(interrupts())
    yield from adjustable_plan()
    yield from wait(sub_status)
    yield from teardown_plan()

async def interrupts():
    status = AsyncStatus(complete=True)
    # apply each incoming stub request as it arrives
    async for stub in api.await_stub_requests():
        status &= await manual_stub(stub)
    return status

dperl-dls commented 3 months ago

hm, yes, okay, that could be a good way to do it - it ensures that you have control over the stubs that you can run, and prevents actually launching a scan, etc.

stan-dot commented 3 months ago

The use of the adaptive scan for this purpose really seems like a great way out of this puzzle, @DiamondJoseph.

I am glad we might not need a new asyncio event loop.

coretl commented 3 months ago

It would appear I'm late to the party, but we had this discussion over in https://github.com/bluesky/bluesky-queueserver/issues/292#issuecomment-1733208838 and it looks like the proposal they made was to put jogging actions and monitoring in a second process, but still use the ophyd objects.

Personally I'm still on the fence between doing the interface via bluesky and via the technical UI. Both have advantages and disadvantages as outlined above.

stan-dot commented 3 months ago

from the blueapi POV we have:

@dperl-dls is it for the next 2-3 weeks, or a speculative "will need it at some point"?

dperl-dls commented 3 months ago

We won't need it in the next two weeks, but maybe in the next two months? Like I said, it would be good to have the discussion about it pretty soon

stan-dot commented 3 months ago

Those might be relevant.