Open jrast opened 6 days ago
The implementation of `LiquidHandler._run_async_in_thread` is not working if the called method actually performs some async work. MWE:

With the following `dummy_backend.py`:

Running this code will lead to the following exception (only copied the relevant part):

Note: I replaced the `self._future` above with a task that runs for 1 sec. The future was never resolved, so the example was somewhat flawed (it would block forever because the future never resolves).
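For illustration, here is a minimal, standalone sketch of the kind of backend callback being described. It is not the actual `dummy_backend.py` from the report; `DummyBackend` and the callback body are only modelled on the names used in this thread:

```python
import asyncio


class DummyBackend:
    """Stand-in for a liquid handler backend; only the callback matters here."""

    async def assigned_resource_callback(self, resource) -> None:
        # Await a task that actually completes after one second, instead of a
        # future that is never resolved.
        await asyncio.create_task(asyncio.sleep(1))
        print(f"resource assigned: {resource!r}")


async def main() -> None:
    await DummyBackend().assigned_resource_callback("tip_rack")


asyncio.run(main())
```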
I tried to replace the `_run_async_in_thread` function code with something like this:

```python
loop = asyncio.get_event_loop()
future = asyncio.run_coroutine_threadsafe(func(*args, **kwargs), loop)
# Note: Regardless of how we wait for the future, the event loop will be blocked.
```
This assumes that an event loop is running, which is a sane expectation because most of the LH methods are async. While this works, `func` is not finished when `_run_async_in_thread` returns, which makes the behaviour unpredictable. Can we continue execution without any issues? For example, there might still be configuration commands waiting to be sent to the device when we actually start executing aspirations / dispensations.
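A small self-contained demonstration of the problem (nothing PLR-specific; `do_config` and `schedule_on_running_loop` are made-up placeholders): `run_coroutine_threadsafe` only schedules the coroutine, and blocking on the returned future from inside the loop's own thread would deadlock, so the caller has no choice but to return early.

```python
import asyncio


async def do_config() -> None:
    """Placeholder for an async backend call, e.g. sending configuration."""
    await asyncio.sleep(1)
    print("configuration sent")


def schedule_on_running_loop() -> None:
    """Mimics the _run_async_in_thread replacement sketched above."""
    loop = asyncio.get_event_loop()
    future = asyncio.run_coroutine_threadsafe(do_config(), loop)
    # future.done() is False here: the coroutine has only been scheduled.
    # Calling future.result() from this thread would deadlock, because this
    # thread *is* the event loop thread and it is blocked right here.
    print(f"scheduled, done yet? {future.done()}")


async def main() -> None:
    schedule_on_running_loop()
    # Aspirate/dispense code would continue here while do_config() is still
    # pending; it only finishes once we yield control back to the loop.
    await asyncio.sleep(1.5)


asyncio.run(main())
```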
while I will debug this, some suggestions that might potentially be a faster solution:
- might it be possible to send the commands in `setup`?
- might it be possible to queue commands and send them just-in-time when eg `aspirate` is called? (a rough sketch of this idea follows below)

again, will debug this, but wanted to share these ideas in case it allows you to move forward without getting stuck on this approach
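To make the second suggestion concrete, here is a rough, hypothetical sketch of queue-and-flush-just-in-time; none of these names (`QueueingBackend`, `_pending`, `_flush`, `_send`) exist in PLR, they are purely illustrative:

```python
import asyncio
from typing import List


class QueueingBackend:
    """Hypothetical backend that queues configuration commands and only
    sends them just-in-time, right before the next liquid handling command."""

    def __init__(self) -> None:
        self._pending: List[str] = []

    def assigned_resource_callback(self, resource) -> None:
        # Purely declarative: just remember what needs to be configured.
        self._pending.append(f"CONFIGURE {resource}")

    async def _flush(self) -> None:
        # Send everything that has queued up since the last command.
        while self._pending:
            await self._send(self._pending.pop(0))

    async def aspirate(self, *args, **kwargs) -> None:
        await self._flush()  # just-in-time: config goes out right before use
        await self._send("ASPIRATE ...")

    async def _send(self, command: str) -> None:
        await asyncio.sleep(0)  # placeholder for real device I/O
        print(command)


async def demo() -> None:
    backend = QueueingBackend()
    backend.assigned_resource_callback("tip_rack_1")  # queued, nothing sent yet
    await backend.aspirate()  # CONFIGURE goes out right before ASPIRATE


asyncio.run(demo())
```

The weak point is exactly the `_flush()` call: every liquid handling method has to remember to make it.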
I strongly dislike this idea. It's too easy to forget to send the queued commands in one of the liquid handling methods. And sending the queued commands when backend-level methods are called directly would also be impossible (or would require a complicated implementation).

At the moment I actually do something like command queueing and run the queued operations once the deck layout is final. I can live with this, as currently the deck layout cannot change throughout the run.
Depends on the use case. Some features only require assigning a number when it's actually necessary (eg tip definitions in PLR are only sent using TT when needed for a command like TP, not preemptively based on what is assigned on the deck).

But if it's truly a queue of things to send, yes, that is not good. A powerful feature of PLR is being able to change the deck at runtime, and we should make it easy for all backends to support this.
Ok, this is off-topic for this issue, but:

> eg tip definitions in PLR are only sent using TT when needed for a command like TP, not preemptively based on what is assigned on the deck

I did not look at the implementation, but if the tip definition is written each time a tip is picked up, some overhead is generated on every pickup. Performance-wise this would be improved if the tip definition were only written when the deck is updated. While this sounds like nitpicking: if the current strategy is followed, more and more configuration commands will be executed before each step (tip pickup, aspirate, dispense, ...) and the execution time will increase.
It is checked whether the definition for a tip with the exact same characteristics has already been written. The "key"/"hash" into the Tip -> tt idx mapping, if you will, is the tip definition itself.
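Roughly what such a cache looks like, as a hypothetical sketch; the real backend code differs, and `TipDefinition`, `TipTypeCache` and `_send_tt` are invented here purely for illustration:

```python
import asyncio
from dataclasses import dataclass
from typing import Dict


@dataclass(frozen=True)  # frozen + eq makes the definition itself hashable
class TipDefinition:
    total_length: float
    has_filter: bool
    maximal_volume: float


class TipTypeCache:
    """Send a tip type definition only the first time it is needed and reuse
    the assigned index for every later command that uses the same tip."""

    def __init__(self) -> None:
        self._tt_index: Dict[TipDefinition, int] = {}

    async def get_or_define(self, tip: TipDefinition) -> int:
        if tip not in self._tt_index:      # first time we see this definition
            idx = len(self._tt_index)
            await self._send_tt(tip, idx)  # one-time firmware command
            self._tt_index[tip] = idx
        return self._tt_index[tip]         # cached: no extra command sent

    async def _send_tt(self, tip: TipDefinition, idx: int) -> None:
        await asyncio.sleep(0)  # placeholder for the real TT command
        print(f"TT id={idx} def={tip}")
```

So the per-pickup overhead is a dictionary lookup rather than a firmware round trip; the definition is sent at most once per distinct tip type.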
Another idea, which does not solve the reported bug or the queuing problem: why not just make `assigned_resource_callback` and `unassigned_resource_callback` of the backend sync methods instead of async ones? This would make the `_run_async_in_thread` method obsolete. Since assigning resources is purely declarative, there is no reason for any part of it to be async.

I introduced this pattern for Opentrons (oh boy), but yes, I could adopt a similar JIT approach there.
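For what it's worth, a minimal sketch of that proposal, assuming a generic backend base class. This is not PLR's actual API; it only illustrates sync declarative callbacks living next to async operational methods:

```python
from abc import ABC, abstractmethod


class Backend(ABC):
    """Illustrative base class: declarative callbacks are sync, only the
    methods that actually talk to hardware stay async."""

    @abstractmethod
    def assigned_resource_callback(self, resource) -> None:
        """Called when a resource is assigned to the deck. Sync: it only
        records state, so the caller never needs _run_async_in_thread."""

    @abstractmethod
    def unassigned_resource_callback(self, name: str) -> None:
        """Called when a resource is removed from the deck."""

    @abstractmethod
    async def aspirate(self, *args, **kwargs) -> None:
        """Actual device I/O remains async."""
```

With sync callbacks, the resource-assignment path can invoke them directly, whether or not an event loop is currently running, which is exactly what would make `_run_async_in_thread` unnecessary for this case.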
I am not a fan of going through an unrelated queue of commands whenever the next async method happens to be called. It seems to me we can just send parts of the "queue" when a related function is called (eg pick up tip on a tip rack). Preferably there would be no queue at all: we just look at the resource (the tip rack in the example) and handle it appropriately. For static things, the backend has access to the deck.

But perhaps my imagination is lacking here.
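A hypothetical sketch of that queue-free variant (again, invented names; the only point is that the backend derives what to send from the resource handed to the operation, not from a stored queue):

```python
import asyncio
from typing import Set


class JustInTimeBackend:
    """Nothing is queued on deck changes; each operation looks at the
    resource it was given and sends any missing definition right then."""

    def __init__(self) -> None:
        self._defined_tip_racks: Set[str] = set()

    async def pick_up_tip(self, tip_rack_name: str) -> None:
        # Derive the required configuration from the resource itself.
        if tip_rack_name not in self._defined_tip_racks:
            await self._send(f"DEFINE {tip_rack_name}")
            self._defined_tip_racks.add(tip_rack_name)
        await self._send(f"PICK UP from {tip_rack_name}")

    async def _send(self, command: str) -> None:
        await asyncio.sleep(0)  # placeholder for real device I/O
        print(command)
```

Because nothing is queued when the deck changes, resources can be assigned or swapped at runtime without anything to flush or forget.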