JasonLG1979 opened this issue 6 years ago
What exactly is the task you are trying to accomplish? At some level, though, yes, callbacks do still end up being the solution.
You can't mix sync and async code without callbacks. Like I said, functions executed with asyncio are still synchronous within their own event loop. So it doesn't actually make anything async by itself, since we only have one event loop.
Python, as an entire language, is never actually asynchronous as it has a single global lock.
> Python, as an entire language, is never actually asynchronous as it has a single global lock.
I know that. What I mean is that you can await all you want, but if you don't use callbacks and queue your tasks, your I/O will still be blocking.
> What I mean is that you can await all you want, but if you don't use callbacks and queue your tasks, your I/O will still be blocking.
It will not.
If you want multiple async functions to run in parallel you would do something like this:
```python
async def sleep(sec):
    await asyncio.sleep(sec)
    print('done', sec)

async def test():
    await asyncio.gather(
        sleep(10),
        sleep(4),
        sleep(1),
    )
    print('done all')
```
If you want something sync to finish before something async... well, just put it before the `await`? And if you want another function to run instead, yeah, you add a callback.
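For the callback case, a minimal runnable sketch (assuming Python 3.7+ for `asyncio.run`; the function names here are made up for illustration):

```python
import asyncio

async def fetch():
    # stand-in for some async I/O
    await asyncio.sleep(0)
    return 'payload'

def on_fetch_done(task):
    # runs on the same event loop once fetch() finishes
    print('got:', task.result())

async def main():
    task = asyncio.ensure_future(fetch())
    task.add_done_callback(on_fetch_done)
    await asyncio.sleep(0.01)  # keep the loop alive until the callback fires

asyncio.run(main())
```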
I'd like to stop async creep. Meaning I'd like to write normal synchronous code as much as possible. We only need async stuff for networking, really.
Yes of course.
(but obviously that is much of what we are doing, being a frontend to a network interface)
> It will not.
Then explain this: https://gist.github.com/JasonLG1979/82c09d13341133e44305e037e913abcd
Well, yeah. As I said, if you want multiple tasks in parallel: https://github.com/TingPing/pandora-rest/issues/34#issuecomment-354616273
`block`, as I always refer to it, means blocking the mainloop. You are correct that `await` blocks the current function (that is literally its purpose) but it doesn't stop the loop.
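That distinction can be seen directly: while one coroutine is suspended at an `await`, the loop keeps running other tasks. A minimal sketch (Python 3.7+):

```python
import asyncio
import time

async def slow():
    await asyncio.sleep(0.2)   # suspends slow(), not the loop
    return 'slow done'

async def quick():
    await asyncio.sleep(0.05)  # finishes while slow() is still waiting
    return 'quick done'

async def main():
    start = time.monotonic()
    results = await asyncio.gather(slow(), quick())
    elapsed = time.monotonic() - start
    # both ran concurrently: total time is ~0.2s, not 0.25s
    print(results, round(elapsed, 1))

asyncio.run(main())
```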
> You are correct that `await` blocks the current function (that is literally its purpose) but it doesn't stop the loop.
I get that `await` awaits the result of the async function. My point is that all this async stuff is not a magic bullet; it can be just as confusing as callback hell, or more so, and really doesn't even save us from callback hell at all...
Well, we obviously have yet to see the results of shoving it into an actual app, but I believe it still saves us; we should have like 1/50th the number of callbacks most likely.
> we should have like 1/50th the number of callbacks most likely.
Anything that's above the client is going to have to interact with it via callbacks. It will save us a lot of internal callbacks though.
We could skip all the external callback business with the use of signals and props. It would not be hard. There's only so much going on.
A GObject signal callback is roughly the same as an asyncio.Future callback as far as I'm concerned.
> A GObject signal callback is roughly the same as an asyncio.Future callback as far as I'm concerned.
The difference would be that it wouldn't need to be declared over and over and over again.
I'm currently working on turning most of the classes into GObjects with the exception of Art as it doesn't have any properties so it would be pointless.
Like I've said before, most objects will have read-only props because they're designed to be used and thrown away. Tracks are the only things that need read/write props, for position and duration. For stations, when you modify a station in the API it returns that modified station. So for stations I'm going to make their props read-only from the outside. Then when a modifying method is called we can take the returned station, use it to update the props internally, and signal a prop change for whatever was different. That way we stay in sync with the actual real station and don't assume anything.
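A plain-Python stand-in for that update-and-notify pattern (real code would use GObject properties and `notify`; the `Station` class and its fields here are made up for illustration):

```python
class Station:
    """Read-only props from the outside; internal updates come from the
    station object the API returns, with a change signal per differing prop."""

    def __init__(self, name, description):
        self._props = {'name': name, 'description': description}
        self._listeners = []

    @property
    def name(self):
        return self._props['name']

    @property
    def description(self):
        return self._props['description']

    def connect(self, listener):
        # stand-in for connecting to GObject's notify::<prop> signal
        self._listeners.append(listener)

    def _update_from_api(self, returned):
        # apply the station the API handed back and signal only
        # the props that actually changed
        for key, value in returned.items():
            if self._props.get(key) != value:
                self._props[key] = value
                for listener in self._listeners:
                    listener(key, value)

changes = []
s = Station('Rock', 'old blurb')
s.connect(lambda prop, value: changes.append((prop, value)))
s._update_from_api({'name': 'Rock', 'description': 'new blurb'})
print(changes)  # only the description changed, so only it is signaled
```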
@TingPing What do you think of this?
https://gist.github.com/JasonLG1979/8206c050bd5d6ab8e24b1aaf2a6173c2
Pretty slick I think. It makes dealing with async functions pretty easy.
@JasonLG1979 I like the idea, looks like a clean interface.
I personally might prefer combining the `error_callback` and `callback` so we can have an API like Gio already provides, so you could do:
```python
def on_test_finish(result):
    try:
        ret = result.finish()
    except FooBar as e:
        ...

async_test = AsyncTest()
async_test.test(callback=on_test_finish)
```
I had the same thought. Something like this, then? https://gist.github.com/JasonLG1979/a2bc6df6048e141353ebb0679a77abb2
I don't know if it's by design or an implementation detail, but using `threading.get_ident()` shows that every step of the way is executed in the main thread. I wasn't sure whether under the hood it spins up worker threads. But anyway, that means it's basically thread-safe "threading". That in and of itself makes it worth it. No more worrying about touching UI code from an outside thread.
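That observation can be reproduced in a few lines (a sketch, not from the gist): coroutine steps and done-callbacks all run on the thread that runs the loop.

```python
import asyncio
import threading

MAIN_IDENT = threading.get_ident()

async def step():
    # every await resumes on the loop's thread -- the main thread here
    assert threading.get_ident() == MAIN_IDENT
    await asyncio.sleep(0)
    assert threading.get_ident() == MAIN_IDENT

def on_done(task):
    # done-callbacks also run on the loop's thread
    print('callback on main thread:', threading.get_ident() == MAIN_IDENT)

async def main():
    task = asyncio.ensure_future(step())
    task.add_done_callback(on_done)
    await task
    await asyncio.sleep(0)  # let the done-callback run

asyncio.run(main())
```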
> I wasn't sure whether under the hood it spins up worker threads.
Well, `libsoup` or other `Gio` APIs will spawn threads in C, but our usage wouldn't use Python threads.
> I had the same thought. Something like this, then? https://gist.github.com/JasonLG1979/a2bc6df6048e141353ebb0679a77abb2
Not quite. I think we should make a custom return value that raises the exception itself.
Btw you can do `kwargs.pop()` rather than `.get()` then `del`.
> Well, `libsoup` or other `Gio` APIs will spawn threads in C, but our usage wouldn't use Python threads.
All that stuff is thread-safe for the most part as far as I know. I mean in general, as far as asyncio goes, I wasn't sure if it actually started threads in the background or used some version or variation of a green thread.
> Not quite. I think we should make a custom return value that raises the exception itself.
You want to raise an exception in a decorator?
> Btw you can do `kwargs.pop()` rather than `.get()` then `del`.
I can do that.
Wouldn't you want the callback to handle the exception itself? I kinda like the idea of just handing the callback the `asyncio.Task` and letting it decide what to do with it?
I want the exception in the callback yes but I want the return value to actually raise the exception.
That callback is not for actual use. It's just an example so something would show up in the documentation. Much like https://lazka.github.io/pgi-docs/#Gio-2.0/callbacks.html#Gio.AsyncReadyCallback. You wouldn't actually use `Gio.AsyncReadyCallback`. It really depends on the circumstances at the time what you'd want to do in case of exceptions.
> I want the exception in the callback yes but I want the return value to actually raise the exception.
There's no need to do that. When an exception happens in the task the callback is immediately called and trying to get the result re-raises the exception.
I updated that last gist with some test code to show what I mean.
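The behavior described can be reproduced with a short sketch (the function names are made up; the gist has the actual test code): the done-callback fires once the task fails, and `task.result()` re-raises the exception there.

```python
import asyncio

async def boom():
    raise ValueError('oops')

def on_done(task):
    try:
        task.result()  # re-raises the exception that happened in the task
    except ValueError as e:
        print('caught in callback:', e)

async def main():
    task = asyncio.ensure_future(boom())
    task.add_done_callback(on_done)
    try:
        await task
    except ValueError:
        pass
    await asyncio.sleep(0)  # let the done-callback run

asyncio.run(main())
```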
> Btw you can do `kwargs.pop()` rather than `.get()` then `del`.
`kwargs.pop()` doesn't work if the keyword is not passed, so `task = foo()` would fail. You would have to explicitly do `task = foo(callback=None, userdata=None)`, which kinda defeats the purpose of optional kwargs.

Edit: Derp, `dict.pop()` can take a default value... Never mind.
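For the record, the two-argument form sidesteps the problem entirely (a sketch; `foo` and its kwargs are made-up names):

```python
def foo(**kwargs):
    # pop() with a default replaces .get() followed by del,
    # and works even when the keyword was never passed
    callback = kwargs.pop('callback', None)
    userdata = kwargs.pop('userdata', None)
    return callback, userdata, kwargs

print(foo())                         # defaults kick in with no kwargs
print(foo(callback=print, extra=1))  # consumed keys are removed from kwargs
```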
> When an exception happens in the task the callback is immediately called and trying to get the result re-raises the exception.
Ok that is the behavior I wanted.
> Ok that is the behavior I wanted.
Well there you go, lol!
We would also need to handle canceled tasks. That's particularly useful for Pandora searches, where each keystroke invalidates the previous one.
I've updated the gist to show cancellation handling.
Calling `task.cancelled()` doesn't seem to re-raise the exception, but really, if you've cancelled a task I wouldn't think you'd care if there was an exception anyway?
In GLib cancellation does return a cancellation error, I think it can be useful.
It does in asyncio also, if you try to get the result and the task has been canceled.
Oh, well that sounds perfect to me.
`except asyncio.CancelledError` will catch a cancelled task. So:
```python
def on_sleep_done(task, user_data=None):
    try:
        time = task.result()
    except asyncio.CancelledError:
        print('Cancelled task: {}'.format(task))
    except Exception as e:
        print('Exception in task {}'.format(e))
    else:
        print('Task completed after {} seconds'.format(time))
    if user_data:
        cancel_all_and_stop_loop()
```
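The cancel-then-re-raise behavior above can be verified in isolation (a runnable sketch, separate from the gist):

```python
import asyncio

async def main():
    task = asyncio.ensure_future(asyncio.sleep(10))
    task.cancel()
    try:
        await task  # getting the result of a cancelled task re-raises
    except asyncio.CancelledError:
        print('getting the result raised CancelledError')
    print('cancelled:', task.cancelled())

asyncio.run(main())
```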
:+1:
@TingPing I'm still trying to get useful error info from the tasks. The problem is that `task.result()` re-raises the exception, but as far as the task is concerned the exception happened at the point in the code where `task.result()` was called, and you get no real indication of where it really happened. The best I can come up with is to catch the exception in the async function, attach the traceback, and re-raise the exception with the traceback.
Something like this:
```python
@queue_asyncio_task
async def sleep(time):
    try:
        await asyncio.sleep(time)
    except Exception as e:
        e.traceback = traceback.format_exc()
        raise e
    else:
        return time

...

def on_sleep_done(task, user_data=None):
    try:
        time = task.result()
    except asyncio.CancelledError:
        print('Cancelled task: {}'.format(task))
    except Exception as e:
        print('Exception in task:\n{}'.format(e.traceback))
    else:
        print('Task completed after {} seconds'.format(time))
    if user_data:
        cancel_all_and_stop_loop()
```
Might just have to live with that for now.
Especially while we're starting out, an actual traceback would be helpful.
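One possible alternative to attaching the traceback manually: the exception object a task stores keeps its original traceback on `__traceback__`, so the callback can format it itself with the stdlib `traceback` module. A sketch (names made up):

```python
import asyncio
import traceback

async def fail():
    raise RuntimeError('boom')

def on_done(task):
    if task.cancelled():
        return
    exc = task.exception()  # returns the exception without re-raising it
    if exc is not None:
        # the original traceback survives on exc.__traceback__,
        # pointing at the raise inside fail(), not at result()
        tb = ''.join(traceback.format_exception(type(exc), exc, exc.__traceback__))
        print(tb)

async def main():
    task = asyncio.ensure_future(fail())
    task.add_done_callback(on_done)
    try:
        await task
    except RuntimeError:
        pass
    await asyncio.sleep(0)  # let the done-callback run

asyncio.run(main())
```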
I am by no means an expert with asyncio, but I'm finding it difficult to find a way to mix synchronous and asynchronous code without callbacks.
From what I understand, even with asyncio you still can't run more than one task at a time in the same event loop without "queuing it" at least, which involves callbacks if you care about the result when it's actually done.
This is the best I can come up with: https://gist.github.com/JasonLG1979/2f2087c49f0a78328dc6a84a63bf9e72
It's basically a function/method for queuing async tasks that you want the result of, and a decorator for async functions where you don't care about the return value. It makes it possible to mix synchronous and asynchronous code.
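The gist is the actual implementation; the general shape described might look something like this (a sketch; `queue_task` and `fire_and_forget` are hypothetical names):

```python
import asyncio
import functools

def queue_task(coro, callback=None):
    # schedule a coroutine on the running loop; the optional
    # callback receives the finished asyncio.Task
    task = asyncio.ensure_future(coro)
    if callback is not None:
        task.add_done_callback(callback)
    return task

def fire_and_forget(func):
    # decorator for async functions whose return value we don't care about
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return asyncio.ensure_future(func(*args, **kwargs))
    return wrapper

@fire_and_forget
async def log(msg):
    await asyncio.sleep(0)
    print('logged:', msg)

async def main():
    results = []
    queue_task(asyncio.sleep(0, result=42),
               callback=lambda t: results.append(t.result()))
    log('hello')                # scheduled, result ignored
    await asyncio.sleep(0.05)   # let both tasks finish
    print('results:', results)

asyncio.run(main())
```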