Feature description
Create async backtrace and async backtrace all commands.
async backtrace - print the future stack of every currently executing async task (the number of currently executing tasks is <= the number of async workers) by decomposing what, exactly, each task is waiting for
async backtrace all - print the state of all async workers (every worker with all of its tasks, each with its full future stack)
Example (async backtrace):
Async worker #2 (pid: 388235)
Current task: 3
#0 async fn tokioticker2::new_ticker_task
suspended at await point 0
#1 async fn tokioticker2::new_ticker_task_inner1
just created
Async worker #3 (pid: 388236)
Current task: 7
#0 async fn tokioticker2::new_ticker_task
suspended at await point 0
#1 async fn tokioticker2::new_ticker_task_inner1
suspended at await point 0
#2 future tokio::time::sleep::Sleep
Implementation notes
For now we will focus on the tokio async runtime.
The "roots" of our backtrace are the tokio tasks. Any suspended task is waiting on some future, that future waits on another, and so on. The place where we can find the root futures (tasks) is the local queue (TODO: maybe not only the local queues, needs research) of each worker (a worker is a system thread). The place where we can find the currently running task is the frame where the run_task function executes.
Unfortunately, tokio does not expose any explicit metadata for debuggers, so there is no easy way to get the addresses of the task queues in memory. Let's describe possible solutions:
1. Set breakpoints in task constructors/destructors and store task information at these points
This solution is good for a tokio oracle because it gives us "real-time" information, which we can visualize in the TUI interface. But it also has a big overhead - we stop the whole program every time a task is created or dropped.
2. Set breakpoints in the tokio runtime initialization process and save pointers to the task queues
This solution is better than the previous one because we set the breakpoints only once, so the overhead is minimal. But what if the debugee program does not use the tokio runtime? What should the reaction be to an error when setting a breakpoint? It looks like we need to check some symbols before starting the program to answer the question: "is the tokio runtime used?".
3. Walk down the call stack on every async backtrace command and find pointers to the task queues
When the user enters the async backtrace command, we can walk down the call stack until we reach the worker loop acting as the scheduler (currently this is the worker::Context::run function). Then we can inspect that worker's local queue. We do the same for all threads. Pointers to the local queues can be cached.
It looks like we should stick with solution number 3; it performs computations only in response to an async backtrace command and adds no overhead otherwise.