Previously, while waiting for a GPU kernel to complete, the runtime called suspend_thread(), which both decremented the active thread count and set the current func to "wait for parallel tasks". This PR now only decrements the active thread count, so the current function remains the Func that the kernel is actually producing. As a result, you can see the time spent on that kernel while also seeing that it was done by 0 threads, indicating it was the GPU that took care of it.
Also, don't report the "wait for parallel tasks" entry if it doesn't contain any time, such as when it is pure overhead.
Additional drive-by typo fix.