-
There are important PyTorch-specific markers generated during the training loop, either via NVTX or through other means from PyTorch Lightning.
It would be useful to show the timeline view with execution…
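To illustrate what such markers capture, here is a minimal, hypothetical sketch of the NVTX-style push/pop semantics (mirroring `torch.cuda.nvtx.range_push` / `range_pop`) as a plain-Python range recorder; the class and phase names are illustrative, not Lightning's actual implementation:

```python
import time

# Illustrative sketch only: an NVTX-style nested range recorder,
# mimicking the push/pop markers emitted around training-loop phases.
class RangeRecorder:
    def __init__(self):
        self._stack = []   # currently open ranges: (name, start_time)
        self.events = []   # closed ranges: (name, start, end)

    def range_push(self, name):
        self._stack.append((name, time.perf_counter()))

    def range_pop(self):
        name, start = self._stack.pop()
        self.events.append((name, start, time.perf_counter()))

rec = RangeRecorder()
rec.range_push("training_step")
rec.range_push("forward")
rec.range_pop()
rec.range_push("backward")
rec.range_pop()
rec.range_pop()

# Inner ranges close first, so they appear before the enclosing range.
print([name for name, _, _ in rec.events])
# -> ['forward', 'backward', 'training_step']
```

A timeline viewer (e.g. one fed by real NVTX ranges) would render these nested intervals as stacked bars per phase.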
-
```
To use the gperftools heap-profiler, we have to use the tcmalloc memory
allocator with it. But in essence, they're independent.
I'm now trying to make the gperftools heap-profiler available with…
```
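For context, a typical gperftools heap-profiler invocation preloads tcmalloc (which bundles the profiler) and enables it via an environment variable; the library path and program name below are illustrative and vary by distribution:

```shell
# Preload tcmalloc without relinking the binary; the .so path varies by distro.
LD_PRELOAD=/usr/lib/libtcmalloc.so \
HEAPPROFILE=/tmp/myprog.hprof \
./myprog

# Analyze the heap profile dumps the run produces with pprof.
pprof --text ./myprog /tmp/myprog.hprof.0001.heap
```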
-
```
I would like to use the mini-profiler on a single-page application. It already
shows timings for every AJAX request and the initial request. But I would like
to add timings for pure client-side co…
```
-
## 🐛 Bug
Using the autograd profiler with dist autograd may produce misleading results, since it profiles events that occur in separate threads and are not part of the functions that are exe…
-
In particular, their construction in `.nioBuffer`. Maybe we can construct reusable `ArrowBufPointer`s for the intermediate representation instead, if we can manage their lifecycle?
-
### What software would you like us to add to wolfi-os. Ideally include a URL to the project and its source.
https://github.com/async-profiler/async-profiler
### Which versions of the software shoul…
-
### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
tf 2.11.0
### Custom code
Yes
### OS platform and distribution
Linux …
-
# Description
I’m using torch.compile with DistributedModelParallel. Since torch.compile can speed up PyTorch distributed models, I would expect faster inference times. However, it takes…