Open jxmmy7777 opened 1 year ago
@jxmmy7777 Excuse me, I ran into the same problem when profiling inference. Did you manage to fix it?
@jxmmy7777 @UTokyoChenYe I ran into the same problem. My JSON file is about 1.3 GB, and it doesn't work even when I use export_to_chrome instead.
Hi, any update on this issue?
Hi @kvignesh1420 @idontkonwher @UTokyoChenYe, I haven't found a good solution yet. My current workaround is to keep the trace file as small as possible by reducing the number of active/warm-up steps. Otherwise, I fall back to a simpler profiler for performance profiling.
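Roughly what I mean by reducing the active/warm-up steps, as a minimal sketch with plain torch.profiler (the step counts, log directory, and model are illustrative placeholders, not my actual setup):

```python
import torch
from torch.profiler import profile, schedule, tensorboard_trace_handler, ProfilerActivity

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(1024, 1024).to(device)   # placeholder model
activities = [ProfilerActivity.CPU] + ([ProfilerActivity.CUDA] if torch.cuda.is_available() else [])

# Keep the recorded window tiny: 1 wait, 1 warm-up, 2 active steps, recorded once.
prof = profile(
    activities=activities,
    schedule=schedule(wait=1, warmup=1, active=2, repeat=1),
    on_trace_ready=tensorboard_trace_handler("./tb_logs/profiler"),
    profile_memory=False,   # memory events inflate the trace considerably
    with_stack=False,       # stack capture also adds a lot of size
)

with prof:
    for step in range(6):                        # a few more steps than wait + warmup + active
        x = torch.randn(64, 1024, device=device)
        model(x).sum().backward()
        prof.step()                              # advance the profiler schedule
```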
@jxmmy7777 Thanks for your reply. I fixed my problem by reducing the size of the code block inside the profiler context.
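Roughly what I mean, as a minimal sketch (the model and the code left outside the context are placeholders, not my actual code):

```python
import torch
from torch.profiler import profile, ProfilerActivity

model = torch.nn.Linear(1024, 1024)              # placeholder model
batch = torch.randn(64, 1024)                    # placeholder batch; data loading stays outside

# Only the forward/backward of a single step sits inside the context,
# so far fewer events end up in the trace file.
with profile(activities=[ProfilerActivity.CPU]) as prof:
    loss = model(batch).sum()
    loss.backward()

prof.export_chrome_trace("small_trace.json")
```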
I tried reducing the block size in the profiler context, but with no luck: I still get a 1.9 GB torch_trace.json.
When using the PyTorch Profiler with TensorBoard, the generated trace files are too large (e.g., 1-2 GB for just 10 steps), causing TensorBoard to crash or hang.
To reproduce
Steps to reproduce the behavior:
1. Run training for a few steps with the profiler enabled, as in the sketch below; the produced trace file becomes excessively large.
2. Attempt to open the trace in TensorBoard.
3. TensorBoard crashes or becomes unresponsive when viewing the Trace or Memory tab.
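A minimal sketch of the kind of setup involved (the module, data, and step count are placeholders, not my actual training code). `profiler="pytorch"` enables Lightning's PyTorchProfiler, which wraps torch.profiler and exports a Chrome/TensorBoard-compatible trace JSON; that file is what balloons to gigabytes in the real run:

```python
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset

# Placeholder module: the real model is larger, but any LightningModule shows the setup.
class ToyModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(1024, 1024)

    def training_step(self, batch, batch_idx):
        (x,) = batch
        return self.layer(x).sum()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.01)

loader = DataLoader(TensorDataset(torch.randn(640, 1024)), batch_size=64)

trainer = pl.Trainer(max_steps=10, profiler="pytorch", logger=False)
trainer.fit(ToyModule(), loader)
```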
Expected behavior
The trace file should be of manageable size, or there should be a way to limit or chunk the file size to prevent such issues. Additionally, TensorBoard should handle large trace files more gracefully.
Environment:
PyTorch Lightning version: 1.9.0
Python version: 3.9.18
I have tried 1) disabling profile_memory and 2) reducing the active steps in the profiler schedule. However, the trace file is always more than 1 GB, which I can't view in TensorBoard. Can someone suggest alternatives for profiling?
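In case it helps, the lighter-weight fallback I'm considering is to skip the TensorBoard trace entirely and print an aggregated operator table instead (a minimal sketch; the model and step count are placeholders for my training code):

```python
import torch
from torch.profiler import profile, ProfilerActivity

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(1024, 1024).to(device)   # placeholder model
activities = [ProfilerActivity.CPU] + ([ProfilerActivity.CUDA] if torch.cuda.is_available() else [])

with profile(activities=activities, profile_memory=False) as prof:
    for _ in range(3):                            # just a few steps
        x = torch.randn(64, 1024, device=device)
        model(x).sum().backward()

# Aggregated per-operator summary instead of a multi-GB trace JSON.
sort_key = "self_cuda_time_total" if torch.cuda.is_available() else "self_cpu_time_total"
print(prof.key_averages().table(sort_by=sort_key, row_limit=20))
```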
Given the challenges with the current profiler, I am looking for alternative methods or tools to profile my PyTorch Lightning training. Suggestions or recommendations would be highly appreciated.