XuehaiPan / nvitop

An interactive NVIDIA-GPU process viewer and beyond, the one-stop solution for GPU process management.
https://nvitop.readthedocs.io
Apache License 2.0

[Enhancement] Monitoring % of tensor cores #35

Closed johnnynunez closed 2 years ago

johnnynunez commented 2 years ago

NVIDIA told me to use the NVIDIA profiler (Nsight Compute or nvprof) to monitor the tensor cores. But could you add this to this great tool, so I can know whether my RTX 3090 is really using its tensor cores?

https://developer.nvidia.com/blog/using-nsight-compute-nvprof-mixed-precision-deep-learning-models/

XuehaiPan commented 2 years ago

Hi @johnnynunez, nvitop is built on top of the NVIDIA Management Library (NVML), which is instantly usable after installing the NVIDIA driver. The only NVML APIs for getting GPU utilization rates are the following (see the sketch after this list):

Per device: nvmlDeviceGetUtilizationRates

Per process: nvmlDeviceGetProcessUtilization
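
For illustration, here is a minimal sketch of querying those two APIs through the pynvml bindings (nvidia-ml-py); this is not nvitop's actual implementation, and the device index 0 and the one-second sampling window are arbitrary choices:

import time

import pynvml  # official NVML Python bindings (nvidia-ml-py)

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)

    # Per-device utilization: percentage of time the SMs and the memory
    # interface were busy over the last sampling interval.
    util = pynvml.nvmlDeviceGetUtilizationRates(handle)
    print(f'device SM util: {util.gpu}%  memory util: {util.memory}%')

    # Per-process utilization samples since the given CPU timestamp (in
    # microseconds); each sample carries pid, smUtil, memUtil, encUtil, decUtil.
    since = int((time.time() - 1.0) * 1_000_000)
    try:
        for sample in pynvml.nvmlDeviceGetProcessUtilization(handle, since):
            print(f'pid={sample.pid} sm={sample.smUtil}% mem={sample.memUtil}%')
    except pynvml.NVMLError:
        pass  # e.g. no samples available in the requested window
finally:
    pynvml.nvmlShutdown()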

nvitop does provide per-process GPU utilization in the %SM column. I found this blog post:

[Figure: The GA100 streaming multiprocessor (SM).]

It says:

A100 GPU streaming multiprocessor

The new streaming multiprocessor (SM) in the NVIDIA Ampere architecture-based A100 Tensor Core GPU significantly increases performance, builds upon features introduced in both the Volta and Turing SM architectures, and adds many new capabilities.

An SM unit consists of multiple tensor cores. Does this resolve your request?

NVML can only retrieve the overall SM (streaming multiprocessor) usage, not fine-grained details such as tensor core activity. If you want to profile your program, I think using nvprof, as NVIDIA documents, is the best practice.

XuehaiPan commented 2 years ago

Closing due to inactivity. Please feel free to ask for a reopening.

johnnynunez commented 1 year ago

Hi @XuehaiPan, PyTorch has the capability to watch the tensor core percentage. Is it possible to use that here?

XuehaiPan commented 1 year ago

Hi @johnnynunez, the PyTorch Kineto library calculates the tensor core ratio from the kernel times.

https://github.com/pytorch/kineto/blob/6e81ce05c4d9898194fc5432624242cb47a77050/tb_plugin/torch_tb_profiler/profiler/tensor_cores_parser.py#L16-L45

That requires users to explicitly modify their code to register event callbacks:

import torch

with torch.profiler.profile(
    activities=[
        torch.profiler.ProfilerActivity.CPU,
        torch.profiler.ProfilerActivity.CUDA,
    ]
) as p:
    code_to_profile()  # the user's own workload to be profiled
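
The ratio could then be estimated by post-processing the profiler result. The sketch below is not Kineto's actual logic: the substring patterns are an assumed, simplified subset of the kernel-name list in tensor_cores_parser.py, and p is the profiler object from the snippet above.

# Rough estimate of the share of CUDA kernel time spent in kernels whose
# names suggest Tensor Core usage. The patterns are a simplified, assumed
# subset; the authoritative list lives in Kineto's tensor_cores_parser.py.
TENSOR_CORE_PATTERNS = ('884', '1688', 'hmma', 'wmma')

total_time = 0
tensor_core_time = 0
for evt in p.key_averages():
    kernel_time = evt.self_cuda_time_total  # microseconds
    total_time += kernel_time
    if any(pattern in evt.key.lower() for pattern in TENSOR_CORE_PATTERNS):
        tensor_core_time += kernel_time

if total_time > 0:
    print(f'estimated Tensor Core time ratio: {tensor_core_time / total_time:.1%}')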

I don't think there is anything we can do in nvitop, which is a monitoring tool rather than a profiler. A profiler needs in-process injection into the user program, whereas nvitop is based on the NVML library and runs in a separate process.