NVIDIA / gpu-monitoring-tools

Tools for monitoring NVIDIA GPUs on Linux
Apache License 2.0

nvidia-smi to report PCIe utilization % #215

Open · amrragab8080 opened 3 years ago

amrragab8080 commented 3 years ago

nvidia-smi has a query engine (the available fields are listed by --help-query-gpu), but PCIe bandwidth utilization, which nvidia-settings does report, is missing from it. The existing PCI-related fields are:

"pci.bus_id" or "gpu_bus_id"
PCI bus id as "domain:bus:device.function", in hex.
"pci.domain"
PCI domain number, in hex.
"pci.bus"
PCI bus number, in hex.
"pci.device"
PCI device number, in hex.
"pci.device_id"
PCI vendor device id, in hex
"pci.sub_device_id"
PCI Sub System id, in hex
"pcie.link.gen.current"
The current PCI-E link generation. These may be reduced when the GPU is not in use.
"pcie.link.gen.max"
The maximum PCI-E link generation possible with this GPU and system configuration. For example, if the GPU supports a higher PCIe generation than the system supports then this reports the system PCIe generation.
"pcie.link.width.current"
The current PCI-E link width. These may be reduced when the GPU is not in use.
"pcie.link.width.max"
The maximum PCI-E link width possible with this GPU and system configuration. For example, if the GPU supports a wider PCIe link than the system supports then this reports the system PCIe link width.
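For what it's worth, the link fields above map onto existing NVML calls, so the same information is already reachable programmatically. A minimal sketch using the pynvml bindings (assumes the NVIDIA driver and the pynvml / nvidia-ml-py package are installed; device index 0 is arbitrary):

```python
# Read the PCIe link fields listed above through NVML (pynvml bindings).
import pynvml

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)

    # PCI identity ("pci.bus_id"); older pynvml builds return bytes.
    bus_id = pynvml.nvmlDeviceGetPciInfo(handle).busId
    if isinstance(bus_id, bytes):
        bus_id = bus_id.decode()

    gen_cur = pynvml.nvmlDeviceGetCurrPcieLinkGeneration(handle)  # "pcie.link.gen.current"
    gen_max = pynvml.nvmlDeviceGetMaxPcieLinkGeneration(handle)   # "pcie.link.gen.max"
    width_cur = pynvml.nvmlDeviceGetCurrPcieLinkWidth(handle)     # "pcie.link.width.current"
    width_max = pynvml.nvmlDeviceGetMaxPcieLinkWidth(handle)      # "pcie.link.width.max"

    print(f"{bus_id}: Gen {gen_cur}/{gen_max}, width x{width_cur}/x{width_max}")
finally:
    pynvml.nvmlShutdown()
```

These calls only describe the link topology; none of them says how much of that bandwidth is actually being used.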

Can you add PCIe bandwidth utilization to nvidia-smi's query subsystem?
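Until such a field exists, one possible workaround is to read the PCIe throughput counters NVML already exposes (nvmlDeviceGetPcieThroughput, which is also what the rxpci/txpci columns of nvidia-smi dmon -s t report) and divide by the theoretical bandwidth of the current link. A rough sketch via pynvml; the per-lane bandwidth figures below are approximate assumptions of this sketch, not values returned by the driver:

```python
# Sketch: estimate PCIe bandwidth utilization (%) from NVML throughput counters.
import pynvml

# Approximate usable per-lane bandwidth per direction, in KB/s (Gen1-Gen5 only).
PER_LANE_KBPS = {
    1: 250_000,    # ~250 MB/s per lane
    2: 500_000,    # ~500 MB/s per lane
    3: 985_000,    # ~985 MB/s per lane
    4: 1_969_000,  # ~1.97 GB/s per lane
    5: 3_938_000,  # ~3.94 GB/s per lane
}

def pcie_utilization_percent(index: int = 0) -> dict:
    """Return estimated PCIe TX/RX utilization (%) for one GPU."""
    pynvml.nvmlInit()
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(index)
        gen = pynvml.nvmlDeviceGetCurrPcieLinkGeneration(handle)
        width = pynvml.nvmlDeviceGetCurrPcieLinkWidth(handle)

        # nvmlDeviceGetPcieThroughput samples for roughly 20 ms and reports KB/s.
        tx = pynvml.nvmlDeviceGetPcieThroughput(handle, pynvml.NVML_PCIE_UTIL_TX_BYTES)
        rx = pynvml.nvmlDeviceGetPcieThroughput(handle, pynvml.NVML_PCIE_UTIL_RX_BYTES)

        max_kbps = PER_LANE_KBPS[gen] * width
        return {
            "link": f"Gen{gen} x{width}",
            "tx_pct": 100.0 * tx / max_kbps,
            "rx_pct": 100.0 * rx / max_kbps,
        }
    finally:
        pynvml.nvmlShutdown()

if __name__ == "__main__":
    print(pcie_utilization_percent(0))
```

For dashboards built on this repo, DCGM also exposes PCIe TX/RX byte counters (DCGM_FI_PROF_PCIE_TX_BYTES / DCGM_FI_PROF_PCIE_RX_BYTES, if I'm not mistaken), so dcgm-exporter may be another way to get this metric without changes to nvidia-smi.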