Judging by nvbench's output, the global memory bandwidth and bandwidth utilization appear to be calculated from GPU Time.
Below is an excerpt from a recent benchmark. There are two runs:

- the first runs the kernel once
- the second runs the kernel 1000 times in a CUDA graph
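For context, the benchmark is shaped roughly like the sketch below; the kernel, problem size, and axis handling are illustrative assumptions, not the actual benchmark source. With a plain `state.exec`, nvbench reports both timing columns shown in the table further down.

```cpp
// Rough sketch of the benchmark shape (assumptions: copy_kernel, the
// problem size, and the grid configuration are illustrative placeholders).
#include <nvbench/nvbench.cuh>

__global__ void copy_kernel(const float* in, float* out, int n)
{
  const int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) { out[i] = in[i]; }
}

void graph_bench(nvbench::state& state)
{
  const int n      = 1 << 22;
  const int grid   = (n + 255) / 256;
  const auto iters = state.get_int64("iters_per_graph");

  float* in{};
  float* out{};
  cudaMalloc(&in, n * sizeof(float));
  cudaMalloc(&out, n * sizeof(float));

  // Bytes per measurement: one launch when iters == 0, otherwise
  // `iters` launches replayed from a single graph.
  const auto launches = (iters == 0) ? 1 : iters;
  state.add_global_memory_reads<float>(n * launches);
  state.add_global_memory_writes<float>(n * launches);

  if (iters == 0)
  {
    // Plain launch: each measurement times one kernel.
    state.exec([=](nvbench::launch& launch) {
      copy_kernel<<<grid, 256, 0, launch.get_stream()>>>(in, out, n);
    });
  }
  else
  {
    // Capture `iters` launches into a graph; each measurement replays it.
    cudaStream_t capture{};
    cudaStreamCreate(&capture);
    cudaGraph_t graph{};
    cudaStreamBeginCapture(capture, cudaStreamCaptureModeGlobal);
    for (nvbench::int64_t i = 0; i < iters; ++i)
    {
      copy_kernel<<<grid, 256, 0, capture>>>(in, out, n);
    }
    cudaStreamEndCapture(capture, &graph);
    cudaGraphExec_t exec{};
    cudaGraphInstantiate(&exec, graph, 0); // CUDA 12 signature

    state.exec([=](nvbench::launch& launch) {
      cudaGraphLaunch(exec, launch.get_stream());
    });
  }

  cudaFree(in);
  cudaFree(out);
}
NVBENCH_BENCH(graph_bench).add_int64_axis("iters_per_graph", {0, 1000});
```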
The Batch GPU time per kernel launch is similar in both runs (~10 us each, once the second row's total is divided by its 1000 iterations), but the GPU Time per kernel differs (~15 us in the first row vs. ~9 us in the second). If the bandwidth were calculated from Batch GPU time, we would therefore expect roughly the same bandwidth number in both rows. Instead, the reported bandwidth differs by a factor of ~1.6, which is exactly the ratio of the GPU Times (14.961 us vs. 9.333 us per kernel); a quick arithmetic check below the table confirms this.
| iters_per_graph | Samples | CPU Time | Noise | GPU Time | Noise | GlobalMem BW | BWUtil | Batch Samples | Batch GPU Time |
|-----------------|---------|----------|-------|----------|-------|--------------|--------|---------------|----------------|
| 0 | 33424x | 24.909 us | 66.85% | 14.961 us | 2.95% | 1.963 TB/s | 23.96% | 48703x | 10.267 us |
| 1000 | 54x | 9.343 ms | 0.13% | 9.333 ms | 0.07% | 3.146 TB/s | 38.41% | 56x | 9.310 ms |
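To check this, one can invert row 0 to infer the per-launch byte count (an assumption, since the benchmark's actual element count isn't shown in the excerpt) and recompute both rows from each timing column:

```cpp
// Sanity check of the table above. The per-launch byte count is inferred
// by inverting row 0 (an assumption; the real element count isn't shown).
#include <cstdio>

int main()
{
  // bytes = BW * time, from row 0: 1.963 TB/s * 14.961 us ~= 29.4 MB
  const double bytes = 1.963e12 * 14.961e-6;

  // Row 0: one kernel per measurement.
  std::printf("row 0 via GPU Time:  %.3f TB/s\n", bytes / 14.961e-6 / 1e12);         // 1.963
  std::printf("row 0 via Batch GPU: %.3f TB/s\n", bytes / 10.267e-6 / 1e12);         // ~2.86

  // Row 1: 1000 kernels per graph, so divide the totals by 1000.
  std::printf("row 1 via GPU Time:  %.3f TB/s\n", bytes / (9.333e-3 / 1000) / 1e12); // ~3.15
  std::printf("row 1 via Batch GPU: %.3f TB/s\n", bytes / (9.310e-3 / 1000) / 1e12); // ~3.15
  return 0;
}
```

Dividing by GPU Time reproduces the reported 1.963 TB/s and 3.146 TB/s (to rounding), while dividing by Batch GPU time would give ~2.86 TB/s and ~3.15 TB/s, which supports the reading that GlobalMem BW and BWUtil are derived from GPU Time.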
It would be more accurate to calculate the bandwidth based on the Batch GPU time when it is available.
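A minimal sketch of that selection logic, assuming a reporting step that has both timings available (the struct and field names are hypothetical, not nvbench's actual internals):

```cpp
// Hypothetical sketch of the proposed change (not nvbench's actual
// internals): prefer Batch GPU time for the bandwidth columns when
// a batch measurement was collected, and fall back to GPU Time.
#include <optional>

struct measurement
{
  double gpu_time_s;                      // "GPU Time" per measurement
  std::optional<double> batch_gpu_time_s; // "Batch GPU" time, if measured
};

// Time used to derive GlobalMem BW and BWUtil from the byte count.
inline double bandwidth_time(const measurement& m)
{
  return m.batch_gpu_time_s.value_or(m.gpu_time_s);
}

// e.g. global_mem_bw_bytes_per_s = bytes_per_measurement / bandwidth_time(m);
```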