vetter / shoc

The SHOC Benchmark Suite

Cosmetic differences in output format #72

Open · cponder opened this issue 3 years ago

cponder commented 3 years ago

There are some minor inconsistencies in how the units are formatted that ought to be easy to fix:

 Running benchmark BusSpeedDownload
     result for bspeed_download:                 25.0373 GB/sec
 Running benchmark BusSpeedReadback
     result for bspeed_readback:                 26.3023 GB/sec
 Running benchmark DeviceMemory
     result for gmem_readbw:                   3098.5600 GB/s
     result for gmem_readbw_strided:            607.3330 GB/s
     result for gmem_writebw:                  2252.4600 GB/s
     result for gmem_writebw_strided:           161.8770 GB/s
     result for lmem_readbw:                  14329.3000 GB/s
     result for lmem_writebw:                 16107.4000 GB/s
     result for tex_readbw:                    1586.6800 GB/sec

where "GB/s" and "GB/sec" are both used, and similarly

 Running benchmark KernelCompile
     result for ocl_kernel:                       0.0876 sec
 Running benchmark QTC
     result for qtc:                              1.5015 s
     result for qtc_kernel:                       1.1845 s

I'd be inclined to use "sec" (and "msec") everywhere.
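A minimal sketch of one way to unify the spellings, assuming a single helper is called wherever a result's unit label is set (the name NormalizeUnit and the mapping are hypothetical, not existing SHOC code):

    #include <map>
    #include <string>

    // Hypothetical helper: map the unit spellings seen in the output above
    // onto one canonical form, so every result line prints the same label.
    // Unknown units pass through unchanged.
    static std::string NormalizeUnit(const std::string& unit)
    {
        static const std::map<std::string, std::string> canonical = {
            { "GB/s", "GB/sec" },
            { "s",    "sec"    },
            { "ms",   "msec"   },
        };
        auto it = canonical.find(unit);
        return (it != canonical.end()) ? it->second : unit;
    }

Calling something like this once, at the point where the unit string is stored, would make the "GB/s" / "GB/sec" and "s" / "sec" lines above come out identically.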

cponder commented 3 years ago

Another variation is in the order of output between the single-GPU case

 Running benchmark Spmv
     result for spmv_csr_scalar_sp:              98.8228 Gflop/s
     result for spmv_csr_scalar_sp_pcie:          3.1424 Gflop/s
     result for spmv_csr_scalar_dp:              90.6637 Gflop/s
     result for spmv_csr_scalar_dp_pcie:          2.0450 Gflop/s
     result for spmv_csr_scalar_pad_sp:          88.7928 Gflop/s
     result for spmv_csr_scalar_pad_sp_pcie:      3.3503 Gflop/s
     result for spmv_csr_scalar_pad_dp:          81.4095 Gflop/s
     result for spmv_csr_scalar_pad_dp_pcie:      2.8388 Gflop/s
     result for spmv_csr_vector_sp:             135.6520 Gflop/s
     result for spmv_csr_vector_sp_pcie:          3.1697 Gflop/s
     result for spmv_csr_vector_dp:             118.4620 Gflop/s
     result for spmv_csr_vector_dp_pcie:          2.0329 Gflop/s
     result for spmv_csr_vector_pad_sp:         146.2390 Gflop/s
     result for spmv_csr_vector_pad_sp_pcie:      3.3713 Gflop/s
     result for spmv_csr_vector_pad_dp:         133.8620 Gflop/s
     result for spmv_csr_vector_pad_dp_pcie:      2.8474 Gflop/s
     result for spmv_ellpackr_sp:               117.4200 Gflop/s
     result for spmv_ellpackr_dp:                93.4976 Gflop/s

and the multi-GPU case:

 Running benchmark Spmv
     result for spmv_csr_scalar_sp:             101.0680 Gflop/s
     result for spmv_csr_vector_sp:             140.4280 Gflop/s
     result for spmv_ellpackr_sp:               119.4360 Gflop/s
     result for spmv_csr_scalar_dp:              91.7853 Gflop/s
     result for spmv_csr_vector_dp:             121.0850 Gflop/s
     result for spmv_ellpackr_dp:                94.9549 Gflop/s

I can see that the former groups the list by kernel (scalar / vector / ellpackr) while the latter groups it by precision (single / double). Maybe that's more intuitive for reading, but it also makes it harder to line up the results. I'd be inclined to order the latter as follows (see the sketch after this listing):

 Running benchmark Spmv
     result for spmv_csr_scalar_sp:             101.0680 Gflop/s
     result for spmv_csr_scalar_dp:              91.7853 Gflop/s
     result for spmv_csr_vector_sp:             140.4280 Gflop/s
     result for spmv_csr_vector_dp:             121.0850 Gflop/s
     result for spmv_ellpackr_sp:               119.4360 Gflop/s
     result for spmv_ellpackr_dp:                94.9549 Gflop/s
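A sketch of one way to get that ordering, assuming the result names are collected before printing. The helper names and the sort-by-suffix approach are hypothetical; only the _sp / _dp suffix convention is taken from the listings above:

    #include <algorithm>
    #include <string>
    #include <utility>
    #include <vector>

    // Hypothetical sort key: strip the trailing precision suffix so results
    // group by kernel, and order single precision before double within each
    // kernel.  For the spmv names above, alphabetical kernel order
    // (csr_scalar, csr_vector, ellpackr) happens to match the single-GPU
    // grouping.
    static std::pair<std::string, int> OrderKey(const std::string& name)
    {
        if (name.size() > 3 && name.compare(name.size() - 3, 3, "_sp") == 0)
            return { name.substr(0, name.size() - 3), 0 };
        if (name.size() > 3 && name.compare(name.size() - 3, 3, "_dp") == 0)
            return { name.substr(0, name.size() - 3), 1 };
        return { name, 2 };
    }

    // Reorder result names in place; stable_sort preserves the original
    // relative order of anything the key does not distinguish.
    static void OrderLikeSingleGPU(std::vector<std::string>& names)
    {
        std::stable_sort(names.begin(), names.end(),
                         [](const std::string& a, const std::string& b)
                         { return OrderKey(a) < OrderKey(b); });
    }

Applied to the multi-GPU names above, this gives the scalar / vector / ellpackr grouping shown, with sp ahead of dp within each kernel.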