krrishnarraj / clpeak

A tool which profiles OpenCL devices to find their peak capacities
Apache License 2.0

results for NVIDIA GeForce RTX 4090 (overclocked) in Windows 11 #108

Open moyang opened 1 year ago

moyang commented 1 year ago

Core: +200 MHz, VRAM: +1000 MHz, power limit: 600 W

Platform: NVIDIA CUDA
  Device: NVIDIA GeForce RTX 4090
    Driver version  : 531.61 (Win64)
    Compute units   : 128
    Clock frequency : 2520 MHz

Global memory bandwidth (GBPS)
  float   : 954.06
  float2  : 983.28
  float4  : 1001.36
  float8  : 1013.59
  float16 : 1017.66

Single-precision compute (GFLOPS)
  float   : 90262.02
  float2  : 85753.74
  float4  : 90346.06
  float8  : 89091.80
  float16 : 89121.65

No half precision support! Skipped

Double-precision compute (GFLOPS)
  double   : 1496.53
  double2  : 1494.58
  double4  : 1488.91
  double8  : 1482.93
  double16 : 1470.34

Integer compute (GIOPS)
  int   : 46283.53
  int2  : 46459.81
  int4  : 45872.11
  int8  : 46332.95
  int16 : 46330.86

Integer compute Fast 24bit (GIOPS)
  int   : 46572.11
  int2  : 46336.07
  int4  : 46324.10
  int8  : 46139.03
  int16 : 45105.82

Transfer bandwidth (GBPS)
  enqueueWriteBuffer              : 20.85
  enqueueReadBuffer               : 20.48
  enqueueWriteBuffer non-blocking : 20.84
  enqueueReadBuffer non-blocking  : 20.46
  enqueueMapBuffer(for read)      : 9.07
    memcpy from mapped ptr        : 28.45
  enqueueUnmap(after write)       : 26.86
    memcpy to mapped ptr          : 28.05

Kernel launch latency : 9.42 us

RhynarAI commented 1 year ago

Thanks for sharing. I wonder why FP16 is shown as "no support"... Ada Lovelace has FP16 support.

moyang commented 1 year ago

I guess it means no "native support" for half-precision. FWIW, Ada emulates FP16 using FP32, hence FP16 and FP32 have the same TFLOPS. In contrast, recent AMD architectures (RDNA, CDNA) have 2x the FP16 performance of FP32.
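For reference, clpeak appears to gate its half-precision test on whether the device advertises the cl_khr_fp16 extension, and NVIDIA's OpenCL driver does not report it on consumer GPUs. A minimal sketch of the same check (single platform/device assumed, error handling omitted):

```c
/* Sketch: query CL_DEVICE_EXTENSIONS and look for cl_khr_fp16.
 * Assumes one platform with at least one GPU; no error handling. */
#include <stdio.h>
#include <string.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    char extensions[8192];

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
    clGetDeviceInfo(device, CL_DEVICE_EXTENSIONS,
                    sizeof(extensions), extensions, NULL);

    printf("cl_khr_fp16 %s\n",
           strstr(extensions, "cl_khr_fp16") ? "advertised" : "not advertised");
    return 0;
}
```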

RhynarAI commented 1 year ago

Yeah, that's probably the reason. But it is still misleading, because one can of course do FP16 on Ada, and the Tensor Cores do have native FP16 support, so it's not like the card can't process or store FP16/BF16. Is Tensor Core support possible in OpenCL?
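On the storage point: even without cl_khr_fp16, core OpenCL C lets you keep buffers in FP16 via the vload_half/vstore_half built-ins, which convert between half in memory and float in registers; only half arithmetic requires the extension. A minimal kernel sketch (names are illustrative):

```c
// OpenCL C kernel: FP16 storage without cl_khr_fp16.
// vload_half/vstore_half are core built-ins; the math itself runs in FP32.
__kernel void scale_half(__global const half *in,
                         __global half *out,
                         float factor) {
    size_t i = get_global_id(0);
    float x = vload_half(i, in);       // half -> float load
    vstore_half(x * factor, i, out);   // float -> half store
}
```

This halves memory traffic relative to float buffers, which is often the main benefit of FP16 on hardware whose non-tensor FP16 ALU rate matches FP32.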