tinygrad / open-gpu-kernel-modules

NVIDIA Linux open GPU with P2P support

Getting ~40GB/s instead of 50+ in the example. Curious why #6

Open · teis-e opened 2 months ago

teis-e commented 2 months ago

NVIDIA Open GPU Kernel Modules Version

550.54.15-p2p default

Please confirm this issue does not happen with the proprietary driver (of the same version). This issue tracker is only for bugs specific to the open kernel driver.

Operating System and Version

TUXEDO OS 2 (Ubuntu 22.04 Fork)

Kernel Release

6.5.0-10022-tuxedo

Please confirm you are running a stable release kernel (e.g. not a -rc). We do not accept bug reports for unreleased kernels.

Hardware: GPU

3x RTX 4090

Describe the bug

The P2P is working, but I only get ~40 GB/s.

I'm curious why that is. Do I need to overclock?

All slots are running at PCIe Gen 4 x16.

Bidirectional P2P=Disabled Bandwidth Matrix (GB/s)
   D\D     0      1      2 
     0 918.31  25.76  25.91 
     1  25.95 923.67  25.60 
     2  26.15  25.68 923.74 
Bidirectional P2P=Enabled Bandwidth Matrix (GB/s)
   D\D     0      1      2 
     0 919.56  41.38  41.37 
     1  41.38 923.19  41.37 
     2  41.37  41.36 921.56 
 [./simpleP2P] - Starting...
Checking for multiple GPUs...
CUDA-capable device count: 3

Checking GPU(s) for support of peer to peer memory access...
> Peer access from NVIDIA GeForce RTX 4090 (GPU0) -> NVIDIA GeForce RTX 4090 (GPU1) : Yes
> Peer access from NVIDIA GeForce RTX 4090 (GPU0) -> NVIDIA GeForce RTX 4090 (GPU2) : Yes
> Peer access from NVIDIA GeForce RTX 4090 (GPU1) -> NVIDIA GeForce RTX 4090 (GPU0) : Yes
> Peer access from NVIDIA GeForce RTX 4090 (GPU1) -> NVIDIA GeForce RTX 4090 (GPU2) : Yes
> Peer access from NVIDIA GeForce RTX 4090 (GPU2) -> NVIDIA GeForce RTX 4090 (GPU0) : Yes
> Peer access from NVIDIA GeForce RTX 4090 (GPU2) -> NVIDIA GeForce RTX 4090 (GPU1) : Yes
Enabling peer access between GPU0 and GPU1...
Allocating buffers (64MB on GPU0, GPU1 and CPU Host)...
Creating event handles...
cudaMemcpyPeer / cudaMemcpy between GPU0 and GPU1: 21.06GB/s
Preparing host buffer and memcpy to GPU0...
Run kernel on GPU1, taking source data from GPU0 and writing to GPU1...
Run kernel on GPU0, taking source data from GPU1 and writing to GPU0...
Copy data back to host from GPU0 and verify results...
Disabling peer access...
Shutting down...
Test passed
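
For anyone who wants to poke at this outside the samples tree, here is a minimal sketch of the kind of measurement simpleP2P and p2pBandwidthLatencyTest perform. GPU indices 0/1, the 64 MB buffer, and the iteration count are assumptions, and error checking is omitted, so treat it as a rough harness rather than a faithful reimplementation of either sample:

```cpp
// Minimal P2P copy bandwidth sketch (assumes GPUs 0 and 1; no error checking).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int can01 = 0, can10 = 0;
    cudaDeviceCanAccessPeer(&can01, 0, 1);   // can GPU0 map GPU1's memory?
    cudaDeviceCanAccessPeer(&can10, 1, 0);
    printf("P2P 0->1: %d, 1->0: %d\n", can01, can10);
    if (!can01 || !can10) return 1;

    const size_t bytes = 64u << 20;          // 64 MB, as in simpleP2P
    void *buf0 = nullptr, *buf1 = nullptr;

    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);        // GPU0 -> GPU1 peer mapping
    cudaMalloc(&buf0, bytes);
    cudaSetDevice(1);
    cudaDeviceEnablePeerAccess(0, 0);        // GPU1 -> GPU0 peer mapping
    cudaMalloc(&buf1, bytes);

    cudaSetDevice(0);
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    const int iters = 100;
    cudaEventRecord(start);
    for (int i = 0; i < iters; ++i)
        cudaMemcpyPeerAsync(buf1, 1, buf0, 0, bytes);  // direct GPU0 -> GPU1 copy
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("Unidirectional GPU0 -> GPU1: %.2f GB/s\n",
           (double)bytes * iters / (ms * 1e-3) / 1e9);
    return 0;
}
```

Built with nvcc, this times a unidirectional peer copy; the bidirectional matrix above drives both directions at once, which is roughly why it lands at about double the ~21 GB/s simpleP2P reports.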

PS: Thanks for this driver, tinygrad team 🙌

To Reproduce

Run ./p2pBandwidthLatencyTest

Bug Incidence

Always

nvidia-bug-report.log.gz

None

More Info

No response

zvorinji commented 2 months ago

I would guess it's your motherboard or the specific GPU brand. I think it surprised many that the interconnect reached 50 GB/s bidirectionally, when most people expected it to just edge across 8 Gen 4 PCIe lanes.
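
As a rough sanity check on those figures, a back-of-envelope PCIe Gen 4 ceiling (the 16 GT/s per-lane rate and 128b/130b encoding come from the PCIe 4.0 spec; TLP/protocol overhead is ignored, so real transfers land noticeably lower):

```cpp
// Back-of-envelope PCIe Gen4 bandwidth ceiling (plain host arithmetic).
#include <cstdio>

int main() {
    const double raw_gt_s   = 16.0;                       // Gen4: 16 GT/s per lane
    const double encoding   = 128.0 / 130.0;              // 128b/130b line coding
    const double lane_gbs   = raw_gt_s * encoding / 8.0;  // ~1.97 GB/s per lane, one way
    const double x16_oneway = 16.0 * lane_gbs;            // ~31.5 GB/s
    const double x16_bidir  = 2.0 * x16_oneway;           // ~63 GB/s theoretical ceiling
    printf("x16 one-way: %.1f GB/s, bidirectional: %.1f GB/s\n", x16_oneway, x16_bidir);
    // The ~50 GB/s from the example and the ~41 GB/s reported here are about
    // 79% and 66% of that bidirectional ceiling, respectively.
    printf("50 GB/s = %.0f%%, 41.4 GB/s = %.0f%% of ceiling\n",
           100.0 * 50.0 / x16_bidir, 100.0 * 41.4 / x16_bidir);
    return 0;
}
```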

teis-e commented 2 months ago

That shouldn't be the issue; all the lanes can run simultaneously at Gen 5, although the 4090 only goes up to Gen 4:

[image attachment]

I also heard it could be because of the driver. Which driver version is used in the tests?

zvorinji commented 2 months ago

What motherboard and CPU do you have? Or do they sit on a PCIe backplane with a proper switch?

teis-e commented 2 months ago

One GPU is directly on the motherboard, and the other two are connected through PCIe Gen 4 x16 extension connectors.

The board is an ASUS Pro WS W790-ACE

with a Xeon W-2400 family processor.

ilovesouthpark commented 2 months ago

This is interesting. Can you try with only two 4090s to check whether the bandwidth stays the same as with three? ASUS also has technical support notes on PCIe bifurcation; see https://www.asus.com/support/faq/1037507/. Looking forward to your findings.

teis-e commented 2 months ago

I already followed that. It was showing x8 before, but nvtop now shows x16 for all of them. I saw on another issue that it was potentially caused by the driver. Do you know which driver version is used in the tests?
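
If it helps to cross-check what nvtop reports, here is a small NVML sketch (the file name and build line are placeholders; NVML is normally linked as -lnvidia-ml) that prints the current versus maximum PCIe link generation and width for each GPU. Idle cards usually train the link down, so query it while a transfer is running:

```cpp
// Print current vs. max PCIe link generation/width per GPU via NVML.
// Example build (paths may vary): nvcc pcie_check.cu -lnvidia-ml -o pcie_check
#include <cstdio>
#include <nvml.h>

int main() {
    if (nvmlInit_v2() != NVML_SUCCESS) return 1;

    unsigned int count = 0;
    nvmlDeviceGetCount_v2(&count);
    for (unsigned int i = 0; i < count; ++i) {
        nvmlDevice_t dev;
        nvmlDeviceGetHandleByIndex_v2(i, &dev);

        char name[NVML_DEVICE_NAME_BUFFER_SIZE];
        nvmlDeviceGetName(dev, name, sizeof(name));

        unsigned int curGen = 0, curWidth = 0, maxGen = 0, maxWidth = 0;
        nvmlDeviceGetCurrPcieLinkGeneration(dev, &curGen);
        nvmlDeviceGetCurrPcieLinkWidth(dev, &curWidth);
        nvmlDeviceGetMaxPcieLinkGeneration(dev, &maxGen);
        nvmlDeviceGetMaxPcieLinkWidth(dev, &maxWidth);

        printf("GPU%u %s: current Gen%u x%u (max Gen%u x%u)\n",
               i, name, curGen, curWidth, maxGen, maxWidth);
    }
    nvmlShutdown();
    return 0;
}
```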

eabase commented 1 month ago

Can this be used for other models in the 40x0 series?