ofiwg / libfabric

Open Fabric Interfaces
http://libfabric.org/

prov/cxi performance regression in fi_pingpong #9802

Open philippfriese opened 9 months ago

philippfriese commented 9 months ago

Describe the bug
Using the upstreamed CXI provider (as of commit fc869ae on the main branch) yields reduced throughput in fi_pingpong (14 GB/s for ofiwg/libfabric compared to 20 GB/s for the HPE-internal libfabric).

To Reproduce
Steps to reproduce the behavior:

Expected behavior
Equivalent performance between both libfabric variants (~20 GB/s).

Output
Deviating performance:

It is worth noting that the observed throughput of ofiwg/libfabric can be increased by setting the number of iterations from the default 10 to 100 via -I 100. Additionally, using osu_bw and osu_latency from the OSU Microbenchmark Suite, no performance differences are observed between the two libfabric variants.
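For reference, a minimal sketch of the fi_pingpong runs described above. The hostname is a placeholder, not taken from the report, and the commands assume a standard libfabric install with the CXI provider available:

```shell
# Server side (node A): start fi_pingpong bound to the cxi provider
fi_pingpong -p cxi

# Client side (node B): default iteration count (10 per message size)
fi_pingpong -p cxi nodeA

# Client side again with -I 100, which per the report raises the
# observed throughput of the ofiwg/libfabric build
fi_pingpong -p cxi -I 100 nodeA
```

(Invocation sketch only; results depend on the Slingshot fabric hardware.)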

I've attached raw output of the fi_pingpong runs and osu_bw/osu_latency runs.

Environment:

Additional context
Due to a currently unresolved issue with the local Slingshot deployment on the ARM platform in use, FI_CXI_LLRING_MODE=never must be set for both fi_pingpong and osu_bw.
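A sketch of how the workaround was applied, assuming the variable is simply exported in the launch shell (the peer hostname is hypothetical):

```shell
# Work around the unresolved Slingshot issue on the ARM platform by
# disabling the CXI low-latency command ring for this session
export FI_CXI_LLRING_MODE=never

# The variable then applies to both benchmarks, e.g.:
fi_pingpong -p cxi           # server side
# fi_pingpong -p cxi nodeA   # client side, run on the peer node
```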

SSSSeb commented 9 months ago

ping @mindstorm38

mindstorm38 commented 9 months ago

I can't reproduce the regression; here's my environment:

Not yet tested with MPI

lflis commented 9 months ago

@mindstorm38 Which version of the Slingshot libraries are you using?

mindstorm38 commented 9 months ago

I'm using the latest internal sources, so I don't know the version number, to be honest. I configure the cxi, Cassini, and UAPI headers to point directly at the sources. Please tell me if you have a command that would check a version of interest to you, but note that my installation is not standard compared to the official Slingshot packages. I'm working in parallel on a package-based installation, but it's on x86_64, so I guess it won't be helpful in this case (I'll try anyway).
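In case it helps, a few generic ways to surface the installed Slingshot/CXI versions on an RPM-based HPE system. The package-name patterns are assumptions (names vary by release), not exact package names:

```shell
# List installed packages whose names mention cxi, slingshot, or libfabric
# (pattern is a guess; adjust to the actual package naming on the system)
rpm -qa | grep -i -E 'cxi|slingshot|libfabric'

# Report which libfabric build fi_info comes from
fi_info --version

# Confirm the cxi provider is visible to this libfabric build
fi_info -p cxi
```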

vanderwb commented 6 months ago

FWIW - I've replicated this result on an x86_64 platform, with pretty much the same pingpong bandwidth numbers and the same improvement when increasing the iterations to 100. We are still running Slingshot 2.1, so perhaps things work better with the newly released Slingshot 2.2. In any case, the HPE 1.15 release is performing better than the built-from-source 1.21.