mlcommons / inference

Reference implementations of MLPerf™ inference benchmarks
https://mlcommons.org/en/groups/inference
Apache License 2.0

How to run text-to-image with multi GPUs #1849

Open surbanqq opened 1 week ago

surbanqq commented 1 week ago

Hello everyone, I'm new to MLPerf. Does the text-to-image inference benchmark support multi-GPU testing? Currently I don't see any parameter for multiple GPUs in the output of `python main.py --help`, yet the published MLPerf Inference results include systems with 2x L40S and 8x H100. How were those tests run? Thanks a lot.

psyhtest commented 3 days ago

Hi @surbanqq! The reference code often supports only a single accelerator. For their submissions, however, vendors apply their own optimizations, including scaling to multiple accelerators. In the case of NVIDIA, please take a look at their v4.1 submission.
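For context, one common data-parallel pattern (this is a hypothetical sketch, not part of the reference harness or NVIDIA's submission) is to launch one single-GPU process per accelerator, pinning each with `CUDA_VISIBLE_DEVICES`. The `main.py` flags shown inside the echoed command are placeholders; the function below only prints the commands it would run, so you can inspect them before executing anything:

```shell
#!/bin/sh
# Hypothetical sketch: emit one launch command per GPU, each pinned to a
# single device via CUDA_VISIBLE_DEVICES so single-GPU code runs unchanged.
# The "python main.py ..." flags are assumptions, not real reference options.
launch_all() {
    num_gpus=$1
    i=0
    while [ "$i" -lt "$num_gpus" ]; do
        # echo is a dry run; drop it to actually start the processes
        # (and follow each with '&', then 'wait' for all of them).
        echo "CUDA_VISIBLE_DEVICES=$i python main.py --scenario Offline --output results/gpu$i"
        i=$((i + 1))
    done
}

launch_all 2
```

Note that each process would produce its own LoadGen logs, so per-GPU results still have to be aggregated into a single valid submission, which is part of what vendor harnesses handle.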

surbanqq commented 3 days ago

Thanks a lot!