python scaling.py --devices 0 1 2 2 3 micro_bench.py --network resnet50 --fp16 1
Pending testing on a server.
Benchmark that can be removed after merging:
./image_classification/scaling/pytorch/run.sh --repeat 10 --number 5 --network resnet18 --batch-size 32
Should work now.
I have merged this manually along with other changes. Thanks!
Replace the multi-GPU benchmark with a Scaling benchmark.
Since most multi-GPU tasks use data parallelism, speed should scale roughly linearly with the number of GPUs. We therefore measure scaling efficiency to evaluate multi-GPU setups; this test is meant to verify that DataParallel scales (close to) linearly across GPUs.
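For context, here is a minimal sketch of the measurement idea (an illustration only, not the actual scaling.py, which appears to wrap micro_bench.py): time the same DataParallel workload on one GPU and on all GPUs, and report throughput_N / (N * throughput_1) as the efficiency. The model, batch size, and iteration counts below are illustrative assumptions.

```python
import time
import torch
import torch.nn as nn
from torchvision.models import resnet50


def throughput(device_ids, batch_size=64, iters=20):
    """Images/second for forward+backward passes on the given GPU ids."""
    dev = f"cuda:{device_ids[0]}"
    model = nn.DataParallel(resnet50().to(dev), device_ids=device_ids)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    criterion = nn.CrossEntropyLoss()
    # One batch per GPU so the per-GPU workload stays constant as devices are added.
    data = torch.randn(batch_size * len(device_ids), 3, 224, 224, device=dev)
    target = torch.randint(0, 1000, (batch_size * len(device_ids),), device=dev)

    def step():
        optimizer.zero_grad()
        criterion(model(data), target).backward()
        optimizer.step()

    for _ in range(3):  # warm-up (CUDA init, cuDNN autotuning)
        step()
    torch.cuda.synchronize()

    start = time.time()
    for _ in range(iters):
        step()
    torch.cuda.synchronize()
    return iters * data.size(0) / (time.time() - start)


if __name__ == "__main__":
    devices = list(range(torch.cuda.device_count()))
    single = throughput(devices[:1])
    multi = throughput(devices)
    efficiency = 100.0 * multi / (len(devices) * single)
    print(f"scaling efficiency: {efficiency:.2f}%")
```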
Reported numbers:
avg: 94.03%
sd: 2%
Closer to 100% is better. Efficiency should be > 90% to pass, regardless of hardware vendor.
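As a rough illustration (not code from this PR), the reported avg/sd and the pass check could be computed from repeated efficiency measurements like this; the sample values are placeholders:

```python
# Hypothetical aggregation of per-run efficiencies into avg/sd plus the
# > 90% pass criterion; the sample values below are placeholders.
import statistics


def check_scaling(efficiencies, threshold=90.0):
    avg = statistics.mean(efficiencies)
    sd = statistics.stdev(efficiencies)
    print(f"avg: {avg:.2f}%  sd: {sd:.2f}%")
    return avg > threshold


# One efficiency value per repeated run (e.g. from --repeat 10).
print("PASS" if check_scaling([93.1, 95.5, 92.8, 94.7]) else "FAIL")
```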