facebookresearch / dlrm

An implementation of a deep learning recommendation model (DLRM)
MIT License

mini-batch-size can impact inference-only test #380

Open

Qinghe12 commented 5 months ago

When I run the DLRM model with --inference-only, I found that the time the forward pass takes varies a lot across different values of --mini-batch-size (e.g., 2048 vs. 8192):

forward-pass time: 200 s with --mini-batch-size=8192, 100 s with --mini-batch-size=2048

I wonder how --mini-batch-size impacts an inference-only test.
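
For intuition, here is a minimal sketch (not the repo's benchmark code; a plain PyTorch MLP stands in for the DLRM MLPs, and the batch count is scaled down from --num-batches=512 to keep it quick). A larger batch means more work per forward call, so total time over a fixed number of batches grows, while per-sample throughput often improves:

```python
# Minimal sketch: time a stand-in MLP forward pass at two batch sizes
# and compare per-batch latency and per-sample throughput. Assumes
# PyTorch is installed; runs on CPU for simplicity.
import time
import torch

mlp = torch.nn.Sequential(
    torch.nn.Linear(64, 512), torch.nn.ReLU(),
    torch.nn.Linear(512, 512), torch.nn.ReLU(),
    torch.nn.Linear(512, 1),
)
mlp.eval()

num_batches = 64  # scaled down from --num-batches=512 for a quick run

for batch_size in (2048, 8192):
    x = torch.randn(batch_size, 64)
    with torch.no_grad():
        mlp(x)  # warm-up so one-time allocations don't skew the timing
        start = time.perf_counter()
        for _ in range(num_batches):
            mlp(x)
        elapsed = time.perf_counter() - start
    samples = num_batches * batch_size
    print(f"batch={batch_size}: {elapsed:.2f}s total, "
          f"{samples / elapsed:,.0f} samples/s")
```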

Also, what is the difference between --mini-batch-size and --test-mini-batch-size in an inference-only test?
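
On the second question, here is a hedged sketch of the usual paired-flag pattern (my reading; worth confirming against dlrm_s_pytorch.py): --test-mini-batch-size controls the batch size of the test/validation loader and, when left at its default, falls back to --mini-batch-size:

```python
# Hypothetical sketch of the paired-flag pattern (check dlrm_s_pytorch.py
# for the authoritative wiring): --test-mini-batch-size defaults to -1,
# and a non-positive value falls back to --mini-batch-size.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--mini-batch-size", type=int, default=1)
parser.add_argument("--test-mini-batch-size", type=int, default=-1)
args = parser.parse_args(["--mini-batch-size", "2048"])

test_bs = (args.test_mini_batch_size
           if args.test_mini_batch_size > 0
           else args.mini_batch_size)
print(test_bs)  # 2048: the test loader inherits the training batch size
```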

The command I use:

dlrm_s_pytorch.py --arch-sparse-feature-size=64 --arch-mlp-bot="512-512-64" --arch-mlp-top="1024-1024-1024-1" --data-generation=dataset --data-set=terabyte --raw-data-file=input/day/day --processed-data-file=input/day/terabyte_processed.npz --loss-function=bce --round-targets=True --learning-rate=0.1 --mini-batch-size=2048 --num-batches=512 --print-freq=1024 --print-time --num-workers=32 --dataset-multiprocessing --inference-only
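
One observation about this command: --num-batches fixes the batch count, not the sample count, so the two runs do different amounts of work and their wall-clock times are not directly comparable. A quick back-of-the-envelope check using the timings reported above:

```python
# With --num-batches fixed, total samples scale with --mini-batch-size,
# so normalize to samples/s before comparing runs. Times are the ones
# reported above (100 s at batch 2048, 200 s at batch 8192).
runs = {2048: 100.0, 8192: 200.0}  # batch size -> reported seconds
num_batches = 512                  # --num-batches=512 in the command

for batch, seconds in runs.items():
    samples = num_batches * batch
    print(f"batch={batch}: {samples:,} samples, "
          f"{samples / seconds:,.0f} samples/s")
# batch=2048: 1,048,576 samples, 10,486 samples/s
# batch=8192: 4,194,304 samples, 20,972 samples/s
```

So the batch-8192 run processes 4x the samples in only 2x the time, i.e. roughly double the per-sample throughput, which is the expected effect of larger batches amortizing per-batch overhead.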