Hi Pablo, you need to install fbgemm-gpu-cpu==0.3.2 to avoid this error.
I already have this version, but the error persists:
Name: fbgemm-gpu-cpu
Version: 0.3.2
Summary:
Home-page: https://github.com/pytorch/fbgemm
Author: FBGEMM Team
Author-email: packages@pytorch.org
License: BSD-3
Location: /opt/conda/lib/python3.7/site-packages
Requires:
Required-by:
Have you tried to remove fbgemm-gpu as well?
@yuankuns When I try to remove fbgemm-gpu, I get the following import error:
ModuleNotFoundError: No module named 'fbgemm_gpu'
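For reference, here is the quick check I use to see whether (and from where) fbgemm_gpu resolves; it is just a throwaway diagnostic, not anything from the repo:

```python
import importlib.util

# Locate the fbgemm_gpu module, if any installed package provides it.
spec = importlib.util.find_spec("fbgemm_gpu")
print(spec.origin if spec else "fbgemm_gpu is not importable")
```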
I managed to run the CPU version with fbgemm-gpu-cpu==0.3.2, fbgemm-gpu==0.4.1, and pytorch==1.13.1 on a machine with a GPU. Without a GPU, I get an fbgemm error like the ones I posted before.
@pgmpablo157321 That's interesting, since there is no GPU on our server and fbgemm-gpu-cpu==0.3.2 alone works in our case.
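In case it helps, this is roughly how we sanity-check the CPU-only setup (a minimal sketch, not an official test):

```python
import torch
import fbgemm_gpu  # noqa: F401 -- importing registers the fbgemm operators with PyTorch

# On a CPU-only server this prints False; the import above still succeeds
# because the fbgemm-gpu-cpu wheel provides CPU implementations of the operators.
print("CUDA available:", torch.cuda.is_available())
```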
@pgmpablo157321 is this still an issue?
I am currently working on the reference implementation and am stuck deploying the model across multiple GPUs.
Here is a link to the PR: https://github.com/mlcommons/inference/pull/1373
Here is a link to the file where the model is: https://github.com/mlcommons/inference/blob/7c64689b261f97a4fc3410bff584ac2439453bcc/recommendation/dlrm_v2/pytorch/python/backend_pytorch_native.py
Currently this works for a debugging model on a single GPU, but fails when I try to run it on multiple GPUs. Here are the issues that I have:
or
This may be because I am trying to load a sharded model on a different number of ranks. Do you know if that could be related?
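To illustrate the rank-mismatch hypothesis, here is a toy sketch (my own illustration, not the actual sharding logic used by the model) of why shard shapes depend on the number of ranks:

```python
import torch

# Toy row-wise sharding: rank r owns rows [r*chunk, (r+1)*chunk) of the table.
num_embeddings, dim = 1000, 16

def shard_rows(table, world_size, rank):
    chunk = num_embeddings // world_size
    return table[rank * chunk:(rank + 1) * chunk]

full = torch.randn(num_embeddings, dim)
saved = [shard_rows(full, 2, r) for r in range(2)]     # shards written by a 2-rank run
expected = [shard_rows(full, 4, r) for r in range(4)]  # shapes a 4-rank run expects

# torch.Size([500, 16]) vs torch.Size([250, 16]): a naive load would fail here.
print(saved[0].shape, "vs", expected[0].shape)
```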
I have tried PyTorch versions 1.12, 1.13, 2.0.0, and 2.0.1, and fbgemm versions 0.3.2 and 0.4.1.
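For reference, I collect the installed versions with something like this (using pkg_resources, which also works on the Python 3.7 environment from the pip show output above):

```python
import pkg_resources

# Report whichever of the relevant packages are installed in this environment.
for pkg in ("torch", "fbgemm-gpu", "fbgemm-gpu-cpu"):
    try:
        print(pkg, pkg_resources.get_distribution(pkg).version)
    except pkg_resources.DistributionNotFound:
        print(pkg, "not installed")
```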