Closed WeizhuoZhang-intel closed 1 day ago
Can you please check if this is the case for the nightly docker? https://github.com/pytorch/benchmark/pkgs/container/torchbench
In the docker build file, we check that the torch and numpy versions are consistent before and after the model installation, so I would be surprised if sam_fast re-installed torch.
I manually tested this and it cannot be reproduced in the Torchbench docker. Can you share more details on how to reproduce the issue?
For example, if your torch version before running python install.py sam_fast
is lower than 2.2.0.dev20231026
(https://github.com/pytorch-labs/segment-anything-fast/blob/main/setup.py#L10), torch will automatically be upgraded.
Note that we recommend running torchbench with the latest torch nightly release; older torch versions are not recommended.
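For reference, pip resolves these requirements per PEP 440. A minimal sketch (using the `packaging` library, which pip itself builds on; the version strings are taken from the logs in this thread) showing why a nightly torch already satisfies sam_fast's `torch>=2.2.0.dev20231026` requirement, so no upgrade is triggered:

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# sam_fast's torch requirement from segment-anything-fast's setup.py
requirement = SpecifierSet(">=2.2.0.dev20231026")

# A nightly torch build, as shown in the logs
installed = "2.5.0.dev20240629+cpu"

# The nightly satisfies the requirement, so pip leaves torch alone
print(requirement.contains(installed))                        # True
print(Version(installed) >= Version("2.2.0.dev20231026"))     # True
```

Note that because the specifier itself names a dev release, `packaging` implicitly allows pre-release versions like the nightly when checking containment.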
I think it might be related to torchao, which is installed by sam_fast. However, the latest torchao (0.3.1) does not seem to have a hard dependency pin on torch.
sam_fast log
xxx@xxx:/workspace/benchmark# pip list | grep torch
torch 2.5.0.dev20240629+cpu
torchaudio 2.4.0.dev20240629+cpu
torchvision 0.20.0.dev20240629+cpu
xxx@xxx:/workspace/benchmark# python install.py sam_fast
checking packages torch, torchvision, torchaudio are installed...OK
running setup for /workspace/benchmark/torchbenchmark/models/sam_fast...OK
xxx@xxx:/workspace/benchmark# pip list | grep torch
pytorch-labs-segment-anything-fast 0.2
torch 2.5.0.dev20240629+cpu
torchao 0.3.1
torchaudio 2.4.0.dev20240629+cpu
torchvision 0.20.0.dev20240629+cpu
torchao 0.3.0 log
xxx@xxx:/workspace/benchmark# pip list | grep torch
torch 2.5.0.dev20240629+cpu
torchaudio 2.4.0.dev20240629+cpu
torchvision 0.20.0.dev20240629+cpu
xxx@xxx:/workspace/benchmark# pip install torchao==0.3.0
Collecting torchao==0.3.0
Using cached torchao-0.3.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB)
Collecting torch==2.3.1 (from torchao==0.3.0)
Using cached torch-2.3.1-cp38-cp38-manylinux1_x86_64.whl.metadata (26 kB)
torchao 0.3.1 log
xxx@xxx:/workspace/benchmark# pip list | grep torch
torch 2.5.0.dev20240629+cpu
torchaudio 2.4.0.dev20240629+cpu
torchvision 0.20.0.dev20240629+cpu
xxx@xxx:/workspace/benchmark# pip install torchao==0.3.1
Collecting torchao==0.3.1
Using cached torchao-0.3.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB)
Using cached torchao-0.3.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.3 MB)
Installing collected packages: torchao
Successfully installed torchao-0.3.1
cc @HDCharles, I am wondering if torchao==0.3.1 is compatible with torch nightly?
Oh, so it is a problem with torchao==0.3.0, and it has been fixed in torchao==0.3.1. I think it is safe to close this issue.
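For completeness, the root cause is visible in the torchao 0.3.0 log above: the exact `torch==2.3.1` pin is not satisfied by a nightly build, so pip resolves and downgrades torch to 2.3.1. A minimal sketch of that check using the `packaging` library (same version string as in the logs):

```python
from packaging.specifiers import SpecifierSet

nightly = "2.5.0.dev20240629+cpu"

# torchao 0.3.0 pinned torch exactly, so the installed nightly does not
# satisfy the requirement and pip pulls in torch 2.3.1 instead
print(SpecifierSet("==2.3.1").contains(nightly))              # False

# With no version constraint (as in torchao 0.3.1's behavior here),
# any installed torch is accepted; pre-releases must be allowed explicitly
print(SpecifierSet("").contains(nightly, prereleases=True))   # True
```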
benchmark commit: 23512dbebd44a11eb84afbf53c3c071dd105297e
When running python install.py sam_fast, it re-installs torch as 2.3.1. This seems related to the torchao version, which is 0.3.0; torchao 0.2.0 did not have this issue.