ubicomplab / rPPG-Toolbox

rPPG-Toolbox: Deep Remote PPG Toolbox (NeurIPS 2023)
https://arxiv.org/abs/2210.00716

Problems bash.sh uv #388

Open DavidA2312 opened 1 month ago

DavidA2312 commented 1 month ago

To work with the rPPG-Toolbox, I followed the official README’s recommendation and chose to use WSL together with uv instead of conda. The goal is to use the pre-trained models.

[screenshot]

I have taken the following steps so far:

  1. git clone https://github.com/ubicomplab/rPPG-Toolbox.git
  2. cd rPPG-Toolbox
  3. curl -Ls https://astral.sh/uv/install.sh | bash
  4. bash setup.sh uv

Steps 1-3 ran smoothly with no errors. From step 4 onwards, problems keep occurring:

[screenshot of the error]

Following the error message, I tried to reinstall torch as it suggested: uv pip install torch (no errors).

To be on the safe side, I also tried the whole thing with conda, but the error occurs in the same place.

Perhaps the problem can be solved easily.

yahskapar commented 3 weeks ago

I think this is a bug related to the mamba setup (see #373) that, unfortunately, I haven't had time to look into yet.

For the time being, before I look into this, can you build successfully if you comment out the mamba-related part of the setup file? For example, with conda:

https://github.com/ubicomplab/rPPG-Toolbox/blob/09945e114d1bf3a88da67cdd1ef4398825245f34/setup.sh#L20-L21

or with uv:

https://github.com/ubicomplab/rPPG-Toolbox/blob/09945e114d1bf3a88da67cdd1ef4398825245f34/setup.sh#L32

If you can build successfully after commenting that out (and also commenting out this other line), you should be able to use the rest of the toolbox as is (aside from PhysMamba).
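To be clear, commenting out just means prefixing those lines with #. I haven't verified the exact fix yet, so the snippet below is only a rough sketch; the package names are illustrative of what those lines install, so double-check against your copy of setup.sh:

    # hypothetical mamba-related lines in setup.sh, disabled by commenting them out
    # pip install causal-conv1d==1.0.0
    # pip install mamba-ssm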

DavidA2312 commented 2 weeks ago

The error is still there (tried with both uv and conda).

yahskapar commented 2 weeks ago

Strange, do you get the exact same error after commenting out what I mentioned above, including the line in the init file? Have you also tried (with a fresh conda environment, for example) installing torch first and then causal-conv1d using pip? I'd also try the --no-build-isolation flag mentioned in the error message you got.
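For example, something along these lines (the index URL and package versions here are only illustrative; adjust them to your CUDA setup):

    conda create -n causal-test python=3.8 -y
    conda activate causal-test
    pip install torch --index-url https://download.pytorch.org/whl/cu121
    pip install causal-conv1d --no-build-isolation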

Perhaps this has something to do with your particular compute environment - what GPU do you have? Have you been able to use other repos that require Torch + CUDA toolkit?
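A few quick checks inside WSL would help narrow this down (standard commands, nothing toolbox-specific):

    nvidia-smi       # is the GPU visible from WSL?
    nvcc --version   # is the CUDA compiler (toolkit) installed and on PATH?
    python -c "import torch; print(torch.__version__, torch.cuda.is_available())"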

DavidA2312 commented 2 weeks ago

Yes, I get the same error code in the same place using uv.

I have commented out the following things:

  1. line 32 in setup.sh

[screenshot]

  2. line 9 in __init__.py

[screenshot]

uv version: [screenshot]

Since I am not that familiar with uv, I will first get the whole thing running with conda.

I have switched to a completely different PC to rule out a hardware problem (again under WSL).

[screenshot]

GPU: [screenshot]

I again ran the command bash setup.sh conda as described in the README (with the lines commented out). I definitely get further on this PC, but I still have the following errors:

[screenshot] (the whole error message is included as text below)

I also tried your suggestion to simply create a new environment and install torch and causal-conv1d.

I took the following steps:

  1. conda create -n causal-test python=3.8 -y
  2. conda activate causal-test
  3. pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121 (no errors, runs cleanly)
  4. pip install causal-conv1d (here the same error occurs as with setup.sh); pip install causal-conv1d --no-build-isolation doesn't change anything

[screenshot of the error]

Thanks for your help!

Whole error message from setup.sh:


medlab@Medizintechnik-01:/mnt/c/Windows/System32/rPPG-Toolbox$ bash setup.sh conda
Setting up using conda...

Remove all packages in environment /home/medlab/miniconda3/envs/rppg-toolbox:

Package Plan

environment location: /home/medlab/miniconda3/envs/rppg-toolbox

The following packages will be REMOVED:

_libgcc_mutex-0.1-main
_openmp_mutex-5.1-1_gnu
ca-certificates-2025.2.25-h06a4308_0
ld_impl_linux-64-2.40-h12ee557_0
libffi-3.4.4-h6a678d5_1
libgcc-ng-11.2.0-h1234567_1
libgomp-11.2.0-h1234567_1
libstdcxx-ng-11.2.0-h1234567_1
ncurses-6.4-h6a678d5_0
openssl-3.0.16-h5eee18b_0
pip-24.2-py38h06a4308_0
python-3.8.20-he870216_0
readline-8.2-h5eee18b_0
setuptools-75.1.0-py38h06a4308_0
sqlite-3.45.3-h5eee18b_0
tk-8.6.14-h39e8969_0
wheel-0.44.0-py38h06a4308_0
xz-5.6.4-h5eee18b_1
zlib-1.2.13-h5eee18b_1

Downloading and Extracting Packages:

Preparing transaction: done
Verifying transaction: done
Executing transaction: done
Channels:

Package Plan

environment location: /home/medlab/miniconda3/envs/rppg-toolbox

added / updated specs:

The following NEW packages will be INSTALLED:

_libgcc_mutex      pkgs/main/linux-64::_libgcc_mutex-0.1-main
_openmp_mutex      pkgs/main/linux-64::_openmp_mutex-5.1-1_gnu
ca-certificates    pkgs/main/linux-64::ca-certificates-2025.2.25-h06a4308_0
ld_impl_linux-64   pkgs/main/linux-64::ld_impl_linux-64-2.40-h12ee557_0
libffi             pkgs/main/linux-64::libffi-3.4.4-h6a678d5_1
libgcc-ng          pkgs/main/linux-64::libgcc-ng-11.2.0-h1234567_1
libgomp            pkgs/main/linux-64::libgomp-11.2.0-h1234567_1
libstdcxx-ng       pkgs/main/linux-64::libstdcxx-ng-11.2.0-h1234567_1
ncurses            pkgs/main/linux-64::ncurses-6.4-h6a678d5_0
openssl            pkgs/main/linux-64::openssl-3.0.16-h5eee18b_0
pip                pkgs/main/linux-64::pip-24.2-py38h06a4308_0
python             pkgs/main/linux-64::python-3.8.20-he870216_0
readline           pkgs/main/linux-64::readline-8.2-h5eee18b_0
setuptools         pkgs/main/linux-64::setuptools-75.1.0-py38h06a4308_0
sqlite             pkgs/main/linux-64::sqlite-3.45.3-h5eee18b_0
tk                 pkgs/main/linux-64::tk-8.6.14-h39e8969_0
wheel              pkgs/main/linux-64::wheel-0.44.0-py38h06a4308_0
xz                 pkgs/main/linux-64::xz-5.6.4-h5eee18b_1
zlib               pkgs/main/linux-64::zlib-1.2.13-h5eee18b_1

Downloading and Extracting Packages:

Preparing transaction: done
Verifying transaction: done
Executing transaction: done
#
# To activate this environment, use
#
#     $ conda activate rppg-toolbox
#
# To deactivate an active environment, use
#
#     $ conda deactivate

Looking in indexes: https://download.pytorch.org/whl/cu121
Collecting torch==2.1.2+cu121
  Using cached https://download.pytorch.org/whl/cu121/torch-2.1.2%2Bcu121-cp38-cp38-linux_x86_64.whl (2200.7 MB)
Collecting torchvision==0.16.2+cu121
  Using cached https://download.pytorch.org/whl/cu121/torchvision-0.16.2%2Bcu121-cp38-cp38-linux_x86_64.whl (6.9 MB)
Collecting torchaudio==2.1.2+cu121
  Using cached https://download.pytorch.org/whl/cu121/torchaudio-2.1.2%2Bcu121-cp38-cp38-linux_x86_64.whl (3.3 MB)
Collecting filelock (from torch==2.1.2+cu121)
  Using cached https://download.pytorch.org/whl/filelock-3.13.1-py3-none-any.whl.metadata (2.8 kB)
Collecting typing-extensions (from torch==2.1.2+cu121)
  Using cached https://download.pytorch.org/whl/typing_extensions-4.12.2-py3-none-any.whl.metadata (3.0 kB)
Collecting sympy (from torch==2.1.2+cu121)
  Using cached https://download.pytorch.org/whl/sympy-1.13.3-py3-none-any.whl.metadata (12 kB)
Collecting networkx (from torch==2.1.2+cu121)
  Using cached https://download.pytorch.org/whl/networkx-3.3-py3-none-any.whl.metadata (5.1 kB)
Collecting jinja2 (from torch==2.1.2+cu121)
  Using cached https://download.pytorch.org/whl/Jinja2-3.1.4-py3-none-any.whl.metadata (2.6 kB)
Collecting fsspec (from torch==2.1.2+cu121)
  Using cached https://download.pytorch.org/whl/fsspec-2024.6.1-py3-none-any.whl.metadata (11 kB)
Collecting triton==2.1.0 (from torch==2.1.2+cu121)
  Using cached https://download.pytorch.org/whl/triton-2.1.0-0-cp38-cp38-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (89.2 MB)
Collecting numpy (from torchvision==0.16.2+cu121)
  Using cached https://download.pytorch.org/whl/numpy-1.24.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.3 MB)
Collecting requests (from torchvision==0.16.2+cu121)
  Using cached https://download.pytorch.org/whl/requests-2.28.1-py3-none-any.whl (62 kB)
Collecting pillow!=8.3.*,>=5.3.0 (from torchvision==0.16.2+cu121)
  Using cached https://download.pytorch.org/whl/pillow-10.2.0-cp38-cp38-manylinux_2_28_x86_64.whl (4.5 MB)
Collecting MarkupSafe>=2.0 (from jinja2->torch==2.1.2+cu121)
  Using cached https://download.pytorch.org/whl/MarkupSafe-2.1.5-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (26 kB)
INFO: pip is looking at multiple versions of networkx to determine which version is compatible with other requirements. This could take a while.
Collecting networkx (from torch==2.1.2+cu121)
  Using cached https://download.pytorch.org/whl/networkx-3.2.1-py3-none-any.whl (1.6 MB)
  Using cached https://download.pytorch.org/whl/networkx-3.0-py3-none-any.whl (2.0 MB)
Collecting charset-normalizer<3,>=2 (from requests->torchvision==0.16.2+cu121)
  Using cached https://download.pytorch.org/whl/charset_normalizer-2.1.1-py3-none-any.whl (39 kB)
Collecting idna<4,>=2.5 (from requests->torchvision==0.16.2+cu121)
  Using cached https://download.pytorch.org/whl/idna-3.4-py3-none-any.whl (61 kB)
Collecting urllib3<1.27,>=1.21.1 (from requests->torchvision==0.16.2+cu121)
  Using cached https://download.pytorch.org/whl/urllib3-1.26.13-py2.py3-none-any.whl (140 kB)
Collecting certifi>=2017.4.17 (from requests->torchvision==0.16.2+cu121)
  Using cached https://download.pytorch.org/whl/certifi-2022.12.7-py3-none-any.whl (155 kB)
Collecting mpmath<1.4,>=1.1.0 (from sympy->torch==2.1.2+cu121)
  Using cached https://download.pytorch.org/whl/mpmath-1.3.0-py3-none-any.whl (536 kB)
Using cached https://download.pytorch.org/whl/filelock-3.13.1-py3-none-any.whl (11 kB)
Using cached https://download.pytorch.org/whl/fsspec-2024.6.1-py3-none-any.whl (177 kB)
Using cached https://download.pytorch.org/whl/Jinja2-3.1.4-py3-none-any.whl (133 kB)
Using cached https://download.pytorch.org/whl/sympy-1.13.3-py3-none-any.whl (6.2 MB)
Using cached https://download.pytorch.org/whl/typing_extensions-4.12.2-py3-none-any.whl (37 kB)
Installing collected packages: mpmath, urllib3, typing-extensions, sympy, pillow, numpy, networkx, MarkupSafe, idna, fsspec, filelock, charset-normalizer, certifi, triton, requests, jinja2, torch, torchvision, torchaudio
Successfully installed MarkupSafe-2.1.5 certifi-2022.12.7 charset-normalizer-2.1.1 filelock-3.13.1 fsspec-2024.6.1 idna-3.4 jinja2-3.1.4 mpmath-1.3.0 networkx-3.0 numpy-1.24.1 pillow-10.2.0 requests-2.28.1 sympy-1.13.3 torch-2.1.2+cu121 torchaudio-2.1.2+cu121 torchvision-0.16.2+cu121 triton-2.1.0 typing-extensions-4.12.2 urllib3-1.26.13
Collecting h5py==2.10.0 (from -r requirements.txt (line 1))
  Using cached h5py-2.10.0-cp38-cp38-manylinux1_x86_64.whl.metadata (2.0 kB)
Collecting yacs==0.1.8 (from -r requirements.txt (line 2))
  Using cached yacs-0.1.8-py3-none-any.whl.metadata (639 bytes)
Collecting scipy==1.5.2 (from -r requirements.txt (line 3))
  Using cached scipy-1.5.2-cp38-cp38-manylinux1_x86_64.whl.metadata (2.0 kB)
Collecting pandas==1.1.5 (from -r requirements.txt (line 4))
  Using cached pandas-1.1.5-cp38-cp38-manylinux1_x86_64.whl.metadata (4.7 kB)
Collecting scikit_image==0.17.2 (from -r requirements.txt (line 5))
  Using cached scikit_image-0.17.2-cp38-cp38-manylinux1_x86_64.whl.metadata (7.5 kB)
Collecting numpy==1.22.0 (from -r requirements.txt (line 6))
  Using cached numpy-1.22.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (2.0 kB)
Collecting matplotlib==3.1.2 (from -r requirements.txt (line 7))
  Using cached matplotlib-3.1.2-cp38-cp38-manylinux1_x86_64.whl.metadata (1.4 kB)
Collecting opencv_python==4.5.2.54 (from -r requirements.txt (line 8))
  Using cached opencv_python-4.5.2.54-cp38-cp38-manylinux2014_x86_64.whl.metadata (17 kB)
Collecting PyYAML==6.0 (from -r requirements.txt (line 9))
  Using cached PyYAML-6.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl.metadata (2.0 kB)
Collecting scikit_learn==1.0.2 (from -r requirements.txt (line 10))
  Using cached scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (10 kB)
Collecting tensorboardX==2.4.1 (from -r requirements.txt (line 11))
  Using cached tensorboardX-2.4.1-py2.py3-none-any.whl.metadata (5.1 kB)
Collecting tqdm==4.64.0 (from -r requirements.txt (line 12))
  Using cached tqdm-4.64.0-py2.py3-none-any.whl.metadata (57 kB)
Collecting mat73==0.59 (from -r requirements.txt (line 13))
  Using cached mat73-0.59-py3-none-any.whl.metadata (3.5 kB)
Collecting ipykernel==6.26.0 (from -r requirements.txt (line 14))
  Using cached ipykernel-6.26.0-py3-none-any.whl.metadata (6.3 kB)
Collecting ipywidgets==8.1.1 (from -r requirements.txt (line 15))
  Using cached ipywidgets-8.1.1-py3-none-any.whl.metadata (2.4 kB)
Collecting fsspec==2024.10.0 (from -r requirements.txt (line 16))
  Using cached fsspec-2024.10.0-py3-none-any.whl.metadata (11 kB)
Collecting timm==1.0.11 (from -r requirements.txt (line 17))
  Using cached timm-1.0.11-py3-none-any.whl.metadata (48 kB)
Collecting causal-conv1d==1.0.0 (from -r requirements.txt (line 18))
  Using cached causal_conv1d-1.0.0.tar.gz (6.4 kB)
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [13 lines of output]
      /tmp/pip-install-m3jk1nkk/causal-conv1d_b62be82b4fc24d04984a994fc0af6cea/setup.py:77: UserWarning: causal_conv1d was requested, but nvcc was not found. Are you sure your environment has nvcc available? If you're installing within a container from https://hub.docker.com/r/pytorch/pytorch, only images whose names contain 'devel' will provide nvcc.
        warnings.warn(
      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "/tmp/pip-install-m3jk1nkk/causal-conv1d_b62be82b4fc24d04984a994fc0af6cea/setup.py", line 111, in <module>
          if bare_metal_version >= Version("11.8"):
      NameError: name 'bare_metal_version' is not defined

  torch.__version__  = 2.1.2+cu121

  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.