rusty1s / pytorch_scatter

PyTorch Extension Library of Optimized Scatter Operations
https://pytorch-scatter.readthedocs.io
MIT License

From source in Docker image #194

Closed dmitrysarov closed 3 years ago

dmitrysarov commented 3 years ago

I am struggling to build an image with pytorch_scatter based on the official nvcr.io/nvidia/pytorch:20.12-py3 image (screenshot of the failed build attached). Interestingly, though, I am able to install pytorch_scatter correctly in an already running container.

monsieurborges commented 3 years ago

Hey @dmitrysarov,

It looks like your installation is not using CUDA. Share some details on how you build the image and install pytorch_scatter. This will give us more information about what is going wrong.

Otherwise, pytorch_scatter may not be compatible with NVIDIA PyTorch 1.8.0... What do you think, @rusty1s?

I can say that everything works fine with nvcr.io/nvidia/pytorch:20.03-py3.
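
For example, running something like this in the same environment where pip installs torch_scatter would already tell us whether CUDA is visible there (the exact output will differ per image):

# Which torch build is present, and does it see CUDA in this environment?
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
# Did torch_scatter install, and which version?
python -c "import torch_scatter; print(torch_scatter.__version__)"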

dmitrysarov commented 3 years ago

@mgomesborges thanks for answering. It actually is compatible, because I can build from source inside an already running container. In my Dockerfile I tried a simple pip install torch_scatter, as well as providing ENV variables like FORCE_CUDA=1 and the CUDA version. Maybe I am missing something obvious. @mgomesborges, can you share your Dockerfile with nvcr.io/nvidia/pytorch:20.03-py3? It will probably answer my question.
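
To make it concrete, this is roughly the kind of Dockerfile I mean (a simplified sketch; the architecture list is only an example and needs to match your GPUs):

FROM nvcr.io/nvidia/pytorch:20.12-py3

# No GPU is exposed during `docker build`, so the extension has to be forced
# to compile its CUDA kernels, with the target architectures listed explicitly.
ENV FORCE_CUDA=1
ENV TORCH_CUDA_ARCH_LIST="7.0;7.5;8.0+PTX"

RUN pip install --no-cache-dir torch-scatter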

github-actions[bot] commented 3 years ago

This issue had no activity for 6 months. It will be closed in 2 weeks unless there is some new activity.

jan-engelmann commented 1 year ago

@mgomesborges What is the most recent version of the nvcr.io/nvidia images that you managed to get running? How do you install pytorch_scatter inside the container?

I tried 20.08, as it is the only tag whose CUDA toolkit and torch versions should be supported, but I do not manage to install via conda, pip, or from source; some libraries always end up not linked correctly. I followed all the instructions I could find in the README and in other issues.
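
To show what I mean by "not correctly linked": this is roughly how I check which shared library fails to resolve (the site-packages path below is only an example; use whatever pip show reports):

# Where did pip put the package?
pip show torch-scatter | grep Location
# Any ldd entry marked "not found" points at the missing or incompatible library.
ldd /opt/conda/lib/python3.8/site-packages/torch_scatter/*.so | grep -i "not found"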

Will now try 20.03 as you pointed out. Would really appreciate some pointers :)

monsieurborges commented 1 year ago

Hey @jan-engelmann,

I am still using my dev environment, but it is a bit outdated.

I believe you need to check the CUDA version you are using. In my experience, the last version compatible with all the 3D libraries was CUDA 11.2. It may be possible to use a more recent version, but you have to check compatibility with all the libraries you use.
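
For what it is worth, this is how I check which CUDA toolkit and torch build an image ships before pinning library versions (the tag below is only an example):

# CUDA toolkit bundled in the image
docker run --rm nvcr.io/nvidia/pytorch:21.02-py3 nvcc --version
# CUDA version this torch build was compiled against (no GPU needed for this check)
docker run --rm nvcr.io/nvidia/pytorch:21.02-py3 python -c "import torch; print(torch.version.cuda)"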

Here is how I used to install the libraries FROM nvcr.io/nvidia/pytorch:21.02-py3:

# Conda init
source ${CONDA_DIR}/etc/profile.d/conda.sh
conda activate ${CONDA_ENV}

# Install PyTorch
pip install --no-cache-dir \
    torch==1.8.0 \
    torchvision==0.9.0

pip install --no-cache-dir torch-points-kernels
pip install --no-cache-dir torchnet

# Install torch-geometric and dependencies
pip install --no-cache-dir torch-scatter
pip install --no-cache-dir torch-sparse
pip install --no-cache-dir torch-cluster
pip install --no-cache-dir torch-spline-conv
pip install --no-cache-dir torch-geometric

# Install MinkowskiEngine
apt-get install --yes --quiet --no-install-recommends libopenblas-dev
pip install --no-cache-dir --verbose --no-deps \
    --install-option="--blas=openblas" \
    MinkowskiEngine==v0.4.3

# Install torchsparse
apt-get install --yes --quiet --no-install-recommends libsparsehash-dev
pip install --no-cache-dir --upgrade git+https://github.com/mit-han-lab/torchsparse.git
pip install pycuda

# Install Torch Points 3D
pip install --no-cache-dir \
    omegaconf wandb plyfile hydra-core==0.11.3 pytorch-metric-learning

# pip install --no-cache-dir torch-points3d==1.2.0
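
After those installs, a quick sanity check like the following (with the GPU exposed to the container, e.g. docker run --gpus all) confirms that the scatter kernels actually run on the device:

# Two groups of two ones should each sum to 2 -> tensor([2., 2.], device='cuda:0')
python -c "import torch, torch_scatter; x = torch.ones(4, device='cuda'); idx = torch.tensor([0, 0, 1, 1], device='cuda'); print(torch_scatter.scatter_add(x, idx))"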

jan-engelmann commented 1 year ago

thanks a lot @mgomesborges !!!

I ended up using the PyTorch image on Docker Hub; it has fewer extra dependencies than the nvcr version.

"docker://pytorch/pytorch:2.0.0-cuda11.7-cudnn8-runtime" from that I can install pytorch-scatter with this:

pip3 install torch-scatter -f https://data.pyg.org/whl/torch-2.0.0+cu117.html

And everything works!
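
For other torch/CUDA combinations, the same wheel index pattern should work as long as the versions match what the container reports; TORCH and CUDA below are placeholders (for the image above they would be 2.0.0 and cu117):

pip3 install torch-scatter -f https://data.pyg.org/whl/torch-${TORCH}+${CUDA}.html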

Thanks anyway. If I end up needing the other libraries, I'll come back to your response :)