Closed bmaltais closed 6 years ago
Trying to build a Docker container. If I get it working, I will share it so others don't have to go through the pain of the manual setup ;-)
Great work! I am trying to install the application and I have been running into an issue:
TypeError: argument 0 is not a Variable
@bmaltais Thanks! I think the issue is that you are using an outdated version of PyTorch, v0.3. In version 0.4, the Tensor and Variable classes were merged, and that's the version the code was tested with. When 0.4 was released, I removed the references to Variable and started using .item(), which extracts a plain Python number from a one-element tensor.
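For anyone hitting this error on an older install, one way to fail fast is to check the PyTorch version before running anything. The helper below is only a sketch (it is not part of the repo) that treats anything before 0.4 as incompatible:

```python
def is_pre_04(version):
    """Return True for PyTorch versions older than 0.4 (e.g. "0.3.1").

    Strips a "+gitsha" local part and "a0"-style alpha tags before
    comparing (major, minor) numerically.
    """
    release = version.split("+")[0]        # drop "+gitsha" local part
    major, minor = release.split(".")[:2]
    minor = minor.split("a")[0] or "0"     # drop alpha tags like "4a0"
    return (int(major), int(minor)) < (0, 4)

# In a real script one would check torch.__version__:
#   import torch
#   if is_pre_04(torch.__version__):
#       raise RuntimeError("neural-style-pt needs PyTorch >= 0.4")
print(is_pre_04("0.3.1"))  # True: Variable/Tensor not yet merged
print(is_pre_04("0.4.0"))  # False
```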
Any idea? Is there a hard dependency on CUDA 9.1 / cuDNN 7.1?
Any CUDA or cuDNN version that PyTorch supports should work.
As for Python, I have tested and made sure that all the Python scripts can run with both Python 2 and 3.
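For reference, dual Python 2/3 compatibility in scripts like these usually comes down to a couple of __future__ imports; a minimal illustration (not quoted from the repo):

```python
# Make print a function and "/" true division on Python 2 as well,
# so the same source behaves identically under both interpreters.
from __future__ import division, print_function

print("3 / 2 =", 3 / 2)   # true division on both interpreters: 1.5
print("floor:", 3 // 2)   # explicit floor division on both: 1
```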
I have been running into the issue because my GPU is too old for the latest PyTorch version.
As per the installation guide, you will likely need to install from source in order to use your GPU:
Note that in order to reduce their size, the pre-packaged binary releases (pip, Conda, etc...) have removed support for some older GPUs, and thus you will have to install from source in order to use these GPUs.
Once you have installed from source, I have found that you can re-run python setup.py install
(or possibly with python3) if the GPU has changed, and the installation will complete much more quickly. This also makes sure that the appropriate GPU binaries are used, though I have only tested this across different AWS instances.
Also, you don't have to install the Torchvision package from source. You can likely just use pip or Conda.
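One more knob worth knowing when building from source for an older card: PyTorch's source build honours the TORCH_CUDA_ARCH_LIST environment variable, so you can pin the build to your GPU's compute capability instead of compiling every architecture. The 5.2 value below is only an example (a Maxwell-era card); look up your card's capability in NVIDIA's documentation:

```shell
# Pin the CUDA architectures the source build compiles kernels for.
# "5.2" is an example value; substitute your own GPU's capability.
export TORCH_CUDA_ARCH_LIST="5.2"
echo "Building PyTorch kernels for compute capability: $TORCH_CUDA_ARCH_LIST"
# Then, inside the pytorch checkout:
#   python setup.py install
```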
It worked from source. In case others are interested in building a container that is ready to run neural-style-pt on a system with an older NVIDIA GPU, you can use this Dockerfile to build it:
FROM nvidia/cuda:8.0-cudnn7-devel-ubuntu16.04
ENV ANACONDA /opt/anaconda3
ENV CUDA_PATH /usr/local/cuda
ENV PATH ${ANACONDA}/bin:${CUDA_PATH}/bin:$PATH
ENV LD_LIBRARY_PATH ${ANACONDA}/lib:${CUDA_PATH}/lib64:$LD_LIBRARY_PATH
ENV C_INCLUDE_PATH ${CUDA_PATH}/include
ENV CMAKE_PREFIX_PATH ${ANACONDA}/
RUN apt-get update && \
apt-get install -y wget build-essential git && \
apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN wget https://repo.continuum.io/archive/Anaconda3-5.2.0-Linux-x86_64.sh -P /tmp && \
bash /tmp/Anaconda3-5.2.0-Linux-x86_64.sh -b -p $ANACONDA && \
rm -rf /tmp/Anaconda3-5.2.0-Linux-x86_64.sh
# Install basic dependencies
RUN conda install -y numpy pyyaml mkl mkl-include setuptools cmake cffi typing && \
conda install -y -c mingfeima mkldnn && \
conda install -y -c pytorch magma-cuda80 \
&& conda clean -ya
# Build pytorch and vision from code
RUN mkdir /app && \
cd /app && \
git clone --recursive https://github.com/pytorch/pytorch && \
cd /app/pytorch && \
python setup.py install && \
cd /app && \
git clone --recursive https://github.com/pytorch/vision && \
cd /app/vision && \
python setup.py install
Build the Docker container with:
docker build -t pytorch-cuda8.0-cudnn7-devel-ubuntu16.04 .
It will take about an hour to put everything together if you have a fast internet connection.
Here is the next Dockerfile to actually install neural-style-pt:
FROM pytorch-cuda8.0-cudnn7-devel-ubuntu16.04
RUN cd /app && \
git clone https://github.com/ProGamerGov/neural-style-pt.git && \
cd /app/neural-style-pt && \
python models/download_models.py
WORKDIR /app/neural-style-pt
Build the Docker container with:
docker build -t neural-style-pt .
Then run the code with:
nvidia-docker run -it neural-style-pt
What version of Python and PyTorch are you using?
I am starting from this docker image built with the Dockerfile containing:
but I get this error when I run: python neural_style.py -gpu 0 -backend cudnn -print_iter 1