naruya / dl_remote

Docker for everyday deep learning research on a remote server. (TensorFlow & PyTorch / JAX + VNC)
https://hub.docker.com/r/naruya/dl_remote

nvdiffrast #12

naruya opened this issue 1 year ago

https://github.com/NVlabs/nvdiffrast/blob/main/docker/Dockerfile

(work) ➜  nvdiffrast git:(main) ✗ ./run_sample.sh ./samples/torch/cube.py --resolution 32
Using container image: gltorch:latest
Running command: ./samples/torch/cube.py --resolution 32

=============
== PyTorch ==
=============

NVIDIA Release 23.03 (build 55416458)
PyTorch Version 2.0.0a0+1767026

Container image Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

Copyright (c) 2014-2023 Facebook Inc.
Copyright (c) 2011-2014 Idiap Research Institute (Ronan Collobert)
Copyright (c) 2012-2014 Deepmind Technologies    (Koray Kavukcuoglu)
Copyright (c) 2011-2012 NEC Laboratories America (Koray Kavukcuoglu)
Copyright (c) 2011-2013 NYU                      (Clement Farabet)
Copyright (c) 2006-2010 NEC Laboratories America (Ronan Collobert, Leon Bottou, Iain Melvin, Jason Weston)
Copyright (c) 2006      Idiap Research Institute (Samy Bengio)
Copyright (c) 2001-2004 Idiap Research Institute (Ronan Collobert, Samy Bengio, Johnny Mariethoz)
Copyright (c) 2015      Google Inc.
Copyright (c) 2015      Yangqing Jia
Copyright (c) 2013-2016 The Caffe contributors
All rights reserved.

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES.  All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

NOTE: The SHMEM allocation limit is set to the default of 64MB.  This may be
   insufficient for PyTorch.  NVIDIA recommends the use of the following flags:
   docker run --gpus all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 ...

--ipc=host --ulimit memlock=-1 --ulimit stack=67108864
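
Putting the note above into practice, a hypothetical full invocation with the recommended flags might look like the following (the image name `gltorch:latest` is taken from the log above; the command is a placeholder):

```shell
# Hedged example: NVIDIA's recommended flags applied to the image used above.
# Adjust the image name and trailing command for your own setup.
docker run --gpus all \
    --ipc=host \
    --ulimit memlock=-1 \
    --ulimit stack=67108864 \
    -it gltorch:latest bash
# --ipc=host              : share the host IPC namespace, avoiding the 64 MB SHMEM cap
# --ulimit memlock=-1     : unlimited locked-in-memory address space
# --ulimit stack=67108864 : 64 MB stack size
```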

naruya commented 1 year ago

https://github.com/NVlabs/nvdiffrast/issues/99#issuecomment-1327363601

The problem is most likely related to missing graphics drivers on the host operating system,

There is no nvidia-driver package installed inside WSL2 (only the container toolkit packages):

(work) ➜  ~ dpkg --get-selections | grep nvidia
libnvidia-container-tools                       install
libnvidia-container1:amd64                      install
nvidia-container-toolkit                        install
nvidia-container-toolkit-base                   install
nvidia-docker2                                  install
(work) ➜  ~ apt list --installed | grep nvidia-driver-xxx

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

(work) ➜  ~
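
The empty `nvidia-driver-xxx` result above is expected on WSL2: the driver is supplied by the Windows host and surfaced to the guest as libraries under `/usr/lib/wsl/lib`, not as an apt package. A minimal probe (assuming only the standard library; paths and results will vary per machine) to see which NVIDIA user-space libraries the loader can find:

```python
import ctypes.util
import os

# Hedged probe: check which driver-side libraries the dynamic loader sees.
# On WSL2 the driver comes from the Windows host and typically appears as
# /usr/lib/wsl/lib/libcuda.so, so no nvidia-driver-xxx apt package exists.
for name in ("cuda", "GL", "EGL"):
    path = ctypes.util.find_library(name)
    print(f"lib{name}: {path or 'not found'}")

# The WSL2-specific library directory, if this is a WSL2 guest:
print("WSL lib dir exists:", os.path.isdir("/usr/lib/wsl/lib"))
```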
naruya commented 1 year ago

https://github.com/NVlabs/nvdiffrast/issues/86

After spending some time trying to make this work, am I right to assume that, given WSL2's inability to support CUDA-OpenGL interop (https://docs.nvidia.com/cuda/wsl-user-guide/index.html), it is not possible to use this package in such a setting? Or am I missing something?
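
One possible workaround (an assumption, not confirmed in this thread): newer nvdiffrast releases (v0.3.0+) ship a pure-CUDA rasterizer, `RasterizeCudaContext`, which does not rely on CUDA-OpenGL interop and so might sidestep the WSL2 limitation. A sketch of swapping it in, assuming nvdiffrast and a CUDA-capable GPU are available:

```python
# Hedged sketch: try nvdiffrast's CUDA rasterizer instead of the GL one,
# since CUDA-OpenGL interop is the piece WSL2 does not support.
try:
    import nvdiffrast.torch as dr
    glctx = dr.RasterizeCudaContext()  # instead of dr.RasterizeGLContext()
    have_nvdiffrast = True
except Exception:
    # nvdiffrast (and a working GPU) are required to actually construct this
    have_nvdiffrast = False

print("nvdiffrast CUDA context available:", have_nvdiffrast)
```

Whether the bundled samples expose a switch for this context is not verified here; the sketch only shows the context substitution point.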