pmh47 / dirt

DIRT: a fast differentiable renderer for TensorFlow
MIT License
312 stars 63 forks

AttributeError: 'NoneType' object has no attribute 'rasterise' #109

Open dev2021-ctrl opened 3 years ago

dev2021-ctrl commented 3 years ago

```
Traceback (most recent call last):
  File "tests/square_test.py", line 61, in <module>
    main()
  File "tests/square_test.py", line 47, in main
    dirt_pixels = get_dirt_pixels().eval()
  File "tests/square_test.py", line 35, in get_dirt_pixels
    height=canvas_height, width=canvas_width, channels=1
  File "/home/ubuntu/dirt/dirt/rasterise_ops.py", line 49, in rasterise
    return rasterise_batch(background[None], vertices[None], vertex_colors[None], faces[None], height, width, channels, name)[0]
  File "/home/ubuntu/dirt/dirt/rasterise_ops.py", line 82, in rasterise_batch
    return _rasterise_module.rasterise(
AttributeError: 'NoneType' object has no attribute 'rasterise'
```

@pmh47 please advise, this is a bit urgent. I have tried reinstalling the tensorflow-gpu packages and libopengl, and exporting LD_LIBRARY_PATH, but I still get the same error.
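
For context: the AttributeError at the bottom is a symptom rather than the root cause. dirt/rasterise_ops.py loads the native extension librasterise.so at import time and, if that load fails, leaves the module handle as None, so every later call fails exactly as above. A minimal sketch of that pattern (not DIRT's exact code; the path and warning text are assumptions modelled on the log posted further down in this thread):

```python
import tensorflow as tf

# Sketch of the load-or-None pattern (names and path assumed, not DIRT's
# exact source). If the shared library is missing or fails to link,
# _rasterise_module stays None, and a later call such as
# _rasterise_module.rasterise(...) raises
# AttributeError: 'NoneType' object has no attribute 'rasterise'.
try:
    _rasterise_module = tf.load_op_library('/path/to/dirt/librasterise.so')
except Exception as error:
    print('WARNING: failed to load librasterise.so; '
          'rasterisation functions will be unavailable:', error)
    _rasterise_module = None
```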

no-Seaweed commented 2 years ago

I ran into the same issue.

no-Seaweed commented 2 years ago

Hi there,

Thank you for sharing your great work with us; I appreciate it. I am currently facing the exact same problem described in this and another thread. I checked the folder where librasterise.so should be located, but the file is not there.
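
One way to print the location DIRT actually expects is the small sketch below; it assumes the layout visible in the tracebacks in this thread, where librasterise.so sits next to rasterise_ops.py inside the dirt package:

```python
import os

import dirt

# The tracebacks show rasterise_ops.py inside the dirt package directory;
# librasterise.so is expected alongside it (layout assumed from those paths).
pkg_dir = os.path.dirname(dirt.__file__)
so_path = os.path.join(pkg_dir, 'librasterise.so')
print(so_path, '->', 'found' if os.path.isfile(so_path) else 'MISSING')
```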

My CUDA version is 10.0.130, CMake 3.10.3, CuDNN 7.6.5, GCC 4.8.5, tensorflow-gpu 1.13.1. After setting the environment variables suggested in previous issues (CUDA_HOME, LD_LIBRARY_PATH, PATH and CMAKE_LIBRARY_PATH), I was able to install the DIRT repo successfully (at least it appears so :/). My operating system is Linux 10-108-80-111 4.15.0-142-generic #146~16.04.1-Ubuntu SMP Tue Apr 13 09:27:15 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux.
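
A quick way to confirm those variables are actually visible to the Python process that builds and runs DIRT (the variable names are the ones listed above; values will differ per machine):

```python
import os

# An empty CUDA_HOME or LD_LIBRARY_PATH here often explains a build that
# "succeeds" but silently skips or mislinks the native extension.
for var in ('CUDA_HOME', 'LD_LIBRARY_PATH', 'PATH', 'CMAKE_LIBRARY_PATH'):
    print(var, '=', os.environ.get(var, '<unset>'))
```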

```
Processing /home/fangshukai/dirt-master
Requirement already satisfied: tensorflow-gpu>=1.6 in /home/fangshukai/miniconda3/envs/MultiGarmentNet/lib/python3.5/site-packages (from dirt==0.3.0) (1.13.2)
Requirement already satisfied: numpy in /home/fangshukai/miniconda3/envs/MultiGarmentNet/lib/python3.5/site-packages (from dirt==0.3.0) (1.16.0)
Requirement already satisfied: protobuf>=3.6.1 in /home/fangshukai/miniconda3/envs/MultiGarmentNet/lib/python3.5/site-packages (from tensorflow-gpu>=1.6->dirt==0.3.0) (3.17.3)
Requirement already satisfied: gast>=0.2.0 in /home/fangshukai/miniconda3/envs/MultiGarmentNet/lib/python3.5/site-packages (from tensorflow-gpu>=1.6->dirt==0.3.0) (0.5.2)
Requirement already satisfied: tensorflow-estimator<1.14.0rc0,>=1.13.0 in /home/fangshukai/miniconda3/envs/MultiGarmentNet/lib/python3.5/site-packages (from tensorflow-gpu>=1.6->dirt==0.3.0) (1.13.0)
Requirement already satisfied: astor>=0.6.0 in /home/fangshukai/miniconda3/envs/MultiGarmentNet/lib/python3.5/site-packages (from tensorflow-gpu>=1.6->dirt==0.3.0) (0.8.1)
Requirement already satisfied: six>=1.10.0 in /home/fangshukai/miniconda3/envs/MultiGarmentNet/lib/python3.5/site-packages (from tensorflow-gpu>=1.6->dirt==0.3.0) (1.16.0)
Requirement already satisfied: wheel>=0.26 in /home/fangshukai/miniconda3/envs/MultiGarmentNet/lib/python3.5/site-packages (from tensorflow-gpu>=1.6->dirt==0.3.0) (0.36.2)
Requirement already satisfied: absl-py>=0.1.6 in /home/fangshukai/miniconda3/envs/MultiGarmentNet/lib/python3.5/site-packages (from tensorflow-gpu>=1.6->dirt==0.3.0) (0.13.0)
Requirement already satisfied: tensorboard<1.14.0,>=1.13.0 in /home/fangshukai/miniconda3/envs/MultiGarmentNet/lib/python3.5/site-packages (from tensorflow-gpu>=1.6->dirt==0.3.0) (1.13.1)
Requirement already satisfied: keras-preprocessing>=1.0.5 in /home/fangshukai/miniconda3/envs/MultiGarmentNet/lib/python3.5/site-packages (from tensorflow-gpu>=1.6->dirt==0.3.0) (1.1.2)
Requirement already satisfied: grpcio>=1.8.6 in /home/fangshukai/miniconda3/envs/MultiGarmentNet/lib/python3.5/site-packages (from tensorflow-gpu>=1.6->dirt==0.3.0) (1.40.0)
Requirement already satisfied: termcolor>=1.1.0 in /home/fangshukai/miniconda3/envs/MultiGarmentNet/lib/python3.5/site-packages (from tensorflow-gpu>=1.6->dirt==0.3.0) (1.1.0)
Requirement already satisfied: keras-applications>=1.0.6 in /home/fangshukai/miniconda3/envs/MultiGarmentNet/lib/python3.5/site-packages (from tensorflow-gpu>=1.6->dirt==0.3.0) (1.0.8)
Requirement already satisfied: mock>=2.0.0 in /home/fangshukai/miniconda3/envs/MultiGarmentNet/lib/python3.5/site-packages (from tensorflow-estimator<1.14.0rc0,>=1.13.0->tensorflow-gpu>=1.6->dirt==0.3.0) (3.0.5)
Requirement already satisfied: markdown>=2.6.8 in /home/fangshukai/miniconda3/envs/MultiGarmentNet/lib/python3.5/site-packages (from tensorboard<1.14.0,>=1.13.0->tensorflow-gpu>=1.6->dirt==0.3.0) (3.2.2)
Requirement already satisfied: werkzeug>=0.11.15 in /home/fangshukai/miniconda3/envs/MultiGarmentNet/lib/python3.5/site-packages (from tensorboard<1.14.0,>=1.13.0->tensorflow-gpu>=1.6->dirt==0.3.0) (1.0.1)
Requirement already satisfied: h5py in /home/fangshukai/miniconda3/envs/MultiGarmentNet/lib/python3.5/site-packages (from keras-applications>=1.0.6->tensorflow-gpu>=1.6->dirt==0.3.0) (2.10.0)
Requirement already satisfied: importlib-metadata; python_version < "3.8" in /home/fangshukai/miniconda3/envs/MultiGarmentNet/lib/python3.5/site-packages (from markdown>=2.6.8->tensorboard<1.14.0,>=1.13.0->tensorflow-gpu>=1.6->dirt==0.3.0) (2.1.1)
Requirement already satisfied: zipp>=0.5 in /home/fangshukai/miniconda3/envs/MultiGarmentNet/lib/python3.5/site-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tensorboard<1.14.0,>=1.13.0->tensorflow-gpu>=1.6->dirt==0.3.0) (1.2.0)
Building wheels for collected packages: dirt
  Running setup.py bdist_wheel for dirt ... done
  Stored in directory: /home/fangshukai/.cache/pip/wheels/f7/b6/3a/f2a258821d145fac933046b4fe260c0a9bac8a98c4ad1bfe0c
Successfully built dirt
Installing collected packages: dirt
  Found existing installation: dirt 0.3.0
    Uninstalling dirt-0.3.0:
      Successfully uninstalled dirt-0.3.0
Successfully installed dirt-0.3.0
```

When I run tests/square_test.py, the entire log is as follows:

```
(MultiGarmentNet) fangshukai@10-108-80-111:~/dirt-master$ python tests/square_test.py
WARNING: failed to load librasterise.so; rasterisation functions will be unavailable: /home/fangshukai/dirt-master/dirt/librasterise.so: cannot open shared object file: No such file or directory
2021-09-13 15:05:03.768754: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2021-09-13 15:05:08.980765: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x4964910 executing computations on platform CUDA. Devices:
2021-09-13 15:05:08.981128: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): TITAN X (Pascal), Compute Capability 6.1
2021-09-13 15:05:08.981271: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (1): TITAN X (Pascal), Compute Capability 6.1
2021-09-13 15:05:08.981400: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (2): TITAN X (Pascal), Compute Capability 6.1
2021-09-13 15:05:08.981493: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (3): TITAN X (Pascal), Compute Capability 6.1
2021-09-13 15:05:08.981523: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (4): TITAN X (Pascal), Compute Capability 6.1
2021-09-13 15:05:08.981621: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (5): TITAN X (Pascal), Compute Capability 6.1
2021-09-13 15:05:08.981650: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (6): TITAN X (Pascal), Compute Capability 6.1
2021-09-13 15:05:08.981674: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (7): TITAN X (Pascal), Compute Capability 6.1
2021-09-13 15:05:09.023536: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2200075000 Hz
2021-09-13 15:05:09.026460: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x4aff2d0 executing computations on platform Host. Devices:
2021-09-13 15:05:09.026527: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): <undefined>, <undefined>
2021-09-13 15:05:09.026859: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties: name: TITAN X (Pascal) major: 6 minor: 1 memoryClockRate(GHz): 1.531 pciBusID: 0000:04:00.0 totalMemory: 11.91GiB freeMemory: 11.77GiB
2021-09-13 15:05:09.027011: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 1 with properties: name: TITAN X (Pascal) major: 6 minor: 1 memoryClockRate(GHz): 1.531 pciBusID: 0000:06:00.0 totalMemory: 11.91GiB freeMemory: 11.77GiB
2021-09-13 15:05:09.027145: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 2 with properties: name: TITAN X (Pascal) major: 6 minor: 1 memoryClockRate(GHz): 1.531 pciBusID: 0000:07:00.0 totalMemory: 11.91GiB freeMemory: 11.77GiB
2021-09-13 15:05:09.027273: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 3 with properties: name: TITAN X (Pascal) major: 6 minor: 1 memoryClockRate(GHz): 1.531 pciBusID: 0000:08:00.0 totalMemory: 11.91GiB freeMemory: 11.77GiB
2021-09-13 15:05:09.027406: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 4 with properties: name: TITAN X (Pascal) major: 6 minor: 1 memoryClockRate(GHz): 1.531 pciBusID: 0000:0c:00.0 totalMemory: 11.91GiB freeMemory: 2.49GiB
2021-09-13 15:05:09.027535: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 5 with properties: name: TITAN X (Pascal) major: 6 minor: 1 memoryClockRate(GHz): 1.531 pciBusID: 0000:0d:00.0 totalMemory: 11.91GiB freeMemory: 2.58GiB
2021-09-13 15:05:09.027659: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 6 with properties: name: TITAN X (Pascal) major: 6 minor: 1 memoryClockRate(GHz): 1.531 pciBusID: 0000:0e:00.0 totalMemory: 11.91GiB freeMemory: 2.58GiB
2021-09-13 15:05:09.027783: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 7 with properties: name: TITAN X (Pascal) major: 6 minor: 1 memoryClockRate(GHz): 1.531 pciBusID: 0000:0f:00.0 totalMemory: 11.91GiB freeMemory: 2.58GiB
2021-09-13 15:05:09.056918: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1370] Unable to enable peer access between device ordinals 1 and 4, status: Internal: failed to enable peer access from 0x7f9604718e80 to 0x7f9614724dc0: CUDA_ERROR_TOO_MANY_PEERS: peer mapping resources exhausted
2021-09-13 15:05:09.098339: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1370] Unable to enable peer access between device ordinals 1 and 5, status: Internal: failed to enable peer access from 0x7f9604718e80 to 0x7f961c735370: CUDA_ERROR_TOO_MANY_PEERS: peer mapping resources exhausted
2021-09-13 15:05:09.098571: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1370] Unable to enable peer access between device ordinals 1 and 6, status: Internal: failed to enable peer access from 0x7f9604718e80 to 0x7f961073b020: CUDA_ERROR_TOO_MANY_PEERS: peer mapping resources exhausted
2021-09-13 15:05:09.098785: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1370] Unable to enable peer access between device ordinals 1 and 7, status: Internal: failed to enable peer access from 0x7f9604718e80 to 0x7f95f871db40: CUDA_ERROR_TOO_MANY_PEERS: peer mapping resources exhausted
2021-09-13 15:05:09.101460: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1370] Unable to enable peer access between device ordinals 2 and 4, status: Internal: failed to enable peer access from 0x7f960c6e00e0 to 0x7f9614724dc0: CUDA_ERROR_TOO_MANY_PEERS: peer mapping resources exhausted
2021-09-13 15:05:09.101676: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1370] Unable to enable peer access between device ordinals 2 and 5, status: Internal: failed to enable peer access from 0x7f960c6e00e0 to 0x7f961c735370: CUDA_ERROR_TOO_MANY_PEERS: peer mapping resources exhausted
2021-09-13 15:05:09.101887: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1370] Unable to enable peer access between device ordinals 2 and 6, status: Internal: failed to enable peer access from 0x7f960c6e00e0 to 0x7f961073b020: CUDA_ERROR_TOO_MANY_PEERS: peer mapping resources exhausted
2021-09-13 15:05:09.102097: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1370] Unable to enable peer access between device ordinals 2 and 7, status: Internal: failed to enable peer access from 0x7f960c6e00e0 to 0x7f95f871db40: CUDA_ERROR_TOO_MANY_PEERS: peer mapping resources exhausted
2021-09-13 15:05:09.102469: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1370] Unable to enable peer access between device ordinals 3 and 4, status: Internal: failed to enable peer access from 0x7f9608735570 to 0x7f9614724dc0: CUDA_ERROR_TOO_MANY_PEERS: peer mapping resources exhausted
2021-09-13 15:05:09.102679: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1370] Unable to enable peer access between device ordinals 3 and 5, status: Internal: failed to enable peer access from 0x7f9608735570 to 0x7f961c735370: CUDA_ERROR_TOO_MANY_PEERS: peer mapping resources exhausted
2021-09-13 15:05:09.102887: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1370] Unable to enable peer access between device ordinals 3 and 6, status: Internal: failed to enable peer access from 0x7f9608735570 to 0x7f961073b020: CUDA_ERROR_TOO_MANY_PEERS: peer mapping resources exhausted
2021-09-13 15:05:09.103095: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1370] Unable to enable peer access between device ordinals 3 and 7, status: Internal: failed to enable peer access from 0x7f9608735570 to 0x7f95f871db40: CUDA_ERROR_TOO_MANY_PEERS: peer mapping resources exhausted
2021-09-13 15:05:09.103373: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1370] Unable to enable peer access between device ordinals 4 and 1, status: Internal: failed to enable peer access from 0x7f9614724dc0 to 0x7f9604718e80: CUDA_ERROR_TOO_MANY_PEERS: peer mapping resources exhausted
2021-09-13 15:05:09.103584: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1370] Unable to enable peer access between device ordinals 4 and 2, status: Internal: failed to enable peer access from 0x7f9614724dc0 to 0x7f960c6e00e0: CUDA_ERROR_TOO_MANY_PEERS: peer mapping resources exhausted
2021-09-13 15:05:09.103791: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1370] Unable to enable peer access between device ordinals 4 and 3, status: Internal: failed to enable peer access from 0x7f9614724dc0 to 0x7f9608735570: CUDA_ERROR_TOO_MANY_PEERS: peer mapping resources exhausted
2021-09-13 15:05:09.104586: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1370] Unable to enable peer access between device ordinals 5 and 1, status: Internal: failed to enable peer access from 0x7f961c735370 to 0x7f9604718e80: CUDA_ERROR_TOO_MANY_PEERS: peer mapping resources exhausted
2021-09-13 15:05:09.104796: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1370] Unable to enable peer access between device ordinals 5 and 2, status: Internal: failed to enable peer access from 0x7f961c735370 to 0x7f960c6e00e0: CUDA_ERROR_TOO_MANY_PEERS: peer mapping resources exhausted
2021-09-13 15:05:09.105006: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1370] Unable to enable peer access between device ordinals 5 and 3, status: Internal: failed to enable peer access from 0x7f961c735370 to 0x7f9608735570: CUDA_ERROR_TOO_MANY_PEERS: peer mapping resources exhausted
2021-09-13 15:05:09.139776: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1370] Unable to enable peer access between device ordinals 6 and 1, status: Internal: failed to enable peer access from 0x7f961073b020 to 0x7f9604718e80: CUDA_ERROR_TOO_MANY_PEERS: peer mapping resources exhausted
2021-09-13 15:05:09.141791: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1370] Unable to enable peer access between device ordinals 6 and 2, status: Internal: failed to enable peer access from 0x7f961073b020 to 0x7f960c6e00e0: CUDA_ERROR_TOO_MANY_PEERS: peer mapping resources exhausted
2021-09-13 15:05:09.142763: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1370] Unable to enable peer access between device ordinals 6 and 3, status: Internal: failed to enable peer access from 0x7f961073b020 to 0x7f9608735570: CUDA_ERROR_TOO_MANY_PEERS: peer mapping resources exhausted
2021-09-13 15:05:09.144144: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1370] Unable to enable peer access between device ordinals 7 and 1, status: Internal: failed to enable peer access from 0x7f95f871db40 to 0x7f9604718e80: CUDA_ERROR_TOO_MANY_PEERS: peer mapping resources exhausted
2021-09-13 15:05:09.146194: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1370] Unable to enable peer access between device ordinals 7 and 2, status: Internal: failed to enable peer access from 0x7f95f871db40 to 0x7f960c6e00e0: CUDA_ERROR_TOO_MANY_PEERS: peer mapping resources exhausted
2021-09-13 15:05:09.147246: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1370] Unable to enable peer access between device ordinals 7 and 3, status: Internal: failed to enable peer access from 0x7f95f871db40 to 0x7f9608735570: CUDA_ERROR_TOO_MANY_PEERS: peer mapping resources exhausted
2021-09-13 15:05:09.148303: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0, 1, 2, 3, 4, 5, 6, 7
2021-09-13 15:05:09.204556: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-09-13 15:05:09.219432: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0 1 2 3 4 5 6 7
2021-09-13 15:05:09.219652: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N Y Y Y Y Y Y Y
2021-09-13 15:05:09.219774: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 1: Y N Y Y Y Y Y Y
2021-09-13 15:05:09.219895: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 2: Y Y N Y Y Y Y Y
2021-09-13 15:05:09.219989: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 3: Y Y Y N Y Y Y Y
2021-09-13 15:05:09.220112: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 4: Y Y Y Y N Y Y Y
2021-09-13 15:05:09.220225: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 5: Y Y Y Y Y N Y Y
2021-09-13 15:05:09.220318: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 6: Y Y Y Y Y Y N Y
2021-09-13 15:05:09.220427: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 7: Y Y Y Y Y Y Y N
2021-09-13 15:05:09.221087: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 11446 MB memory) -> physical GPU (device: 0, name: TITAN X (Pascal), pci bus id: 0000:04:00.0, compute capability: 6.1)
2021-09-13 15:05:09.225170: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 11446 MB memory) -> physical GPU (device: 1, name: TITAN X (Pascal), pci bus id: 0000:06:00.0, compute capability: 6.1)
2021-09-13 15:05:09.244745: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:2 with 11446 MB memory) -> physical GPU (device: 2, name: TITAN X (Pascal), pci bus id: 0000:07:00.0, compute capability: 6.1)
2021-09-13 15:05:09.264280: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:3 with 11446 MB memory) -> physical GPU (device: 3, name: TITAN X (Pascal), pci bus id: 0000:08:00.0, compute capability: 6.1)
2021-09-13 15:05:09.279277: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:4 with 2246 MB memory) -> physical GPU (device: 4, name: TITAN X (Pascal), pci bus id: 0000:0c:00.0, compute capability: 6.1)
2021-09-13 15:05:09.294221: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:5 with 2340 MB memory) -> physical GPU (device: 5, name: TITAN X (Pascal), pci bus id: 0000:0d:00.0, compute capability: 6.1)
2021-09-13 15:05:09.308995: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:6 with 2340 MB memory) -> physical GPU (device: 6, name: TITAN X (Pascal), pci bus id: 0000:0e:00.0, compute capability: 6.1)
2021-09-13 15:05:09.321196: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:7 with 2340 MB memory) -> physical GPU (device: 7, name: TITAN X (Pascal), pci bus id: 0000:0f:00.0, compute capability: 6.1)
Traceback (most recent call last):
  File "tests/square_test.py", line 61, in <module>
    main()
  File "tests/square_test.py", line 47, in main
    dirt_pixels = get_dirt_pixels().eval()
  File "tests/square_test.py", line 35, in get_dirt_pixels
    height=canvas_height, width=canvas_width, channels=1
  File "/home/fangshukai/dirt-master/dirt/rasterise_ops.py", line 48, in rasterise
    return rasterise_batch(background[None], vertices[None], vertex_colors[None], faces[None], height, width, channels, name)[0]
  File "/home/fangshukai/dirt-master/dirt/rasterise_ops.py", line 81, in rasterise_batch
    return _rasterise_module.rasterise(
AttributeError: 'NoneType' object has no attribute 'rasterise'
```

If you need other information, please let me know. Thanks in advance!
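
The first warning line is the real problem: the shared object was never built or installed. One way to surface the underlying loader error, instead of DIRT's one-line warning, is to load the library directly (a sketch; the path is the one from the warning above):

```python
import tensorflow as tf

# Loading the extension directly re-raises the real error (file missing,
# unresolved symbol, ABI mismatch with this TF build, ...) rather than
# the swallowed one-line warning printed at import time.
tf.load_op_library('/home/fangshukai/dirt-master/dirt/librasterise.so')
```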

no-Seaweed commented 2 years ago

Update:

I solved this issue by running

```
cd dirt
mkdir build ; cd build
cmake ../csrc
make
cd ..
pip install -e .
```

instead of directly typing in `pip install .`
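
Presumably this works because the CMake step leaves the freshly built librasterise.so inside the source tree, and `pip install -e .` makes Python import straight from that tree, whereas a plain `pip install .` copies the package into site-packages, possibly without the built library. A quick sanity check after installing (a sketch; `_rasterise_module` is the internal handle visible in the tracebacks above):

```python
# Verify that the native extension now loads (name from the tracebacks above).
import dirt.rasterise_ops as ro

assert ro._rasterise_module is not None, 'librasterise.so still not loaded'
print('librasterise.so loaded OK')
```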

gongfc commented 2 years ago

I used the same sequence (`cd dirt ; mkdir build ; cd build ; cmake ../csrc ; make ; cd .. ; pip install -e .`), with tensorflow-1.15.0, python3.7 and cuda10.0, but I get the error:

```
none of 1 egl devices matches the active cuda device
```

Do I need to set CUDA_HOME, LD_LIBRARY_PATH, PATH and CMAKE_LIBRARY_PATH?
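
That EGL message means DIRT could not pair an EGL rendering device with the CUDA device TensorFlow selected. On a multi-GPU machine, one commonly suggested first step (an assumption, not a fix confirmed in this thread) is to pin everything to a single GPU before TensorFlow or DIRT initialise:

```python
import os

# Restrict both CUDA and DIRT's EGL device matching to one GPU; this must
# happen before tensorflow or dirt are imported. Device index 0 is assumed.
os.environ['CUDA_VISIBLE_DEVICES'] = '0'

import tensorflow as tf
import dirt
```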