Closed: ViitasaariVille closed this issue 2 years ago.
@ViitasaariVille: This issue is currently awaiting triage.
One of the @thoth-station/devs will take care of the issue, and will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
/triage accepted
/priority important-soon
/assign @harshad16
/remove-triage needs-information
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
@sesheta: Closing this issue.
Describe the bug
TensorFlow (version 2.6.0-rc0) doesn't recognize xla_gpu (a Tesla V100 in this case). Running nvidia-smi gives me: "NVIDIA-SMI 460.32.03, Driver Version: 460.32.03, CUDA Version: 11.2".

To Reproduce
Steps to reproduce the behavior:

import tensorflow as tf
tf.test.is_gpu_available()         # This gives me False
tf.config.list_physical_devices()  # This returns [PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU')] for me
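A common cause of TensorFlow reporting only a CPU device is a driver/CUDA version mismatch. As a quick sanity check, the driver version from nvidia-smi can be compared against the minimum the CUDA toolkit requires. This is a minimal sketch: `driver_supports_cuda` is a hypothetical helper, and the 460.27.03 minimum for CUDA 11.2 is an assumption based on NVIDIA's release notes that should be verified for the exact CUDA point release in use.

```python
def driver_supports_cuda(driver_version: str, minimum: str = "460.27.03") -> bool:
    """Numerically compare dotted NVIDIA driver version strings.

    The default minimum is an assumed value for CUDA 11.2 on Linux;
    check NVIDIA's CUDA release notes for your exact toolkit version.
    """
    def as_tuple(version: str):
        return tuple(int(part) for part in version.split("."))

    return as_tuple(driver_version) >= as_tuple(minimum)

# Driver reported by nvidia-smi in this issue:
print(driver_supports_cuda("460.32.03"))  # → True, so the driver itself looks new enough
```

If the driver passes this check, the usual next suspects are whether the installed TensorFlow wheel was actually built with CUDA support and whether the matching CUDA/cuDNN shared libraries are on the loader path (missing libraries typically show up as warnings when `import tensorflow` runs).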