Open · Dramelac opened this issue 1 year ago
Is support needed for the oneAPI, ROCm, and CUDA APIs, or just GPU sharing?
> Is support needed for the oneAPI, ROCm, and CUDA APIs, or just GPU sharing?

This is one question we have to find an answer to. It depends on what's necessary. If there is one setup compatible with every GPU (AMD/NVIDIA) and hashcat, that would be nice.
> Is support needed for the oneAPI, ROCm, and CUDA APIs, or just GPU sharing?
>
> This is one question we have to find an answer to. It depends on what's necessary. If there is one setup compatible with every GPU (AMD/NVIDIA) and hashcat, that would be nice.

hashcat uses OpenCL, and the runtime is specific to each architecture, but I think it's possible to install different implementations at the same time.
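If it helps for testing, here is a minimal sketch to check that several runtimes really coexist: the OpenCL ICD loader exposes every installed implementation as a separate platform (this assumes `pyopencl` is available in the container, which is not part of the current image):

```python
# Minimal sketch: list every OpenCL platform/device the ICD loader can see.
# Assumes `pip install pyopencl`; each installed runtime (ROCm, Intel oneAPI,
# NVIDIA, POCL, ...) should appear as its own platform.
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name} ({platform.version})")
    for device in platform.get_devices():
        print(f"  Device: {device.name}")
```

hashcat goes through the same ICD loader, so whatever shows up here should be visible to it as well.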
> Is support needed for the oneAPI, ROCm, and CUDA APIs, or just GPU sharing?
>
> This is one question we have to find an answer to. It depends on what's necessary. If there is one setup compatible with every GPU (AMD/NVIDIA) and hashcat, that would be nice.
>
> hashcat uses OpenCL, and the runtime is specific to each architecture, but I think it's possible to install different implementations at the same time.

And I don't think sharing is possible on a Mac with Apple Silicon.
> And I don't think sharing is possible on a Mac with Apple Silicon.

Docker Desktop doesn't support device sharing, so I don't have a lot of hope; maybe with OrbStack on a Mac?
> I think it's possible to install different implementations at the same time

Yeah, I think so too, but it's more a question of space optimization. I don't think these drivers will be used by a lot of people, so let's keep it as small as possible.
> Docker Desktop doesn't support device sharing, so I don't have a lot of hope; maybe with OrbStack on a Mac?

No info about it in the OrbStack documentation.
For Docker Desktop, NVIDIA GPU sharing is possible with the CUDA toolkit.
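For reference, this is roughly what that looks like through the Docker SDK for Python (same effect as `docker run --gpus all`); the image tag is only an example and it assumes the NVIDIA Container Toolkit is set up on the host:

```python
# Rough sketch: run nvidia-smi in a throwaway container with the host GPU shared.
# Assumes the NVIDIA Container Toolkit is installed; the CUDA image tag is just an example.
import docker

client = docker.from_env()
output = client.containers.run(
    "nvidia/cuda:12.3.2-base-ubuntu22.04",
    "nvidia-smi",
    device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
    remove=True,
)
print(output.decode())
```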
Adding an option to automatically share the host GPU with the Exegol container.
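A rough sketch of what that option could pass to docker-py, depending on the vendor; the helper name is hypothetical and the AMD side assumes the usual ROCm device nodes:

```python
# Hypothetical helper for a "share host GPU" option; not actual Exegol code.
import docker


def gpu_run_kwargs(vendor: str) -> dict:
    """Extra kwargs for docker-py's containers.run() to expose the host GPU."""
    if vendor == "nvidia":
        # Same effect as `docker run --gpus all`; needs the NVIDIA Container Toolkit.
        return {
            "device_requests": [
                docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])
            ]
        }
    if vendor == "amd":
        # ROCm/OpenCL containers usually need the kernel driver and DRM device nodes.
        return {"devices": ["/dev/kfd:/dev/kfd", "/dev/dri:/dev/dri"]}
    # Other setups (Intel oneAPI, Apple Silicon): nothing automatic for now.
    return {}
```

Detecting the vendor, and whether the host runs native Linux, Docker Desktop, or OrbStack, is the part that still needs to be figured out.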