Open rickyyx opened 1 year ago
How many backends are there?
Not too sure - but ChatGPT says the following:
The specific control variables for each backend may vary, but here are some common environment variables used for controlling the number of threads used by various numerical computing libraries:
- `OMP_NUM_THREADS`: sets the number of threads used by the OpenMP library, a parallel programming API for shared-memory systems.
- `MKL_NUM_THREADS`: sets the number of threads used by the Intel Math Kernel Library (MKL).
- `CUDA_VISIBLE_DEVICES`: specifies which GPUs are visible to a CUDA-enabled application.
- `TF_NUM_INTEROP_THREADS`: sets the number of threads used by TensorFlow for inter-operation parallelism.
- `TF_NUM_INTRAOP_THREADS`: sets the number of threads used by TensorFlow for intra-operation parallelism.
- `TORCH_NUM_THREADS`: sets the number of threads used by PyTorch for parallel computation.
- `MAGMA_NUM_THREADS`: sets the number of threads used by the MAGMA library for GPU-accelerated linear algebra computations.
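As a minimal sketch of what setting these would look like: most of these libraries read the variables once at import or initialization time, so they have to be set in the environment before the library is imported. The variable list and the `limit_backend_threads` helper below are illustrative assumptions, not an existing API.

```python
import os

# Common thread-count variables (illustrative subset from the list above).
THREAD_ENV_VARS = [
    "OMP_NUM_THREADS",
    "MKL_NUM_THREADS",
    "TF_NUM_INTEROP_THREADS",
    "TF_NUM_INTRAOP_THREADS",
]

def limit_backend_threads(n: int) -> None:
    """Hypothetical helper: set each backend's thread-count variable to n.

    Must run before importing numpy/tensorflow/etc., since those libraries
    typically read the variables only once at import time.
    """
    for var in THREAD_ENV_VARS:
        os.environ[var] = str(n)

limit_backend_threads(1)
# After this point, importing e.g. numpy would pick up the 1-thread setting.
```

Note that `CUDA_VISIBLE_DEVICES` is different in kind: it controls GPU visibility rather than thread counts, so it would likely need separate handling.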
What happened + What you expected to happen
We are currently doing this for `OMP_NUM_THREADS` - should we do the same for the other backends as well?

Versions / Dependencies
master
Reproduction script
NA
Issue Severity
None