For inference, the command is
python -m allosaurus.run [--lang <language name>] [--model <model name>] [--device_id <gpu_id>] -i <audio>
However, specifying any device ID other than 0 (say, 1) still runs inference on GPU 0.
Currently, the following workaround runs inference on a GPU other than 0, but I believe the device_id argument was intended to select the GPU directly.
CUDA_VISIBLE_DEVICES=<gpu_id> python -m allosaurus.run [--lang <language name>] [--model <model name>] [--device_id 0] -i <audio>
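The workaround relies on CUDA device renumbering: setting CUDA_VISIBLE_DEVICES restricts which physical GPUs the process can see, and the remaining devices are renumbered starting from 0, so --device_id 0 inside the process maps to the physical GPU you picked. A minimal sketch of launching it this way from Python (build_inference_command is a hypothetical helper, not part of allosaurus):

```python
import os

def build_inference_command(audio_path, gpu_id, lang=None, model=None):
    """Build the workaround command line and environment.

    Pins the process to one physical GPU via CUDA_VISIBLE_DEVICES, so that
    device 0 inside the process maps to physical GPU `gpu_id`, then passes
    --device_id 0 to allosaurus as in the workaround above.
    """
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_id))
    cmd = ["python", "-m", "allosaurus.run"]
    if lang:
        cmd += ["--lang", lang]
    if model:
        cmd += ["--model", model]
    cmd += ["--device_id", "0", "-i", audio_path]
    return cmd, env

# The returned cmd/env pair could then be run with subprocess.run(cmd, env=env).
```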