Try with a big model instead
In general, GPU decoders are meant for large-scale parallel processing, not for a single stream like on a robot.
Also, the default build has no GPU support; you need to build from source to enable it.
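To illustrate that last point: with the stock pip wheel, constructing a BatchModel raises the same "Failed to create a model" exception reported in this thread, because the prebuilt library is compiled without CUDA. Here is a minimal sketch (the model path is just an example) that probes for GPU batch support and falls back to the CPU Model:

```python
from vosk import BatchModel, Model

MODEL_PATH = "/root/.cache/vosk/vosk-model-en-us-0.22"  # example path; adjust to your setup

try:
    # Succeeds only when libvosk was built from source with CUDA enabled
    model = BatchModel(MODEL_PATH)
    use_gpu = True
except Exception:
    # Stock pip builds typically end up here: the batch model constructor
    # fails without CUDA, and the wrapper raises "Failed to create a model"
    model = Model(MODEL_PATH)
    use_gpu = False

print("GPU batch decoding available:", use_gpu)
```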
What do you mean by build from source? Can't I use the GPU with "test_gpu_batch" as-is? Can you describe how to use the GPU?
I tried the model "vosk-model-en-us-0.22", but got the same error:
```
Traceback (most recent call last):
  File "/root/multimodal control robot/voice2text/vosk-api/python/example/test_gpu_batch.py", line 13, in <module>
```
You have to compile from source
I wanted to test inference on GPU with BatchModel, so I started by downloading vosk-model-small-en-us-0.15.
The test_simple example works fine for this model with:

```python
model = Model("/root/.cache/vosk/vosk-model-small-en-us-0.15")
```
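For reference, the full CPU decoding loop around that call looks roughly like this, following test_simple.py (the WAV path is a placeholder for a 16 kHz mono 16-bit PCM file):

```python
import wave
from vosk import Model, KaldiRecognizer

model = Model("/root/.cache/vosk/vosk-model-small-en-us-0.15")

wf = wave.open("test.wav", "rb")  # placeholder input file
rec = KaldiRecognizer(model, wf.getframerate())

# Feed audio in chunks; print a result whenever an utterance completes
while True:
    data = wf.readframes(4000)
    if len(data) == 0:
        break
    if rec.AcceptWaveform(data):
        print(rec.Result())

print(rec.FinalResult())
```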
Using the test_gpu_batch example, however, yields the following error:
```
Traceback (most recent call last):
  File "/root/multimodal control robot/voice2text/vosk-api/python/example/test_gpu_batch.py", line 13, in <module>
    model = BatchModel("/root/.cache/vosk/vosk-model-small-en-us-0.15")
  File "/root/anaconda3/envs/mytorch/lib/python3.8/site-packages/vosk/__init__.py", line 243, in __init__
    raise Exception("Failed to create a model")
Exception: Failed to create a model
```
I would like to know what I can do to make it work.
Environment Details:
- Vosk version: 0.3.45
- Python version: 3.8.18
- OS: Ubuntu 20.04.1
- CUDA: 12.1
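For completeness, the GPU path being attempted looks roughly like this, following test_gpu_batch.py (method names as in the vosk 0.3.45 Python wrapper; this only works on a from-source build with CUDA enabled, and, per the advice above, with a big model such as vosk-model-en-us-0.22 rather than the small one):

```python
import wave
from vosk import BatchModel, BatchRecognizer, GpuInit

GpuInit()  # initialize CUDA before creating the batch model

# Assumption: GPU-enabled libvosk and a full-size model, per the maintainer's advice
model = BatchModel("/root/.cache/vosk/vosk-model-en-us-0.22")

wf = wave.open("test.wav", "rb")  # placeholder: 16 kHz mono 16-bit PCM WAV
rec = BatchRecognizer(model, 16000)

while True:
    data = wf.readframes(8000)
    if len(data) == 0:
        break
    rec.AcceptWaveform(data)
    # Let the GPU batch pipeline drain before polling for results
    while rec.GetPendingChunks() > 0:
        model.Wait()
    res = rec.Result()
    if len(res) != 0:
        print(res)

rec.FinishStream()
while rec.GetPendingChunks() > 0:
    model.Wait()
print(rec.Result())
```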