Open AnaCoda opened 11 months ago
Did you test the microphone?
Yes, the microphone was fine. It seems the Jetson was running out of memory. A combination of closing Firefox, reducing the recording duration, and adding swap memory seems to have fixed the problem.
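If anyone else hits this, a quick way to confirm you're memory-bound before launching the stream is to check available RAM and swap. This is just a stdlib sketch (not part of whisper-edge) that reads Linux's /proc/meminfo; the 1 GB threshold is an arbitrary guess, not a measured requirement:

```python
# Quick pre-flight check of memory and swap on the Jetson (Linux only).
# Stdlib-only sketch; thresholds are illustrative, not measured.

def meminfo_mb():
    """Return selected /proc/meminfo fields in megabytes."""
    wanted = {"MemTotal", "MemAvailable", "SwapTotal", "SwapFree"}
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, _, rest = line.partition(":")
            if key in wanted:
                info[key] = int(rest.split()[0]) // 1024  # kB -> MB
    return info

if __name__ == "__main__":
    info = meminfo_mb()
    for key, mb in sorted(info.items()):
        print(f"{key}: {mb} MB")
    if info.get("MemAvailable", 0) < 1024:  # hypothetical cutoff
        print("Warning: under 1 GB available; model loading may fail or swap heavily.")
```

If the process dies silently rather than erroring, `dmesg` is also worth checking for OOM-killer messages.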
However, this is not ideal, and for some reason inference latency is inconsistent (some runs are very fast, others take roughly twice as long) even with the tiny.en model. Any idea why this may be happening?
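One way to see whether the slow runs correlate with swapping or throttling is to log per-inference latency. A generic timing sketch follows; `dummy_transcribe` is a stand-in workload (the real call would be whisper's transcribe, which isn't loaded here):

```python
import statistics
import time

def timed(fn, *args, **kwargs):
    """Run fn and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

def dummy_transcribe(audio):
    """Stand-in for the real transcription call; any CPU-bound work will do."""
    return sum(x * x for x in audio)

if __name__ == "__main__":
    audio = [0.0] * 16000  # one second of "silence" at 16 kHz
    latencies = []
    for _ in range(5):
        _, dt = timed(dummy_transcribe, audio)
        latencies.append(dt)
    print(f"mean={statistics.mean(latencies):.4f}s  max={max(latencies):.4f}s")
    # A max far above the mean points at paging or thermal throttling
    # rather than a problem in the model itself.
```

On a 4 GB board, a bimodal latency pattern (fast/slow/fast/slow) is a classic sign that part of the model is being paged in from swap on the slow runs.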
On my Jetson Nano Dev Kit 4GB, build.sh seemed to set up smoothly. However, when I run
bash whisper-edge/run.sh
I get the following output. There is no indication of any error, but it stops there. There was also one time it only said
[ .................... stream.py:73] Loading model "base.en"...
Any idea why this may be happening? It seems to indicate an issue with whisper.transcribe(model=model, audio=np.zeros(block_size, dtype=np.float32))
Also, how long is "Warming model up" supposed to take? It seems to hang for about half a minute before the program exits.
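For context, the warm-up step you're seeing apparently runs a single inference on silence so that later real inferences don't pay one-time costs (lazy allocations, first-use initialization). Here is a minimal sketch of that pattern; `fake_model` is a placeholder for illustration, since the real Whisper model isn't loaded here:

```python
import time

def fake_model(audio):
    """Placeholder for the real model call; assumption for illustration only."""
    return {"text": ""}

def warm_up(model, block_size=16000):
    """Run one inference on silence and report how long it took."""
    # Stands in for np.zeros(block_size, dtype=np.float32) from the report.
    silence = [0.0] * block_size
    start = time.perf_counter()
    model(silence)
    elapsed = time.perf_counter() - start
    print(f"Warm-up took {elapsed:.2f}s")
    return elapsed

if __name__ == "__main__":
    warm_up(fake_model)
```

On a 4 GB Jetson Nano, tens of seconds for this first inference is plausible if the model weights are partly sitting in swap. The key distinction is hang vs. exit: if the process is killed during warm-up rather than hanging, that again points at the OOM killer rather than the warm-up itself.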