Closed Salim-alileche closed 3 years ago
Hi,
Many thanks for reporting this.
fft: ATen not compiled with MKL support
I have heard reports of people building PyTorch against a generic BLAS (cblas) instead of MKL and running it on the RPi, but I have not tried it myself:
Alas no build instructions were published yet - https://github.com/snakers4/silero-vad/issues/37. As usual - community is encouraged to make their dockerized builds public (to be reproducible).
Also, have you tried the onnx version? It should be easier to run, because there torch.stft is replaced out of the box by a hand-made STFT function.
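For reference, the kind of hand-made STFT that avoids torch.stft (and therefore MKL) can be sketched in plain numpy. This is an illustrative sketch, not the actual code exported into the onnx model; the function name and frame sizes are made up:

```python
import numpy as np

def naive_stft(x, n_fft=512, hop_length=128):
    """Minimal STFT: frame the signal, apply a Hann window, take rfft.

    Returns an array of shape (n_fft // 2 + 1, n_frames) of complex
    values, like librosa.stft with center=False.
    """
    window = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop_length
    frames = np.stack([x[i * hop_length: i * hop_length + n_fft]
                       for i in range(n_frames)])
    return np.fft.rfft(frames * window, axis=1).T

# Example: 1 second of a 440 Hz tone at 16 kHz
sr = 16000
t = np.arange(sr) / sr
spec = naive_stft(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # (257, 122)
```

Since this is framing plus an FFT, it exports cleanly to onnx without pulling in the MKL-backed `fft` op.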
Is it possible to modify the forward function so that it uses the librosa stft for Raspberry Pi users?
Technically, there is no problem. But I would not like to maintain two versions of the same model for torch / onnx - with and without the frontend. I like to keep the models nice and fully packaged. Also, there may be a problem building librosa on the RPi then ...
[pip3] numpy==1.20.2
[pip3] numpydoc==0.7.0
[pip3] torch==1.7.0a0
[pip3] torchaudio==0.7.0a0+ac17b64
[pip3] torchvision==0.8.0a0+291f7e2
Also technically when you are doing the edge builds, you can omit torchaudio, torchvision, numpy (I guess). Some of these are present just for illustration purposes, I believe. You just need to fiddle a bit with model initialization code and utils.
btw, you can ask @leoplusplus in telegram here https://t.me/silero_speech - these are his comments maybe he will publish his builds after all =)
Hi,
Thank you for your advice. I will try to build PyTorch with cblas; if it doesn't work, I will try the onnx version.
Hi,
Any luck with these builds?
Hi,
Sorry for the late reply. I failed to build torch, so I switched to the onnx model and it works perfectly.
Did you have to do any special builds for onnx-runtime, or did it just work out of the box from pre-built binaries for ARM?
If you did, could you please share your dockerized build?
For onnx-runtime there are pre-built wheels for the Raspberry Pi 3 here; the wheels are built with this procedure.
I see, nice to have this link and someone verifying that it is working
🐛 Bug
I tried to use the model on a Raspberry Pi 3B and I get the following error: fft: ATen not compiled with MKL support. So I tried to modify the stft function in torch/functional.py to use the librosa stft instead, but it seems that the model uses another torch stft, not the one in my package.
The function I used instead of torch.stft:
import librosa
import numpy as np
import torch
from torch import Tensor
from typing import Optional

def stft(input: Tensor,
         n_fft: int,
         hop_length: Optional[int] = None,
         win_length: Optional[int] = None,
         window: Optional[Tensor] = None,
         center: bool = True,
         pad_mode: str = 'reflect',
         normalized: bool = False,
         onesided: Optional[bool] = None,
         return_complex: Optional[bool] = None):
    # librosa expects numpy arrays; pass arguments by keyword so that
    # pad_mode does not land on another librosa parameter by position
    S = librosa.stft(np.asarray(input), n_fft=n_fft, hop_length=hop_length,
                     win_length=win_length,
                     window=np.asarray(window) if window is not None else 'hann',
                     center=center, pad_mode=pad_mode)
    # pack real and imaginary parts into a trailing dimension of size 2,
    # matching torch.stft(..., return_complex=False)
    S = np.stack((np.real(S), np.imag(S)), axis=-1)
    return torch.tensor(S)
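The trailing real/imag packing above can be sanity-checked in plain numpy, without torch or librosa; the signal and sizes below are arbitrary:

```python
import numpy as np

# torch.stft with return_complex=False packs real parts at index 0 and
# imaginary parts at index 1 of a trailing dimension of size 2. The same
# packing for any complex spectrogram:
S = np.fft.rfft(np.random.default_rng(0).standard_normal((4, 64)), axis=1)

packed = np.stack((S.real, S.imag), axis=-1)   # shape (4, 33, 2)

# Round-trip: recombine and compare with the original complex array
restored = packed[..., 0] + 1j * packed[..., 1]
assert np.allclose(restored, S)
print(packed.shape)  # (4, 33, 2)
```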
stack traces
File "/home/Salim/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
  File "code/torch/stt_pretrained/models/model.py", line 27, in forward
    _2 = self.win_length
    _3 = torch.hann_window(self.n_fft, dtype=ops.prim.dtype(x), layout=None, device=ops.prim.device(x), pin_memory=None)
    x0 = torch.torch.functional.stft(x, _0, _1, _2, _3, True, "reflect", False, True, )