ellie-ei closed this issue 3 years ago
Sorry for a very late reply. :( You could try to compile torchlambda as a custom Docker image with MKL support (see here); the whole command would be (assuming `source` is a folder containing your generated/written `.cpp` code):

```shell
torchlambda build source --pytorch USE_MKLDNN=ON
```
Please re-open if the error persists.
Thank you for the great work you've done!
I've been trying to build a lambda function for a custom model that has an STFT front-end. Following all the steps, I encountered the following error during inference:
To fix the previous error, I moved on and implemented the STFT as a convolutional layer. However, loading the new model fails by throwing this error:
Do you have any insight into how I can fix this issue?
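For context, the "STFT as a convolutional layer" workaround mentioned above can be sketched roughly as follows. This is a hypothetical illustration, not the asker's actual code: each analysis frame is dot-producted with fixed cosine/sine kernels, which is exactly a strided 1-D convolution with kernel size `n_fft` and stride `hop`; in a real model those kernels would be registered as frozen `Conv1d` weights. The sketch below is pure Python to stay self-contained.

```python
import math

def stft_conv(signal, n_fft=8, hop=4):
    """STFT expressed as a bank of strided 1-D convolutions: one
    (cos, -sin) filter pair per DFT bin, applied to each frame.
    Pure-Python sketch of the technique, not optimized code."""
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frame = signal[start:start + n_fft]
        bins = []
        # One-sided spectrum: bins 0 .. n_fft/2
        for k in range(n_fft // 2 + 1):
            re = sum(x * math.cos(2 * math.pi * k * n / n_fft)
                     for n, x in enumerate(frame))
            im = -sum(x * math.sin(2 * math.pi * k * n / n_fft)
                      for n, x in enumerate(frame))
            bins.append((re, im))
        frames.append(bins)
    return frames
```

A pure sine at bin `k` should then produce a magnitude peak of `n_fft / 2` at that bin and near-zero elsewhere, which is a quick sanity check that the convolution kernels implement the DFT correctly.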