tumusudheer opened 4 years ago
I was able to compile the code with the following g++ command and to run inference on a wav file:
```shell
g++ --std=c++1z -Wall -Wno-sign-compare -Wno-misleading-indentation -O3 \
  src/main.cpp -o bin/main.out \
  -I include -I external/cereal/src/cereal/include -I external/wav2letter/include \
  -I/data/Self/maneesh/facebook/standalone/external/wav2letter/include/inference/module/fbgemm/src/fbgemm/include/ \
  -I/data/Self/maneesh/facebook/standalone/external/wav2letter/include/inference/module/fbgemm/src/fbgemm/third_party/cpuinfo/include/ \
  -I /opt/intel/mkl/include/ \
  -L external/wav2letter/lib/ -L/opt/intel/mkl/lib/intel64/ \
  -L/data/Self/maneesh/facebook/standalone/external/kenlm/lib/ \
  -lwav2letter++ -lwav2letter-inference -lstreaming_inference_common \
  -lstreaming_inference_modules_nn_backend -lstreaming_inference_modules_feature \
  -lstreaming_inference_modules_nn_impl -lwav2letter-libraries -lutil_example \
  -laudio_to_words_example -lclog -lcpuinfo_internals -lcpuinfo -lasmjit \
  -lmkl_gf_lp64 -lmkl_gnu_thread -lmkl_core -lpthread -lcublas -lm -fopenmp \
  -lfftw3 -lfbgemm -lstreaming_inference_common -lopenblas -llapack \
  -lkenlm_filter -lkenlm_builder -lkenlm -lkenlm_util -llzma -lbz2 -lz -lm -ldl
```
This should help others build the wav2letter inference code as a standalone application.
Also, a quick question: my machine runs Ubuntu 18.04. Should I use `-lmkl_gnu_thread` or `-lmkl_intel_thread`?
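Not an authoritative answer, but as a rule of thumb from Intel's MKL link-line guidance: `-lmkl_gnu_thread` pairs with the GNU OpenMP runtime (libgomp, which g++'s `-fopenmp` pulls in), while `-lmkl_intel_thread` expects Intel's libiomp5. Since the command above uses g++ with `-fopenmp`, `-lmkl_gnu_thread` is likely the right choice. A small sketch for checking which OpenMP runtime a binary actually links against (the `check_omp_runtime` helper below is hypothetical, not part of any tool):

```shell
# Hypothetical helper: report the OpenMP runtime a binary is linked against.
# libgomp  -> GNU OpenMP   -> use -lmkl_gnu_thread
# libiomp5 -> Intel OpenMP -> use -lmkl_intel_thread
check_omp_runtime() {
  ldd "$1" | grep -E 'libgomp|libiomp5' || echo "no OpenMP runtime found"
}

# Example on a binary that uses no OpenMP at all:
check_omp_runtime /bin/ls
```

Running it on `bin/main.out` should show libgomp if the link line above is consistent.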
Hi @tumusudheer,
This is great stuff that you posted! If I understand correctly, you are breaking the code in `SimpleStreamingASRExample.cpp` into two parts: first, loading the model (your main file above), and second, doing the inference itself in `audioStreamToWordsStream()`. If that's the case, could you please share how you do the inference part?
I am currently working on a similar problem: using the streaming inference code, but from a Python environment.
Hi @tetiana-myronivska ,
Thank you. You are correct: my intention is to separate the initialization part (which should execute only once, at the start of the stack) from the inference part. I have not started implementing it yet, but if you paste this part, or similar code from `SimpleStreamingASRExample.cpp`, the code should compile and you should be able to run the usual inference example provided by the wav2letter team.
Question
I'm using Ubuntu 18.04 and the wav2letter v0.2 branch. I've successfully compiled and built wav2letter on my machine. Now I'm working on building the inference example as a standalone C++ file instead of inside the wav2letter environment. My C++ code:
And this is the g++ command I use to compile my main file:
```shell
g++ --std=c++1z -Wall -Wno-sign-compare -Wno-misleading-indentation -O3 \
  src/main.cpp -o bin/main.out \
  -I include -I external/cereal/src/cereal/include -I external/wav2letter/include \
  -I/data/Self/facebook/standalone/external/wav2letter/include/inference/module/fbgemm/src/fbgemm/include/ \
  -I/data/Self/facebook/standalone/external/wav2letter/include/inference/module/fbgemm/src/fbgemm/third_party/cpuinfo/include/ \
  -I /opt/intel/mkl/include/ \
  -L external/wav2letter/lib/ -L/opt/intel/mkl/lib/intel64/ \
  -L/data/Self/facebook/standalone/external/kenlm/lib/ \
  -lwav2letter++ -lwav2letter-inference -lstreaming_inference_common \
  -lstreaming_inference_modules_nn_backend -lstreaming_inference_modules_feature \
  -lstreaming_inference_modules_nn_impl -lwav2letter-libraries -lutil_example \
  -laudio_to_words_example -lclog -lcpuinfo_internals -lcpuinfo -lasmjit \
  -lmkl_gf_lp64 -lmkl_gnu_thread -lmkl_core -lpthread -lcublas -lm -fopenmp \
  -lfftw3 -lfbgemm -lstreaming_inference_common -lopenblas -llapack -lm -ldl
```
I was able to load the acoustic model, decoder params, and token files, but when I try to load my decoder, I get the following errors while running my g++ command:
My kenlm build directory contains the following files:
With a bit of searching, I tried adding `-lkenlm_util -lkenlm -lkenlm_builder -lkenlm_filter` to my g++ command, but that gives a lot more issues. May I know if I'm doing something wrong? Thank you.