k2-fsa / sherpa-onnx

Speech-to-text, text-to-speech, speaker recognition, and VAD using next-gen Kaldi with onnxruntime, without an Internet connection. Supports embedded systems, Android, iOS, Raspberry Pi, RISC-V, x86_64 servers, websocket server/client, C/C++, Python, Kotlin, C#, Go, NodeJS, Java, Swift, Dart, JavaScript, Flutter, Object Pascal, Lazarus, Rust
https://k2-fsa.github.io/sherpa/onnx/index.html
Apache License 2.0

Issues with libs from releases #1446

Open thewh1teagle opened 2 hours ago

thewh1teagle commented 2 hours ago

I use the precompiled libraries from the releases in sherpa-rs for faster builds, but there are some issues on Windows and Linux:

On Windows: it seems TTS is not enabled.

On Linux: the PIC compiler flag is not enabled, so the build fails from Rust with this error:

note: /usr/bin/ld: /home/user/Documents/sherpa-rs/target/debug/deps/libsherpa_rs_sys-af0531d9cb9885c0.rlib(c-api.cc.o): relocation R_X86_64_32S against `.rodata' can not be used when making a PIE object; recompile with -fPIE
          /usr/bin/ld: failed to set dynamic section sizes: bad value
          collect2: error: ld returned 1 exit status

It works only if I set

RUSTFLAGS="-C relocation-model=dynamic-no-pic"

It should be fixed if you compile the binaries with the -fPIC flag.
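For reference, a minimal sketch of rebuilding the static libraries with PIC enabled, assuming the standard sherpa-onnx CMake build (the exact option set you use upstream may differ): `CMAKE_POSITION_INDEPENDENT_CODE=ON` adds `-fPIC` to every object file, which is what the PIE link above needs.

```shell
# Sketch: build sherpa-onnx static libs with position-independent code.
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
cmake -B build \
      -DBUILD_SHARED_LIBS=OFF \
      -DCMAKE_POSITION_INDEPENDENT_CODE=ON \
      -DSHERPA_ONNX_ENABLE_TTS=ON \
      -DCMAKE_BUILD_TYPE=Release
cmake --build build
```

With archives built this way, no RUSTFLAGS workaround should be needed on the consumer side.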

Also, some requests/questions: do you enable all the feature flags in the compiled libs in general? Can you add DirectML to the binaries?

csukuangfj commented 2 hours ago

please use shared libs.

thewh1teagle commented 2 hours ago

> please use shared libs.

I prefer not to use shared libraries, as they would require users to manually include them. Instead, I use static libraries by default and automatically download and link them in the Rust library. Is it still possible to proceed this way?
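To illustrate the static-linking setup described above, here is a hedged build.rs sketch (library names and the `SHERPA_LIB_DIR` variable are assumptions, not the actual sherpa-rs-sys code): the `cargo:` directives tell rustc where the downloaded archives live and which ones to link statically.

```rust
// Sketch of a build.rs that links prebuilt sherpa-onnx static archives.
// Returns the cargo directives so they can be inspected or printed.
fn link_directives(lib_dir: &str) -> Vec<String> {
    vec![
        // Where the extracted release archive was placed (assumed path).
        format!("cargo:rustc-link-search=native={lib_dir}"),
        // The archives themselves must be compiled with -fPIC, otherwise
        // the final PIE link fails with the R_X86_64_32S relocation error
        // shown above.
        "cargo:rustc-link-lib=static=sherpa-onnx-c-api".to_string(),
        "cargo:rustc-link-lib=static=onnxruntime".to_string(),
    ]
}

fn main() {
    let lib_dir = std::env::var("SHERPA_LIB_DIR")
        .unwrap_or_else(|_| "sherpa-onnx/lib".to_string());
    for d in link_directives(&lib_dir) {
        println!("{d}");
    }
}
```

This keeps the user-facing crate self-contained: users add one dependency and the build script fetches and links everything, which is exactly why PIC-enabled static archives matter here.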