TangYuFan opened this issue 4 months ago
@TangYuFan Hi, I also ran into this problem. Have you solved it yet? I tried upgrading the SDK version, but the build failed; only SDK version 30 builds successfully. Also, can this APK be adapted to Android 15? Thank you so much.
My colleague solved this problem; it was probably an issue with the destructor:
@TangYuFan Hi, thanks. I found this line in DenseStorage.h. Should I comment out this line? Can you tell me how you solved it? By the way, I found that the app didn't crash after I commented out the encode and decode methods in jni_lyra_benchmark_lib.cpp. Can we discuss it over email? I have struggled with this problem for weeks. My email is 2580979439@qq.com. Thanks a million.
My colleague found through GDB debugging that it hangs in this destructor. Sorry, I don't know the specific modifications; my algorithm colleague fixed the problem and just gave me the resulting .so dynamic library.
We're trying to implement the encode and decode methods separately using the .so dynamic library. Can you share it? I'd really appreciate your help.
You can obtain the Android .so library through the following method (the idea comes from another issue):
```cpp
namespace {
// Global codec instances shared by the JNI calls below. The template
// arguments were cut off in the original post; LyraEncoder/LyraDecoder are
// inferred from the Create() calls further down.
std::unique_ptr<chromemedia::codec::LyraEncoder> encoder;
std::unique_ptr<chromemedia::codec::LyraDecoder> decoder;
}  // namespace
```

Next, you can expose Lyra through JNI functions for initialization, release, encoding, decoding, etc., like this:

```cpp
extern "C" JNIEXPORT jint JNICALL Java_com_lyra_LyraCodec_codecInit(
    JNIEnv* env, jobject this_obj, jint sampleRateHz, jint numChannels,
    jint bitrate, jboolean enableDtx, jstring modelPath) {
  const char* modelPathCStr = env->GetStringUTFChars(modelPath, nullptr);
  if (modelPathCStr == nullptr) {
    return 0;
  }
  ghc::filesystem::path model(modelPathCStr);
  env->ReleaseStringUTFChars(modelPath, modelPathCStr);
  encoder = chromemedia::codec::LyraEncoder::Create(sampleRateHz, numChannels,
                                                    bitrate, enableDtx, model);
  decoder = chromemedia::codec::LyraDecoder::Create(sampleRateHz, numChannels,
                                                    model);
  return (encoder != nullptr) && (decoder != nullptr);
}
```
Hi @TangYuFan Could you describe your solution in more detail? I tried your suggestion but it isn't working. Your approach looks the same as using "jni_lyra_benchmark_lib.cc", which is already suggested in the BUILD file. I made my own library, "my_lyra_lib.cc". Because I want to use chromemedia::codec::DecodeFeatures() and chromemedia::codec::EncodeWav(), I called those functions in "my_lyra_lib.cc" and added dependencies on //lyra/cli_example:decoder_main_lib and encoder_main_lib, building them by declaring a cc_library in the BUILD file. (Sorry, I don't understand how .cc files can become libraries without a cc_library rule.) I then set up and built MainActivity.java through android_library() with a dependency on my_lyra_lib, and finally built my own lyra_android_example.so through android_binary() with a dependency on that library.
Please check my modification and tell me what is wrong. I have wasted a couple of weeks on this problem and really need your help.
Hi @JianhaoPeng It looks like you got a clue from TangYuFan's comment and found a solution. Could you share how you built the .so?
Many thanks in advance.
I found a solution, but it's a very hacky one. I don't know the details, but it looks like _Unwind_Backtrace() has some problem. I disassembled liblyra_android_example.so and turned _Unwind_Backtrace() into a no-op (it returns immediately when called). After this, the crash disappeared and I could encode and decode through Lyra as I wanted.
From searching around, it looks like this function is part of the general exception-handling machinery. I also tried to find a build option to disable the exception processing that uses it, but failed.
Another approach is to port Lyra's models into my other CMake project and use the latest TensorFlow Lite (2.19.0) for inference, then compile for Android with the NDK to eliminate issues beyond model inference. Please let me know if this demo code is helpful:
```cpp
// Read a wav file and split it into frames of 320 samples each
// (one 20 ms Lyra frame at 16 kHz). The template arguments and most of
// the function bodies were cut off in the original post.
void readWavWith320Point(const std::string& wav_in,
                         std::vector<std::vector<float>>& frames) {
  // ...
  for (int i = 0; i < num_frames; ++i) {
    std::vector<float> frame /* ... */;
    // ...
  }
}

void write_wav(const std::string& filename,
               const std::vector<int16_t>& samples /* ... */) {
  // ...
}
```
```cpp
void PrintModelInputOutputShapes(const std::string& model_name,
                                 tflite::Interpreter* interpreter,
                                 const std::string& signature_name) {
  if (interpreter == nullptr) {
    std::cerr << "Failed to create interpreter!" << std::endl;
    return;
  }
  auto signature_runner = interpreter->GetSignatureRunner(signature_name.c_str());
  if (signature_runner == nullptr) {
    std::cerr << "Signature Runner not found for signature: " << signature_name
              << std::endl;
    return;
  }
  std::cout << "---------------------------------------------------" << std::endl;
  std::cout << "Model:" << model_name << ":" << std::endl;
  const std::vector<const char*>& input_names = signature_runner->input_names();
  std::cout << "Input Tensors for signature '" << signature_name << "':" << std::endl;
  for (const auto& input_name : input_names) {
    const TfLiteTensor* input_tensor = signature_runner->input_tensor(input_name);
    std::cout << "  Tensor Name: " << input_name << std::endl;
    std::cout << "  Shape: [";
    for (int j = 0; j < input_tensor->dims->size; ++j) {
      std::cout << input_tensor->dims->data[j]
                << (j < input_tensor->dims->size - 1 ? ", " : "");
    }
    std::cout << "]" << std::endl;
  }
  const std::vector<const char*>& output_names = signature_runner->output_names();
  std::cout << "Output Tensors for signature '" << signature_name << "':" << std::endl;
  for (const auto& output_name : output_names) {
    const TfLiteTensor* output_tensor = signature_runner->output_tensor(output_name);
    std::cout << "  Tensor Name: " << output_name << std::endl;
    std::cout << "  Shape: [";
    for (int j = 0; j < output_tensor->dims->size; ++j) {
      std::cout << output_tensor->dims->data[j]
                << (j < output_tensor->dims->size - 1 ? ", " : "");
    }
    std::cout << "]" << std::endl;
  }
}
```
```cpp
// Inference wrappers. The parameter lists after each std::vector were cut
// off in the original post, so only the visible parts are kept here.
void infer_lyragan(tflite::Interpreter* interpreter,
                   const std::vector<...>& /* truncated */);
void infer_soundstream(tflite::Interpreter* interpreter,
                       const std::vector<...>& /* truncated */);

constexpr int kMaxNumQuantizedBits = 184;
int num_bits = 64;
int bits_per_quantizer = 0;

void infer_quantizer_rvq_encode(tflite::Interpreter* interpreter,
                                std::vector<...>& /* truncated */);
void infer_quantizer_rvq_decode(tflite::Interpreter* interpreter,
                                std::string quantized_features,
                                std::vector<...>& /* truncated */);
```
```cpp
int main() {
  std::cout << "TensorFlow Lite Version: " << TFLITE_VERSION_STRING << std::endl;
  std::string lyragan =
      "/mnt/d/work/workspace/duijie2/src/lyra/lyra/model_coeffs/lyragan.tflite";
  std::string soundstream_encoder =
      "/mnt/d/work/workspace/duijie2/src/lyra/lyra/model_coeffs/soundstream_encoder.tflite";
  std::string quantizer =
      "/mnt/d/work/workspace/duijie2/src/lyra/lyra/model_coeffs/quantizer.tflite";
  auto lyragan_model = tflite::FlatBufferModel::BuildFromFile(lyragan.c_str());
  auto soundstream_encoder_model =
      tflite::FlatBufferModel::BuildFromFile(soundstream_encoder.c_str());
  auto quantizer_model = tflite::FlatBufferModel::BuildFromFile(quantizer.c_str());
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter>  // ... the rest of main() was cut off
                                        // in the original post
}
```
Hello, the Android example compiled according to the README runs well on some phones and crashes on others.
The compilation commands I tried:

```shell
# (1)
bazel build -c opt lyra/android_example:lyra_android_example --config=android_arm64 --copt=-DBENCHMARK
# (2)
bazel build -c opt lyra/android_example:lyra_android_example --fat_apk_cpu=armeabi-v7a,arm64-v8a --copt=-DBENCHMARK
```
Running the APK crashes on some phones. The following is the error log:
May I ask how to solve the above problem? Does the build require special flags? Because I package the encoding and decoding interfaces into a .so for Android phones, some phones cannot run it properly.