Open mesut92 opened 1 year ago
See here for training code for a compatible model.
It is possible that the micro speech model needs different input features from those expected by the existing kws_micronet_m model (and the ds_cnn model linked above). So if the input sizes do not match, there will be issues running the application.
I used these scripts to train a model (medium version). It did not work. I am going to check the feature size.
Hi Richard
I trained the ds_cnn model with this configuration (dct_coefficient_count=10):
```shell
python train.py --model_architecture ds_cnn \
  --model_size_info 5 172 10 4 2 1 172 3 3 2 2 172 3 3 1 1 172 3 3 1 1 172 3 3 1 1 \
  --dct_coefficient_count 10 --window_size_ms 40 --window_stride_ms 20 \
  --learning_rate 0.0005,0.0001,0.00002 --how_many_training_steps 10000,10000,10000 \
  --summaries_dir work/DS_CNN/DS_CNN_M/retrain_logs \
  --train_dir work/DS_CNN/DS_CNN_M/training
```
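For reference, the input feature shape implied by these flags can be sanity-checked with a small sketch. The 1000 ms clip length and the standard framing formula are my assumptions here, not something taken from the training scripts:

```python
# Sketch: MFCC feature dimensions implied by the training flags above,
# assuming 1000 ms clips and the usual "one frame per stride" framing.
def mfcc_shape(clip_ms, window_ms, stride_ms, dct_coefficients):
    """Return (number of frames, features per frame) for a given framing."""
    frames = (clip_ms - window_ms) // stride_ms + 1
    return frames, dct_coefficients

frames, features = mfcc_shape(1000, 40, 20, 10)
print(frames, features)  # 49 10
```

With window_size_ms=40 and window_stride_ms=20 on a 1 s clip this gives 49 frames of 10 features each, which is the shape the application has to agree with.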
And I moved the weights to this file:
ARM-software/ML-examples/tree/main/cmsis-pack-examples/kws/src/kws_micronet_m.tflite.cpp
But it did not work. Is it possible to share how you produced the pretrained model? I can use the pretrained model, but the others do not work.
When you say it doesn't work, do you mean that the results of the model when used in the application are not what you expect? If that is the case then it might be caused by the labels vector here.
The pretrained MicroNet model's label order is different from that of the training scripts. Try changing your labels in the application to this order and see if that helps.
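To illustrate the label-order point: if the application's label vector is ordered differently from the training scripts', either the labels list or the output scores must be remapped, otherwise every detection is reported under the wrong name. A toy sketch (both label lists here are illustrative placeholders, not the actual orders used by the packs):

```python
# Sketch: remap model output scores from the training-script label order
# to the order the application expects. The label lists are placeholders.
trained_order = ["silence", "unknown", "yes", "no"]
app_order = ["no", "yes", "silence", "unknown"]

def remap(scores, src, dst):
    """Reorder scores given in src label order into dst label order."""
    by_label = dict(zip(src, scores))
    return [by_label[label] for label in dst]

scores = [0.1, 0.2, 0.6, 0.1]  # aligned with trained_order; "yes" wins
print(remap(scores, trained_order, app_order))  # [0.1, 0.6, 0.1, 0.2]
```

In practice it is simpler to just edit the labels list in the application to match the training order, as suggested above.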
Nope. I trained ds_cnn in the ML-Zoo with the same labels and the same output size. I generated the .tflite file and converted it with this:
https://github.com/thodoxuan99/KWS_MCU/blob/main/kws_cortex_m/tflite_to_tflu.py
I moved the weights to this place:
ARM-software/ML-examples/tree/main/cmsis-pack-examples/kws/src/kws_micronet_m.tflite.cpp
It did not generate anything: empty screen, no signal shown. It did not give any errors while building the project. I did not check the label order, because it did not produce any output.
Would you be able to share the tflite file, and I can try replicating the issue?
https://drive.google.com/file/d/1xwsQFV7VS4ngEQPy-aGrvO-ZppLsdXKi/view?usp=sharing
I trained ds_cnn_medium with default parameters.
Okay, I converted your model with

```shell
xxd -i ds_cnn_quantized.tflite > model_data.cc
```

and copied the contents of the array into ARM-software/ML-examples/tree/main/cmsis-pack-examples/kws/src/kws_micronet_m.tflite.cpp, overwriting the existing model data that is there.
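As a side note, if `xxd` isn't available on your platform, the same C byte array can be produced with a few lines of Python. This is a minimal sketch of an `xxd -i` equivalent; the `nn_model` array name is the one mentioned later in this thread, but verify it matches whatever your copy of the source file actually uses:

```python
# Sketch: minimal Python equivalent of `xxd -i model.tflite`, emitting a
# C byte array suitable for pasting into kws_micronet_m.tflite.cpp.
def to_c_array(data: bytes, name: str = "nn_model", per_line: int = 12) -> str:
    lines = []
    for i in range(0, len(data), per_line):
        chunk = data[i:i + per_line]
        lines.append("  " + ", ".join(f"0x{b:02x}" for b in chunk) + ",")
    body = "\n".join(lines)
    return (f"const unsigned char {name}[] = {{\n{body}\n}};\n"
            f"const unsigned int {name}_len = {len(data)};\n")

with_bytes = to_c_array(b"\x01\x02\x03")
print(with_bytes)
```

Feed it the raw bytes of your .tflite file (`open("model.tflite", "rb").read()`) and paste the result over the existing array.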
It builds, but I get the following output when running on Keil Studio Cloud:

```
INFO - Added support to op resolver
INFO - Creating allocator using tensor arena at 0x31000000
INFO - Allocating tensors
ERROR - tensor allocation failed!
ERROR - Failed to initialise model
```
Running it again locally, I believe this is caused by the Softmax operator in your model, which isn't present in the pretrained MicroNet one. TFLite Micro needs to know which operators are present in the model for it to work; otherwise it will throw an error.
I manually enlisted this operator by editing the local cmsis pack at ~/.cache/arm/packs/ARM/ml-embedded-eval-kit-uc-api/22.8.0-Beta/source/application/api/use_case/kws/src/MicroNetKwsModel.cc and also editing [these 2 lines](https://github.com/ARM-software/ML-examples/blob/d4816d163ffbddb37e3d5e01cc3351e9452b2abb/cmsis-pack-examples/kws/src/main_wav.cpp#L97) in the main (as those numbers don't align with your model's input shape). I also had to make edits to ~/.cache/arm/packs/ARM/ml-embedded-eval-kit-uc-api/22.8.0-Beta/source/application/api/use_case/kws/src/KwsProcessing.cc to change the useSoftmax parameter within the DoPostProcess() function to false.
Inference now runs, albeit with the wrong result, which may just be down to the new model.
Ideally we should have some way for a user to manually enlist new operators via the API, in case they have changed the model as you have done. We have a task to do this, but I am not sure when it will be completed.

You can make local edits to the cmsis-packs yourself as well to get things working, but this is probably not a sustainable solution. Instead, I think you have 2 ways to go forward:

1. You can adjust the retrained model so it doesn't have softmax at the end when you produce your tflite file; this way you don't need to edit the cmsis-packs.
2. Switch to using the [ML-embedded-evaluation-kit](https://review.mlplatform.org/plugins/gitiles/ml/ethos-u/ml-embedded-evaluation-kit), which is what these cmsis-pack examples are based on (and where the cmsis-packs come from). Swapping to this will allow you to more easily modify the KWS use case, change models, etc., or even generate new cmsis-packs if you wish.
Hi Richard, I commented out this line to deactivate softmax and returned the output via the `x` variable. My new code looks like this:
```python
# Squeeze before passing to output fully connected layer.
x = tf.reshape(x, shape=(-1, conv_feat[layer_no]))
# Output connected layer.
# output = tf.keras.layers.Dense(units=label_count, activation='softmax')(x)
return tf.keras.Model(inputs, x)
```
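Incidentally, dropping the final softmax should not change which keyword scores highest, since softmax is monotonic; only the calibrated probabilities are lost, which matters for any confidence threshold but not for picking the winner. A quick sketch:

```python
import math

# Sketch: the argmax of raw logits equals the argmax of their softmax,
# so removing the final softmax layer does not change the predicted label.
def softmax(logits):
    # Numerically stable softmax over a plain Python list.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [1.5, -0.3, 4.2, 0.7]
probs = softmax(logits)
assert probs.index(max(probs)) == logits.index(max(logits))  # same winner
print(logits.index(max(logits)))  # 2
```

So a model exported without softmax still detects the same keyword; only post-processing that interprets the outputs as probabilities (like the useSoftmax path mentioned above) needs adjusting.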
I trained the model, replaced the weights, and built it (default labels and parameters were used). But now, while it shows the audio signal, it does not show a keyword or any text. Should I change it to ReLU, or is there something else I need to fix?
BR Mesut
It matches, but it does not give any output, not even a wrong keyword. It did not print any text to the screen. Isn't there any open-source training script for the MicroNet model that you use?
Hi, is it possible to share your modifications in this part? Then I can try the same changes and get a result with my own model:
> I manually enlisted this operator by editing the local cmsis pack at ~/.cache/arm/packs/ARM/ml-embedded-eval-kit-uc-api/22.8.0-Beta/source/application/api/use_case/kws/src/MicroNetKwsModel.cc and also editing [these 2 lines](https://github.com/ARM-software/ML-examples/blob/d4816d163ffbddb37e3d5e01cc3351e9452b2abb/cmsis-pack-examples/kws/src/main_wav.cpp#L97) in the main (as these numbers don't align with your model input shape). I also had to make edits to ~/.cache/arm/packs/ARM/ml-embedded-eval-kit-uc-api/22.8.0-Beta/source/application/api/use_case/kws/src/KwsProcessing.cc to change the useSoftmax parameter within the DoPostProcess() function to false.
Apologies for the delay in responding. We don't have an open-source model description of MicroNet, but you can use the training code you already have and add a MicroNet-style model yourself, by inspecting the tflite file in Netron and implementing it in the right place in the training code. However, that shouldn't be necessary: now that you have removed softmax from your model, it should work.
To fix the blank output you will also need to make the change I did to these 2 lines, as I think your input shape is different from that of the included model. I believe they should change to:

```cpp
const uint32_t numMfccFeatures = 10;
const uint32_t numMfccFrames = 49;
```

as the code was originally getting these values from the model input shape. Give that a try and it should hopefully not give blank output anymore (fingers crossed).
Still blank output... Can you show me those modifications as well, so I can try the same model that you tried?

> I manually enlisted this operator by editing the local cmsis pack at ~/.cache/arm/packs/ARM/ml-embedded-eval-kit-uc-api/22.8.0-Beta/source/application/api/use_case/kws/src/MicroNetKwsModel.cc
>
> I also had to make edits to ~/.cache/arm/packs/ARM/ml-embedded-eval-kit-uc-api/22.8.0-Beta/source/application/api/use_case/kws/src/KwsProcessing.cc to change the useSoftmax parameter within the DoPostProcess() function to false.
> I manually enlisted this operator by editing the local cmsis pack at ~/.cache/arm/packs/ARM/ml-embedded-eval-kit-uc-api/22.8.0-Beta/source/application/api/use_case/kws/src/MicroNetKwsModel.cc

After line 32 add `this->m_opResolver.AddSoftmax();` so it will look like:

```cpp
this->m_opResolver.AddRelu();
this->m_opResolver.AddSoftmax();
```

In ~/.cache/arm/packs/ARM/ml-embedded-eval-kit-uc-api/22.8.0-Beta/source/application/api/use_case/kws/include/MicroNetKwsModel.hpp, on line 49 you need to increase ms_maxOpCnt by 1.

> I also had to make edits to ~/.cache/arm/packs/ARM/ml-embedded-eval-kit-uc-api/22.8.0-Beta/source/application/api/use_case/kws/src/KwsProcessing.cc to change the useSoftmax parameter within the DoPostProcess() function to false.

On line 207 change the last parameter to false, so it will be:

```cpp
this->m_labels, 1, false);
```
Did you try running your new model (the one without softmax) on the CS300 FVP, either locally or via Keil Studio Cloud, and what output did you get on the console? I would hope to see some error or output message on the console showing how far execution gets, which might help with debugging.
Hi Richard, I ran the model which I sent via Drive. Now it works. Thanks.
You referred to main_wav here, but I guess it should be main_live.cc:

> To fix the blank output you will need to also make the change I did to [these 2 lines](https://github.com/ARM-software/ML-examples/blob/d4816d163ffbddb37e3d5e01cc3351e9452b2abb/cmsis-pack-examples/kws/src/main_wav.cpp#L97) as I think your input shape is different to the included model. They should change to this I believe:
>
> ```cpp
> const uint32_t numMfccFeatures = 10;
> const uint32_t numMfccFrames = 49;
> ```
Thanks
I have not used the CS300 FVP or anything like that before. I build a .bin and run it on the F46. How can I get the CS300 FVP so I can see the errors?
I would think you should be able to see outputs over serial for your board as well.
Public CS300 FVPs are available to download here.
Hi, I am trying to create a new KWS model. I trained a model with this notebook: https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/examples/micro_speech/train/train_micro_speech_model.ipynb

I moved the weights to the nn_model variable in kws_micronet_m.tflite.cpp, and I changed Labels.cpp. I am using Keil Studio for model deployment. I can use the pretrained model, but my own model did not work: I deployed it to the MCU, but the KWS did not start. How can I train with new keywords?

MCU model: F46