Closed — opened by abitofalchemy, closed 7 years ago
Hello Sal,
The mic level feature sends only the audio intensity; you have to use the FeatureAudioADPCM* features.
To enable them you have to request a BlueVoice license, then you enable both the FeatureAudioADPCM
and FeatureAudioADPCMSync
features and you will start to receive the audio signal.
Calling FeatureAudioADPCM.getLinearPCMAudio
will return the PCM audio, which you can store in a WAV file.
You can't use the BlueSTSDKExample app directly, since the data from the 2 features must be combined together to properly decode the audio stream, so you have to edit it. Here you can see how to use the FeatureAudioADPCM* features.
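To illustrate the kind of edit needed — names and types here are hypothetical stand-ins, not the BlueSTSDK API — the 16-bit samples returned by getLinearPCMAudio have to be appended to a buffer in little-endian byte order, which is the layout the data chunk of a WAV file expects. A minimal sketch:

```swift
import Foundation

// Hypothetical accumulator: collects 16-bit PCM samples as little-endian
// bytes, the layout a WAV data chunk uses. In a real edit, the samples
// would come from FeatureAudioADPCM.getLinearPCMAudio.
final class PCMBuffer {
    private(set) var data = Data()

    func append(samples: [Int16]) {
        for s in samples {
            let u = UInt16(bitPattern: s)
            data.append(UInt8(truncatingIfNeeded: u))       // low byte first
            data.append(UInt8(truncatingIfNeeded: u >> 8))  // then high byte
        }
    }
}

let buf = PCMBuffer()
buf.append(samples: [0x0102, -1])
// 0x0102 is stored as 0x02 0x01, and -1 as 0xFF 0xFF
```

The low-byte-first ordering is deliberate: it is independent of the host's endianness, so the same bytes come out on any platform.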
Best Regards Giovanni
So, instead of BlueSTSDKExample, I should use STBlueMS_iOS/W2STApp/MainApp/Demo/BlueVoice, right?
The license I got for the ST BlueMS app (the version one can obtain from the App Store) is what I can use as the BlueVoice license within the STBlueMS_iOS/
source code?
Hi Sal, once you have the license from the ST BlueMS app you are fine. BlueVoice will keep working unless you erase the board license.
You have to implement something in any case, since we do not currently dump the audio to an audio file. The BlueSTSDKExample is simpler to hack. You can take BlueMS as a reference, but it is more complex and a lot of its code does things you don't care about.
Giovanni
Hi Giovanni,
Thank you. I don't know if you agree with this, but I had tried to upgrade the firmware over-the-air, and something went wrong. It doesn't blink nor work anymore. I've ordered 2 new kits (I really want to get this working). I am trying to use the IDE (SystemWorkbench and IAR) to see if I can clean/erase and re-flash the board to bring it back to working condition. What do you think?
When I get the new boards, do I have to upgrade the firmware to Bluemicrosystem2 (BLUEMICROSYSTEM2 ST This SW has been replaced by FP-SNS-ALLMEMS1)?
Lastly, is it normal for the SensorTile (when connected to the Expansion Cradle & powered by the micro-B USB cable) to blink and soon after to stop blinking? BTW, your help & replies are very useful.
Hi Sal,
when the upgrade fails, do you get an error message from the app? It is strange that the upload succeeds but the FW has an error, since we check the CRC before removing the old FW. By the way, using the ST-Link included in a Nucleo you can flash a new FW and recover the SensorTile.
Yes, FP-SNS-ALLMEMS1 is now equivalent to Bluemicrosystem2, and it is more up to date.
Regards Giovanni
Giovanni, I do have the ST-Link Utility and the Nucleo board. The NUCLEO-F401RE is connected to a Windows 10 box and the Expansion Cradle is using USB power from another machine. The problem is that I am not certain which binary file to download to the board to restore it to 'normal' operation.
Should it be this bootloader file?
STM32CubeFunctionPack_ALLMEMS1_V3.0.0/Utilities/
BootLoader/
├── readme.txt
├── STM32F401RE-Nucleo
│   └── BootLoaderF4.bin
I performed a chip erase, loaded the file, programmed it, and verified. I do see different checksums. See the screen result: LD2 is flashing green, LD1 was flashing green/red while the target was connected, and LD3 is red, but after all that the SensorTile's LED does not flash.
In the Projects folder, there are files like: ./Projects/Multi/Applications/ALLMEMS1/Binary/STM32F401RE-Nucleo/ALLMEMS1_IKS01A1_NucleoF401.bin
and
./Projects/Multi/Applications/ALLMEMS1/Binary/STM32F401RE-Nucleo/ALLMEMS1_IKS01A1_NucleoF401_BL.bin
I made sure that they were programmed at the starting address per the User Manual, but had no luck restoring it. As I said before, I ordered two new boards and they are supposed to come in today. My local ST rep has arranged for me to send back the kit for repair, which is very nice.
Hi Sal,
You flashed the wrong FW/bootloader; the SensorTile has an STM32 L4, not an F4.
You have to flash Utilities\BootLoader\STM32L476RG\BootloaderL4.bin (@address 0x8000)
+ Projects\Multi\Applications\ALLMEMS1\Binary\STM32L476RG-SensorTile\ALLMEMS1_ST.bin (@address 0x8400)
or
flash the file Projects\Multi\Applications\ALLMEMS1\Binary\STM32L476RG-SensorTile\ALLMEMS1_ST_BL.bin (@address 0x8000)
that contains the bootloader + application firmware.
Regards Giovanni
Hello Giovanni, Yes, that was it. I knew I was doing something wrong. I restored the SensorTile using your notes :)
2 questions: (1) It looks like the License Manager is no longer supported, so I can't get a BlueVoice license and get audio working in the ST BlueMS app (from the App Store). Is that right? Is there something I need to do to get it working again?
(2) If I use BlueSTSDK_iOS/BlueSTSDKExample/BlueSTSDKExample
and open the FeatureList table view, it looks like audioPCM & AudioSync work together. It is here that I should be able to save the incoming data as an audio file, is that right? Like I said before, I want to save a short recording and play it back.
Thank you for all the help.
The W2STApp from STBlueMS_iOS/
seems to detect that the audio features are working on my device, but I don't hear any audio. What do you think I'm doing wrong?
override public func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    mFeatureAudio = self.node.getFeatureOfType(BlueSTSDKFeatureAudioADPCM.self) as! BlueSTSDKFeatureAudioADPCM?
    mFeatureAudioSync = self.node.getFeatureOfType(BlueSTSDKFeatureAudioADPCMSync.self) as! BlueSTSDKFeatureAudioADPCMSync?
    // if both features are present, enable the audio
    if let audio = mFeatureAudio, let audioSync = mFeatureAudioSync {
        audio.add(self)
        audioSync.add(self)
        self.node.enableNotification(audio)
        self.node.enableNotification(audioSync)
        initAudioQueue()
        initRecability()
        NSLog(">> audio features ARE present!!")
    }
}
I've also gone to Google to get an ASR key and entered it into the W2STApp running on my device, but I cannot get the following code to work correctly. Is it possible that engine?.hasContinuousRecognizer
is not executing?
I noticed that the title on my button is "Keep press to record", which means that asrEngine.hasContinuousRecognizer
returns false. Do you have any tips I could try to get this fixed? All I can do at the moment is wait for Google to fully enable this feature.
@IBAction func onRecordButtonPressed(_ sender: UIButton) {
    if let hasContinuousRecognizer = engine?.hasContinuousRecognizer {
        if (hasContinuousRecognizer) {
            if (mIsRecording) {
                NSLog("Is recording")
                onContinuousRecognizerStop()
            } else {
                onContinuousRecognizerStart()
            } // if isRecording
        } else {
            onRecognizerStart()
        } // if hasContinuous
    } // if let
} // onRecordButtonPressed
Hi Sal,
1 - With the latest version of the BlueVoice library the license is not needed anymore. 2 - Yes, you have to enable both features, otherwise the decoding will not work correctly.
Did you compile BlueMS on your machine and the audio is not working?
The Google ASR that we are using is a web interface and does not support the continuous recognizer. Maybe it is possible with the Google Cloud Speech API, but I have never used it.
It's not working on a brand new SensorTile; I think I have to update the firmware? My old one has the latest 3.0.0, but the new ones have version 2.1.0 (BLUEMICROSYSTEM2).
I think I want to modify the onContinuousRecognizerStart()
function to disregard the speech recognition and focus on the mRecordData:Data?
object, to record audio while the button at the bottom is being pressed. When the button is released, the stop() function is triggered and normally the onRecognizerStop() function is called; in that function, I could skip sending the ASRRequest and instead use mRecordData
to store the audio to a file. The problem is, without a continuous recognizer, how do I get these lines of code working?
mRecordData = Data();
engine?.startListener();
mIsRecording=true;
Here the engine is defined as private var engine:BlueVoiceASREngine?;
will this work even if the Google ASR used is a web interface?
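One way to see why those lines can work without the ASR request, sketched here with stand-in types (Recorder and AudioSample are hypothetical placeholders, not BlueSTSDK or BlueMS classes): the recording path only needs mRecordData to accumulate the decoded PCM bytes delivered by each audio update, and nothing about that depends on a continuous recognizer.

```swift
import Foundation

// Stand-in for one decoded ADPCM packet's worth of PCM bytes.
struct AudioSample { let pcmBytes: Data }

// Sketch of the record-only path: start() mirrors the original snippet
// (fresh mRecordData, set mIsRecording); didReceive() mimics the feature
// delegate callback appending decoded bytes; stop() hands back the buffer.
final class Recorder {
    private(set) var mRecordData: Data?
    private(set) var mIsRecording = false

    func start() {
        mRecordData = Data()   // fresh buffer, as in the original snippet
        mIsRecording = true    // engine?.startListener() would go here
    }

    func stop() -> Data? {
        mIsRecording = false
        return mRecordData     // ready to be wrapped in a WAV container
    }

    // in the real app this would run in the audio feature's didUpdate callback
    func didReceive(_ sample: AudioSample) {
        guard mIsRecording else { return }
        mRecordData?.append(sample.pcmBytes)
    }
}

let r = Recorder()
r.start()
r.didReceive(AudioSample(pcmBytes: Data([1, 2])))
r.didReceive(AudioSample(pcmBytes: Data([3, 4])))
let captured = r.stop()
// captured now holds the concatenated PCM bytes
```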
In W2STBlueVoiceViewController.swift,
located in STBlueMS_iOS/W2STApp/MainApp/Demo/BlueVoice,
in the onRecordButtonPressed code block, if the continuous recognizer isn't working, I should be able to comment out:
if let hasContinuousRecognizer = engine?.hasContinuousRecognizer{
if (hasContinuousRecognizer){
and call the following block? I don't know if onContinuousRecognizer will work without the Google ASR.
if (mIsRecording) {
    NSLog("Is recording")
    onContinuousRecognizerStop()
} else {
    onContinuousRecognizerStart()
} // if isRecording
Update: yes, it looks like I can ignore the Google ASR, skip the continuous recognizer, and just use the non-continuous recognizer.
I did configure a Google Cloud Speech API key with both the compiled version and from the default version from the AppStore. Both return an "Invalid Request". If I can help to get this working, let me know what I can do.
Upgrading the firmware does allow BlueVoice to work in both versions of the code, including BlueSTSDKExample.
I am making slow progress. The engine is declared as engine:BlueVoiceASREngine?;
after
mRecordData = Data();
engine?.startListener();
is an object of this type where the incoming audio signal will be?
I wonder if I can add/try an AVAudioRecorder
to record the audio to a file in parallel, or do you think I could do that with the mRecordData
object? Sorry about all the follow-up questions.
Hi Sal,
Google has 2 services to do speech to text, did you follow this guide to generate the key?
The engine is basically an interface to abstract different speech to text services.
For what you are doing, add inside this if some code that stores the data
variable somewhere (a file, for example).
I don't know AVAudioRecorder,
but reading the docs it seems that you can use it only with devices that iOS recognizes as microphones. To write the SensorTile data to WAV files I created a class from scratch.
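A minimal sketch of such a from-scratch WAV writer — not the actual BlueMS class — assuming 16-bit mono PCM, with 8 kHz as an assumed default sample rate:

```swift
import Foundation

// Sketch: prepends a canonical 44-byte RIFF/WAVE header to raw
// 16-bit mono little-endian PCM bytes. Sample rate is an assumption.
func makeWavFile(pcm: Data, sampleRate: UInt32 = 8000) -> Data {
    let channels: UInt16 = 1
    let bitsPerSample: UInt16 = 16
    let byteRate = sampleRate * UInt32(channels) * UInt32(bitsPerSample / 8)
    let blockAlign = channels * bitsPerSample / 8

    var out = Data()
    func put(_ s: String) { out.append(contentsOf: Array(s.utf8)) }
    func put32(_ v: UInt32) { withUnsafeBytes(of: v.littleEndian) { out.append(contentsOf: $0) } }
    func put16(_ v: UInt16) { withUnsafeBytes(of: v.littleEndian) { out.append(contentsOf: $0) } }

    put("RIFF"); put32(36 + UInt32(pcm.count)); put("WAVE")
    put("fmt "); put32(16)                  // PCM fmt chunk is 16 bytes
    put16(1)                                // audio format 1 = linear PCM
    put16(channels); put32(sampleRate); put32(byteRate)
    put16(blockAlign); put16(bitsPerSample)
    put("data"); put32(UInt32(pcm.count))
    out.append(pcm)                         // the raw samples follow the header
    return out
}

let wav = makeWavFile(pcm: Data([0x02, 0x01, 0xFF, 0xFF]))
// 44-byte header followed by the 4 sample bytes
```

The resulting Data can be written with `wav.write(to: url)` and opened by any player that understands plain PCM WAV files.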
I close this issue since now we are speaking about the BlueMS app. If you have other questions please open a new issue in the BlueMS repository.
I'm new to BT hardware development. I have a question about enabling the right feature to record short bits of audio and store them for playback or signal analysis.
Can FeatureAudioADPCM* do that, or do I need to modify MicLevel into something like MicRawAudio and capture x seconds of audio and store it for analysis (DSP) of the signal later on a host PC?
On my iPhone 7 Plus I get 8 features in the TableView; how are these selected?