Closed by tdowrick 4 years ago
In GitLab by @KimKahl on Jul 24, 2019, 10:58
marked the task "test it in my own small example code" as completed
In GitLab by @KimKahl on Jul 24, 2019, 14:05
I am just thinking about threading when running the keyword detection, because this should not block the main GUI application. Currently, it just listens in an infinite loop. Should I move the VoiceRecognizer object to a QThread then? That was my first thought, but I still have some problems terminating the thread. If that's the right way to go, I will try to fix those problems.
In GitLab by @MattClarkson on Jul 24, 2019, 14:32
Good in principle. It depends how confident you are in developing threaded code.
Look at controller.py to see how Controller creates a ControllerWorker and moves it to a thread and starts it running. In this scenario, the Controller accesses the data owned by the ControllerWorker, and we just need QMutex to ensure that access to shared data from both sides is thread safe.
In your case, signals and slots work across threads, so you would just connect signals, and all should be well. You should ensure you use the correct connection type when connecting signals emitted from a separate thread, so that events are posted into the main GUI event loop. I think you need a QueuedConnection.
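(A minimal sketch of the pattern described above, assuming PyQt5. VoiceRecognitionWorker, its keyword_detected signal and the on_keyword slot are hypothetical placeholders, not the actual scikit-surgeryspeech or SmartLiver API.)

```python
from PyQt5 import QtCore


class VoiceRecognitionWorker(QtCore.QObject):
    # Hypothetical signal emitted when the keyword is heard.
    keyword_detected = QtCore.pyqtSignal()

    @QtCore.pyqtSlot()
    def run(self):
        # Placeholder for setting up the background listening.
        pass


class Demo(QtCore.QObject):

    def __init__(self):
        super().__init__()
        self.thread = QtCore.QThread()
        self.worker = VoiceRecognitionWorker()
        self.worker.moveToThread(self.thread)

        # Queued connection: the slot runs in the receiver's (main) thread,
        # i.e. the signal posts an event into the main GUI event loop.
        self.worker.keyword_detected.connect(
            self.on_keyword, QtCore.Qt.QueuedConnection)
        self.thread.started.connect(self.worker.run)
        self.thread.start()

    @QtCore.pyqtSlot()
    def on_keyword(self):
        print("Keyword detected")


if __name__ == '__main__':
    app = QtCore.QCoreApplication([])
    demo = Demo()
    app.exec_()
```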
In GitLab by @KimKahl on Jul 24, 2019, 14:50
I have already worked a bit with threads, so I will try to figure it out and tell you when I get stuck. I will try to get it working tomorrow morning; I think I am close to a solution, and it might work better than the background listening I used before.
In GitLab by @KimKahl on Jul 25, 2019, 08:50
I got a problem which I don't know how to solve:
I created a QThread in my demo application and moved my VoiceRecognition object into this thread, along with a timer which triggers the background listening for the keyword. In the __init__ of my VoiceRecognition class I create the timer and connect it to the method for background listening. I then start the timer from my run method, which gets called when my QThread starts.
However, the keyword recognition doesn't work, and when debugging I get the error message "QObject::startTimer: Timers cannot be started from another thread".
Basically, I just followed the example of controller.py... Do you have any idea what I could have done wrong?
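(For reference, that error usually means the QTimer was created in, or started from, the main thread rather than the worker thread. One common pattern, assuming PyQt5, is to create and start the timer inside a slot that already runs in the worker's thread, for example a run slot connected to QThread.started. The 10 ms interval and the listen_for_keyword name are taken from the comments below; everything else is a generic sketch, not the code that was actually committed.)

```python
from PyQt5 import QtCore


class VoiceRecognitionWorker(QtCore.QObject):

    @QtCore.pyqtSlot()
    def run(self):
        # Called via thread.started, so this executes in the worker thread.
        # A QTimer created here lives in that thread and can be started here,
        # which avoids "Timers cannot be started from another thread".
        self.timer = QtCore.QTimer()
        self.timer.setInterval(10)  # poll for the keyword every 10 ms
        self.timer.timeout.connect(self.listen_for_keyword)
        self.timer.start()

    @QtCore.pyqtSlot()
    def listen_for_keyword(self):
        # Placeholder for the actual keyword check.
        pass
```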
In GitLab by @MattClarkson on Jul 25, 2019, 08:58
I'll come round in about 10 minutes...
In GitLab by @KimKahl on Jul 25, 2019, 09:50
Great news, it seems to work now. I found out that when I trigger the listen_for_keyword method every 10 milliseconds, it recognizes the keyword and the signals also get caught. Maybe I will try to figure out the mechanism of the keyword detection later on, but for now I think it would be best to continue with this solution, make my code a bit more generic (with all the paths) and then integrate it into SmartLiver.
The only thing I noticed is that when I exit the application by saying "quit", it actually prints this error:
QThread: Destroyed while thread is still running
Process finished with exit code -1073740791 (0xC0000409)
But when I run SmartLiver not in full screen and exit it with the exit button in the top right corner (instead of terminating it by pressing the q key), the same thing happens. So should I take care of this behaviour in my example?
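(That message generally means the QThread object was destroyed, or the application exited, while the thread was still running. A hedged sketch of one way to shut the thread down cleanly, assuming PyQt5 and the thread/worker layout from the earlier sketch; it is not necessarily how SmartLiver handles its shutdown.)

```python
from PyQt5 import QtCore, QtWidgets


class MainWidget(QtWidgets.QWidget):

    def __init__(self):
        super().__init__()
        self.thread = QtCore.QThread()
        self.worker = QtCore.QObject()  # stand-in for the voice recognition worker
        self.worker.moveToThread(self.thread)
        self.thread.start()

    def closeEvent(self, event):
        # Ask the worker's thread to exit its event loop and block until it
        # has finished, so the QThread is not destroyed while still running.
        self.thread.quit()
        self.thread.wait()
        super().closeEvent(event)
```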
In GitLab by @KimKahl on Jul 25, 2019, 10:59
mentioned in commit 6f4a395e13177dc330890fa75fb9e19aa770f876
In GitLab by @MattClarkson on Jul 25, 2019, 10:59
I'd ignore it for now. Carry on with other stuff, and come back to it later.
In GitLab by @KimKahl on Jul 25, 2019, 11:00
marked the task "change it in scikit_surgeryspeech" as completed
In GitLab by @KimKahl on Jul 25, 2019, 11:08
So I pushed my first implementation. To get it running on your machine, you need to
That process of getting it running on another machine isn't optimal for now, but I will try to improve it later. For now I'll continue trying to create my own keyword file.
In GitLab by @KimKahl on Jul 25, 2019, 13:11
marked the task "create own keyword file (already tried it but it didn't work so I have to take a closer look on this)" as completed
In GitLab by @KimKahl on Jul 25, 2019, 13:25
Should I merge that into master and release it, so that I can try it out in SmartLiver?
In GitLab by @MattClarkson on Jul 25, 2019, 13:58
Sure.
In GitLab by @MattClarkson on Jul 26, 2019, 07:51
Where's your resources file with the keywords?
In GitLab by @KimKahl on Jul 26, 2019, 08:17
You need to set the following (you need the full path; I'm just going from the Porcupine folder here):
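(For illustration only: these are the four environment variables referenced in the check further down this thread, shown here being set from Python via os.environ. The paths are placeholders; the real values depend on where Porcupine and the Google credentials file live on each machine.)

```python
import os

# Placeholder paths - substitute the full paths on your own machine.
os.environ['PORCUPINE_DYNAMIC_LIBRARY'] = '/path/to/Porcupine/dynamic/library'
os.environ['PORCUPINE_PARAMS'] = '/path/to/Porcupine/params/file'
os.environ['PORCUPINE_KEYWORD'] = '/path/to/keyword/file'
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '/path/to/google/credentials.json'
```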
In GitLab by @MattClarkson on Jul 26, 2019, 09:38
Hmmm... what's this:
[2019-07-26 10:38:28.435743] detected keyword
Listening for command
||PaMacCore (AUHAL)|| Error on line 2490: err='-50', msg=Unknown Error
||PaMacCore (AUHAL)|| Error on line 2490: err='-50', msg=Unknown Error
In GitLab by @KimKahl on Jul 26, 2019, 09:51
I don't know... does it happen when you say the command (e.g. "next") or right after you said the keyword?
In GitLab by @MattClarkson on Jul 26, 2019, 09:51
keyword.
In GitLab by @MattClarkson on Jul 26, 2019, 09:58
OK, so the original question was about implementation.
For this project, you could just add the whole Porcupine library as a git submodule. The person who clones the repo then does git clone --recursive.
It will always be down to the user to set up their own Google credentials.
We could just use the Alexa wake word. Would that work?
In GitLab by @MattClarkson on Jul 26, 2019, 10:02
No, that won't work. I think we just assume that if anyone wants scikit-surgeryspeech, they optionally download and install this as a one-off.
In GitLab by @KimKahl on Jul 26, 2019, 10:04
Okay, so I should leave it as it is now? But what about when someone decides not to download it? I guess the whole program wouldn't work then. Should I just check whether the paths are set, or something similar, to avoid this problem?
In GitLab by @MattClarkson on Jul 26, 2019, 10:06
Just check for the 4 environment variables. If they do not point to valid files, then throw an exception. How about that?
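(A hedged sketch of that check, using the four variable names that appear in the code further down this thread. check_env_variables is a hypothetical helper, not the function actually added to the repository.)

```python
import os

REQUIRED_ENV_VARS = ('PORCUPINE_DYNAMIC_LIBRARY',
                     'PORCUPINE_PARAMS',
                     'PORCUPINE_KEYWORD',
                     'GOOGLE_APPLICATION_CREDENTIALS')


def check_env_variables():
    """Raise if any required variable is unset or not a valid file path."""
    for name in REQUIRED_ENV_VARS:
        path = os.environ.get(name)
        if path is None or not os.path.isfile(path):
            raise EnvironmentError(
                "{} is not set or does not point to a valid file".format(name))
```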
In GitLab by @KimKahl on Jul 26, 2019, 10:07
Okay I will do that... What about your error? Does it still occur?
In GitLab by @KimKahl on Jul 26, 2019, 10:51
So what I've done in the SmartLiver main widget (with import os and import sys at the top of the module):

if 'PORCUPINE_DYNAMIC_LIBRARY' in os.environ \
        and 'PORCUPINE_PARAMS' in os.environ \
        and 'PORCUPINE_KEYWORD' in os.environ \
        and 'GOOGLE_APPLICATION_CREDENTIALS' in os.environ:
    from sksurgeryspeech.algorithms import voice_recognition_service as speech_api

and then in the __init__ I'm just checking for:

if 'sksurgeryspeech' in sys.modules:
    # initialize everything, connect signals and start the thread

Is that the correct way?
I already tried it out: when I set all the path variables, the voice recognition works, and as soon as I leave one of them out (e.g. the keyword file), the detection doesn't work. So that's the expected behaviour.
In GitLab by @KimKahl on Aug 5, 2019, 08:26
marked the task "check how it performs in SmartLiver" as completed
In GitLab by @KimKahl on Aug 5, 2019, 08:26
marked the task "see how paths etc. could be integrated the most generic way" as completed
In GitLab by @KimKahl on Aug 5, 2019, 08:26
closed
In GitLab by @KimKahl on Jul 24, 2019, 10:46
I will try to integrate the Porcupine keyword detection (https://github.com/Picovoice/Porcupine) into my algorithm. Further steps are:
- test it in my own small example code
- change it in scikit_surgeryspeech
- create own keyword file (already tried it but it didn't work so I have to take a closer look on this)
- check how it performs in SmartLiver
- see how paths etc. could be integrated the most generic way
@MattClarkson