chidiewenike closed this 3 years ago
Good catches with the data, @Jason-Ku. I will test out the responses to those questions later and see what kind of output I get.
@snekiam @Jason-Ku Do you guys think it would be better to label the intents with numbers or the entire string with underscores?
## intent:Search_for_Swanton_Pacific_Ranch_California_in_your_internet_browser
is just a label and doesn't affect the model's predictions. We could label it like ## intent: intent#1
and map the intent number to a string like the audio responses. The current format is just for readability when looking at the NLU data.
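To make the numeric-label idea concrete, here is a minimal sketch of what that mapping could look like, mirroring how the audio responses are keyed. All names here (`INTENT_MAP`, `AUDIO_RESPONSES`, the file paths) are assumptions for illustration, not the repo's actual identifiers.

```python
# Hypothetical mapping from a short intent ID (the NLU label) to the
# human-readable string and its audio response file.
INTENT_MAP = {
    "intent_1": "Search_for_Swanton_Pacific_Ranch_California_in_your_internet_browser",
}

AUDIO_RESPONSES = {
    "intent_1": "responses/intent_1.wav",  # assumed path layout
}

def resolve(intent_id):
    """Return the readable label and audio file for a predicted intent ID."""
    return INTENT_MAP[intent_id], AUDIO_RESPONSES[intent_id]
```

Either scheme works for the model; this just moves the readability from the NLU file into a lookup table.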
I think it's fine the way it is.
Summary
Added the run_assistant.py script, which uses rasa_api_call.py to hit the local Rasa API server for an intent response. Supporting data includes Rasa model data, models, and audio response data.
Details
When running run_assistant.py, the system waits for the user to press the button and processes the user's audio input while the button is held. The model predicts on the audio stream in real time, and the transcribed audio is sent as a string to the Rasa server. The predicted intent maps to an audio file, which is played through the speaker.
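The request/response plumbing described above can be sketched as below. This is not the code from rasa_api_call.py; it only illustrates the shape of the exchange, assuming the stock Rasa HTTP API (`POST /model/parse` with a `"text"` field, returning a top intent under `"intent" → "name"`). The audio map and fallback path are hypothetical.

```python
import json

# Default Rasa server endpoint for NLU parsing (assumption: stock port 5005).
RASA_PARSE_URL = "http://localhost:5005/model/parse"

def build_parse_request(transcript):
    """Build the JSON body Rasa's /model/parse endpoint expects."""
    return json.dumps({"text": transcript})

def pick_audio(parse_result, audio_map, fallback="responses/fallback.wav"):
    """Map the top predicted intent in a parse response to an audio file.

    parse_result follows Rasa's shape: {"intent": {"name": ..., "confidence": ...}, ...}
    audio_map is a hypothetical intent-name -> wav-path lookup.
    """
    intent = parse_result.get("intent", {}).get("name")
    return audio_map.get(intent, fallback)
```

In the real script the payload would be POSTed (e.g. with `requests`) and the chosen file played through the speaker; those steps are omitted here to keep the sketch self-contained.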
Testing
The system was tested on a Raspberry Pi 4 (4 GB) without any issues.