Mic92 opened 1 year ago
I've just been playing with llama.cpp:
```
$ ./examples/alpaca.sh
> User prompt: Turn on the light; Available actions: TurnLightOn, TurnLightOff
TurnLightOn
> Available actions: TurnLightOn, TurnLightOff; User prompt: Toggle the light
TurnLightOn
TurnLightOff
> Available actions: TurnLightOn(name), TurnLightOff(name); User prompt: Toggle the bathroom light
TurnLightOn("Bathroom")
```
I can see how this could work :)
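For anyone wanting to script this, here is a minimal sketch of driving the same kind of prompt from Python by shelling out to the llama.cpp `main` binary. The binary and model paths are placeholders, and the way the output is trimmed is only a rough heuristic, not anything llama.cpp guarantees:

```python
import subprocess

# Placeholders -- point these at your own llama.cpp build and model file.
LLAMA_BIN = "./main"
MODEL = "./models/ggml-alpaca-7b-q4.bin"

def pick_action(user_prompt: str, actions: list[str]) -> str:
    """Ask a local LLM which of the available actions matches the user prompt."""
    prompt = f"Available actions: {', '.join(actions)}; User prompt: {user_prompt}\n"
    result = subprocess.run(
        [LLAMA_BIN, "-m", MODEL, "-p", prompt, "-n", "16"],
        capture_output=True, text=True, check=True,
    )
    out = result.stdout
    # main echoes the prompt before the completion; strip it if we can find it.
    # (Rough heuristic -- the exact output format depends on the build and flags.)
    if prompt.strip() in out:
        out = out.split(prompt.strip(), 1)[1]
    for line in out.splitlines():
        if line.strip():
            return line.strip()
    return "UnknownAction"

print(pick_action("Toggle the bathroom light",
                  ["TurnLightOn(name)", "TurnLightOff(name)"]))
```

The reply still has to be validated against the allowed actions, since nothing forces the model to answer with exactly one of them (see the "Toggle the light" example above, which produced two).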
vicuna seems to perform better:
```
Below is an instruction that describes a user request. Respond with one of the following categories:
- FindMyPhone
- TurnLightOn
- TurnLightOff
- UnknownAction
User prompt: Make me a sandwich
UnknownAction
```
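Building on the sketch above, the vicuna-style category prompt could be generated from whatever intents Home Assistant actually exposes, and the model's free-form reply snapped back onto one of them. A rough sketch, where the intent list and the `complete` callable (whatever runs the local model) are stand-ins:

```python
from typing import Callable

INTENTS = ["FindMyPhone", "TurnLightOn", "TurnLightOff", "UnknownAction"]

def build_prompt(user_text: str, intents: list[str] = INTENTS) -> str:
    """Assemble the category-style prompt from whatever intents are available."""
    lines = ["Below is an instruction that describes a user request. "
             "Respond with one of the following categories:"]
    lines += [f"- {intent}" for intent in intents]
    lines.append(f"User prompt: {user_text}")
    return "\n".join(lines)

def classify(user_text: str, complete: Callable[[str], str]) -> str:
    """`complete` is whatever runs the local model and returns its raw reply."""
    reply = complete(build_prompt(user_text)).lower()
    # Models tend to add punctuation or extra words, so snap the reply back
    # onto one of the known intents instead of trusting it verbatim.
    for intent in INTENTS:
        if intent.lower() in reply:
            return intent
    return "UnknownAction"
```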
This is expected, since Kaldi matches against a particular expected set of sentences and Whisper transcription does not. However, here are a few things I've noticed:
In the long run we just need a better system for converting transcriptions to intents. Well... LLMs were originally designed to translate languages, so what might work here is an LLM specifically trained to convert "general" transcriptions into the much smaller set of intents understood by a system. Note that small LLMs can run on lower-end hardware, even just using the CPU (see gpt4all for a very cool demo). Of course this would add latency.

But what's ALSO interesting is that Whisper itself includes a language model, and it might be possible to fine-tune it (e.g. with a LoRA adapter, which can be done relatively cheaply) to directly target Home Assistant intents (and maybe a set of non-standard-English corporate device names), which would avoid the latency issue.
Something simpler might also work, i.e. an intent recognition system that can find "near matches".
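For example (purely illustrative, using only Python's standard library, with a made-up sentence table), transcripts could be scored against the sentences the intent system already knows, with anything below a cutoff treated as unknown:

```python
import difflib

# Hypothetical mapping of known sentences to intents, e.g. derived from
# Rhasspy sentence templates or Home Assistant intents.
KNOWN_SENTENCES = {
    "turn on the light": "TurnLightOn",
    "turn off the light": "TurnLightOff",
    "where is my phone": "FindMyPhone",
}

def recognize(transcript: str, cutoff: float = 0.6) -> str:
    """Map a free-form transcript to the closest known sentence's intent."""
    matches = difflib.get_close_matches(
        transcript.lower().strip(), KNOWN_SENTENCES.keys(), n=1, cutoff=cutoff
    )
    return KNOWN_SENTENCES[matches[0]] if matches else "UnknownAction"

print(recognize("please turn on the lights"))  # -> TurnLightOn (near match)
print(recognize("make me a sandwich"))         # -> UnknownAction (no close match)
```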
BTW1: it would be really nice if the medium Whisper model were available via the download script. From what I've read it's a big step up from the small model in terms of transcription accuracy. Unfortunately, I'm too lazy to generate it myself :)
BTW2: this is a very cool project, thank you for working on it! I do hope integration with Home Assistant voice assistants improves; in particular, I run Rhasspy (and other expensive AI things, like Frigate) on a different system (a larger Intel machine) than the one where I run Home Assistant (a Raspberry Pi-based HA Yellow), and I want to keep that distributed architecture.
Also, Whisper is a lot better at detecting random text, but it seems to perform a lot worse at detecting the intent compared to the closed, trained transcription that we had in Rhasspy 2. I think the closed transcription also helped a lot with fuzzy matching, i.e. when people used slightly different words or when other people were talking in the background. I feel like, as long as there is no smart NLU that can match the spoken text to an intent, this approach might not be a good enough fit for interfacing with home automation. I see value in having a good open transcription, and I am currently thinking about how this could be combined with the precision of the old system.