rhasspy / rhasspy3

An open source voice assistant toolkit for many human languages
MIT License
311 stars 26 forks

Open transcription seems to perform worse than rhasspy2's kaldi closed transcription #11

Mic92 opened this issue 1 year ago

Mic92 commented 1 year ago

While whisper is a lot better at transcribing arbitrary text, it seems to perform a lot worse at detecting the intent compared to the closed, trained transcription that we had in rhasspy 2. I think the closed transcription also helped a lot with fuzzy matching, i.e. when people used slightly different words or when other people were talking in the background. I feel that as long as there is no smart NLU that can match the spoken text to an intent, this approach might not be a good enough fit for interfacing with home automation. I do see value in having good open transcription, and I am currently thinking about how it could be combined with the preciseness of the old system.

Mic92 commented 1 year ago

I've been playing with llama.cpp:

$ ./examples/alpaca.sh
> User prompt: Turn on the light; Available actions: TurnLightOn, TurnLightOff
TurnLightOn
> Available actions: TurnLightOn, TurnLightOff; User prompt: Toggle the light
TurnLightOn 
TurnLightOff
> Available actions: TurnLightOn(name), TurnLightOff(name); User prompt: Toggle the bathroom light    
TurnLightOn("Bathroom")

I can see how this could work :)
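
For the record, here is roughly how I'd wire this up from a script. Just a sketch: the binary and model paths, the `-n 16` cutoff, and the prompt wording are my own assumptions, not anything rhasspy3 ships today.

```python
# Sketch: drive llama.cpp's `main` binary to pick one of a fixed set of actions.
# Paths, model file and prompt format are assumptions, not rhasspy3 APIs.
import subprocess

LLAMA_MAIN = "./main"                       # llama.cpp binary (assumed path)
MODEL = "models/ggml-alpaca-7b-q4.bin"      # quantized model (assumed path)
ACTIONS = ["TurnLightOn", "TurnLightOff", "FindMyPhone"]

def classify(transcript: str) -> str:
    prompt = f"Available actions: {', '.join(ACTIONS)}; User prompt: {transcript}\n"
    out = subprocess.run(
        [LLAMA_MAIN, "-m", MODEL, "-p", prompt, "-n", "16"],
        capture_output=True, text=True, check=True,
    ).stdout
    completion = out.replace(prompt, "", 1)  # llama.cpp echoes the prompt back
    # Only accept an action we actually know about.
    for action in ACTIONS:
        if action in completion:
            return action
    return "UnknownAction"

print(classify("Turn on the light"))  # hopefully "TurnLightOn"
```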

Mic92 commented 1 year ago

vicuna seems to perform better:

Below is an instruction that describes a user request. Respond with one of the following categories: 

- FindMyPhone
- TurnLightOn
- TurnLightOff
- UnknownAction
User prompt: Make me a sandwich
UnknownAction
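
The prompt above is basically this template; any reply that isn't in the category list can simply be treated as UnknownAction. Again only a sketch of the idea, with the category names taken from the example:

```python
# Sketch: build the instruction-style prompt and constrain the reply to known categories.
CATEGORIES = ["FindMyPhone", "TurnLightOn", "TurnLightOff", "UnknownAction"]

def build_prompt(user_text: str) -> str:
    header = ("Below is an instruction that describes a user request. "
              "Respond with one of the following categories:")
    bullets = "\n".join(f"- {c}" for c in CATEGORIES)
    return f"{header}\n\n{bullets}\nUser prompt: {user_text}\n"

def parse_reply(reply: str) -> str:
    # First non-empty line of the reply, or UnknownAction if it's not a known category.
    lines = [line.strip() for line in reply.splitlines() if line.strip()]
    return lines[0] if lines and lines[0] in CATEGORIES else "UnknownAction"

print(parse_reply("TurnLightOn"))         # -> TurnLightOn
print(parse_reply("Make a sandwich..."))  # -> UnknownAction (not a known category)
```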
mmccool commented 1 year ago

This is expected, since Kaldi matches to a particular expected set of sentences and Whisper transcription does not. However, a few things I've noticed:

  1. Be careful with wording. "Turn on my office light" does not work, but "Turn on the office light" does. It might be possible to make the intent templates in Home Assistant more general to deal with variants like this.
  2. Watch out for homophones and near-homophones. Whisper consistently misunderstands "hall light" as "whole light" with my accent, for some reason (weirdly, only when I try to turn it off; turning it on works fine...). If I try to pronounce it very carefully I may get "haul light", which also fails... In the long run, adding alias names for entities in Home Assistant (for example) may work around this.
  3. Transcription may produce spelling or punctuation variants that do not match your entity names. Ones I have run into are "WeatherFlow" turning into "weather flow" and "multisensor" being output by Whisper as "multi-sensor", both leading to a failure. Generally, naming entities around corporate names and other non-standard vocabulary will probably cause trouble. Again, setting up appropriate entity aliases may help here; a simple normalization step (sketched below) would also catch these cases.
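
To illustrate point 3: normalizing both the transcription and the entity names before comparing would already catch the cases above. This is just a toy sketch on my side, not how Home Assistant or rhasspy3 actually match names:

```python
# Toy normalization so "WeatherFlow" / "weather flow" and
# "multisensor" / "multi-sensor" compare equal.
import re

def normalize(name: str) -> str:
    # Lowercase and drop everything that isn't a letter or digit.
    return re.sub(r"[^a-z0-9]", "", name.lower())

assert normalize("weather flow") == normalize("WeatherFlow")
assert normalize("multi-sensor") == normalize("multisensor")
```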

In the long run we just need a better system for converting transcriptions to intents. Well... LLMs were originally designed to translate languages. So what might work here is an LLM specifically trained to convert "general" transcriptions into the much smaller set of intents understood by a system. Note that small LLMs can run on lower-end hardware, even just using a CPU (see gpt4all for a very cool demo). Of course this would add latency. But what's ALSO interesting is that Whisper includes a language model itself, and it might be possible to fine-tune it (e.g. with a LoRA adapter, which can be done relatively cheaply) to directly target Home Assistant intents (and maybe a set of non-standard-English corporate device names), which would avoid the latency issue.

Something simpler might also work, i.e. an intent recognition system that can find "near matches".
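
By "near matches" I mean something along these lines; the sentence list and cutoff here are made up, and a real system would of course match against templated sentences with entity slots rather than a flat list:

```python
# Toy "near match" intent recognizer over a small list of known sentences.
from difflib import get_close_matches

KNOWN = {
    "turn on the office light": "HassTurnOn",
    "turn off the office light": "HassTurnOff",
    "turn on the hall light": "HassTurnOn",
    "turn off the hall light": "HassTurnOff",
}

def recognize(transcript: str) -> str | None:
    match = get_close_matches(transcript.lower(), KNOWN.keys(), n=1, cutoff=0.8)
    return KNOWN[match[0]] if match else None

print(recognize("turn off the whole light"))  # -> HassTurnOff, despite the mishearing
```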

BTW1 it would be really nice if the medium model for Whisper were available with the download script. From what I've read it's a big step up from the small model in terms of transcription accuracy. Unfortunately, I'm too lazy to generate it myself :)

BTW2 this is a very cool project, thank you for working on it! I do hope integration with Home Assistant voice assistants improves. In particular, I run Rhasspy (and other expensive AI things, like Frigate) on a different system (a larger Intel machine...) than where I run Home Assistant (a Raspberry Pi HA Yellow), and I want to keep that distributed architecture.