morsh opened this issue 8 years ago
This could be complicated, since different models have different precision benchmarks for different classes.
For example, just because one model returns 0.80 confidence and another returns 0.60 confidence doesn't necessarily mean the first model is the right choice.
It could be, for example, that the model with 0.80 confidence has a precision score of 0.03 for a given class of input, whereas the second model has a precision score of 0.99 for that class.
This would mean that even though the second model reports only 0.60 confidence, it holds its confidence to a higher standard than the first model and is therefore more likely to produce the correct result. When you have multiple models and multiple classes, it gets extremely hard to determine which classification is the best fit.
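To make the point concrete, here is a minimal sketch of one (assumed) way to compare predictions across models: weight each model's raw confidence by its measured precision for the predicted class. The function name and the weighting rule are illustrative, not part of any existing implementation.

```python
def combined_score(confidence, class_precision):
    """Weight a model's confidence by its per-class precision (illustrative only)."""
    return confidence * class_precision

# Model A: high raw confidence, but poor precision for this class.
model_a = combined_score(0.80, 0.03)   # 0.024
# Model B: lower raw confidence, but very high precision for this class.
model_b = combined_score(0.60, 0.99)   # 0.594

# Despite the lower raw confidence, model B is the better pick here.
assert model_b > model_a
```

This is just one possible weighting; the general difficulty described above (many models, many classes, differing calibration) remains.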
We are taking this under consideration.
We have two things in mind:
see #18
Add the ability for multiple models to simultaneously listen to the current responses. Each model will listen to utterances, and on a match a new conversation slot will open and wait for further relevant utterances. The slots will keep listening to new utterances and will "raise" their hand for relevant matches.
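A rough sketch of the slot idea above, under stated assumptions: the class names, the topic-keyword match rule, and the dispatcher are all hypothetical stand-ins for whatever matching logic the models would actually use.

```python
class Slot:
    """An open conversation slot that keeps listening for relevant utterances."""

    def __init__(self, model_name, topic):
        self.model_name = model_name
        self.topic = topic

    def raises_hand(self, utterance):
        # Toy relevance check: the slot's topic keyword appears in the utterance.
        return self.topic in utterance.lower()


class Dispatcher:
    """Routes each utterance to all models and all open slots."""

    def __init__(self, models):
        self.models = models      # {model_name: set of topic keywords it matches}
        self.slots = []           # currently open conversation slots

    def hear(self, utterance):
        # Existing slots "raise their hand" for relevant utterances.
        raised = [s for s in self.slots if s.raises_hand(utterance)]
        # Every model also listens; a fresh match opens a new slot.
        for name, topics in self.models.items():
            for topic in topics:
                already_open = any(
                    s.model_name == name and s.topic == topic for s in self.slots
                )
                if topic in utterance.lower() and not already_open:
                    self.slots.append(Slot(name, topic))
        return raised


dispatcher = Dispatcher({"weather-model": {"weather"}, "booking-model": {"flight"}})
dispatcher.hear("what's the weather like?")         # opens a weather slot
raised = dispatcher.hear("will the weather hold?")  # that slot raises its hand
```

The design choice here is that slots, not models, track ongoing conversations, so several models can stay subscribed to the same utterance stream without stepping on each other.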