sribharghava opened this issue 7 years ago (status: Open)
Hi @sribharghava,
Indeed, we still need to include debugging tools for the classifiers part in Duckling.Debug. Pull requests welcome :)
A few pointers:
The probabilities are computed from the examples in the Corpus.hs files (see the sketch after these pointers).
The logic for training and ranking is here.
The resulting classifiers are generated here.
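For example, here's a rough sketch of what an examples block in a Corpus.hs file looks like (using the Numeral dimension for illustration; the module and constructor names are approximate and may differ a bit depending on the version you're on):

```haskell
{-# LANGUAGE OverloadedStrings #-}
-- Sketch of a Corpus.hs-style examples block (Numeral dimension).
-- Import paths and the exported name are approximate.
module Duckling.Numeral.EN.Corpus (allExamples) where

import Duckling.Numeral.Types (NumeralValue (..))
import Duckling.Testing.Types (Example, examples)

-- Each `examples` entry pairs an expected resolved value with the
-- surface strings that should resolve to it. The training step turns
-- these pairs into the per-rule probabilities used for ranking.
allExamples :: [Example]
allExamples = concat
  [ examples (NumeralValue 1)
      [ "1"
      , "one"
      ]
  , examples (NumeralValue 33)
      [ "33"
      , "thirty three"
      ]
  ]
```

Adding or removing strings in these lists is what shifts the learned probabilities.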
Thanks @patapizza for the clarification. I thought we weren't using classifiers in the first place. This helps.
Apart from supporting tools for debugging, is there a plan to return the probabilities associated with a prediction in the result at actual run/production time?
I feel that would help users filter out some false alarms by applying a threshold.
@sribharghava We're only using classifiers to disambiguate between valid parses (e.g. "(between 8 and 10) tomorrow" vs "(between 8) and (10 tomorrow)"). As these are not too frequent, the probabilities wouldn't be useful as a confidence level. It would be helpful to get a list of false alarms.
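To make that concrete, here's a tiny self-contained sketch of the idea (the names are made up for illustration and don't mirror our actual internals): each candidate parse is scored by summing log-probabilities learned from the corpus examples for the rules it uses, and the best-scoring candidate for a span wins:

```haskell
-- Illustrative naive-Bayes-style ranking over candidate parses.
-- All names here are hypothetical, not Duckling's real types.
import Data.List (maximumBy)
import Data.Ord (comparing)

-- A candidate parse: which rules fired, and the reading it corresponds to.
data Candidate = Candidate
  { rulesFired :: [String]
  , reading    :: String
  } deriving Show

-- Hypothetical per-rule log-probabilities learned from Corpus.hs examples.
ruleLogProb :: String -> Double
ruleLogProb "between <time> and <time>" = -0.2
ruleLogProb "<integer> <time-of-day>"   = -1.6
ruleLogProb _                           = -0.7

-- Score a candidate by summing the log-probabilities of its rules
-- (the naive independence assumption).
score :: Candidate -> Double
score = sum . map ruleLogProb . rulesFired

-- Keep the highest-scoring candidate among parses covering the same span.
pickBest :: [Candidate] -> Candidate
pickBest = maximumBy (comparing score)

main :: IO ()
main = print $ pickBest
  [ Candidate ["between <time> and <time>", "<intersect>"] "(between 8 and 10) tomorrow"
  , Candidate ["<integer> <time-of-day>", "<intersect>"]   "(between 8) and (10 tomorrow)"
  ]
```

Because the scores only ever compare candidates for the same input, they aren't calibrated as absolute confidence values, which is why exposing them as a threshold wouldn't buy you much.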
I can't find the probabilities associated with the rules while debugging with Duckling.Debug.
I just started using Duckling, and the reason behind it was its probabilistic nature, where I can edit the examples to alter the decision making. I followed the deprecated version's documentation at https://duckling.wit.ai/, which clearly used probabilities to make decisions, but I couldn't find anything equivalent in this implementation.
Please shed some light on this.
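For reference, this is roughly how I'm invoking the debugger (a minimal sketch; the exact imports and the dimension wrapper have changed between releases, e.g. Seal vs. This, so treat them as approximate):

```haskell
{-# LANGUAGE OverloadedStrings #-}
module Main where

-- Minimal debugging sketch; assumes a recent Duckling where the
-- dimension wrapper is Seal.
import Duckling.Core (Dimension (Time), Lang (EN), Seal (Seal), makeLocale)
import Duckling.Debug (debug)

main :: IO ()
main = do
  -- `debug` prints the rule trace for every candidate parse; as far as
  -- I can tell, it shows which rules fired but not the probabilities.
  _ <- debug (makeLocale EN Nothing) "between 8 and 10 tomorrow" [Seal Time]
  return ()
```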