Closed j2l closed 5 years ago
Hello!
It seems that the entity is resolved fine, so the problem is not with the NER. It seems to be with the NLP, which is returning an intent of None instead of email. This can happen if you call NlpManager.process before training the bot: without training, the entities can still be extracted, but not the intents.
Also, about the NLG: you can use entities in the response in handlebars format, so if you put I may write to you at {{ mail }}
it will be replaced with the mail extracted from the conversation.
Can you check that the NLP is trained and returning the intents?
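As a quick check, a minimal sketch of the full cycle with node-nlp's NlpManager (the regex pattern and utterances are illustrative, not taken from your XLS model):

```javascript
const { NlpManager } = require('node-nlp');

const manager = new NlpManager({ languages: ['en'] });

// Regex entity for mails (simplified, illustrative pattern)
manager.addRegexEntity('mail', 'en', /\b\S+@\S+\.\S+\b/gi);

// Utterance that references the entity, mapped to the 'email' intent
manager.addDocument('en', 'my mail is %mail%', 'email');

// Answer using the extracted entity in handlebars format
manager.addAnswer('en', 'email', 'I may write to you at {{ mail }}');

(async () => {
  // train() must run before process(); otherwise entities are still
  // extracted but the intent comes back as 'None'
  await manager.train();
  const result = await manager.process('en', 'my mail is ana@example.com');
  console.log(result.intent, result.answer);
})();
```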
Thank you for your detailed and "right on the spot" reply. You provided the solution plus a great explanation of how it works.
Yes, I commented out the training, thinking it was done at model creation from the XLS table. It's working now!
Could I add a regex-recognized pattern (here the email, but it could be something else) to the model when it exceeds the threshold? More generally, sorry for the noob question, but how could it grow its knowledge from users' inputs plus validation and a confidence threshold? Could users train it while chatting?
For example, I tried give me spiderman's name
(a pattern far from the trained utterance what is the real name of %hero%?)
and it threw the right answer, which blew me away:
He is Peter Parker
{"locale":"en","localeIso2":"en","language":"English","utterance":"give me spiderman name","classification":[{"label":"realname","value":0.8603680945376588},{"label":"email","value":0.3466625112549114},{"label":"whois","value":0.27476366171064276},{"label":"whereis","value":0.27476366171064276}],"intent":"realname","domain":"default","score":0.8603680945376588,"entities":[{"start":8,"end":16,"len":9,"levenshtein":0,"accuracy":1,"option":"spiderman","sourceText":"Spiderman","entity":"hero","utteranceText":"spiderman"}],"sentiment":{"score":0,"comparative":0,"vote":"neutral","numWords":4,"numHits":0,"type":"senticon","language":"en"},"srcAnswer":"He is Peter Parker","answer":"He is Peter Parker"}
Could we tell it, under a .9 score, to ask for validation of the bot's reply,
and add give me = what is
to the model (then train only that part) to get a .99 score next time?
Could it write back to the XLS file? Or is there a tool to reverse model.nlp
to CSV or another format?
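A sketch of the threshold idea in plain JavaScript (reviewStep and THRESHOLD are made-up names, not nlp.js API):

```javascript
// Hypothetical confidence gate around a classification result.
const THRESHOLD = 0.9;

function reviewStep(result) {
  if (result.intent === 'None' || result.score < THRESHOLD) {
    // Low confidence: ask the user to validate the reply before learning from it.
    return { action: 'ask-validation', utterance: result.utterance };
  }
  return { action: 'answer', answer: result.answer };
}

// Once the user confirms the intent, the utterance could be added and the
// model retrained, e.g. with node-nlp:
//   manager.addDocument('en', result.utterance, confirmedIntent);
//   await manager.train();

console.log(reviewStep({ intent: 'realname', score: 0.86, utterance: 'give me spiderman name' }));
// → { action: 'ask-validation', utterance: 'give me spiderman name' }
```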
I feel like a child. Thank you again.
Hello, after multiple tests, I'm stuck on regex entities in the XLS :(
Issue Template
I searched the issues and tried the NER Manager example before filing this issue.
Summary
Adding an intent and response tied to the regex entity didn't return any intent or response.
Simplest Example to Reproduce
NER:
NLP:
NLG:
Added code (to answer): "Sorry, I don't understand, " + JSON.stringify(result) + ", ";
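The NER/NLP/NLG snippets above didn't survive the paste; a rough reconstruction of the setup being described (assuming node-nlp's NlpManager API, with a simplified regex) might be:

```javascript
const { NlpManager } = require('node-nlp');
const manager = new NlpManager({ languages: ['en'] });

// NER: regex entity for mails (simplified, illustrative pattern)
manager.addRegexEntity('mail', 'en', /\b\S+@\S+\.\S+\b/gi);

// NLP: an utterance that references the entity; without a document like
// this, the entity is extracted but the intent stays 'None'
manager.addDocument('en', 'my mail is %mail%', 'email');

// NLG: answer using the extracted entity
manager.addAnswer('en', 'email', 'I may write to you at {{ mail }}');

(async () => {
  await manager.train();
  const result = await manager.process('en', 'my mail is dfdf@fffgfg.sd');
  const answer = result.answer
    || "Sorry, I don't understand, " + JSON.stringify(result) + ", ";
  console.log(answer);
})();
```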
Input:
my mail is dfdf@fffgfg.sd
Response:
Sorry, I don't understand, {"locale":"en","localeIso2":"en","language":"English","utterance":"my mail is dfdf@fffgfg.sd","intent":"None","domain":"default","score":1,"entities":[{"start":11,"end":25,"accuracy":1,"sourceText":"dfdf@fffgfg.sd","utteranceText":"dfdf@fffgfg.sd","entity":"mail"}],"sentiment":{"score":0,"comparative":0,"vote":"neutral","numWords":6,"numHits":0,"type":"senticon","language":"en"}}, .
Expected Behavior
I may write to you
(of course, in the future I'd like I may write to you at %mail%)
Software Version
nlp.js:
node:
npm:
Thanks for your help!