Sure, QA is possible by providing short stories. I would imagine that these thousands of questions would probably fall into a few more general intents, in which case NLU should be able to handle them. Have you tried out the tensorflow_embedding pipeline yet? In our experience it works better for things like FAQ intents.
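If it helps, switching is just an NLU config change, e.g. roughly (assuming a Rasa NLU version that ships the tensorflow_embedding preset):
language: "en"
pipeline: "tensorflow_embedding"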
Ok, that is interesting! So let's try to build an example, if you don't mind. Say we wanted to build a chatbot to help potential users understand the Rasa offering and how it compares against other, similar chatbots. So the chatbot would need to answer the following questions (sample):
Q1: Why python?
A1: Because of its ecosystem of machine learning tools. Head over to “But I don’t code in python!” for details.
Q2: Is this only for ML experts?
A2: You can use Rasa if you don’t know anything about machine learning, but if you do it’s easy to experiment.
Q3: How much training data do I need?
A3: You can bootstrap from zero training data by using interactive learning. Try the tutorials!
Q4: How is Rasa different from other approaches?
A4: Rather than writing a bunch of if/else statements, a Rasa bot learns from real conversations. A probabilistic model chooses which action to take, and this can be trained using supervised, reinforcement, or interactive learning.
Q5: Where to Start
A5: After going through the Installation, most users should start with Building a Simple Bot. However, if you already have a bunch of conversations you’d like to use as a training set, check the Supervised Learning Tutorial.
Q6: What is Rasa NLU
A6: You can think of Rasa NLU as a set of high level APIs for building your own language parser using existing NLP and ML libraries. The setup process is designed to be as simple as possible.
Q7: What is Rasa Core
A7: Rasa Core takes in structured input: intents and entities, button clicks, etc., and decides which action your bot should run next. If you want your system to handle free text, you need to also use Rasa NLU or another NLU tool.
So the intents would be something like:
intents:
- rasa_whatis
- rasa_technical
- rasa_comparison
- ... (other intents like greet, etc.)
The NLU intent examples would be:
## intent:rasa_whatis
- What is Rasa NLU
- What is Rasa Core
## intent:rasa_technical
- Why python?
- Is this only for ML experts?
- How much training data do I need?
- Where to Start
## intent:rasa_comparison
- How is Rasa different from other approaches?
And the utters would be:
utter_technical_A1:
- text: "Because of its ecosystem of machine learning tools. Head over to But I don’t code in python! for details."
utter_technical_A2:
- text: "You can use Rasa if you don’t know anything about machine learning, but if you do it’s easy to experiment."
utter_technical_A3:
- text: "You can bootstrap from zero training data by using interactive learning. Try the tutorials!"
utter_comparison_A4:
- text: "Rather than writing a bunch of if/else statements, a Rasa bot learns from real conversations. A probabilistic model chooses which action to take, and this can be trained using supervised, reinforcement, or interactive learning."
utter_technical_A5:
- text: "After going through the Installation, most users should start with Building a Simple Bot. However, if you already have a bunch of conversations you’d like to use as a training set, check the Supervised Learning Tutorial."
utter_whatis_A6:
- text: "You can think of Rasa NLU as a set of high level APIs for building your own language parser using existing NLP and ML libraries. The setup process is designed to be as simple as possible."
utter_whatis_A7:
- text: "Rasa Core takes in structured input: intents and entities, button clicks, etc., and decides which action your bot should run next. If you want your system to handle free text, you need to also use Rasa NLU or another NLU tool."
Is that right? But again... I am not sure how I would build the stories. For example, would it be like:
## example 1
* rasa_technical
- utter_technical_A1
## example 2
* rasa_technical
- utter_technical_A2
But then how does the correct one get chosen when there are two equal options in the stories? My only guess would be to extract some entities and use those in the examples (e.g. rasa_technical{"technology": "python"})? But that would mean that we need to model each intent to extract entities from it... unless there is a way to use the entire user input (sentence) in the same way as the entities?
Could you possibly share your thoughts or provide a few example stories for the above Q&As?
@akelad ... sorry to insist on this, but I think it will help a lot of people if we have some advice on how this could be done. So I just wanted to see if you or anyone from your team (@amn41 @Ghostvv ) had any insights on that?
In order for the algorithm to pick utter_technical_A1 or utter_technical_A2, you'd need to add longer stories that define whether A1 or A2 should be picked.
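For example, roughly something like this, where the position in the story decides which answer comes next (just a sketch, not tested):

## technical drill-down
* rasa_technical
  - utter_technical_A1
* rasa_technical
  - utter_technical_A2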
But this will only work for question answering when drilling down into the same topic. Most question answering scenarios (stories) will be just one level: q --> a
What would work is a way for the user's question to be tokenised and the tokens used to determine the answer. At the moment the only way to achieve that would be to try to extract entities and use those in all the stories for the user intents. E.g.:
## example 1
* rasa_technical{"technology": "python"}
- utter_technical_A1
## example 2
* rasa_technical{"subject": "machine learning"})
- utter_technical_A2
But I think that extracting entities in such a generic way might be challenging, as we would need to model the entity names in an abstract way to fit all examples. If there were a generic way to include the tokens/words from the user input, however, they would influence the utter/answer chosen without the need to extract them as entities. Unless we can force them into the entity values somehow?
I would also like to see some insights from the team about how to solve the QA problem using Rasa.
Ok, so for the case you've described above you would indeed probably need separate intents. But you could e.g. define these similarly, like rasa_technical_python, rasa_technical_trainingdata, etc., with the tensorflow_embedding pipeline. You would definitely need to provide more NLU examples than just one per intent, though.
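A rough sketch of what that could look like (the extra example phrasings are invented here for illustration; in practice you'd want many more per intent):

## intent:rasa_technical_python
- Why python?
- Why did you choose python?
- What is the reason for using python?

## intent:rasa_technical_trainingdata
- How much training data do I need?
- Do I need a lot of training data?
- Can I start without any training data?

The stories then become simple one-turn mappings:

## faq python
* rasa_technical_python
  - utter_technical_A1

## faq training data
* rasa_technical_trainingdata
  - utter_technical_A3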
Probably it would be safer to assume that Rasa is not designed for question answering yet. The only way I see this working (probable feature suggestion?) would be to allow the entire user input/sentence (featurised using word embeddings, as you suggested) to be used within the different stories (instead of the entities). That way the dialogue flow module (LSTM) would be biased by the additional features from the user input to eventually select the correct utter.
If that is implemented, I think, it will be another major advantage of Rasa over other chatbots!
To be honest, I am very tempted to try a workaround where, say, I create 10 slots (named token1 .. token10), choose the (up to) 10 most discriminant words from the user input, and include those in all my relevant (Q&A) stories, to see if that makes a difference. If I have time to do that I will report back.
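Something along these lines is what I have in mind, as a rough, untested sketch of a custom action (the stopword list, slot names and action name are all made up, and the "top 10 most discriminant words" is reduced here to simply the first 10 non-stopword tokens):

# actions.py - rough sketch of the slot-filling workaround described above
from rasa_core_sdk import Action          # in newer versions: from rasa_sdk import Action
from rasa_core_sdk.events import SlotSet  # in newer versions: from rasa_sdk.events import SlotSet

# a tiny, made-up stopword list just for illustration
STOPWORDS = {"is", "the", "a", "an", "to", "for", "of", "do", "i", "how", "what", "why", "in"}

class ActionFillTokenSlots(Action):
    def name(self):
        return "action_fill_token_slots"

    def run(self, dispatcher, tracker, domain):
        text = tracker.latest_message.get("text") or ""
        # naive "keyword extraction": lowercase, drop stopwords, keep the first 10 tokens
        tokens = [t for t in text.lower().split() if t not in STOPWORDS][:10]
        # fill token1..token10, resetting the ones that are not used
        return [SlotSet("token{}".format(i + 1), tokens[i] if i < len(tokens) else None)
                for i in range(10)]

The action would then have to run in every Q&A story right after the user turn and before the utter, which is admittedly clunky.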
Were you able to find any solution for this? I am looking for the same.
"would be to allow for the entire user input/sentence (featurised using word embedding as you suggested) to be used within the different stories (instead of the entities)" could you explain what you mean by this?
So my thinking is that if we use the KerasPolicy (not just the MemoizationPolicy), which is a recurrent (LSTM) network in Rasa's case, it accepts as input the features generated from the current and previous actions, including the slots and entities extracted at each turn. The entity extraction is very important, as it distinguishes the follow-up action when the stories contain similar paths, and is therefore the only discriminating factor for the decision.
Therefore, if no entities are extracted, we face a problem where the same historical dialogue flow leaves two choices but no indication of which one to take. So we could potentially include the salient features extracted from the user input as additional information. These could be word-embedding features added to the LSTM's input in the same way the entities would have been. The result would be that the LSTM has enough information to eventually identify the correct utter (i.e. answer) based on the user's input (i.e. question).
Since this automatic way of extracting additional features from the user input is not available at the moment, I was thinking of creating 10 slots (making sure those are NOT unfeaturized), running my own keyword extraction, and filling those slots with the extracted words, just so the classifier has additional features to distinguish the next action/utter.
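In the domain that would presumably look something like this (type text, so not unfeaturized; although, if I read the docs correctly, a text slot is featurised only by whether it is set, so a categorical type might be needed for the actual values to matter):

slots:
  token1:
    type: text
  token2:
    type: text
  # ... and so on up to token10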
Does this make sense?
utter_name:
- text: "What's your name?"
Then action_listen runs and the user replies with a name, e.g. "jack". I want that text, i.e. the name ("jack"), stored in my name slot. How can we do that?
I'm going to close this issue now since this is more of a discussion than an issue. Please make a new post on our forum for these kinds of questions.
Hi, if a user asks unwanted questions, or questions which are not listed in our stories, how can I handle that?
Please check the fallback actions in the Rasa documentation.
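For example, something roughly like this in the policy configuration (the thresholds here are just illustrative):

policies:
  - name: "FallbackPolicy"
    nlu_threshold: 0.4
    core_threshold: 0.3
    fallback_action_name: "action_default_fallback"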