Closed helgasvala closed 5 years ago
Thanks for raising this issue, @paulaWesselmann will get back to you about it soon. As for the REDP, that's the embedding policy in this repo, so yes it's supported already :)
Thanks! I saw that I was defining my agent twice, which is obviously wrong, but it's clearly getting confused somewhere with the interpreters and agents. My interpreter works fine but I don't know how to get my agent correct. :)
Hey @helgasvala. If you only keep your first import and definition of the agent, does this still happen?
Thank you for your answer, @paulaWesselmann. If I only keep the first import and definition, it stops giving another wrong answer but gives the fallback answer for every single utterance. My fallback threshold is at 0.65.
me: hey
bot: I'm sorry, I don't have the answer to your question, could you rephrase it? Alternatively head over to Google to find out more!
(answer_intent: fallback - wrong, parsed_intent: greet - correct, confidence: 0.95)
me: can I ask you about meditation?
bot: I'm really sorry, but maybe your question is too broad, phrased in a way that I don't understand or beyond my knowledge. You can try to rephrase your question or alternatively see if you find your answer on Google.
(answer_intent: fallback - wrong, parsed_intent: general_meditation - correct, confidence: 0.96)
me: why do we meditate?
bot: I'm sorry, I don't have the answer to your question, could you rephrase it? Alternatively head over to Google to find out more!
(answer_intent: fallback - wrong, parsed_intent: why_meditate - correct, confidence: 0.94)
etc.
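For context, a fallback of this style is two-sided: it can fire on the NLU confidence or on the core (dialogue) side. A minimal sketch of that logic, with hypothetical parameter names rather than the actual rasa_core implementation:

```python
def should_fallback(nlu_confidence, best_core_confidence,
                    nlu_threshold=0.65, core_threshold=0.3):
    """Rough two-sided fallback check: fall back when either the NLU
    confidence or the best core action confidence is below its threshold."""
    return nlu_confidence < nlu_threshold or best_core_confidence < core_threshold

# The confidences in the transcript (0.94-0.96) clear a 0.65 NLU threshold,
# so an NLU-side fallback would not fire for these messages.
print(should_fallback(0.95, 0.9))  # → False
print(should_fallback(0.95, 0.1))  # → True (triggered on the core side)
```

If the parsed confidences are well above the NLU threshold and the fallback still fires on every turn, the trigger is likely on the core side rather than the NLU side.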
I'm also concerned that my system says rasa_core is installed but returns nothing when I list packages to see which version is being used. :)
This is very odd. Can you run it with the --debug flag and see what the logger reports? Also, pip list | grep rasa_core may return nothing because your terminal is not using the same environment as your notebook.
In which command should I run it with the --debug flag? Mind you, I'm not using bash at all in my script because I want an interactive mode inside a Jupyter notebook. :)
try this:
import logging
logging.basicConfig(level="DEBUG")
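A slightly fuller version of that suggestion for a notebook, in case the root logger already has handlers attached (a sketch; the extra setLevel call is just a defensive fallback):

```python
import logging

# Route DEBUG output from all loggers (rasa_nlu, rasa_core, ...) to the notebook
logging.basicConfig(level="DEBUG")

# basicConfig is a no-op when handlers already exist, so force the level as well
logging.getLogger().setLevel(logging.DEBUG)
```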
INFO:rasa_nlu.training_data.loading:Training data format of nlupaths.md is md
INFO:rasa_nlu.training_data.training_data:Training data stats:
- intent examples: 162 (28 distinct intents)
- Found intents: [...]
- entity examples: 0 (0 distinct entities)
- found entities:
DEBUG:rasa_nlu.training_data.training_data:Validating training data...
DEBUG:matplotlib:$HOME=/Users/helgasvala
DEBUG:matplotlib:matplotlib data path /anaconda3/lib/python3.6/site-packages/matplotlib/mpl-data
DEBUG:matplotlib:loaded rc file /anaconda3/lib/python3.6/site-packages/matplotlib/mpl-data/matplotlibrc
DEBUG:matplotlib:matplotlib version 2.2.3
DEBUG:matplotlib:interactive is False
DEBUG:matplotlib:platform is darwin
DEBUG:matplotlib:loaded modules: ['rasa_nlu.training_data.training_data', 'rasa_nlu.training_data.util',
'rasa_nlu.training_data.loading', 'rasa_nlu.training_data.formats', 'rasa_nlu.training_data.formats.dialogflow', 'rasa_nlu.training_data.formats.readerwriter',
'rasa_nlu.training_data.formats.luis', 'rasa_nlu.training_data.formats.wit', 'rasa_nlu.training_data.formats.markdown', 'rasa_nlu.training_data.formats.rasa',
'rasa_nlu.config', 'rasa_nlu.model', 'rasa_nlu.components', 'rasa_nlu.persistor',
'tarfile', 'encodings.utf_8_sig', 'rasa_nlu.registry', 'rasa_nlu.classifiers',
'rasa_nlu.classifiers.keyword_intent_classifier', 'rasa_nlu.classifiers.mitie_intent_classifier', 'rasa_nlu.classifiers.sklearn_intent_classifier'] (THIS IS EVERYTHING FROM RASA)
INFO:rasa_nlu.model:Starting to train component tokenizer_whitespace
INFO:rasa_nlu.model:Finished training component.
INFO:rasa_nlu.model:Starting to train component ner_crf
INFO:rasa_nlu.model:Finished training component.
INFO:rasa_nlu.model:Starting to train component ner_synonyms
INFO:rasa_nlu.model:Finished training component.
INFO:rasa_nlu.model:Starting to train component intent_featurizer_count_vectors
INFO:rasa_nlu.model:Finished training component.
INFO:rasa_nlu.model:Starting to train component intent_classifier_tensorflow_embedding
DEBUG:rasa_nlu.classifiers.embedding_intent_classifier:Check if num_neg 20 is smaller than number of intents 28, else set num_neg to the number of intents - 1
INFO:rasa_nlu.classifiers.embedding_intent_classifier:Accuracy is updated every 10 epochs
Epochs: 100%|██████████| 300/300 [00:05<00:00, 57.53it/s, loss=0.179, acc=0.963]
INFO:rasa_nlu.classifiers.embedding_intent_classifier:Finished training embedding classifier, loss=0.179, train accuracy=0.963
INFO:rasa_nlu.model:Finished training component.
INFO:rasa_nlu.model:Successfully saved model [...]
DEBUG:pykwalify.compat:Using yaml library: /anaconda3/lib/python3.6/site-packages/ruamel/yaml/__init__.py
INFO:apscheduler.scheduler:Scheduler started
DEBUG:apscheduler.scheduler:Looking for jobs to run
DEBUG:apscheduler.scheduler:No jobs; waiting until a job is added
DEBUG:rasa_core.training.generator:Generated trackers will be deduplicated based on their unique last 5 states.
DEBUG:rasa_core.training.generator:Number of augmentation rounds is 3
DEBUG:rasa_core.training.generator:Starting data generation round 0 ... (with 1 trackers)
Processed Story Blocks: 100%|██████████| 12/12 [00:00<00:00, 600.42it/s, # trackers=1]
DEBUG:rasa_core.training.generator:Finished phase (12 training samples found).
DEBUG:rasa_core.training.generator:Data generation rounds finished.
DEBUG:rasa_core.training.generator:Found 0 unused checkpoints
DEBUG:rasa_core.training.generator:Starting augmentation round 0 ... (with 12 trackers)
Processed Story Blocks: 100%|██████████| 12/12 [00:00<00:00, 343.97it/s, # trackers=12]
DEBUG:rasa_core.training.generator:Finished phase (156 training samples found).
DEBUG:rasa_core.training.generator:Starting augmentation round 1 ... (with 20 trackers)
Processed Story Blocks: 100%|██████████| 12/12 [00:00<00:00, 223.24it/s, # trackers=19]
DEBUG:rasa_core.training.generator:Finished phase (373 training samples found).
DEBUG:rasa_core.training.generator:Starting augmentation round 2 ... (with 20 trackers)
Processed Story Blocks: 100%|██████████| 12/12 [00:00<00:00, 234.83it/s, # trackers=15]
DEBUG:rasa_core.training.generator:Finished phase (546 training samples found).
DEBUG:rasa_core.training.generator:Found 546 training trackers.
DEBUG:rasa_core.agent:Agent trainer got kwargs: {'validation_split': 0.0}
DEBUG:rasa_core.featurizers:Creating states and action examples from collected trackers (by MaxHistoryTrackerFeaturizer(SingleStateFeaturizer))...
Processed trackers: 100%|██████████| 546/546 [00:11<00:00, 46.35it/s, # actions=800]
DEBUG:rasa_core.featurizers:Created 800 action examples.
Processed actions: 800it [00:01, 604.21it/s, # examples=800]
DEBUG:rasa_core.policies.memoization:Memorized 800 unique examples.
DEBUG:rasa_core.featurizers:Creating states and action examples from collected trackers (by MaxHistoryTrackerFeaturizer(BinarySingleStateFeaturizer))...
Processed trackers: 100%|██████████| 546/546 [00:12<00:00, 44.90it/s, # actions=800]
DEBUG:rasa_core.featurizers:Created 800 action examples.
DEBUG:rasa_core.policies.keras_policy:None
INFO:rasa_core.policies.keras_policy:Fitting model with 800 total samples and a validation split of 0.1
DEBUG:rasa_core.policies.policy:Parameters ignored by `model.fit(...)`: {}
Probably of more interest is this, when I try to use the chatbot. :)
DEBUG:rasa_core.tracker_store:Recreating tracker for id 'default'
DEBUG:rasa_core.processor:Received user message 'why do we meditate?' with intent '{'name': 'why do we meditate?', 'confidence': 1.0}' and entities '[]'
DEBUG:rasa_core.processor:Logged UserUtterance - tracker now has 6 events
DEBUG:rasa_core.processor:Current slot values:
DEBUG:rasa_core.policies.memoization:Current tracker state [None, {}, {'prev_action_listen': 1.0, 'intent_hey': 1.0}, {'prev_utter_unclear': 1.0, 'intent_hey': 1.0}, {'prev_action_listen': 1.0, 'intent_why do we meditate?': 1.0}]
DEBUG:rasa_core.policies.memoization:There is no memorised next action
DEBUG:rasa_core.featurizers:Feature 'intent_hey' (value: '1.0') could not be found in feature map. Make sure you added all intents and entities to the domain
DEBUG:rasa_core.featurizers:Feature 'intent_hey' (value: '1.0') could not be found in feature map. Make sure you added all intents and entities to the domain
DEBUG:rasa_core.featurizers:Feature 'intent_why do we meditate?' (value: '1.0') could not be found in feature map. Make sure you added all intents and entities to the domain
DEBUG:rasa_core.policies.ensemble:Predicted next action using policy_2_FallbackPolicy
DEBUG:rasa_core.processor:Predicted next action 'utter_unclear' with prob 1.00.
DEBUG:rasa_core.processor:Action 'utter_unclear' ended with events '[]'
DEBUG:rasa_core.processor:Bot utterance 'BotUttered(text: I'm really sorry, but maybe your question is too broad, phrased in a way that I don't understand or beyond my knowledge. You can try to rephrase your question or alternatively see if you find your answer on Google., data: {
"elements": null,
"buttons": null,
"attachment": null
})'
DEBUG:rasa_core.policies.memoization:Current tracker state [{}, {'prev_action_listen': 1.0, 'intent_hey': 1.0}, {'prev_utter_unclear': 1.0, 'intent_hey': 1.0}, {'prev_action_listen': 1.0, 'intent_why do we meditate?': 1.0}, {'prev_utter_unclear': 1.0, 'intent_why do we meditate?': 1.0}]
DEBUG:rasa_core.policies.memoization:There is no memorised next action
DEBUG:rasa_core.featurizers:Feature 'intent_hey' (value: '1.0') could not be found in feature map. Make sure you added all intents and entities to the domain
DEBUG:rasa_core.featurizers:Feature 'intent_hey' (value: '1.0') could not be found in feature map. Make sure you added all intents and entities to the domain
DEBUG:rasa_core.featurizers:Feature 'intent_why do we meditate?' (value: '1.0') could not be found in feature map. Make sure you added all intents and entities to the domain
DEBUG:rasa_core.featurizers:Feature 'intent_why do we meditate?' (value: '1.0') could not be found in feature map. Make sure you added all intents and entities to the domain
DEBUG:rasa_core.policies.ensemble:Predicted next action using policy_2_FallbackPolicy
DEBUG:rasa_core.processor:Predicted next action 'action_listen' with prob 1.00.
DEBUG:rasa_core.processor:Action 'action_listen' ended with events '[]'
It looks like there is something missing in your domain file. Make sure the nlu_data, stories, and domain files match in terms of the intents, actions, and entities used.
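Since all three files are plain text, that cross-check can be done mechanically. A sketch using stdlib regexes; it assumes the flat markdown/YAML layout shown in this thread, not a full YAML parser:

```python
import re

def story_intents(stories_md):
    """Intent names referenced in a markdown stories file ('* intent' lines)."""
    return set(re.findall(r"^\*\s*(\w+)", stories_md, flags=re.M))

def story_actions(stories_md):
    """Action names referenced in a stories file ('- utter_*' lines)."""
    return set(re.findall(r"^\s*-\s*(utter_\w+)", stories_md, flags=re.M))

def nlu_intents(nlu_md):
    """Intent names declared as '## intent:<name>' headers in the NLU file."""
    return set(re.findall(r"^##\s*intent:(\w+)", nlu_md, flags=re.M))

def domain_list(domain_yml, section):
    """Naive extraction of the '- item' list under a top-level key
    (assumes the flat domain layout used in this thread)."""
    block = re.search(rf"^{section}:\n((?:\s*-\s*\w+\n)+)", domain_yml, flags=re.M)
    return set(re.findall(r"-\s*(\w+)", block.group(1))) if block else set()

# Tiny inline example; in the notebook you would read the real files instead.
domain = "intents:\n- greet\nactions:\n- utter_greet\n"
stories = "## greet path\n* greet\n  - utter_greet\n* feeling\n  - utter_feeling\n"
nlu = "## intent:greet\n- hey\n"

print(story_intents(stories) - domain_list(domain, "intents"))  # intents missing from the domain
print(story_actions(stories) - domain_list(domain, "actions"))  # actions missing from the domain
print(story_intents(stories) - nlu_intents(nlu))                # intents with no training data
```

Running the same three set differences over domainpaths.yml, storiespaths.md, and nlupaths.md would list any intent or action used in one file but missing from another.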
There was one intent that wasn't supposed to be there. I deleted it, but the problem remains exactly the same (the only thing uttered back is the fallback).
I have a question, I have some paths that have the same response:
## meditation path
* general_meditation
- utter_general
* why_meditate
- utter_why_meditate
* elaborate_why_meditate
- utter_elaborate_why_meditate
## enlightenment path
* general_enlightenment
- utter_general
* free_enlightened_meditation
- utter_free_enlightened_meditation
* elaborate_free_enlightened_meditation
- utter_elaborate_free_enlightened_meditation
## liberation path
* general_enlightenment
- utter_general
* difference_enlightenment_liberation
- utter_difference_enlightenment_liberation
etc. The only thing I can think of at this point that doesn't "match" is the utter_general. I just thought it made no sense to create different intents for the same answer ("Sure! What would you like to know?"). Am I wrong for thinking that?
Otherwise, it really seems like there is something wrong with the installation. The interpreter seemed to work fine, but when I was trying out the memory by following a path thoroughly, it went like this:
me: general_enlightenment
bot: - utter_general
me: free_enlightened_meditation
bot: - utter_free_enlightened_meditation
me: elaborate_free_enlightened_meditation (the same utterances as I would use with elaborate_why_meditate)
bot: - utter_elaborate_why_meditate
with no regard to anything that has gone on, which is not exactly what an LSTM claims to do. :) I'm doing experiments for my master's thesis and first thought about just writing up these results, but this really seems wrong, and I don't want to report false results!
Can you post what your domain file looks like?
domainpaths_yml = """
intents:
- greet
- feeling
- general_meditation
- why_meditate
- elaborate_why_meditate
- elaborate_free_enlightened_meditation
- reach_enlightenment
- western_not_enlightened
- buddhist_not_enlightened
- suffering_relief
- general_enlightenment
- free_enlightened_meditation
- general_enlightenment
- difference_enlightenment_liberation
- general_feeling
- feeling_control
- modern_psych
- general_thought
- generate_thoughts
- decide_thoughts
- general_mindfulness
- mindfulness_present
- what_mindfulness
- general_evolution
- buddhism_evolution
- irritated_en
- thanks_en
- goodbye_en
actions:
- utter_greet
- utter_feeling
- utter_general
- utter_why_meditate
- utter_elaborate_why_meditate
- utter_elaborate_free_enlightened_meditation
- utter_reach_enlightenment
- utter_western_not_enlightened
- utter_buddhist_not_enlightened
- utter_suffering_relief
- utter_free_enlightened_meditation
- utter_difference_enlightenment_liberation
- utter_feeling_control
- utter_modern_psych_feelings
- utter_generate_thoughts
- utter_decide_thoughts
- utter_mindfulness_present
- utter_what_mindfulness
- utter_buddhism_evolution
- utter_irritated_en
- utter_thanks_en
- utter_unclear
- utter_goodbye_en
templates:
utter_greet:
- text: "Good day, nice to see you! Can you ask about something specific?"
- text: "Hey, it's so nice to see you today! If there's anything specific you'd like to know about Buddha, fire away!"
- text: "Good day to you! I'm here to answer the specific questions you may have about Buddhism. :)"
utter_feeling:
- text: "I'm doing great, thank you very much! Do you want to ask about something Buddhism-related? :)"
- text: "I'm very good! Fire away. =) "
- text: "I'm excellent, I hope you are too. Do you have any questions about Buddhism?"
utter_general:
- text: "Sure! What do you want to know?"
- text: "Sure! Do you have a question?"
- text: "Of course! What would you like to know?"
utter_why_meditate:
- text: "The Western way of meditation is often seen as just a way for people to handle stress."
utter_elaborate_why_meditate:
- text: "The Eastern way is essentially to end suffering, since people suffer and make other people suffer. Meditation can help an individual reach enlightenment, making suffering less. Since Buddhism doesn't see us as having any self, it is a way of releasing and detaching from our sense of self."
utter_free_enlightened_meditation:
- text: "There is a two-way relationship between enlightenment and liberation - a slight boost in either may boost the other."
utter_reach_enlightenment:
- text: "Unfortunately, few individuals reach full enlightenment by meditating, but it's possible."
utter_western_not_enlightened:
- text: "Therapeutically speaking, meditating can make your life better, a little lower in stress, anxiety, and other unwelcome things."
utter_buddhist_not_enlightened:
- text: "The Buddhist answer is more complicated. We're trying to help to end suffering by seeing the world more clearly. A bit of freedom can let you see a bit of truth."
utter_suffering_relief:
- text: "To do that, we have to quit identifying with the things that make us suffer, the things that are beyond our control. This might lead to us feeling as if there is no self at all."
utter_elaborate_free_enlightened_meditation:
- text: "Maybe less time spent on being irritated might be an incentive to meditate more, which might lead to more enlightenment."
utter_difference_enlightenment_liberation:
- text: "Both enlightenment and liberation are paths. The objective isn't to reach an end-point, but to become a little more liberated and a little more enlightened, bit by bit."
utter_feeling_control:
- text: "Feelings are, according to Buddha, not a part of us since we don't have a self."
utter_modern_psych_feelings:
- text: "Modern psychology holds that we don't consciously control anything. This is consistent with ancient Buddhist texts as well as modern meditation teachers."
utter_generate_thoughts:
- text: "We are not thought to generate thoughts ourselves; they just float into our awareness rather than being generated by a conscious self. The conscious mind thus receives thoughts instead of creating them."
utter_decide_thoughts:
- text: "A lot of thoughts are fighting for space. The conscious mind just tends to identify with the winning thought and take ownership of it."
utter_mindfulness_present:
- text: "It's misleading to think of an appreciation for the present moment as central to mindfulness. It's just a nice consequence. Early Buddhist texts don't even tend to mention it."
utter_what_mindfulness:
- text: "Mindfulness meditation leads very naturally toward the apprehension of not-self and can in principle lead you all the way there. And the reason it can do so is because it's about much more than living in the moment. It's about an exhaustive, careful, and calm examination that can radically alter your interpretation of that experience."
utter_buddhism_evolution:
- text: "Buddhism has evolved for the West but it hasn't severed the connection between current practice and ancient thought."
utter_unclear:
- text: "I'm sorry, I don't have the answer to your question, could you rephrase it? Alternatively head over to Google to find out more!"
- text: "I'm really sorry, but maybe your question is too broad, phrased in a way that I don't understand or beyond my knowledge. You can try to rephrase your question or alternatively see if you find your answer on Google."
utter_irritated_en:
- text: "I'm sorry I misunderstood you. Do you want to ask another question?"
- text: "Pardon me for misunderstanding. Is there anything else you'd like to ask?"
- text: "I'm afraid I've misunderstood something you said. Do you want to ask about something else?"
utter_thanks_en:
- text: "Glad to help! Is there anything else you'd like to know?"
- text: "I'm happy to help. Can I help you with something else?"
utter_goodbye_en:
- text: "Okay, it was nice talking to you! Have a nice day."
- text: "Alright, thanks for the talk, have a wonderful day!"
"""
%store domainpaths_yml > domainpaths.yml
print("Done!")
In the debug, it always says things like:
DEBUG:rasa_core.featurizers:Feature 'intent_why do we meditate?' (value: '1.0') could not be found in feature map. Make sure you added all intents and entities to the domain
when I typed in "why do we meditate?"
Of course there is no feature 'intent_why do we meditate?' in the feature map? 🤔 But on the other hand there is definitely
##intent:why_meditate
- why do we meditate?
in the nlu file.
I just wiped my computer and reinstalled everything in hopes of something changing but unfortunately it's exactly the same.
It looks like you are not loading the NLU file properly. I'm sorry I couldn't resolve this for you. I'll close this issue since this isn't a bug but a usage problem. Maybe try the Rasa forum to get help for this https://forum.rasa.com/. Good luck with your thesis!
Thanks, I posted in the forum a couple of days ago but I'm not getting answers. I don't understand how this is a usage problem, as I have no idea how else I could load the nlu file. These intents in the debug message still look very suspicious to me. :/
I saw someone in the forum with the same problem; his rasa_core version was 0.10.0 but he needed 0.10.4. I'm updating mine all the time but I'm still getting no output about which version it is, no matter whether I run it in the terminal or in the IPython notebook.
Hi @helgasvala, perhaps a stupid question -- are your stories in nlupaths.md or storiespaths.md?
storiespaths.md :) Just to give you the rest of the code:
storiespaths_md = """
## greet path
* greet
- utter_greet
* feeling
- utter_feeling
## meditation path
* general_meditation
- utter_general
* why_meditate
- utter_why_meditate
* elaborate_why_meditate
- utter_elaborate_why_meditate
* reach_enlightenment
- utter_reach_enlightenment
* western_not_enlightened
- utter_western_not_enlightened
* buddhist_not_enlightened
- utter_buddhist_not_enlightened
* suffering_relief
- utter_suffering_relief
## enlightenment path
* general_enlightenment
- utter_general
* free_enlightened_meditation
- utter_free_enlightened_meditation
* elaborate_free_enlightened_meditation
- utter_elaborate_free_enlightened_meditation
## liberation path
* general_enlightenment
- utter_general
* difference_enlightenment_liberation
- utter_difference_enlightenment_liberation
## feeling path
* general_feeling
- utter_general
* feeling_control
- utter_feeling_control
* modern_psych
- utter_modern_psych_feelings
## thought path
* general_thought
- utter_general
* generate_thoughts
- utter_generate_thoughts
* decide_thoughts
- utter_decide_thoughts
## mindfulness path
* general_mindfulness
- utter_general
* mindfulness_present
- utter_mindfulness_present
* what_mindfulness
- utter_what_mindfulness
## evolution path
* general_evolution
- utter_general
* buddhism_evolution
- utter_buddhism_evolution
## fallback
- utter_unclear
## irritation english
* irritated_en
- utter_irritated_en
## thanks english
* thanks_en
- utter_thanks_en
## goodbye english
* goodbye_en
- utter_goodbye_en
"""
%store storiespaths_md > storiespaths.md
nlupaths_md = """
## intent:greet
- good day!
- hey
- heey
- hello
- hi
- good morning
- good evening
- hey there
- hi hi
- ho ho
- heya
- hey ya
## intent:feeling
- what's up?
- whats up
- wazzup
- waaa
- what what
- how are you?
- how are you today?
- how r u
- how are things with you?
- how are things today?
- how r u?
- how are things with u?
## intent:general_meditation
- Can I ask you about meditation?
- Can you tell me about meditation?
- Can we talk about meditation?
## intent:why_meditate
- Why do we meditate?
- Why would I meditate?
- What's the point of meditation?
## intent:elaborate_why_meditate
- ok, what more?
- tell me more
- please elaborate
- care to elaborate?
- what do you mean by that?
- what about the eastern way?
- what is the buddhist approach to meditation?
## intent:elaborate_free_enlightened_meditation
- ok, what more?
- tell me more
- please elaborate
- care to elaborate?
- what do you mean by that?
- how do they boost each other?
- how do enlightenment and liberation work together?
## intent:reach_enlightenment
- am I sure to reach enlightenment if I meditate?
- is enlightenment certain with meditation?
## intent:western_not_enlightened
- Why would I meditate if I won't achieve full enlightenment?
- What is the point of meditating from a western view?
- why would I do it if I don't reach full enlightenment?
## intent:buddhist_not_enlightened
- What is the point of meditating from an eastern view?
- What about the buddhist way?
- What about buddhists?
## intent:suffering_relief
- How do we relief ourselves from suffering?
- How are we relieved from suffering?
- How do I relief myself from suffering?
- How do we end suffering?
## intent:general_enlightenment
- Can I ask you about enlightenment?
- Can you tell me about enlightenment?
- Can we talk about enlightenment?
## intent:free_enlightened_meditation
- Do we meditate to become free or enlightened?
- What is the relationship between enlightenment and liberation?
- How do enlightenment and liberation connect?
## intent:difference_enlightenment_liberation
- What is the difference between enlightenment and liberation?
- How do enlightenment and liberation connect?
- What is liberation?
- What about liberation?
## intent:general_feeling
- Can I ask you about feelings?
- Can you tell me about feelings?
- Can we talk about feelings?
## intent:feeling_control
- Is meditation a complete control of our feelings?
- Do we control our feelings?
- Do we have control over our feelings?
- What about feelings?
## intent:modern_psych
- What does modern psychology say about this?
- How does modern psychology explain this?
- How is this explained now?
## intent:general_thought
- Is meditation a complete control of our thoughts?
- Do we control our thoughts?
- Do we have control over our thoughts?
## intent:generate_thoughts
- Do we generate thoughts ourselves?
- Do i create thoughts?
- How are thoughts created?
- What about thoughts?
## intent:decide_thoughts
- Do we decide our thoughts?
- How are thoughts decided?
- How is each thought decided?
## intent:general_mindfulness
- Can I ask you about mindfulness?
- Can you tell me about mindfulness?
- Can we talk about mindfulness?
## intent:mindfulness_present
- Is mindfulness all about staying present?
- Is mindfulness just to stay in the present?
- Is mindfulness just living in the present moment?
## intent:what_mindfulness
- So what is mindfulness then?
- What is mindfulness?
- Can you explain mindfulness?
## intent:general_evolution
- Can I ask you about evolution?
- Can you tell me about evolution?
- Can we talk about evolution?
## intent:buddhism_evolution
- Has Buddhism evolved a lot?
- How was the evolution of Buddhism?
- Describe the evolution of Buddhism.
## intent:irritated_en
- this is not what I asked about
- this is not what I wanted to know
- I don't understand
- what do you mean?
- what is going on?
- this is not what I was asking
- why do you say that
- why can't you do this?
- why can't you answer me?
- this is not the answer I was looking for
- why are you giving me this?
- why are you talking about this?
- eugh
## intent:thanks_en
- yes
- I understand
- yeah
- yezz
- yes I think so
- yes I think I have something else
- I am curious
- I'm curious to know
- I'm really curious
- exactly
- precisely
- absolutely
- yeahyeah
- on point!
- without a doubt
- doubtless
- that's right!
- all right
- alright
- ok thanks
- ok thank you very much
- thanks
- thank you very much
- thank u
- thank you
- thanks for the help
- ok thanks for your help
- thanks for all the help
- thank youuu
## intent:goodbye_en
- no
- no I'm good
- no I think I'm good
- no thank you
- no there is nothing else
- no thanks for your help
- I don't think so
- Bye!
- Goodbye
- See you later
- Have a nice day
- see u
- bye-bye
- au revoir
- cheerio
- adieu
- arrivederci
"""
%store nlupaths_md > nlupaths.md
config = """
language: en
pipeline: tensorflow_embedding
"""
%store config > config.yml
print("Done!")
And this is how it looks when I try to get the versions. When I run updates on both rasa_nlu and rasa_core, the claim is that they're fully up to date.
(base) Helgas-MBP:~ helgasvala$ pip list | grep rasa_nlu
(base) Helgas-MBP:~ helgasvala$ pip list | grep rasa_core
(base) Helgas-MBP:~ helgasvala$ pip list | grep tensorflow
tensorflow 1.8.0
(base) Helgas-MBP:~ helgasvala$ pip list | grep python
ipython 7.3.0
ipython-genutils 0.2.0
python-crfsuite 0.9.6
python-dateutil 2.8.0
python-engineio 3.4.3
python-socketio 3.1.2
python-telegram-bot 10.1.0
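When pip list and the notebook disagree like this, it can help to ask the interpreter itself which package it sees. A small sketch (installed_version is a hypothetical helper, not part of rasa):

```python
import importlib

def installed_version(package_name):
    """Report a package's version as seen by this interpreter,
    sidestepping any pip-vs-notebook environment mismatch."""
    try:
        module = importlib.import_module(package_name)
    except ImportError:
        return None
    return getattr(module, "__version__", "unknown")

# Run inside the same notebook that trains the bot:
print(installed_version("rasa_core"))  # None means this environment can't see it
```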
That's strange. You did install via pip and not through a GitHub clone, right? Can you open up the rasa_core source code, find version.py (it should be in the rasa_core subfolder), and tell me your version that way?
I ran an update with pip and got version 0.13.3, is that enough? :)
Successfully installed jsonpickle-1.1 pykwalify-1.7.0 python-telegram-bot-11.1.0 rasa-core-0.13.3 tensorboard-1.12.2 tensorflow-1.12.0
Yes, that's fine. I'm checking now to see if the same bug is happening on 0.13.3
So I think what's going wrong is in this line
agent = Agent.load('models/dialogue', interpreter=model_directory)
-- the interpreter argument should be an actual interpreter, not a path to one. What exactly do you mean by this:
My interpreter works fine but I don't know how to get my agent correct.
Does it work if you pass agent = Agent.load('models/dialogue', interpreter=interpreter)?
I assume you've updated your code from where it was at the start with the multiple imports -- sharing that updated code could be helpful. And while you're at it, the outputs of type(interpreter) and type(agent.interpreter) would be good too.
Yeah, sure! It was working incorrectly, the agent not really being an agent, but I've updated my code now:
from rasa_core.agent import Agent
from rasa_core.interpreter import RasaNLUInterpreter
interpreter = RasaNLUInterpreter("models/nlu/default/current/")
agent = Agent.load("models/dialogue/", interpreter=interpreter)
type(interpreter)
rasa_core.interpreter.RasaNLUInterpreter
type(agent.interpreter)
rasa_core.interpreter.RasaNLUInterpreter
These changes looked promising, but unfortunately my dialogue hasn't changed at all.
me: hey
DEBUG:rasa_core.processor:Received user message 'hey' with intent '{'name': 'greet', 'confidence': 0.9571998119354248}' and entities '[]'
DEBUG:rasa_core.processor:Logged UserUtterance - tracker now has 6 events
DEBUG:rasa_core.processor:Current slot values:
DEBUG:rasa_core.policies.memoization:Current tracker state [None, {}, {'prev_action_listen': 1.0, 'intent_why_meditate': 1.0}, {'prev_utter_unclear': 1.0, 'intent_why_meditate': 1.0}, {'prev_action_listen': 1.0, 'intent_greet': 1.0}]
DEBUG:rasa_core.policies.memoization:There is no memorised next action
DEBUG:rasa_core.policies.ensemble:Predicted next action using policy_1_KerasPolicy
DEBUG:rasa_core.processor:Predicted next action 'utter_greet' with prob 0.87.
DEBUG:rasa_core.processor:Action 'utter_greet' ended with events '[]'
DEBUG:rasa_core.processor:Bot utterance 'BotUttered(text: Hey, it's so nice to see you today! If there's anything specific you'd like to know about Buddha, fire away!, data: { "elements": null, "buttons": null, "attachment": null })'
DEBUG:rasa_core.policies.memoization:Current tracker state [{}, {'prev_action_listen': 1.0, 'intent_why_meditate': 1.0}, {'prev_utter_unclear': 1.0, 'intent_why_meditate': 1.0}, {'prev_action_listen': 1.0, 'intent_greet': 1.0}, {'prev_utter_greet': 1.0, 'intent_greet': 1.0}]
DEBUG:rasa_core.policies.memoization:There is no memorised next action
DEBUG:rasa_core.policies.ensemble:Predicted next action using policy_1_KerasPolicy
DEBUG:rasa_core.processor:Predicted next action 'action_listen' with prob 1.00.
DEBUG:rasa_core.processor:Action 'action_listen' ended with events '[]'
buddhabot: Hey, it's so nice to see you today! If there's anything specific you'd like to know about Buddha, fire away! (correct)
me: why do we meditate?
DEBUG:rasa_core.tracker_store:Recreating tracker for id 'default'
DEBUG:rasa_core.processor:Received user message 'why do we meditate?' with intent '{'name': 'why_meditate', 'confidence': 0.9420458674430847}' and entities '[]'
DEBUG:rasa_core.processor:Logged UserUtterance - tracker now has 10 events
DEBUG:rasa_core.processor:Current slot values:
DEBUG:rasa_core.policies.memoization:Current tracker state [{'prev_action_listen': 1.0, 'intent_why_meditate': 1.0}, {'prev_utter_unclear': 1.0, 'intent_why_meditate': 1.0}, {'prev_action_listen': 1.0, 'intent_greet': 1.0}, {'prev_utter_greet': 1.0, 'intent_greet': 1.0}, {'prev_action_listen': 1.0, 'intent_why_meditate': 1.0}]
DEBUG:rasa_core.policies.memoization:There is no memorised next action
DEBUG:rasa_core.policies.ensemble:Predicted next action using policy_1_KerasPolicy
DEBUG:rasa_core.processor:Predicted next action 'utter_feeling' with prob 0.83.
DEBUG:rasa_core.processor:Action 'utter_feeling' ended with events '[]'
DEBUG:rasa_core.processor:Bot utterance 'BotUttered(text: I'm very good! Fire away. =) , data: { "elements": null, "buttons": null, "attachment": null })'
DEBUG:rasa_core.policies.memoization:Current tracker state [{'prev_utter_unclear': 1.0, 'intent_why_meditate': 1.0}, {'prev_action_listen': 1.0, 'intent_greet': 1.0}, {'prev_utter_greet': 1.0, 'intent_greet': 1.0}, {'prev_action_listen': 1.0, 'intent_why_meditate': 1.0}, {'prev_utter_feeling': 1.0, 'intent_why_meditate': 1.0}]
DEBUG:rasa_core.policies.memoization:There is no memorised next action
DEBUG:rasa_core.policies.ensemble:Predicted next action using policy_1_KerasPolicy
DEBUG:rasa_core.processor:Predicted next action 'action_listen' with prob 1.00.
DEBUG:rasa_core.processor:Action 'action_listen' ended with events '[]'
buddhabot: I'm very good! Fire away. =) _(correct interpreted intent, answers with `utter_feeling`)_

me: do we meditate to become free or enlightened?
```
DEBUG:rasa_core.tracker_store:Recreating tracker for id 'default'
DEBUG:rasa_core.processor:Received user message 'do we meditate to become free or enlightened?' with intent '{'name': 'free_enlightened_meditation', 'confidence': 0.9419595003128052}' and entities '[]'
DEBUG:rasa_core.processor:Logged UserUtterance - tracker now has 14 events
DEBUG:rasa_core.processor:Current slot values:
DEBUG:rasa_core.policies.memoization:Current tracker state [{'prev_action_listen': 1.0, 'intent_greet': 1.0}, {'prev_utter_greet': 1.0, 'intent_greet': 1.0}, {'prev_action_listen': 1.0, 'intent_why_meditate': 1.0}, {'prev_utter_feeling': 1.0, 'intent_why_meditate': 1.0}, {'prev_action_listen': 1.0, 'intent_free_enlightened_meditation': 1.0}]
DEBUG:rasa_core.policies.memoization:There is no memorised next action
DEBUG:rasa_core.policies.ensemble:Predicted next action using policy_1_KerasPolicy
DEBUG:rasa_core.processor:Predicted next action 'utter_free_enlightened_meditation' with prob 0.91.
DEBUG:rasa_core.processor:Action 'utter_free_enlightened_meditation' ended with events '[]'
DEBUG:rasa_core.processor:Bot utterance 'BotUttered(text: There is a two-way relationship between enlightenment and liberation - a slight boost in either may boost the other., data: { "elements": null, "buttons": null, "attachment": null })'
DEBUG:rasa_core.policies.memoization:Current tracker state [{'prev_utter_greet': 1.0, 'intent_greet': 1.0}, {'prev_action_listen': 1.0, 'intent_why_meditate': 1.0}, {'prev_utter_feeling': 1.0, 'intent_why_meditate': 1.0}, {'prev_action_listen': 1.0, 'intent_free_enlightened_meditation': 1.0}, {'prev_utter_free_enlightened_meditation': 1.0, 'intent_free_enlightened_meditation': 1.0}]
DEBUG:rasa_core.policies.memoization:There is no memorised next action
DEBUG:rasa_core.policies.ensemble:Predicted next action using policy_1_KerasPolicy
DEBUG:rasa_core.processor:Predicted next action 'action_listen' with prob 1.00.
DEBUG:rasa_core.processor:Action 'action_listen' ended with events '[]'
```
buddhabot: There is a two-way relationship between enlightenment and liberation - a slight boost in either may boost the other. _(correct)_

me: ok, what more?
```
DEBUG:rasa_core.tracker_store:Recreating tracker for id 'default'
DEBUG:rasa_core.processor:Received user message 'ok, what more?' with intent '{'name': 'elaborate_why_meditate', 'confidence': 0.7471952438354492}' and entities '[]'
DEBUG:rasa_core.processor:Logged UserUtterance - tracker now has 18 events
DEBUG:rasa_core.processor:Current slot values:
DEBUG:rasa_core.policies.memoization:Current tracker state [{'prev_action_listen': 1.0, 'intent_why_meditate': 1.0}, {'prev_utter_feeling': 1.0, 'intent_why_meditate': 1.0}, {'prev_action_listen': 1.0, 'intent_free_enlightened_meditation': 1.0}, {'prev_utter_free_enlightened_meditation': 1.0, 'intent_free_enlightened_meditation': 1.0}, {'prev_action_listen': 1.0, 'intent_elaborate_why_meditate': 1.0}]
DEBUG:rasa_core.policies.memoization:There is no memorised next action
DEBUG:rasa_core.policies.ensemble:Predicted next action using policy_2_FallbackPolicy
DEBUG:rasa_core.processor:Predicted next action 'utter_unclear' with prob 1.00.
DEBUG:rasa_core.processor:Action 'utter_unclear' ended with events '[]'
DEBUG:rasa_core.processor:Bot utterance 'BotUttered(text: I'm really sorry, but maybe your question is too broad, phrased in a way that I don't understand or beyond my knowledge. You can try to rephrase your question or alternatively see if you find your answer on Google., data: { "elements": null, "buttons": null, "attachment": null })'
DEBUG:rasa_core.policies.memoization:Current tracker state [{'prev_utter_feeling': 1.0, 'intent_why_meditate': 1.0}, {'prev_action_listen': 1.0, 'intent_free_enlightened_meditation': 1.0}, {'prev_utter_free_enlightened_meditation': 1.0, 'intent_free_enlightened_meditation': 1.0}, {'prev_action_listen': 1.0, 'intent_elaborate_why_meditate': 1.0}, {'prev_utter_unclear': 1.0, 'intent_elaborate_why_meditate': 1.0}]
DEBUG:rasa_core.policies.memoization:There is no memorised next action
DEBUG:rasa_core.policies.ensemble:Predicted next action using policy_2_FallbackPolicy
DEBUG:rasa_core.processor:Predicted next action 'action_listen' with prob 1.00.
DEBUG:rasa_core.processor:Action 'action_listen' ended with events '[]'
```
buddhabot: I'm really sorry, but maybe your question is too broad, phrased in a way that I don't understand or beyond my knowledge. You can try to rephrase your question or alternatively see if you find your answer on Google. _(based on the previous conversation, the intent should be `elaborate_free_enlightened_meditation`, not `elaborate_why_meditate`, but it answers with the fallback even though the confidence was over the 65% cut-off)_.
So not much has changed except the types are now right. Thanks for your help.
> based on the previous conversation, the intent should be `elaborate_free_enlightened_meditation`, not `elaborate_why_meditate`
The intent classifier is not getting information from the story data, as it is only trained on the NLU data. You can't really expect it to know that "ok, what more?" should be `elaborate_free_enlightened_meditation` and not `elaborate_why_meditate` when you have it listed under both intents. A better way would be to have just one `elaborate` intent, and then stories that capture where the user came from and what type of elaboration that `elaborate` invokes.
> but it answers with the fallback even though the confidence was over the 65% cut-off
The `FallbackPolicy` here is being invoked not because of a low NLU classification, but because of a low Core prediction (i.e. although the NLU classification confidence is 0.75, the KerasPolicy's prediction confidence isn't above 0.65). A `core_threshold` of 65% is pretty high, especially with the very small number of stories you have for 27 intents; to put it in perspective, the default thresholds for both NLU and Core in the `FallbackPolicy` are 0.3.
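The two-threshold behaviour described above can be sketched as a toy function. This is only an illustration of the decision rule, not the actual `FallbackPolicy` implementation; the function name is hypothetical, and the defaults mirror the 0.3 defaults mentioned above:

```python
def should_fallback(nlu_confidence, core_confidence,
                    nlu_threshold=0.3, core_threshold=0.3):
    # Fall back when EITHER the NLU intent confidence or the Core
    # action-prediction confidence drops below its threshold.
    return (nlu_confidence < nlu_threshold
            or core_confidence < core_threshold)

# With both thresholds raised to 0.65, an NLU confidence of 0.75 still
# triggers the fallback if the KerasPolicy's confidence is below 0.65:
print(should_fallback(0.75, 0.60, nlu_threshold=0.65, core_threshold=0.65))  # True
print(should_fallback(0.75, 0.60))  # False with the 0.3 defaults
```

So a high NLU confidence alone is never enough to avoid the fallback; the Core prediction has to clear its own threshold as well.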
At this point it looks like your bot is performing exactly as expected; it's just more of a usage question of how you decide to provide training data, define intents, define thresholds, etc.
Ok, thank you so much for your answer. :) So if I define paths, it shouldn't be able to remember what we were talking about earlier? The elaborate intent would invoke different answers and I was trying to show that it could answer based on context, but would it still be better to just have the following?
```
## meditation path
* general_meditation
  - utter_general
* why_meditate
  - utter_why_meditate
* elaborate
  - utter_elaborate_why_meditate

## enlightenment path
* general_enlightenment
  - utter_general
* free_enlightened_meditation
  - utter_free_enlightened_meditation
* elaborate
  - utter_elaborate_free_enlightened_meditation
```
Can't it do that based on the path I was on, if I was following one path thoroughly? When should the context memory kick in?
And one more thing: the first answer is a little far off. Is it just a difference between what the KerasPolicy is predicting and what the interpreter is saying? The interpreter is correct with high confidence most of the time, but the KerasPolicy then says something else, so are they not connected?
Sorry for all these questions, I'm just trying to be a better user and so far, I'm not receiving any help on the forum. :)
So the context memory depends on the intents and utterances, not the actual words, so yes, at that point the elaborate should provoke different answers if the context was exactly the same as a story you had written (via the MemoizationPolicy). And yes, the interpreter and the KerasPolicy are unrelated -- the interpreter predicts the intent and the policies pick the actions. I recommend you read a little bit more into our docs to figure out exactly how your bot is making the decisions it's making.
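To make that separation concrete, here is a toy sketch (the functions are hypothetical stand-ins, not the rasa_core API) of how the interpreter can classify an intent correctly while the policy still picks an unexpected action, mirroring the `utter_feeling` prediction in the log above:

```python
def interpret(text):
    # Stand-in NLU interpreter: maps raw text to (intent, confidence).
    # It knows nothing about stories or dialogue history.
    intents = {"why do we meditate?": ("why_meditate", 0.94)}
    return intents.get(text, ("fallback", 0.0))

def predict_action(tracker_states):
    # Stand-in Core policy: picks the next action from the dialogue
    # state, not from the raw text or the NLU confidence. With sparse
    # story data it can pick a "wrong" action despite a correct intent.
    if tracker_states and tracker_states[-1][0] == "why_meditate":
        return ("utter_feeling", 0.83)
    return ("action_listen", 1.0)

intent, nlu_conf = interpret("why do we meditate?")
action, core_conf = predict_action([(intent, nlu_conf)])
print(intent, nlu_conf)   # why_meditate 0.94  (interpreter is right)
print(action, core_conf)  # utter_feeling 0.83 (policy still picks a far-off action)
```

The two stages each report their own confidence, which is why a correctly parsed intent can still lead to an unexpected bot answer.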
As for this issue, I'm going to close it since it's just up to usage now. The forum is the correct place to ask questions like these -- I realize it's a little slower for response, but we like to keep GitHub issues focused on bugs/issues that could be affecting everybody as they take first priority. Good luck!
Hello everyone, I get different utterance outputs for the same input, which is not what I expect. Example: input => "who are you"; output => "I didn't go anywhere."; output => "Thanks. I'm flattered." These output sentences are used under different intents.
I have the same issue. The cause seems to be that the rasa.core confidence is very low, so the rasa.core policy changed the prediction (different from the NLU intent). Does anyone have a solution for this issue?
My log is as below ('action_listen' is not what I expect here):

```
rasa.core.processor - Received user message 'what is glaucoma' with intent '{'id': 1460664494514502901, 'name': 'find_condition_definitions', 'confidence': 0.997934103012085}'
...
2021-01-08 15:19:27 DEBUG rasa.core.policies.ensemble - Predicted next action using policy_4_FallbackPolicy.
2021-01-08 15:19:27 DEBUG rasa.core.processor - Predicted next action 'action_listen' with confidence 1.00.
```