Closed lomarceau closed 4 years ago
@lomarceau Can you give some details about your training data? How big is it in terms of total number of examples and intents.
Additionally can you try the following config -
language: "fr"
pipeline:
- name: "WhitespaceTokenizer"
- name: "CRFEntityExtractor"
- name: "EntitySynonymMapper"
- name: "CountVectorsFeaturizer"
- name: "CountVectorsFeaturizer"
  analyzer: char
  min_ngram: 1
  max_ngram: 4
- name: "EmbeddingIntentClassifier"
  epochs: 200
  weight_sparsity: 0.8
  use_sparse_input_dropout: True
You may also change weight_sparsity to 0 in the above configuration and try that as well.
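For clarity, disabling weight sparsity would mean changing only the classifier block of the pipeline above, e.g. (illustrative fragment, rest of the pipeline unchanged):

- name: "EmbeddingIntentClassifier"
  epochs: 200
  weight_sparsity: 0.0
  use_sparse_input_dropout: True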
Hi @dakshvar22
Really sorry about the late reply, I had to work on other stuff and forgot about this open issue. I downgraded to rasa 1.9.3 and started from a clean venv and the problematic config is now yielding the expected performance results. Not sure what was causing the original problem, but it's fixed now so I will close this issue.
Thanks!
Rasa version: 1.9.4
Python version: 3.6.8
Operating system (windows, osx, ...): Centos 7
Issue: I have been using an older version of Rasa (1.1.6) with the embedding_intent_classifier. I am not able to replicate the performance I got with 1.1.6 and the old StarSpace classifier in Rasa 1.9 with DIET. Training takes much longer to converge (I trained for 1500 epochs instead of the usual 300) and the performance is much worse on the same test set with the same configuration file (0.83 vs 0.38 F-score).
If I understand correctly, in 1.9 using embedding_intent_classifier in the config calls a specific non-transformer configuration of DIET, but it looks like I'm unable to get the same performance as before.
Is this a known issue? Thank you
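For reference, my understanding is that the 1.9 embedding_intent_classifier should be roughly equivalent to a DIETClassifier with the transformer disabled, something like the sketch below (parameter names from the DIET component docs; treating this mapping and the values as my assumption, not a confirmed equivalence):

pipeline:
- name: "DIETClassifier"
  number_of_transformer_layers: 0   # assumed: no transformer, embedding-style classifier only
  use_masked_language_model: False  # assumed default
  entity_recognition: False         # entities handled by CRFEntityExtractor instead
  epochs: 300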
Content of configuration file (config.yml) (if relevant):