Closed aswiniS2 closed 7 years ago
Is there an error message you get or some other information you can share with us that makes this reproducible?
When I train, the process is killed without throwing any error, but when I train again after some time it succeeds. In both cases the training data is the same. Refer to both attached screenshots.
This looks like an out-of-memory error. How much RAM do you have?
thanks for your reply @amn41. I have 4GB ram. In general how much RAM is required?
That should be OK, since you're training with a pretty small data set. Try using `top`
to track memory while the training runs. But your OS is killing the process, which usually means it's consuming too many resources.
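A quick way to do that from a second terminal (a sketch; these flags are from procps/util-linux on Linux and may differ on macOS or BSD):

```shell
# Overall RAM and swap usage:
free -h

# Top memory consumers, sorted by resident set size (procps `ps`):
ps aux --sort=-rss | head -n 6
```

Watching the RSS column of the training process while it runs will show whether it climbs until the kernel's OOM killer steps in.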
If I want to train more than 200 samples. How much RAM is required?
From my understanding the MITIE or Spacy model plus your training data set all have to fit into memory. Plus some overhead. Plus any previously loaded models.
Looks like you may have MySQL running at the same time, which can use a chunk of RAM.
Thanks @wrathagom. I will increase the RAM and check it.
You could increase your swap size.
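For reference, checking and adding swap on Linux looks roughly like this (a sketch; the 2 GB size is an arbitrary example, and the creation steps need root):

```shell
# See what swap is currently active (empty output means none):
swapon --show
free -h

# Add a 2 GB swap file (run once, as root):
# sudo fallocate -l 2G /swapfile
# sudo chmod 600 /swapfile
# sudo mkswap /swapfile
# sudo swapon /swapfile
```

Swap will keep the process from being killed, but training will be much slower once it starts paging.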
@AP8050 I'm closing, let us know if you weren't able to resolve this issue.
I increased the RAM from 4 GB to 8 GB. I have 16 intents with 73 samples, and I tracked memory while the training ran; it didn't consume excessive resources. Only `Trainingdata.json` is created, and after that the training process gets killed.
Okay, to help we're going to need more information, and not in screenshots. Either post the actual text in gists or directly in your response.
Can you provide:
I've emailed you the information you asked for on Aug 01, 2017. Please check it. @wrathagom
I see it now, will try to get some time to play with this today.
@wrathagom what happened? Did you check it?
I ran into the same problem. Has this been fixed?
I couldn't reproduce this yet, which makes it hard to fix. @chenxian01 are you running the latest version from github?
Yes, I use the latest version, and I tried on both Ubuntu 16.04 and Ubuntu 17.04. I have 71 examples. When I train the model, after a very long time it shows `Killed`.
@chenxian01 how much memory are you using and which pipeline is specified in your configuration?
I use 2 GB of memory and the pipeline is mitie.
MITIE is most likely running out of memory there. Either use spaCy or use a machine with more memory.
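For reference, switching from MITIE to spaCy in an old-style rasa_nlu config looked roughly like this (a sketch; the exact file name and keys depend on your rasa_nlu version):

```yaml
language: "en"
pipeline: "spacy_sklearn"
```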
I don't have much memory in my machine. How much memory is enough? And entity recognition with spaCy is not as good as MITIE; sometimes it gives wrong predictions.
@chenxian01 entity recognition for spacy can perform just as well as MITIE, it just needs to be trained properly, which likely just means more training examples.
Memory for MITIE is hard to calculate, but the .dat file it uses is > 1 GB, plus you'll need room to train your model. I'd say you need at least 2-4 GB of RAM free before starting Rasa. That number increases as your number of intents increases.
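To check how much is actually free before starting, you can read `/proc/meminfo` on Linux (a sketch; `MemAvailable` is reported in kB and converted to GB here):

```shell
# Print available memory in GB by reading /proc/meminfo (Linux-only):
awk '/MemAvailable/ {printf "%.1f GB available\n", $2 / 1024 / 1024}' /proc/meminfo
```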
@AP8050 sorry, didn't realize I never responded. Was your training data modified before you sent it to me? There was a missing `,` on line 262.
I am training right now to see if it succeeds for me.
Hi, I see this thread has been closed, but I am facing the same issue with Spacy_Sklearn. I am running my script on 8 GB of RAM. While calling the parse method, rasa_nlu makes a sudden spike in CPU, which leads to the script being killed, especially if you are running it on a cloud instance.
I am facing the same issue with Rasa 1.1.4. My config is:

```yaml
language: "zh"
pipeline:
```
I have the same problem. Please help me.