explosion / spaCy

💫 Industrial-strength Natural Language Processing (NLP) in Python
https://spacy.io
MIT License

Memory usage of `debug-data` with a huge training set #4748

Open sfragis opened 4 years ago

sfragis commented 4 years ago

Hi, I'm using spaCy 2.2.2 to train new tagger and parser models for the Italian language. My training data set is quite big (about 2.3 GB for the training set and 580 MB for the dev set) and is saved in two JSONL files. I'm seeing unexpectedly high memory usage when running the `debug-data` command: memory usage starts low and then grows until it consumes my 32 GB of RAM as well as the whole swap (about the same size). Before upgrading my RAM to 128 GB (which I suspect might be useless), I'm interested in your opinion about:


ines commented 4 years ago

Thanks for the report!

> My training data set is quite big (about 2.3 GB for the training set and 580 MB for the dev set) and is saved in two JSONL files.

You probably want to split these into multiple files. spaCy can also read from directories instead of single JSON files, so there's really no need to have a 2.3 GB file. This could easily cause other problems down the line.
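For example, one way to split such a file into chunks (a minimal sketch, assuming one JSON record per line; the naming scheme and chunk size here are arbitrary):

```python
from pathlib import Path

def split_jsonl(src, out_dir, lines_per_file=10_000):
    """Split one large JSONL file into smaller, numbered chunks."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    chunk, n = [], 0
    with open(src, encoding="utf8") as f:
        for line in f:
            chunk.append(line)
            if len(chunk) >= lines_per_file:
                (out_dir / f"train_{n:04d}.jsonl").write_text("".join(chunk), encoding="utf8")
                chunk, n = [], n + 1
    if chunk:  # write the remainder
        (out_dir / f"train_{n:04d}.jsonl").write_text("".join(chunk), encoding="utf8")
```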

About `debug-data`: since it's really mostly a debugging utility, we didn't particularly focus on optimising it for efficiency. For instance, I'm pretty sure we're just loading the whole corpus into memory (e.g. by calling `list` around it), and I think we're also making at least one additional pass over the data to compute the stats. That's typically okay: you usually just run the debugging manually a few times, and even if you have to wait a few minutes, it's not a big deal.

However, if it's not memory-efficient and you can't use it with large data files, that's obviously bad.

We could probably refactor the logic to process the data as a stream, make a single pass over each corpus and compute all the stats along the way. You can find the source here if you want to give it a try and see if it improves things for you: https://github.com/explosion/spaCy/blob/master/spacy/cli/debug_data.py
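The rough idea would be something like this (a sketch only, not the actual `debug-data` code; the example shape, dicts with a "tokens" list, is assumed for illustration):

```python
from collections import Counter

def stream_stats(examples):
    """One pass over a lazy generator of examples: keep running
    counts instead of materializing the whole corpus as a list."""
    n_docs = 0
    n_tokens = 0
    tag_counts = Counter()
    for eg in examples:  # never calls list() on the stream
        n_docs += 1
        n_tokens += len(eg["tokens"])
        tag_counts.update(tok["tag"] for tok in eg["tokens"])
    return {"docs": n_docs, "tokens": n_tokens, "tags": tag_counts}
```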

sfragis commented 4 years ago

Hi Ines, thank you for your quick reply. I successfully managed to read the whole dataset from JSONL and save it into smaller MessagePack files. The problem may be related to the invocation of `GoldCorpus.train_docs`, where the returned generator is turned into a list as you mentioned. I'll try to make the rest of the code more stream-oriented and open a pull request if I succeed.
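Roughly what I did (a sketch, using srsly, the serialization library spaCy ships with; the chunk size and file names are arbitrary):

```python
from itertools import islice
from pathlib import Path

import srsly

def jsonl_to_msgpack(src, out_dir, chunk_size=10_000):
    """Stream a big JSONL file and write it out as smaller MessagePack files."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    stream = srsly.read_jsonl(src)  # lazy generator, keeps memory flat
    for i, chunk in enumerate(iter(lambda: list(islice(stream, chunk_size)), [])):
        srsly.write_msgpack(out_dir / f"train_{i:04d}.msg", chunk)
```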

svlandeg commented 4 years ago

Sorry for the late follow-up, but I just wanted to bump this issue, as I still think it's very relevant. Since you created your PR, the develop branch has been coming together nicely, but I think the same issues with `debug data` are still present. For instance, we're still calling `list(Corpus(train_path)(nlp))`.
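For reference, the streaming alternative would look something like this (a sketch against the v3-style API implied by the quoted snippet; the path is hypothetical and the import location may differ on older develop builds):

```python
from pathlib import Path

import spacy
from spacy.training import Corpus  # import path may differ on develop

nlp = spacy.blank("it")
train_path = Path("corpus/train")  # hypothetical directory of training data
corpus = Corpus(train_path)

# Instead of: examples = list(corpus(nlp))  # materializes everything at once
n_examples = 0
for example in corpus(nlp):  # lazy: one Example in memory at a time
    n_examples += 1
```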

I wanted to ask you @sfragis whether you have time to rebase your old PR against the new develop branch? If not, I could try and pick those ideas from your old PR and reapply them for a new PR...

sfragis commented 4 years ago

Hi Sofie, I'd be happy to contribute, but honestly I have no time at all. Feel free to pick code and ideas from my PR and adapt them to the develop branch. Cheers

svlandeg commented 4 years ago

Will do, thanks for letting me know!