I am trying to use GermanSentiment on a dataset of tweets, specifically two corpora of 9k and 30k strings. The demo works just fine with the sample sentences from GitHub, but when I apply it to my own text, CPU usage spikes and the PC freezes up. Does this model have a limit on how much data it can handle?
Hi @N-Con404,
this sounds strange. Can you provide a Colab notebook that shows the issue? I would expect you to be able to process 500k tweets in about 5 minutes on a free T4 GPU on Colab.
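In the meantime, processing the texts in chunks rather than passing the whole list in one call usually keeps memory flat. Here is a minimal sketch, assuming the `SentimentModel.predict_sentiment` API from the germansentiment package; the batch size of 256 is an arbitrary choice you would tune to your hardware:

```python
from germansentiment import SentimentModel

model = SentimentModel()

# Replace with your own list of ~9k-30k tweet strings.
texts = ["Das ist super!", "Das war leider nicht so gut."] * 5000

results = []
batch_size = 256  # arbitrary; tune to available RAM/VRAM
for i in range(0, len(texts), batch_size):
    # Predict one chunk at a time so the model never has to
    # tokenize and hold all texts in memory at once.
    results.extend(model.predict_sentiment(texts[i:i + batch_size]))

print(results[:5])
```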
https://stackoverflow.com/questions/78637651/trying-to-run-germansentiment-in-python-on-10k-to-30k-texts-keeps-crashing-pos