Open adriguerra opened 3 years ago
Hi! I've found a simple workaround for huge texts: iterate over the aspects and compute the sentiment for each aspect separately. Previously we had:
text = "some huge text"
aspects = ["aspect a", "aspect b", ... "the last aspect"]
results = nlp(text=text, aspects=aspects)
Now:
text = "some huge text"
aspects = ["aspect a", "aspect b", ... "the last aspect"]
results = [nlp(text=text, aspects=[aspect]) for aspect in aspects]
The solution looks strange, so feel free to comment on it.
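To read the predictions back out afterwards, each call returns a completed task that, per the library README, unpacks into one predicted example per requested aspect. A minimal sketch, assuming that unpacking behavior also holds for single-aspect calls:

sentiments = {}
for aspect in aspects:
    # Unpack the single predicted example for this aspect.
    example, = nlp(text=text, aspects=[aspect])
    sentiments[aspect] = example.sentiment  # e.g. absa.Sentiment.positive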
I'm still getting OOM errors despite splitting the text into sentences using:

text_splitter = absa.sentencizer()
For instance, I have a text of 4,528 characters that gets split into 43 sentences (the largest of which is 163 characters long) and still throws an OOM error. Any tips or ideas on how I could handle such cases?
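One direction worth trying is to combine the two workarounds in this thread: split the text with the sentencizer and then score one sentence and one aspect per call, so no single forward pass sees the full text. A hedged sketch, assuming absa.sentencizer() returns a callable that yields a list of sentence strings:

import aspect_based_sentiment_analysis as absa

nlp = absa.load()
text_splitter = absa.sentencizer()

text = "some huge text"
aspects = ["aspect a", "aspect b"]
sentences = text_splitter(text)  # assumption: returns a list of sentence strings

# Score each (sentence, aspect) pair separately to bound the batch size.
per_aspect = {aspect: [] for aspect in aspects}
for sentence in sentences:
    for aspect in aspects:
        example, = nlp(text=sentence, aspects=[aspect])
        per_aspect[aspect].append(example.sentiment)

If OOM still occurs even with sentence-sized inputs like these, the memory pressure may come from the loaded model itself rather than from the batch, which is worth checking separately.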