I have observed that as we train Graphify more and more, the size of the Neo4j database on disk keeps growing, and beyond a point each classification request takes several minutes, which makes it almost unusable.
Is there a way to train Graphify for more accuracy while keeping classification time within usable limits (say, 30 seconds or under a minute)?
To understand the slowdown, could you tell me which of the following parameters affect the classification time for a given text, and how?
1. The number of labels/classes already known to Graphify from previous training requests.
2. The total volume of text that has been given to Graphify for training.
3. The amount of text given to Graphify for classification.
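One way to isolate which of these factors dominates is to time classification requests while varying one variable at a time (number of labels, training volume, input length). Below is a minimal timing sketch; the `classify` stub and any endpoint path in the comment are hypothetical placeholders for however your deployment invokes Graphify, not part of its documented API:

```python
import time

def time_call(fn, *args, **kwargs):
    """Return (result, elapsed_seconds) for a single call to fn."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Hypothetical stub standing in for a real Graphify classification call,
# e.g. an HTTP POST to your Neo4j server's graphify classify endpoint.
def classify(text):
    return {"classes": []}

if __name__ == "__main__":
    # Vary the input size and watch how elapsed time grows; repeat the same
    # experiment after adding labels or training text to see which factor
    # drives the slowdown.
    for repeat in (10, 100, 1000):
        _, elapsed = time_call(classify, "sample text " * repeat)
        print(f"input x{repeat}: classification took {elapsed:.4f}s")
```

Plotting elapsed time against each variable separately should make it clear whether the cost scales with the label count, the trained corpus size, or the classified text length.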