Our present NLP code spends a large amount of time loading data on startup, but less is known about how it spends time on individual processing requests. To target our effort, we need to set up a dedicated test server, fire student data at it with a pacer (see issue #54), and measure per-request processing time. This should be done both within the system and separately from it, running the AWEWorkbench code on its own to get a sense of how it behaves. We should also run these tests with and without the GPU so that we can verify the GPU's impact on execution time. This latter part depends on the resolution of issue #46.
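A minimal sketch of the kind of per-request timing harness this measurement implies. The `profile` helper and the dummy processor below are illustrative placeholders, not existing AWEWorkbench APIs; the real run would substitute the actual processing entry point and real student data, and be repeated with the GPU enabled and disabled:

```python
import statistics
import time


def profile(process, payloads, warmup=1):
    """Time each call to `process` and return summary stats in milliseconds.

    `process` is a placeholder for whatever entry point we end up
    profiling (e.g. an AWEWorkbench processing call).
    """
    # Warm-up calls absorb one-time costs such as lazy model loading,
    # so they don't distort the per-request numbers.
    for p in payloads[:warmup]:
        process(p)

    times = []
    for p in payloads:
        start = time.perf_counter()
        process(p)
        times.append((time.perf_counter() - start) * 1000.0)

    return {
        "n": len(times),
        "mean_ms": statistics.mean(times),
        "median_ms": statistics.median(times),
        "max_ms": max(times),
    }


# Dummy processor standing in for the real NLP call.
stats = profile(lambda text: text.lower(), ["Some student essay"] * 10)
```

Comparing these summaries across the on-GPU and off-GPU runs (and across the in-system and standalone setups) would tell us where the per-request time actually goes.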