GoelBiju closed this issue 3 years ago.
This issue could have been due to https://github.com/GoelBiju/Visualising-Optimisation-Data/commit/bb3eac4d7253cc7c7d9938ad6e5b34cc039db9ba, but updating the array one data point at a time is very slow in MongoDB. We may instead need to send the generation data as a single large array once it has been loaded, and then perform one update.
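To illustrate the idea, here is a minimal sketch of buffering a whole generation and flushing it with one update instead of one write per point. The `GenerationBuffer` class and the `flush` callback are hypothetical names; in the real backend the flush would correspond to a single MongoDB `update_one` with `$push`/`$each` rather than many individual updates.

```python
# Sketch (assumed names): buffer points per generation and flush them in one
# call, instead of issuing one database update per data point. The real flush
# would be something like:
#   collection.update_one({"run": run_id},
#                         {"$push": {"data": {"$each": batch}}})

class GenerationBuffer:
    def __init__(self, flush):
        self.flush = flush          # callback performing the bulk update
        self.current_gen = None
        self.batch = []

    def add(self, generation, point):
        # When a new generation starts, flush the previous one in a single call.
        if self.current_gen is not None and generation != self.current_gen:
            self.flush(self.current_gen, list(self.batch))
            self.batch = []
        self.current_gen = generation
        self.batch.append(point)

    def close(self):
        # Flush whatever remains of the last generation.
        if self.batch:
            self.flush(self.current_gen, list(self.batch))
            self.batch = []

updates = []
buf = GenerationBuffer(lambda gen, batch: updates.append((gen, batch)))
for gen in range(3):
    for point in range(5):
        buf.add(gen, point)
buf.close()
# 15 points were added, but only 3 bulk updates were issued
```

With this shape, the database sees one array update per generation rather than one per point, which is what the commit above was trying to avoid.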
This is due to #18: the backend crashes and no more data comes in.
Following up on this, it is possible to set up a queue on the client end that feeds data in batches of a whole generation (i.e. the whole population for that generation), and then waits for the server to respond saying the batch has been added. The only caveat is that this takes quite a lot of time; it is still a better solution than the server crashing and running out of memory.
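The queue-and-acknowledge flow described above can be sketched as follows. The names `server_add` and `feed_generations` are hypothetical stand-ins; in the real client, `server_add` would be the request/acknowledgement round trip to the server.

```python
from collections import deque

# Sketch (assumed names): each queued item is a whole generation (the full
# population), and the next item is only dispatched once the server confirms
# the previous one was added.

def server_add(generation, population, store):
    """Stand-in for the real server call; returns an acknowledgement."""
    store[generation] = list(population)
    return True

def feed_generations(generations, store):
    pending = deque(generations)      # client-side queue of (gen, population)
    sent = []
    while pending:
        gen, population = pending[0]
        ack = server_add(gen, population, store)
        if ack:                       # only advance once the server confirms
            pending.popleft()
            sent.append(gen)
    return sent

store = {}
order = feed_generations([(0, [1, 2]), (1, [3, 4]), (2, [5, 6])], store)
# generations arrive in order, one acknowledged batch at a time
```

Blocking on the acknowledgement is what slows the feed down, but it also bounds how much unconfirmed data is in flight, which is the trade-off against the server running out of memory.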
To give an idea of the time it takes to feed in all the data: for a dataset with a total of 28,200 data points, it takes 7-10 minutes on an average computer running both the server and the client. This will, however, need to be tested on Heroku to see whether it can handle it.
One issue I have seen is that while all the data points are present and correct, the generation number is not correct.
Description:
When adding the sample data points, we can observe that not all of them have been added.
Acceptance criteria: