Closed: swathi-ssunder closed this issue 8 years ago.
It's an important issue, because API calls are stateless, so the pagination should happen in the client. A nice approach would be to set a hard limit (100 results?) and then add a query parameter defining the upper bound in terms of timestamp (because your records are time-bound) for that limit. I was actually not the only one having this idea (see the first answer): http://stackoverflow.com/questions/13872273/api-pagination-best-practices
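The "hard limit plus timestamp upper bound" idea is essentially keyset pagination. A minimal sketch of the server-side logic, assuming each sighting carries a numeric `ts` timestamp field (the function and parameter names `getSightingsPage` and `before` are illustrative, not the actual PokeData API):

```javascript
// Keyset (timestamp-based) pagination sketch. `ts` is an assumed numeric
// timestamp field on each sighting record.
const HARD_LIMIT = 100;

function getSightingsPage(sightings, before = Infinity, limit = HARD_LIMIT) {
  const capped = Math.min(limit, HARD_LIMIT); // never exceed the hard limit
  const page = sightings
    .filter(s => s.ts < before)               // upper bound in terms of timestamp
    .sort((a, b) => b.ts - a.ts)              // newest first
    .slice(0, capped);
  // If the page is full, the client passes the oldest timestamp it received
  // back as `before` to fetch the next page; `null` means no more pages.
  const next = page.length === capped ? page[page.length - 1].ts : null;
  return { page, next };
}
```

Because the cursor is a timestamp rather than a page number, the client never sees duplicated or skipped records when new sightings arrive between requests, which is the usual problem with offset-based pages on live data.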
Guy's thoughts in #71:
... However, do not use pagination in the API; this is just bad practice. Instead, have a limited number of sightings by default. Then provide controls to limit the number of sightings (whether by count, area, period or payload size in KB).
Both thoughts ask to avoid pagination in the API but to keep a default limit. In that case, how would the machine learning team get all of the sightings data? They would have to query in iterations with the page number, since we know the set of records to be fetched (we know the default size; assume 100 records). Am I on the right track, or is there something else that needs to be done?
OR
by default, always restricting Pokemon sightings data based on latitude, longitude, startTime and endTime, as suggested in issue #111
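The #111-style restriction amounts to a bounding-box plus time-window filter. A minimal sketch, assuming sightings expose `lat`, `lng` and a numeric `ts` timestamp (these field names are assumptions, not the actual PokeData schema):

```javascript
// Filter sightings to a lat/lng bounding box and a [startTime, endTime]
// window, the default restriction proposed in #111. Field names are assumed.
function filterSightings(sightings, { minLat, maxLat, minLng, maxLng, startTime, endTime }) {
  return sightings.filter(s =>
    s.lat >= minLat && s.lat <= maxLat &&
    s.lng >= minLng && s.lng <= maxLng &&
    s.ts >= startTime && s.ts <= endTime
  );
}
```

In practice the box and window would come from query parameters (e.g. `?minLat=…&startTime=…`) and the filter would run as a DB query rather than in memory, but the shape of the restriction is the same.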
@sacdallago
Forgetting about the ML guys for a moment, I'm thinking about the standards and the best approach. I like the #111 idea, but I would in any case implement a hard limit on the results that can be "overridden" using a time-based feature.
For the ML guys, worst case scenario, they get a dump of the DB to work with! Advantages of working directly with the people collecting the data :dancer: They are anyway only gonna need it for training purposes, so they won't need to collect this data indefinitely, only until they have the best method, features, and so on.
@gyachdav @goldbergtatyana confirm?
@sacdallago - yes, confirmed.
Just to sum up:
Currently the API responds with all the records, without any limit. As the DB grows, the API response will never complete, and hence the API will actually become unusable. The reason the APIs are currently working is that the listen scripts are not running indefinitely, so there are only a finite number of records in the DB.
So there needs to be a way to query the records in pages, e.g. records 1 to 1000, then 1001 to 2000, and so on.
Also see https://github.com/PokemonGoers/PokeData/issues/111#issue-174701397
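The page-based iteration described above can be sketched from the client side. `fetchPage` below stands in for a hypothetical HTTP call such as `GET /sightings?page=N&size=1000`; the endpoint and names are assumptions for illustration:

```javascript
// Client-side sketch: fetch every record by iterating fixed-size pages until
// a short (or empty) page signals the end. `fetchPage(page, size)` is a
// hypothetical stand-in for the real HTTP request.
const PAGE_SIZE = 1000;

function fetchAll(fetchPage, pageSize = PAGE_SIZE) {
  const all = [];
  for (let page = 1; ; page++) {
    const records = fetchPage(page, pageSize); // records (page-1)*size+1 .. page*size
    all.push(...records);
    if (records.length < pageSize) break;      // last (possibly partial) page
  }
  return all;
}
```

This is how the ML team could still collect the full dataset even with a default limit in place, though for live data a timestamp cursor is more robust than page numbers, since inserts between requests shift offset-based pages.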