Good question!
SSD / HDD size depends only on the size of the datasets you're using; Open Topo Data itself doesn't need any additional space. Sizes of the datasets in the public API are here: https://www.opentopodata.org/notes/dataset-sizes/
I wrote a bit about performance here: https://www.opentopodata.org/notes/performance-optimisation/ but I can elaborate.
You don't need much RAM, even a server with 1GB of memory would be enough.
Otherwise, Open Topo Data spends 99% of its time reading from the dataset files, so performance largely depends on what format those files are in. The difference can be huge: up to 20x in read speed and 5x in storage space. The best format for both speed and size is some sort of compressed GeoTIFF.
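If it helps, here's a rough sketch of converting a dataset tile to a compressed, tiled GeoTIFF with rasterio. The filenames, block size, and compression settings are just examples (DEFLATE with a horizontal-differencing predictor tends to work well for integer elevation data), and you could do the same thing with `gdal_translate`:

```python
import rasterio

# Read the source tile (any GDAL-readable raster; the filename is a placeholder).
with rasterio.open("srtm_tile.hgt") as src:
    profile = src.profile
    data = src.read()

# Write it back out as a tiled, DEFLATE-compressed GeoTIFF.
# PREDICTOR=2 usually improves compression for integer elevation values.
profile.update(
    driver="GTiff",
    compress="deflate",
    predictor=2,
    tiled=True,
    blockxsize=512,
    blockysize=512,
)

with rasterio.open("srtm_tile_compressed.tif", "w", **profile) as dst:
    dst.write(data)
```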
With a compressed GeoTIFF, most time is spent on decompression and the amount of data read per query is small (thanks to the compression), so there isn't a big difference between SSDs and spinning hard drives. That said, cheap flash storage (like a Raspberry Pi's), a cheap over-provisioned VPS, or a networked filestore might slow things down.
Finally, as for CPU: Open Topo Data can serve one request per virtual core at a time. With a compressed GeoTIFF a single-point request might take roughly 20 ms, so about 50 requests per second per core. A 100-point query could be more like 1.5 requests per second per core, and even slower if multiple datasets are queried.
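If you want to sanity-check those numbers on your own hardware, a quick benchmark against a local instance looks something like the sketch below. The base URL and dataset name are placeholders (and you may need to adjust the coordinate range so the points fall inside your dataset's bounds):

```python
import random
import time

import requests

# Placeholders: adjust to your server and dataset name.
BASE_URL = "http://localhost:5000/v1/test-dataset"


def time_request(n_points, n_repeats=20):
    """Average latency (in seconds) of a request with n_points locations."""
    locations = "|".join(
        f"{random.uniform(-60, 60):.4f},{random.uniform(-180, 180):.4f}"
        for _ in range(n_points)
    )
    start = time.perf_counter()
    for _ in range(n_repeats):
        r = requests.get(BASE_URL, params={"locations": locations})
        r.raise_for_status()
    return (time.perf_counter() - start) / n_repeats


single = time_request(1)
batch = time_request(100)
print(f"1-point query:   {single * 1000:.1f} ms  (~{1 / single:.0f} req/s per core)")
print(f"100-point query: {batch * 1000:.1f} ms  (~{1 / batch:.1f} req/s per core)")
```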
Does that help? Is there anything else performance-wise that would be interesting to know?
Thanks a lot for your reply! I'll try it on my own :-)
Hey there,
We're wondering about the recommended server specs (CPU, memory, SSD size) and there doesn't seem to be a word about it in the docs. What would you recommend?
Thanks!