ajnisbet / opentopodata

Open alternative to the Google Elevation API!
https://www.opentopodata.org
MIT License
312 stars 69 forks

Question: api.opentopodata.org servers specs or how to increase performance on local instance even more #84

Closed: arnesetzer closed 10 months ago

arnesetzer commented 10 months ago

First of all, thank you for your work.

Specifications: I am currently running version 1.8.3 of the software on a VPS with an 8-core CPU running at 2.9GHz and 32GB of RAM, using the BKG's 200m dataset. I followed all the instructions provided in the documentation and converted the dataset to GeoTIFF and WGS84 with the recommended parameters.

When I make POST requests with roughly 10,000 points, it takes around 3-4 seconds to receive a response. The response time of the public API (using the eudem25m dataset) for the same request was approximately 300ms.

Can anything be done to improve the speed of my instance or am I required to increase its power (for instance, by using a physical instance with a higher-clocked CPU)? Can you please provide information on the hardware that is being used to run api.opentopodata.org?

Thank you in advance.

ajnisbet commented 10 months ago

The free API runs on two dedicated servers: one likely slower than your VPS (2 core 1.7 GHz Atom C2350) and one likely faster (4 core / 8 thread 3.40 GHz i7-4770). Both are located in Central Europe, both have hard drives (not SSDs). Requests are load balanced between the two servers.

In summary: the servers aren't powerful enough to account for a 10x difference in response time.


Big performance differences between datasets are often caused by the GeoTIFF setup. Try re-compressing the dataset with:

gdal_translate -co COMPRESS=DEFLATE -co TILED=YES -co BIGTIFF=YES -co PREDICTOR=3 -co ZLEVEL=1 dgm200_utm32.tif dgm200_recompress.tif

then moving dgm200_recompress.tif into a new folder and adding it as a new data source in config.yaml.
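For reference, the new entry in config.yaml might look something like this (the folder and dataset names below are placeholders, not the actual paths from this setup):

```yaml
# config.yaml — hypothetical entry for the recompressed dataset.
# Adjust name and path to wherever you moved dgm200_recompress.tif.
datasets:
- name: dgm200-recompressed
  path: data/dgm200-recompressed/
```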


Some other suggestions:


Finally, could you clarify:

You mention making POST requests with 10,000 points: do you mean multiple requests each of 100 points? The free API should limit you to only 100 points per request (but you can do much more locally, which should be faster).
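For a local instance, one way to send the whole batch in a single POST is sketched below in Python (stdlib only). The host, port, and dataset name are assumptions for illustration, not the poster's actual setup:

```python
import json
import urllib.request


def format_locations(points):
    """Join (lat, lon) pairs into Open Topo Data's pipe-separated format."""
    return "|".join(f"{lat},{lon}" for lat, lon in points)


def query_elevations(points, dataset="dgm200", host="http://localhost:5000"):
    """POST all points in one request to a local Open Topo Data instance.

    The dataset name and host here are assumptions; match them to your
    config.yaml and deployment.
    """
    body = json.dumps({"locations": format_locations(points)}).encode()
    req = urllib.request.Request(
        f"{host}/v1/{dataset}",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"]
```

Locally there is no 100-point cap (max_locations_per_request is configurable), so a single large request avoids most of the per-request overhead.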

arnesetzer commented 10 months ago

Wow, that was a fast reply.

Thanks for your tips. The part about recompression (or no compression at all) sounds especially promising.

I already thought about bulk requests, but didn't try it since the docs said that bigger requests are faster than small ones.

> Finally, could you clarify:
>
> You mention making POST requests with 10,000 points: do you mean multiple requests each of 100 points? The free API should limit you to only 100 points per request (but you can do much more locally, which should be faster).

Well, that one is on me. My software is configured to set all elevations to null if the elevation service does not return a valid answer. That happened when I sent the request to the public API, which caused the seemingly fast result. Sorry for the confusion.

ajnisbet commented 10 months ago

Glad we got to the bottom of it!

The docs may need some refinement there. One big request will be faster than sequential small requests. But if you can make parallel requests, even better is to make N simultaneous requests, where N is the number of threads on the server (which is logged by opentopodata on startup).
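That fan-out pattern could be sketched like this in Python, splitting the points into N chunks and issuing one request per chunk (n_threads=8 is a placeholder for the thread count opentopodata logs on startup, and fetch_chunk stands in for whatever function sends one batched request):

```python
import concurrent.futures
import math


def chunk(points, n):
    """Split points into up to n roughly equal consecutive chunks."""
    size = math.ceil(len(points) / n)
    return [points[i:i + size] for i in range(0, len(points), size)]


def fetch_parallel(points, fetch_chunk, n_threads=8):
    """Issue one request per chunk, n_threads at a time, preserving order.

    fetch_chunk is a hypothetical callable that sends a single batched
    request (e.g. a POST to a local instance) and returns its results.
    """
    with concurrent.futures.ThreadPoolExecutor(max_workers=n_threads) as pool:
        batches = pool.map(fetch_chunk, chunk(points, n_threads))
    return [result for batch in batches for result in batch]
```

ThreadPoolExecutor.map keeps the results in chunk order, so the flattened list lines up with the input points.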

ajnisbet commented 9 months ago

Docs enhanced in https://github.com/ajnisbet/opentopodata/pull/85