aiidateam / aiida-restapi

AiiDA Web API for data queries and workflow management.
https://aiida-restapi.readthedocs.io
MIT License
9 stars 7 forks

Endpoint performance benchmarking/profiling #10

Open chrisjsewell opened 3 years ago

chrisjsewell commented 3 years ago

Obviously the time to e.g. retrieve a Node is very dependent on aiida-core, the PostgreSQL setup, the hardware, etc. But it would be nice if we had some basic feedback on the magnitude of the times for the different endpoints, so we can flag any that are particularly problematic.

For example, in aiida-core I set up: https://aiidateam.github.io/aiida-core/dev/bench/ubuntu-18.04/django/

Perhaps there is some pydantic-specific tool to achieve this?
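As a starting point before reaching for any framework-specific tool, endpoint latencies can be measured with a minimal stdlib-only harness. This is a hypothetical sketch: the URL and endpoint path are illustrative assumptions, not part of this repo, and a running server is required for the commented usage line.

```python
# Minimal timing harness sketch (stdlib only). The measurement helpers are
# separated from the I/O so they can be reused against any callable.
import statistics
import time
from urllib.request import urlopen


def time_calls(func, repeats=20):
    """Call ``func`` ``repeats`` times and return the latencies in seconds."""
    latencies = []
    for _ in range(repeats):
        start = time.perf_counter()
        func()
        latencies.append(time.perf_counter() - start)
    return latencies


def summarize(latencies):
    """Reduce a list of latencies to a few headline numbers."""
    return {
        "min": min(latencies),
        "median": statistics.median(latencies),
        "max": max(latencies),
    }


# Usage against a locally running server (hypothetical URL and endpoint):
# stats = summarize(time_calls(lambda: urlopen("http://localhost:8000/nodes").read()))
```

Something like `pytest-benchmark` could then wrap this kind of measurement into the test suite, similar to the aiida-core benchmark page linked above.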

chrisjsewell commented 3 years ago

Note on this: a thing that I have played around with in the past is setting up the PostgreSQL database with the pg_stat_statements module, which lets you get back statistics on how many queries are being made to the database (see e.g. https://github.com/chrisjsewell/aiida-profiling/blob/master/aiida_perf/db_stats.py)
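For reference, reading those statistics back is just a query against the `pg_stat_statements` view. A rough sketch, assuming psycopg2 and a server with the extension enabled in `shared_preload_libraries` (the column names below assume PostgreSQL 13+; older versions use `total_time`/`mean_time` instead):

```python
# Sketch: fetch the most expensive statements recorded by pg_stat_statements.
TOP_STATEMENTS_SQL = """
SELECT query, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT %(limit)s;
"""


def top_statements(dsn: str, limit: int = 10):
    """Return (query, calls, total_exec_time, mean_exec_time) rows."""
    import psycopg2  # imported lazily; requires a reachable database to use

    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(TOP_STATEMENTS_SQL, {"limit": limit})
            return cur.fetchall()
```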

I've never quite worked out how to make this easy to incorporate into e.g. a pytest run (you would maybe need pgtest to set up the test database with the module, then have a fixture that resets the statistics before each test)
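The fixture part might look something like the sketch below. This is purely hypothetical: `database_connection` is an assumed fixture providing a DB-API connection to a pgtest-managed database that already has the extension enabled.

```python
# Hypothetical pytest fixture: reset pg_stat_statements before each test,
# then hand the test a helper for reading the per-test query statistics.
import pytest

RESET_SQL = "SELECT pg_stat_statements_reset();"
STATS_SQL = """
SELECT query, calls, total_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT %s;
"""


@pytest.fixture
def pg_query_stats(database_connection):  # `database_connection` is an assumed fixture
    """Reset statistics before the test, yield a stats getter."""
    with database_connection.cursor() as cur:
        cur.execute(RESET_SQL)

    def get_stats(limit=20):
        with database_connection.cursor() as cur:
            cur.execute(STATS_SQL, (limit,))
            return cur.fetchall()

    yield get_stats
```

A test could then call an endpoint and assert on `get_stats()` to catch e.g. N+1 query patterns.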

chrisjsewell commented 3 years ago

Copying some comments by @CasperWA and @flavianojs from the last AiiDA meeting here:

Having been invited to a meeting by Flaviano, we discussed speed issues concerning queries with the REST API; these should be solved with the "new" /querybuilder endpoint. They need to be able to query AiiDA with better speed.

This is a general issue for any (G)UI front-end for AiiDA, and is even an issue for the OPTIMADE servers: the difference in speed is apparent when comparing with Materials Project, which implements a "vanilla" OPTIMADE server with a dedicated MongoDB back-end.

Concerning the speed: GP & CWA discussed creating a caching layer between AiiDA and the end user. This work has already (sort of) begun, since aiida-optimade supports the use of a MongoDB instance, which hosts the calculated OPTIMADE fields from any AiiDA DB. However, to make it a true caching layer, it should periodically update the MongoDB in the background in response to changes in AiiDA. Some clever indexing still has to be worked out, as well as size considerations.

Firstly, it would be helpful if you guys could think of any "metrics" we should be aiming for. Off the top of my head, something like "retrieving all formulas of StructureData from a database of 1 million structures in under 0.1 seconds". Obviously this may be difficult in practice to codify exactly into a test, but it points us in the right direction.

In terms of the /querybuilder endpoint: yeah, err, not a big fan (at least in its current form), just because it is quite "unstructured" at present (difficult to validate) and has no kind of rate limiting etc. (exposing the server to intended or unintended DDoS). It might be helpful to get a sense of what queries you are currently making with this endpoint. Also have a look at https://aiida-restapi.readthedocs.io/en/latest/user_guide/graphql.html and comment on whether it would achieve your use cases: e.g. you can query for multiple things in a single request, and everything goes through the QueryBuilder, so no ORM objects are ever initialised (which can be a bottleneck). (We can spin this off into a separate issue, but I just wanted to write this all down before I forget 😄)
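To illustrate the "multiple things in a single request" point, here is a rough, stdlib-only sketch of posting a GraphQL query. The field names in the query are illustrative assumptions only, not the actual aiida-restapi schema (see the linked docs for the real one), and the URL assumes a locally running server.

```python
# Hypothetical GraphQL client sketch: two top-level fields fetched in one
# round trip. The schema fields below are assumptions for illustration.
import json
from urllib.request import Request, urlopen

QUERY = """
{
  nodes {
    count
    rows { uuid }
  }
  users {
    rows { email }
  }
}
"""


def run_query(url: str, query: str) -> dict:
    """POST a GraphQL query and return the decoded JSON response."""
    payload = json.dumps({"query": query}).encode()
    request = Request(url, data=payload, headers={"Content-Type": "application/json"})
    with urlopen(request) as response:  # requires a running server
        return json.loads(response.read())


# e.g. run_query("http://localhost:8000/graphql", QUERY)
```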