lablup / backend.ai

Backend.AI is a streamlined, container-based computing cluster platform that hosts popular computing/ML frameworks and diverse programming languages, with pluggable heterogeneous accelerator support including CUDA GPU, ROCm GPU, TPU, IPU and other NPUs.
https://www.backend.ai
GNU Lesser General Public License v3.0

GraphQL rate limiting #2045

Open achimnol opened 2 months ago

achimnol commented 2 months ago

GraphQL clients may issue a wide range of queries that can affect database and server performance. It's recommended to add rate limiting to prevent excessive queries.

Let's add the following:

Yaminyam commented 3 days ago

Currently, relay connection pagination does not enforce the `first` or `last` argument, so it is impossible to predict how many nodes a connection query will fetch. We should require either `first` or `last`, so that a single query cannot fetch all nodes at once and the number of nodes requested by a GraphQL query becomes predictable.