Open · MHC2000 opened this issue 3 months ago
I know our structure is quite large
Is there any chance you could share a schema-only dump privately? The interesting bit is obviously the 4921 relationships, and those are not that easy to mass-generate with a script.
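For a sense of scale, a schema with that many relationships could in principle be approximated with generated DDL. The sketch below is purely illustrative and not something from the project; the table count, foreign keys per table and the `synthetic` schema name are made-up values chosen to land near the numbers mentioned in this thread:

```python
# Illustrative only: emit DDL for a synthetic schema whose tables reference
# each other through many foreign keys, approximating ~4900 relationships.
import random

NUM_TABLES = 300       # made-up: roughly a three-digit number of tables
FKS_PER_TABLE = 16     # made-up: 300 * 16 ~ 4800 foreign keys

random.seed(42)
statements = ["CREATE SCHEMA synthetic;"]
for i in range(NUM_TABLES):
    cols = ["    id bigint PRIMARY KEY"]
    # reference a handful of previously created tables
    for j in random.sample(range(i), k=min(FKS_PER_TABLE, i)):
        cols.append(f"    t{j}_id bigint REFERENCES synthetic.t{j} (id)")
    statements.append(f"CREATE TABLE synthetic.t{i} (\n" + ",\n".join(cols) + "\n);")

print("\n".join(statements))
```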
@wolfgangwalther I could remove the functions from the api schema and share only the views (the api schema contains only views, relations and functions, no tables). But these views are representations of tables in other schemas, so I'm not sure if that helps. Is there any way to send it privately over GitHub, or another way?
Hm, the dump would only help with all schemas. The problem is most likely in returning too many unneeded objects from other schemas before filtering them out in Haskell, so we'd need all of the views and tables in all schemas. Functions could also create relationships, but most likely they can be left out. In any case the dump needs to be self-contained, so it can be run in a fresh database to create all objects.
You could send a link to download via email to info at postgrest dot org.
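As a concrete, illustrative example of such a self-contained, schema-only dump covering all schemas, something along these lines should work (the connection string and output file name are placeholders):

```python
# Sketch: schema-only dump of all schemas, restorable into a fresh database.
# The connection string and the output file name are placeholders.
import subprocess

subprocess.run(
    [
        "pg_dump",
        "--schema-only",      # DDL only, no data
        "--no-owner",         # simpler to restore elsewhere
        "--no-privileges",
        "--file", "structure.sql",
        "postgresql://localhost/mydb",
    ],
    check=True,
)
```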
OK understood. Will see what I can do. Not sure if I can provide that.
@MHC2000 If possible, you could try to mangle the table and column names. That would certainly help us debug this faster.
We are talking about a three-digit number of tables involved, and I guess a four-digit number of column names; I'm not sure how I would do that in a manageable way. I'll get back to you on Monday.
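One manageable way to mangle that many identifiers would be to script the renaming over the dump itself. The sketch below is just one possible approach, not something worked out in this thread: it assumes a `names.txt` listing the real table and column names (e.g. exported from `information_schema.tables` and `information_schema.columns`) and a `structure.sql` dump, and writes a mangled copy plus the mapping. All file names and the `obj_NNNNN` scheme are arbitrary placeholders.

```python
# Rough sketch: replace real identifiers in a schema dump with anonymized ones.
import re

with open("names.txt") as f:
    names = [line.strip() for line in f if line.strip()]

mapping = {name: f"obj_{i:05d}" for i, name in enumerate(names)}

with open("structure.sql") as f:
    dump = f.read()

for real, fake in mapping.items():
    # word boundaries keep e.g. "order" from matching inside "reorder"
    dump = re.sub(rf"\b{re.escape(real)}\b", fake, dump)

with open("structure_mangled.sql", "w") as f:
    f.write(dump)

with open("mapping.csv", "w") as f:
    f.writelines(f"{real},{fake}\n" for real, fake in mapping.items())
```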
Sadly, I'm not able to provide a structure dump in the near future. I'll get back to you as soon as I can provide a large enough structure to provoke the behaviour.
+1
Environment
As we use a lot of tables, it looks like the initial loading and reloading of the schema cache takes more than 15 minutes. The following log was created right after a fresh start of PostgREST. I see two config reloads, but none was triggered at that time.
The API is very slow during this loading time.
Only queries without joins to other tables run in normal time. If a query includes joins to other tables, the first request takes several minutes to execute.
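In PostgREST terms these joins are resource embeds via the `select` parameter. A minimal way to compare both cases is sketched below; the table names are placeholders and the base URL assumes the default port 3000:

```python
# Compare a plain request with one that embeds another table.
# "orders" and "customers" are placeholder names; adjust to real views.
import time
import urllib.request

BASE = "http://localhost:3000"  # default PostgREST port

def timed_get(path):
    start = time.monotonic()
    with urllib.request.urlopen(BASE + path) as resp:
        resp.read()
    return time.monotonic() - start

print("without embed:", timed_get("/orders?limit=1"))
print("with embed   :", timed_get("/orders?select=*,customers(*)&limit=1"))
```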
In parallel I queried the admin port. The ready endpoint still returns 503 after "Schema cache loaded" appears in the log file, and it is still on 503 after the second reload of the config and schema. The config and live endpoints are on 200 right away. The schema cache is roughly 12 MB. So in total the system needs about 10 to 15 minutes to load the whole structure.
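To reproduce those timings, polling the admin endpoints until they return 200 shows exactly how long the cache load takes. A rough sketch follows; port 3001 is just an example value for `admin-server-port`, and the endpoint paths are the ones mentioned above:

```python
# Poll the admin server and report how long each endpoint takes to return 200.
# Port 3001 is an example value for admin-server-port, not a default.
import time
import urllib.error
import urllib.request

ADMIN = "http://localhost:3001"
pending = {"/live", "/config", "/ready", "/schema_cache"}

start = time.monotonic()
while pending:
    for path in sorted(pending):
        try:
            with urllib.request.urlopen(ADMIN + path) as resp:
                status = resp.status
        except urllib.error.HTTPError as e:
            status = e.code
        except urllib.error.URLError:
            status = None
        if status == 200:
            print(f"{path} returned 200 after {time.monotonic() - start:.0f}s")
            pending.discard(path)
    if pending:
        time.sleep(5)
```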
I know our structure is quite large, and maybe it's too large for PostgREST. But it would be interesting to know why the config and schema seem to be loaded twice.