gbv / subjects-api

JSKOS Concept Occurrences Provider implementation
https://coli-conc.gbv.de/subjects/
MIT License

Add PostgreSQL backend for performance analysis #26

Open nichtich opened 2 years ago

nichtich commented 2 years ago

Sounds good. I still have to investigate the performance issues though. Currently on our server it's pretty slow, although it mostly works fine on my local machine. Might be related to drive speeds. If we use a hosted PostgreSQL database, we might get much better performance. In theory, it wouldn't be a big deal to support both SQLite and PostgreSQL, and we could offer a tiny setup script that differentiates between the two.

Originally posted by @stefandesu in https://github.com/gbv/occurrences-api/issues/17#issuecomment-1227011042

nichtich commented 1 year ago

To add consistency checks to the database, vocabularies should also be put into the backend:

-- technically PPN is an integer with checksum so more could be improved later
CREATE DOMAIN ppn AS TEXT CHECK (VALUE ~* '^[0-9]+[0-9X]$');

CREATE TABLE IF NOT EXISTS Vocabulary (
  key text NOT NULL,
  jskos json NOT NULL DEFAULT '{}'::json,
  PRIMARY KEY (key),
  CONSTRAINT valid_key CHECK (key ~* '^[a-z]+$')
);

CREATE TABLE IF NOT EXISTS Subject (
  ppn ppn NOT NULL,
  voc text NOT NULL,
  notation text NOT NULL,
  FOREIGN KEY (voc) REFERENCES Vocabulary (key)
);
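
As a quick sanity check of the schema above (all values here are made up for illustration), the ppn domain and the foreign key together reject malformed PPNs and subjects that reference unknown vocabularies:

-- hypothetical example data, not from the actual import
INSERT INTO Vocabulary (key) VALUES ('bk');  -- jskos defaults to '{}'
INSERT INTO Subject (ppn, voc, notation) VALUES ('02791892X', 'bk', '58.55');

-- both of these fail:
INSERT INTO Subject (ppn, voc, notation) VALUES ('not-a-ppn', 'bk', '58.55');    -- violates the ppn domain
INSERT INTO Subject (ppn, voc, notation) VALUES ('02791892X', 'rvk', 'NK 1800'); -- 'rvk' not in Vocabulary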

stefandesu commented 1 year ago

Batch import can be optimized by using COPY FROM instead of INSERT. It works well for me locally and reduces the raw import time by an order of magnitude, I would say. However, recreating the indexes still takes a while with multiple tens of millions of records.
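
A minimal sketch of the idea (index name, file path, and column layout are assumptions, not the actual import script): drop indexes, bulk-load with COPY, then recreate them.

-- assumes a tab-separated dump at a server-readable path
DROP INDEX IF EXISTS subject_notation_idx;
COPY Subject (ppn, voc, notation) FROM '/tmp/subjects.tsv' WITH (FORMAT text);
CREATE INDEX subject_notation_idx ON Subject (voc, notation);

-- from psql without server file access, \copy streams the file from the client instead:
-- \copy Subject (ppn, voc, notation) FROM 'subjects.tsv'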

There are still some issues to be sorted out with this though, and I think we can do something similar for SQLite as well (although SQLite is, for some reason, much faster with inserting data).
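
The SQLite counterpart would presumably be the shell's .import (or many INSERTs wrapped in a single transaction); again just a sketch with an assumed file name:

-- sqlite3 shell
.mode tabs
.import /tmp/subjects.tsv Subject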

stefandesu commented 1 year ago

I think I'll be able to push my changes with batch import soon. Then we can compare the performance of the two backends on our server. For some reason, SQLite is significantly faster than PostgreSQL on my local machine, although in the past I had the opposite experience. I think much depends on disk performance, which is fairly bad on our server...

stefandesu commented 1 year ago

Changes are now pushed to Dev. I'll do the performance comparison soon.

stefandesu commented 1 year ago

Okay, while I didn't do any scientific tests on the performance, my results are fairly clear (all tests were run on our server with the current Dev version of occurrences-api):

Overall, SQLite seems to be about 2x faster for the usual queries and seems to have a higher cache limit. This is not what I expected, to be honest, especially since our dataset has over 80 million rows. It seems like staying with SQLite is the better choice in our case, even though I expect PostgreSQL could be optimized further.

Also, in both cases performance is severely limited by our server's slow disk; my laptop (which has a fast NVMe SSD) is about 4-5x faster.

nichtich commented 1 year ago

https://github.com/gbv/occurrences-api/commit/b5c3a419a7d85ab054c7efc2b1092ba11f85413b added a backend method subjects that is not implemented in the PostgreSQL backend yet.

stefandesu commented 1 year ago

The question is whether we really want to support both SQLite and PostgreSQL in the long run. In theory there isn't much left to add, so keeping both wouldn't be much work, but if we decide to stay with SQLite anyway, it might be better to remove PostgreSQL support again. What do you think?

nichtich commented 1 year ago

We may drop PostgreSQL later, but SPARQL and in particular SRU are needed, so we will have multiple backends with different capabilities anyway. Let's keep it as experimental.