3liz / py-qgis-server

QGIS embedded WMS/WFS/WCS asynchronous scalable http server
https://docs.3liz.org/py-qgis-server
Mozilla Public License 2.0

limit for preload config projects? #44

Closed lassitanskanen closed 1 year ago

lassitanskanen commented 2 years ago

Hi, is there a limit on the number of preloaded projects in the config? And how is this setting handled in the code? The server seems to get stuck with ~20 QGIS projects because it loads all of them at the same time during startup.

dmarteau commented 2 years ago

Hi, there is no limit on preloaded projects, since you explicitly define the list of projects to preload.

The server seems to get stuck with ~20 qgis

Is it really stuck, or is it just that the projects take a very long time to load?

The purpose of preloading projects is to load, at startup time, the projects that would otherwise take a very long time to load, instead of blocking your workers if the default lazy 'on-demand' loading process were left to do the job. The consequence is that you must expect a longer startup time.

There is no one-size-fits-all strategy: depending on your configuration, you have to balance preloading, asynchronous loading, timeout settings and possibly sharding across multiple pools of workers behind a reverse proxy.
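As a rough sketch only (the option and variable names below are assumptions; check the py-qgis-server documentation for your version), preloading is typically driven by a plain text file listing the projects to load at startup, referenced from the cache configuration:

# preload.conf: one project path per line (hypothetical example paths)
/projects/fast_project.qgs
/projects/big_slow_project.qgs

# then point the server to it, e.g. via an environment variable before startup
export QGSRV_CACHE_PRELOAD_CONFIG=/etc/qgis-server/preload.conf

Every project in that list is loaded when the workers start, so startup time grows with the length of the list.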

lassitanskanen commented 2 years ago

Thanks for the quick response!

If the startup takes a long time, is there a way to preserve the static cache? I'm working in a Docker environment.

dmarteau commented 2 years ago

is there way to preserve static cache

Unfortunately no: QGIS does not handle shared memory between processes, and a project is a complex structure in the QGIS codebase.

lassitanskanen commented 2 years ago

Okay, how about updating the static cache from the management API without restarting the server? If I use POST /pool/restart, does it also update the project cache?
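For reference, the call I have in mind looks like this (a sketch only; I'm assuming the management API is enabled and listening on port 19876, as in my setup):

# ask the management API to restart the worker pool (hypothetical host/port)
curl -X POST http://localhost:19876/pool/restart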

dmarteau commented 2 years ago

how about update static cache from management api without restart server

You have two options:

lassitanskanen commented 2 years ago

Thanks! I will test these options.

lassitanskanen commented 2 years ago

Related to this case, I experimented a bit with HEALTHCHECK CMD in the Dockerfile. It seems to work well, but my implementation was quite poor.

Is it difficult to add a "static cache ready" or "workers ready" endpoint to the management API? I tested with this shell script:

#!/bin/sh

# report unhealthy while the pool still reports no running workers
response=$(curl --silent http://localhost:19876/pool)
if [ "$response" = '{"num_workers": 4, "workers": []}' ]; then
  exit 1
else
  exit 0
fi

and put this in the Dockerfile:

COPY ./healthcheck.sh /
HEALTHCHECK CMD /bin/sh /healthcheck.sh

The above could be simpler:

HEALTHCHECK CMD curl --fail http://localhost:19876/pool/ready || exit 1

With healthchecks in place, it would be easier to do rolling updates of Docker service containers without cut-offs.
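For example, a rolling update of a Swarm service only considers a new task started once its healthcheck passes, so updates can proceed one container at a time (the service and image names below are placeholders):

# roll out a new image one task at a time, relying on the container healthcheck
docker service update --update-parallelism 1 --image myregistry/py-qgis-server:new-tag qgis_service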

dmarteau commented 1 year ago

Healthchecks can be specified at runtime (you may refer to the Docker Compose documentation).
Since what counts as "ready" for your container may depend on many parameters and on the execution context, I would advise you not to hard-code HEALTHCHECK in the image.
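As a minimal sketch of a runtime-defined healthcheck with plain docker run (Compose exposes the same options under a healthcheck: key; the image name and probe below are assumptions, adapt them to your own readiness criteria and make sure the management API is enabled):

# define the healthcheck at runtime instead of baking it into the image
docker run -d \
  --health-cmd 'curl --fail http://localhost:19876/pool || exit 1' \
  --health-interval 30s \
  --health-timeout 5s \
  --health-retries 3 \
  3liz/qgis-map-server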

Is it difficult to add "static cache ready" or "workers ready" endpoint to management api? I tested with this shell script

It should not be very hard; please consider opening an enhancement issue to track this feature request.