danwild opened this issue 5 years ago
See https://github.com/geopython/pywps/issues/245 as well.
Thanks @elemoine, I missed that one! So I guess I'm confirming that this definitely is still an issue.
If the fix is to avoid in-memory sqlite:

* it should not be the default setting
* it should be noted in the docs somewhere

As a side note, IMO it feels like `logging.database` is a bit misleading when this is not really specific to logging; it should probably be `server.database`. My 2c anyway.
@huard @davidcaron I suppose the watchdog daemon in PR #497 will not work with an in-memory SQLite database. Can two services connect to the same in-memory db? So the watchdog might not solve this issue.
No, we will still have the same issue that is explained in #245. As far as I know, there is no way for the two in-memory databases (the main one and the forked one) to be synchronized.
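For illustration, here is a minimal POSIX-only sketch (plain `sqlite3`, not pywps code; the table and status values are made up) of why a forked worker cannot update the parent's `:memory:` database:

```python
import os
import sqlite3

# Parent process: create an in-memory database and record one request.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE requests (uuid TEXT, status INTEGER)")
conn.execute("INSERT INTO requests VALUES ('abc', 1)")
conn.commit()

pid = os.fork()
if pid == 0:
    # Child: its memory, including this database, is a private copy of
    # the parent's, so this "completion" update never reaches the parent.
    conn.execute("UPDATE requests SET status = 4 WHERE uuid = 'abc'")
    conn.commit()
    os._exit(0)

os.waitpid(pid, 0)
# Parent still sees the original status: the two databases have diverged.
print(conn.execute("SELECT status FROM requests").fetchone())  # prints (1,)
```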
Also, I agree with @danwild:
If the fix is to avoid in-memory sqlite:

* it should not be the default setting
* it should be noted in the docs somewhere
As a side note, IMO it feels like `logging.database` is a bit misleading when this is not really specific to logging; it should probably be `server.database`. My 2c anyway.
You are right: it was originally used just for logging, but it has since become more of a service database.
The watchdog pull request should hopefully address some of the issues (?)
I can also confirm this behaviour with pywps version 4.2.6. If the waiting queue gets full, the processing of requests stops and the queue stays full forever. I was able to fix it by configuring `database=sqlite:///temp.db` in the `[logging]` section. Issue https://github.com/geopython/pywps/issues/245 does not seem to be fixed yet...
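For reference, a minimal sketch of the corresponding `pywps.cfg` section; `temp.db` is just the value from the comment above, and any file-backed path works:

```ini
[logging]
# A file-backed SQLite database that forked async workers can all reach.
# The in-memory default (sqlite:///:memory:) is private to each process.
database=sqlite:///temp.db
```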
Hello,
I do not know how `mode=memory` and `cache=shared` may affect this issue [1].
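For what it's worth, here is a minimal sketch (plain `sqlite3`, not pywps) of what `mode=memory&cache=shared` gives you: connections within one process can share a named in-memory database, but a forked child still ends up with its own private copy:

```python
import sqlite3

# Two connections in the same process share the named in-memory database
# when opened with cache=shared; uri=True enables the "file:" URI syntax.
uri = "file:pywps_status?mode=memory&cache=shared"
a = sqlite3.connect(uri, uri=True)
b = sqlite3.connect(uri, uri=True)

a.execute("CREATE TABLE requests (uuid TEXT, status INTEGER)")
a.execute("INSERT INTO requests VALUES ('abc', 4)")
a.commit()

# Visible through the second connection, but only inside this process;
# it does not let two separate OS processes see the same database.
print(b.execute("SELECT * FROM requests").fetchall())  # [('abc', 4)]
```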
On the other hand, I would just state in the documentation that this mode is not supported and recommend using a file backed by tmpfs.
Another way to handle it would be to have a standalone WPS daemon that holds the database for all processes, but that looks like a much more complicated solution to implement. I can imagine a daemon with Apache as a proxy, where the daemon handles sub-processes properly. That way we would get rid of WSGI (or equivalent) and run the daemon in pure Python.
Best regards.
Description
This relates to running processes in async mode, with `logging.database=sqlite:///:memory:`.

When an async process completes, i.e. (`pywps_requests.percent_done: 100.0`, `pywps_requests.status: 4`), the `running_count` reported by dblog never seems to reflect this. So if I set `parallelprocesses=5`, I can execute 5 successful jobs; however, each job increments this running count, which is never decremented on completion, meaning I can only run 5 jobs before all I get is a "PyWPS Process GetSubset accepted" response for a process which never runs.

This issue only seems to happen when using in-memory sqlite (i.e. it does not occur when supplying my own sqlite db string).
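As a hypothetical way to inspect the state dblog sees (table and column names as they appear above; the database path is illustrative and assumes a file-backed configuration):

```python
import sqlite3

# Dump the request table that dblog maintains. In the report above,
# completed jobs show status=4 and percent_done=100.0, yet the running
# count never drops.
conn = sqlite3.connect("temp.db")
for uuid, status, pct in conn.execute(
    "SELECT uuid, status, percent_done FROM pywps_requests"
):
    print(uuid, status, pct)
```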
Environment
Steps to Reproduce
1. Configure `logging.database=sqlite:///:memory:`
2. Set `server.parallelprocesses=5`
3. Execute async requests with `storeExecuteResponse=true&status=true` more than 5 times
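A hypothetical reproduction sketch; the endpoint URL is assumed, the process identifier `GetSubset` is taken from the report, and any required DataInputs are omitted:

```python
import requests

WPS_URL = "http://localhost:5000/wps"  # assumed local PyWPS endpoint

# With parallelprocesses=5, the sixth request should only ever return an
# "accepted" response and the process should never actually run.
for i in range(6):
    resp = requests.get(WPS_URL, params={
        "service": "WPS",
        "version": "1.0.0",
        "request": "Execute",
        "identifier": "GetSubset",
        "storeExecuteResponse": "true",
        "status": "true",
    })
    print(i, resp.status_code, "accepted" in resp.text.lower())
```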
Additional Information
This is being run in a Docker container from a macOS host; I don't think it should affect this 🤔